Tech’s AI pledge provides glimmer of hope for safeguards

At long last there is a glimmer of hope that artificial intelligence can be developed in a manner that generates public trust and ensures cybersecurity.

But it’s still just a glimmer.

Big Tech leaders have long acknowledged the societal risks of the technology. Friday’s announcement at the White House that seven leading U.S. artificial intelligence companies have pledged to build safeguards into their AI products marks a welcome first step toward addressing the dangers.

The companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — now must follow through on their voluntary commitments by working with the Biden administration and Congress on binding guardrails that prevent the sort of debacles that have plagued social media companies.

The five most critical elements of the eight-part agreement are:

• Implementing watermarks or some other means of identifying AI-generated content.

• Independent testing of AI products for safety before they are released, with a primary focus on cybersecurity and biosecurity.

• Public reporting of systems’ safety risks and evidence of bias and discrimination.

• A commitment to develop and deploy advanced AI systems to help address society’s greatest challenges, including cancer prevention and mitigating climate change.

• An effort to prioritize research on the societal risks that AI systems can pose, including threats to privacy.

When push comes to shove, however, the voluntary nature of the agreement makes it unenforceable. The temptation will be great for tech firms to ignore the safeguards if they believe compliance will put them at a competitive disadvantage, threatening their bottom lines.

The agreement also contains glaring problems and omissions.

For example, the high cost of complying with the agreement may give tech giants such as Meta, Google and Microsoft an unfair advantage over smaller companies with equally good or superior AI products.

The agreement fails to include a commitment that companies disclose the data they scrape from the internet to train their AI systems. Artists, writers and musicians have been up in arms over the ability of AI to appropriate their works and likenesses. They want ways to opt out of AI data grabs and protect their creative works.

Nor does the agreement include any provision aimed at preventing China and other global competitors from obtaining advanced AI programs, especially those that present security risks.

Tech firms have shown no willingness to slow or pause the development of AI products while safeguards are developed. But if they’re going to move with such alacrity, they must do so responsibly, ensuring that the products of this transformative technology have been reasonably vetted to protect the public.
