US senator's open letter demands AI security at 'forefront' of development



Today, Sen. Mark Warner (D-VA), chairman of the Senate Intelligence Committee, sent a series of open letters to the CEOs of AI companies, including OpenAI, Google, Meta, Microsoft and Anthropic, calling on them to put security at the "forefront" of AI development.

"I write today regarding the need to prioritize security in the design and development of artificial intelligence (AI) systems. As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way," Warner wrote in each letter.

More broadly, the open letters articulate legislators' growing concerns over the security risks introduced by generative AI.

Security in focus

This comes just weeks after NSA cybersecurity director Rob Joyce warned that ChatGPT will make hackers who use AI "much more effective," and just over a month after the U.S. Chamber of Commerce called for regulation of AI technology to mitigate the "national security implications" of these tools.


The top AI-specific issues Warner cited in the letter were integrity of the data supply chain (ensuring the origin, quality and accuracy of input data), tampering with training data (aka data-poisoning attacks), and adversarial examples (where users feed models inputs deliberately crafted to cause them to make mistakes).
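To illustrate the last of those risks, the sketch below shows a minimal adversarial-example attack in the fast-gradient-sign style. It is not drawn from Warner's letter; it assumes PyTorch and a hypothetical pretrained classifier `model`, input `x` and true `label`.

    import torch

    def fgsm_attack(model, x, label, epsilon=0.01):
        # Fast Gradient Sign Method: nudge the input in the direction that
        # most increases the model's loss, bounded element-wise by epsilon.
        x = x.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x), label)
        loss.backward()
        # A perturbation too small for a human to notice can be enough
        # to flip the model's prediction.
        return (x + epsilon * x.grad.sign()).detach()

Data-poisoning attacks, by contrast, target the other end of the pipeline: they corrupt the training set itself rather than the inputs presented at inference time.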

Warner also called for AI companies to increase transparency around the security controls implemented within their environments, requesting a description of how each organization approaches security, how systems are monitored and audited, and what security standards they are adhering to, such as NIST's AI Risk Management Framework.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
