
Top AI companies visit the White House and make ‘voluntary’ safety commitments

by abdullahxheikh334

While meaningful AI legislation may be years away, the industry is moving fast, and many, including the White House, worry that it may get carried away. So, in advance of a proposed executive order, the Biden administration has collected “voluntary commitments” from seven of the largest AI developers to pursue shared safety and transparency goals.

The companies involved in this informal agreement, namely OpenAI, Anthropic, Google, Inflection, Microsoft, Meta, and Amazon, will send representatives to the White House today to meet with President Biden. To be clear, no rule or enforcement is being proposed here; the agreed-upon practices are purely voluntary. But while no government agency will hold a company accountable if it sidesteps a few of them, any lapses will likely be a matter of public record.

Here is a list of those who attended the White House event:

  • Brad Smith, President, Microsoft
  • Kent Walker, President, Google
  • Dario Amodei, CEO, Anthropic
  • Mustafa Suleyman, CEO, Inflection AI
  • Nick Clegg, President, Meta
  • Greg Brockman, President, OpenAI
  • Adam Selipsky, CEO, Amazon Web Services

No billionaires in the room, but notably, no women either.

The seven companies (and likely others that didn’t get the red-carpet treatment but will want to tag along) have committed to the following:

  • Conduct internal and external security testing of AI systems before release, including adversarial “red teaming” by independent specialists.
  • Share information on AI risks and mitigation techniques (such as preventing “jailbreaking”) with government, academia, and “civil society”.
  • Invest in cybersecurity and “insider threat safeguards” to protect private model data such as weights. This is crucial not only to protect intellectual property but also because a premature wide release could create an opening for bad actors.
  • Help third parties find and report vulnerabilities, for example through a bug bounty program or domain-expert analysis.
  • Develop reliable watermarking or some other way of marking AI-generated content.
  • Report AI systems’ “capabilities, limitations, and areas of appropriate and inappropriate use”. (Good luck getting a straight answer on that one.)
  • Prioritize research on societal risks such as systemic bias and privacy concerns.
  • Develop and deploy AI “to help address society’s greatest challenges”, like cancer prevention and climate change. (Though it was noted on a press call that the carbon footprint of AI models is not being tracked.)


Although the practices above are voluntary, it’s easy to imagine that the threat of a pending executive order, which the White House says it is “currently developing”, is there to encourage compliance. For instance, if some companies decline to allow external security testing of their models before release, the EO might include language directing the FTC to closely scrutinize AI products that claim robust security. (One EO is already in force, asking agencies to be on the lookout for bias in the development and use of AI.)

Having been caught somewhat flat-footed by the disruptive capabilities of social media, the White House is clearly keen to get ahead of this next big wave of technology. Both the president and vice president have met with industry executives to solicit advice on a national AI strategy, and the administration has devoted a sizeable amount of funding to new AI research centers and programs. Of course, the national science and research community is well ahead of them, as this extremely comprehensive (though necessarily slightly dated) report on research challenges and opportunities from the DOE and National Labs shows.
