White House to Oversee AI Safety Tests

The U.S. government will require developers of major AI systems to disclose safety test results to the government. The move, slated for review by the White House AI Council, follows an executive order President Joe Biden signed three months ago to manage the rapid growth of AI.

While software companies have committed to specific categories of safety tests, a common standard has yet to be established. To address this gap, the National Institute of Standards and Technology, as directed in the October executive order, will develop a uniform framework for assessing safety.

One of the critical 90-day goals outlined in the executive order, which invokes the Defense Production Act, is the mandate for AI companies to share crucial information, including safety test results, with the U.S. Department of Commerce.

In addition to regulatory measures, the Biden administration is actively exploring legislative initiatives and working with international partners such as the European Union to establish comprehensive rules for managing AI technology.

“We know that AI has transformative effects and potential,” Ben Buchanan, the White House special advisor on AI reportedly said. “We’re not trying to upend the apple cart there, but we are trying to make sure the regulators are prepared to manage this technology.”

Taking specific steps in this direction, the Department of Commerce has developed a draft rule targeting U.S. cloud companies providing servers to foreign AI developers. Simultaneously, nine federal agencies, including the Departments of Defense, Transportation, Treasury, and Health and Human Services, have conducted risk assessments related to the use of AI in critical national infrastructure, such as the electric grid.

In line with this effort, the government has increased the hiring of AI experts and data scientists within federal agencies to enhance its capacity to regulate and oversee the evolving AI landscape.

Recently, Tennessee Governor Bill Lee introduced the Ensuring Likeness, Voice, and Image Security (ELVIS) Act, a state bill to safeguard artists and songwriters from AI deepfakes. Virginia Governor Glenn Youngkin also approved Executive Order 30 (EO 30), which introduces new guidelines for implementing AI in education and establishes a comprehensive AI policy and information technology standards to protect data across state agencies.
