US Federal News Bureau
Written by: CDO Magazine Bureau
Updated 3:12 PM UTC, Thu August 1, 2024
Apple has endorsed President Joe Biden’s voluntary guidelines for responsible AI development, the White House said. With Apple on board, the number of participating companies rises to 16. Other companies that have endorsed the guidelines include Amazon, Google, Microsoft, Meta, OpenAI, NVIDIA, IBM, and Adobe.
Introduced in July 2023, the guidelines mandate that companies thoroughly test their AI systems for potential risks such as discriminatory biases, security vulnerabilities, and national security threats. Additionally, companies are expected to openly share the results of these tests with government agencies, civil society organizations, and academic institutions.
Earlier this year, U.S. Secretary of Commerce Gina Raimondo announced the creation of the US AI Safety Institute Consortium (AISIC), which brings together AI creators and users, academics, government and industry researchers, and civil society organizations to promote the development and deployment of safe and trustworthy AI.
The consortium comprises over 200 member companies and organizations at the forefront of developing and utilizing cutting-edge AI systems and hardware.
This includes leading corporations, startups, civil society groups, and academic teams shaping the foundational understanding of AI’s societal transformation. Additionally, it encompasses representatives from professions deeply involved in AI usage today.
Notable members among the 200-plus participants include Apple, Amazon, and OpenAI. The consortium will operate within the U.S. AI Safety Institute (USAISI) and will play a role in advancing key initiatives outlined in President Biden’s Executive Order on AI.