AI Poses “Extinction Level” Threat, Needs Dedicated Agency: US Govt Authorized Report

The report identifies two primary categories of risk associated with AI development: "weaponization risk" and "loss of control."

According to a U.S. government-commissioned report by Gladstone AI, there is an urgent need to address national security risks stemming from AI, which could pose an "extinction-level threat" if swift action is not taken. The report, compiled by a team of three authors over the course of a year, drew on consultations with a wide array of stakeholders, including government officials and experts from the AI industry.

Proposed policy actions

Highlighting concerns about the incentives shaping decision-making within cutting-edge AI labs, the report proposes a series of policy actions to address these risks. These recommendations, if enacted, would significantly disrupt the AI industry. The key proposals to prevent AI from going rogue include imposing limits on the amount of computing power permissible for training AI models, with oversight from a newly established federal agency.

Additionally, the report suggests requiring government approval to deploy advanced AI models, and it even raises the possibility of outlawing the publication of a model's inner workings, such as its weights.

The report's recommendations were developed in response to the rapid pace of AI development and its potential implications for national security. As governments around the world grapple with the question of how best to regulate AI, the report underscores the need for proactive measures to address these emerging risks.

Despite the urgency the report conveys, implementing its recommendations may prove challenging: some experts question the feasibility of outlawing certain AI training practices under current government policy. The authors, however, maintain that "decisive action" is necessary to mitigate the potential risks posed by AI.

The report identifies two primary categories of risk associated with AI development: "weaponization risk" and "loss of control." The former refers to the potential for AI systems to be used in malicious ways, such as designing and executing catastrophic cyber attacks. The latter concerns the possibility that advanced AI systems may outmaneuver their creators, posing a threat to human safety.

Attachment: Gladstone AI Action Plan Executive Summary.pdf

How to mitigate the risks?

Addressing these risks will require a multifaceted approach, according to the report. In addition to regulating the computing power used for AI model training, the report recommends tighter controls on AI chip manufacturing and increased funding for research into AI safety. These measures aim to ensure that the development of AI technology proceeds responsibly, balancing innovation with the need to safeguard against potential risks.

Despite the challenges ahead, the report's authors are optimistic about the potential for meaningful change. Mark Beall, a former Defense Department official and one of the report's co-authors, has since left Gladstone AI to launch a political action committee (PAC) advocating for AI safety legislation. The PAC, called Americans for AI Safety, aims to make AI safety a key issue in the 2024 elections and has set ambitious fundraising goals.

Ultimately, the report argues that by taking decisive action now, policymakers can help ensure that the development of AI proceeds responsibly, minimizing the potential for harm.
