Written by: CDO Magazine Bureau
Updated 4:58 PM UTC, Fri February 14, 2025
Meta CEO Mark Zuckerberg has long championed the idea of making artificial general intelligence (AGI) openly available. However, a newly published policy document, the Frontier AI Framework, outlines scenarios where Meta may withhold the release of powerful AI systems due to safety concerns.
The framework distinguishes between “high-risk” and “critical-risk” AI systems, both of which Meta considers too dangerous for open release.
According to Meta, high-risk systems could aid in cybersecurity breaches or biological attacks, making them easier to carry out, though not reliably so. Critical-risk systems, by contrast, pose the threat of catastrophic outcomes that cannot be mitigated within their intended deployment contexts.
Examples of potential risks cited in the document include the “automated end-to-end compromise of a best-practice-protected corporate-scale environment” and the “proliferation of high-impact biological weapons.” Meta acknowledges that this list is not exhaustive but represents the most urgent and plausible risks tied to advanced AI deployment.
For systems deemed high-risk, Meta plans to restrict internal access and delay public release until effective mitigations reduce the risk to moderate levels. Critical-risk systems will face even stricter controls, including security measures to prevent data exfiltration and a halt in development until the system can be rendered less hazardous.
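Read as a decision procedure, the framework maps each risk tier to a gating action. The sketch below is purely illustrative and not Meta's implementation; the tier names follow the document, while the enum and function are hypothetical:

from enum import Enum

class RiskTier(Enum):
    MODERATE = "moderate"   # acceptable residual risk; release can proceed
    HIGH = "high"           # could aid attacks, though not reliably
    CRITICAL = "critical"   # catastrophic, unmitigable in deployment context

def gate(tier: RiskTier) -> str:
    """Map a risk tier to the handling the framework describes."""
    if tier is RiskTier.CRITICAL:
        # Harden the system against data exfiltration and stop work
        # until it can be rendered less hazardous.
        return "halt development; secure against exfiltration"
    if tier is RiskTier.HIGH:
        # Limit internal access and hold public release until
        # mitigations bring residual risk down to moderate.
        return "restrict internal access; delay release pending mitigation"
    return "eligible for release"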
The Frontier AI Framework reflects Meta’s evolving stance amid growing scrutiny of its open AI development strategy, particularly ahead of the France AI Action Summit this month.
Meta’s framework emphasizes a balance between technological benefits and societal risks. “We believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI, it is possible to deliver that technology to society in a way that preserves its benefits while maintaining an appropriate level of risk,” Meta states in the document.