The U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has announced agreements with both Anthropic and OpenAI.
The agreements are aimed at enabling formal collaboration on AI safety research, testing, and evaluation. According to NIST’s news release, each company’s Memorandum of Understanding (MoU) establishes a framework allowing the U.S. AI Safety Institute to receive access to major new models both prior to and following their public release.
The agreements are designed to foster collaborative research on how to evaluate model capabilities and safety risks, as well as methods to mitigate those risks.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, Director of the U.S. AI Safety Institute. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
Additionally, NIST has stated that the U.S. AI Safety Institute intends to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute.