US Federal News Bureau
Written by: CDO Magazine Bureau
Updated 2:46 PM UTC, January 14, 2026

The U.S. National Institute of Standards and Technology (NIST) is inviting industry feedback on how organizations are evaluating the secure development and deployment of artificial intelligence agents.
The agency is seeking input on a range of issues, including emerging security threats, technical controls, assessment and testing methods, safeguards for deployment, and priority areas for future research.
“We encourage respondents to provide concrete examples, best practices, case studies, and actionable recommendations based on their experience developing and deploying AI agent systems and managing and anticipating their attendant risks,” NIST said in a Request for Information (RFI) posted on the Federal Register.
According to the institute, the responses would inform the efforts of the Center for AI Standards and Innovation (CAISI) to evaluate security risks associated with different AI capabilities, assess vulnerabilities in AI systems, and develop evaluation metrics and assessment methods.
Housed within NIST, CAISI was created to serve as the federal government’s primary interface with industry on the evaluation and security of commercial AI, with particular attention to capabilities that could pose national security risks.
NIST added that the feedback may also support the creation of technical guidelines and best practices to measure and strengthen AI system security, along with other initiatives focused on securing AI agent systems.