The U.S. Office of Management and Budget (OMB) released a memorandum last month for the heads of executive departments and agencies, directing them to improve public services through the use of artificial intelligence (AI). The guidance also emphasizes protecting civil rights, civil liberties, and privacy while promoting human flourishing, economic competitiveness, and national security.
Here are five key takeaways from the memorandum:
1. Eliminating barriers to AI innovation
Federal agencies are encouraged to proactively adopt AI technologies that improve public services and operational efficiency. To facilitate this, agencies should:
- Develop comprehensive AI strategies within 180 days.
- Coordinate internally and across the federal government on data interoperability and standardization.
- Identify and share commonly used AI tools and resources.
- Adopt procurement practices that encourage competition and sustain a robust federal AI marketplace, such as preferencing interoperable AI products and services.
- Recruit, develop, and retain AI talent to boost responsible innovation, upskill the workforce, and help employees apply AI in their roles.
2. Strengthening AI governance structures
To ensure effective oversight and coordination of AI initiatives, agencies are mandated to:
- Appoint Chief AI Officers responsible for managing AI use and promoting innovation within their agencies.
- Establish AI Governance Boards, chaired by senior officials, to oversee AI-related activities and ensure alignment with agency missions.
In addition, OMB will convene an interagency Chief AI Officer Council within 90 days to coordinate federal AI use and advance AI principles.
These governance structures are designed to facilitate accountability and strategic direction in AI deployment.
3. Enhancing transparency in AI use
Agencies are required to identify AI systems that significantly impact rights, safety or critical operations. For these high-impact applications, agencies must:
- Conduct thorough risk assessments and testing before deployment.
- Implement continuous monitoring to detect and mitigate potential issues.
- Ensure human oversight, intervention, and accountability suitable for each high-impact use case.
- Publicly release annual inventories of AI use cases, highlighting those that impact rights or safety.
- Offer a mechanism for users to provide feedback on AI use and incorporate that feedback into decision-making.
- Release government-owned AI code, models, and data when doing so does not pose risks to the public or to agency operations.
These measures aim to provide the public with insight into how AI is utilized within federal agencies.
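To make the inventory requirement more concrete, here is a minimal sketch of what a single published use case entry might contain, assuming a simple key-value format. The field names, values, and URL are illustrative assumptions, not the official OMB inventory schema.

```python
# Hypothetical sketch of one public AI use case inventory entry.
# Field names and values are illustrative, not the official OMB schema.
use_case_entry = {
    "agency": "Example Department",                # reporting agency (hypothetical)
    "use_case_name": "Automated benefits triage",  # short descriptive title
    "purpose": "Prioritize incoming benefit claims for human review.",
    "stage": "deployed",                           # e.g. planned / piloted / deployed
    "rights_or_safety_impacting": True,            # flags the high-impact category
    "risk_mitigations": [
        "Pre-deployment risk assessment and testing",
        "Continuous monitoring of model performance",
        "Human review of adverse determinations",
    ],
    "public_feedback_channel": "https://example.gov/ai-feedback",  # placeholder URL
}

# A published inventory would be a collection of such entries, which the public
# could filter to surface the rights- or safety-impacting use cases:
inventory = [use_case_entry]
high_impact = [e for e in inventory if e["rights_or_safety_impacting"]]
print(f"{len(high_impact)} high-impact use case(s) listed")
```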
4. Developing a generative AI policy
Within 270 days, agencies must create a policy for safe, mission-aligned use of generative AI, with safeguards and oversight to manage risks. In addition, all agencies except the Department of Defense and the Intelligence Community must update their AI use case inventories annually.
5. Investing in AI workforce development
Recognizing the importance of skilled personnel in AI implementation, the memorandum outlines initiatives to bolster the federal AI workforce:
- Using government-wide AI training programs, such as those offered by OMB and GSA, to strengthen staff skills in AI and related roles.
- Prioritizing the hiring of candidates with proven experience in designing, deploying, and scaling AI systems.
- Issuing guidance on pay and leave flexibilities to attract and retain AI talent.
These efforts are designed to ensure that agencies have the necessary expertise to manage and innovate with AI technologies.