Singapore Publishes Model AI Governance Framework and AI Governance Playbook

Singapore has recently published the Model AI Governance Framework for Generative AI (MGF-GenAI). The framework builds on the country's previous work on traditional AI governance and aims to address the critical challenges posed by the advent of generative AI while continuing to foster innovation.

The governance framework was developed by the AI Verify Foundation and the Infocomm Media Development Authority (IMDA) and was launched at the World Economic Forum in Davos.

To foster a trusted generative AI ecosystem, the MGF-GenAI comprises nine key dimensions that include:

  1. Accountability — Accountability incentivizes players along the AI development chain to be responsible to end-users.

  2. Data — Data is a core element of model development. It significantly impacts the quality of the model output. 

  3. Trusted Development and Deployment — Model development, and the application deployment on top of it, are at the core of AI-driven innovation.

  4. Incident Reporting — Even with the most robust development processes and safeguards, no software we use today is completely foolproof. The same applies to AI.

  5. Testing and Assurance — For a trusted ecosystem, third-party testing and assurance play a complementary role. 

  6. Security — Generative AI introduces the potential for new threat vectors against the models themselves.

  7. Content Provenance — AI-generated content, because of the ease with which it can be created, can exacerbate misinformation. Transparency about where and how content is generated enables end-users to determine how to consume online content in an informed manner. 

  8. Safety and Alignment Research & Development (R&D) — The state of the science today for model safety does not fully cover all risks. Accelerated investment in R&D is required to improve model alignment with human intention and values.

  9. AI for Public Good — Responsible AI goes beyond risk mitigation. It is also about uplifting and empowering our people and businesses to thrive in an AI-enabled future.

In a parallel initiative, Singapore will collaborate with Rwanda to lead the development of a Digital Forum of Small States (Digital FOSS) AI Governance Playbook.

Created exclusively for small states, this playbook aims to address the challenges associated with the secure design, development, evaluation, and implementation of AI systems, taking into consideration the unique constraints that small states face.

Reportedly, Singapore will facilitate consultations with small states on an outline of the playbook during the Digital FOSS Fellowship Programme. Feedback from Digital FOSS members will play a key role in shaping the playbook and fostering an inclusive global AI discourse. The playbook is expected to be available by the end of 2024.

CDO Magazine