Opinion & Analysis
Written by: Todd Henley | Board Member of AIFAlliance.org
Updated 2:00 PM UTC, Tue October 21, 2025

Search engines were wonderful inventions, and we can all be excused for the illusion that all the world’s information was at our fingertips when we first started using them. After all, I’m sure I’m not the only one who began the day searching for vital information on transfer rates for the latest network storage protocol, only to find myself at 2:00 AM with Ph.D.-level knowledge of ornithology.
At best, following the linked documents helped us piece together information that others had pioneered. At worst, search engines led us down multiple rabbit holes, leaving us unintentional “experts” on unrelated topics.
Linking AI to search engine output has been a great help in summarizing query responses. These summaries have led to fewer rabbit hole excursions but often have their own limitations. As a convenient example, if you were to type a simple query into your favorite search engine, such as “Best Practices for Implementing Commonly Available AI Governance Frameworks,” the result would be less than satisfying.
Responses to the above search query include “Establishing clear policies and guidelines, engaging stakeholders, implementing continuous monitoring and auditing, training and awareness, prioritization of risk management, ensuring transparency and explainability, regulatory compliance, and model management.”
All these responses are great topics in and of themselves, but they amount only to what are essentially components of program frameworks, not best practices for implementing governance program frameworks. AI-supported search engines return these examples because that’s what is available to the large language model used to power the summary.
AI can be forgiven for this kind of error propagation. After all, at some point, a human established these framework components as “best practices,” and that’s what has been used for every article generated by AI ever since.
This raises the question: If what is summarized in search returns are merely components of AI governance frameworks, then what exactly are best practices for framework implementation?
Just as data governance has for many years, AI governance contends with multiple competing definitions. This article will not delve deeply into them, but will summarize that AI governance programs include the policies, standards, guidelines, procedures, and practices designed to guide the responsible conceptualization, development, deployment, use, and deprecation of AI systems.
The goal of AI governance programs is to ensure AI systems are developed and used ethically, safely, and in alignment with organizational goals and values. AI governance programs also aim to mitigate potential risks that AI systems present, such as bias, privacy incidents, and security threats.
In essence, frameworks are structures or systems of principles that serve as a foundation for building something. In this instance, AI governance frameworks provide a structured approach to AI governance program implementation, guiding the conceptualization, development, deployment, maintenance, and use of AI systems. AI management and governance systems ensure safe, fair, and ethical use of AI while balancing associated risks and benefits by adding controls throughout the AI lifecycle.
Many frameworks for the responsible execution of AI systems exist.
While there are many frameworks to choose from, they share common elements and often overlap in scope, terminology, and goals, even though they may serve different purposes depending on your organization’s AI governance program requirements. As an AI governance program implementer, let best practices guide you not only in selecting the right framework, but in taking the best of each commonly available framework and making it your own.
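One way to make “take the best of each” concrete is to compare candidate frameworks by their component lists: the elements every framework shares form a natural starting core, while the rest are candidates to adopt selectively. The framework names and components below are hypothetical stand-ins, a minimal sketch of the idea:

```python
# Hypothetical component lists for three candidate frameworks
frameworks = {
    "Framework A": {"risk management", "transparency", "roles", "monitoring"},
    "Framework B": {"risk management", "roles", "training", "monitoring"},
    "Framework C": {"risk management", "roles", "compliance", "ethics"},
}

# Elements common to every framework: a natural starting core
shared = set.intersection(*frameworks.values())

# Elements unique to each framework: candidates to adopt selectively
selective = {name: comps - shared for name, comps in frameworks.items()}

print(sorted(shared))  # → ['risk management', 'roles']
```

The same comparison scales to real frameworks once you have mapped their terminology onto a common vocabulary, which is usually the harder part.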
Following best practices for AI governance program framework implementation is essential to ensure that AI systems are deployed responsibly, ethically, and in alignment with organizational goals. These practices provide a structured approach to managing the complexities and risks associated with AI and ensure the best fit-and-finish for your organizational needs.
By adhering to best practices, organizations can foster stakeholder trust, enhance operational resilience, and avoid legal or reputational fallout. Moreover, implementing a robust governance framework positions the organization to adapt more effectively to evolving technologies and regulations while also promoting innovation through safe and trustworthy AI use.
The following constitute what can be considered best practices for the selection and implementation of AI Governance program frameworks. This list isn’t all-inclusive, but should serve to start the discussion in your organization:
The wave of frameworks has become part of the broader hype associated with AI. Frameworks are released not necessarily out of necessity but often as a marketing gimmick to boost the sponsoring organization’s position as an industry thought leader. By introducing new and potentially overlapping terms and diagrams, framework developers attempt to position themselves as frontrunners in the AI revolution.
Unfortunately, frameworks don’t solve the hardest problems facing enterprises in AI adoption, as most organizations still struggle with fundamentals like maintaining clean, actionable data, or even knowing what data they have, where it resides, or how it’s being used.
Another unfortunate factor is that the disciplines representing the cornerstones of successful AI integration (Security, Privacy, and Data and Information Governance) are too often afterthoughts rather than core priorities.
Trying to do everything, everywhere, all at once, much like hope, is not a sound strategy. Obsessing over which AI governance framework you should implement, or which components must be implemented first, may not be what’s most needed by the organization. In short, a little less theory and a lot more practical guidance will help solve some of your organization’s most persistent problems.
The first best practice is understanding that you can’t do everything, everywhere, all at once. Like everything else in life, the road to AI maturity starts and ends with the basics: Start with well-defined, solid use cases, and build your framework development efforts from there.
Knowing who you are, in the corporate sense, isn’t an existential question, but it is no less important. A best practice for selecting and implementing an AI governance framework is to take your corporate identity into consideration. A few things to evaluate are Culture, Change Management Capacity, and Collaborative Capacity.
Effective implementation relies on embedding shared decision-making and continuous learning. Collaborative capacity ensures smoother integration of ethical guidelines, risk assessments, and accountability mechanisms. Without it, AI initiatives may stall or proceed with blind spots that increase risk.
Organizations with mature collaborative processes are better equipped to tailor governance frameworks, scale them across business units, and ensure sustainability.
An organization’s industry and regulatory status significantly influence framework selection and implementation.
Highly regulated sectors such as healthcare, finance, energy, and aviation face strict legal obligations and may prefer frameworks emphasizing traceability, auditability, and compliance (e.g., NIST AI RMF or ISO/IEC 42001).
In contrast, less regulated sectors may adopt more flexible frameworks prioritizing innovation and ethics over rigid controls.
Regulatory obligations can dictate the depth and pace of implementation. For example, a financial institution must deeply integrate governance into risk management, often requiring audits and documentation. A technology start-up may prioritize agility, adopting lighter governance models while still adhering to ethical best practices.
Industry maturity also influences how frameworks are applied.
Established industries can fold frameworks into existing structures, while emerging industries may need adaptive approaches to account for evolving regulations.
Starting an AI governance program doesn’t mean beginning from scratch. In relatively mature organizations, many steps may already have been taken. Start with an inventory of existing or past functions. For instance, a Data Governance program — even if abandoned — may have governing bodies you can co-opt. The same may apply to Security or Privacy program elements.
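An inventory of existing functions can start as nothing more than a structured list that records which bodies and artifacts are reusable, even from dormant programs. The program names, fields, and statuses below are hypothetical, a minimal sketch of the exercise:

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceFunction:
    name: str                      # e.g., an existing governing body
    domain: str                    # Data, Security, Privacy, ...
    status: str                    # "active", "dormant", or "abandoned"
    artifacts: list = field(default_factory=list)  # reusable charters, policies, roles

# Hypothetical inventory of existing or past program elements
inventory = [
    GovernanceFunction("Data Governance Council", "Data", "dormant",
                       ["charter", "data policy", "steward roles"]),
    GovernanceFunction("Privacy Review Board", "Privacy", "active",
                       ["privacy policy", "impact-assessment template"]),
    GovernanceFunction("Security Steering Committee", "Security", "active",
                       ["risk register", "incident-response plan"]),
]

# Even dormant or abandoned bodies may be co-opted for AI governance
reusable = [f.name for f in inventory if f.artifacts]
print(reusable)
```

A spreadsheet serves the same purpose; the point is to capture what already exists before building anything new.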
When implementing a framework, consider immediate needs. Implementing an entire framework at once creates friction. A best practice is to start with core elements and expand outward.
Most frameworks share common elements such as Program Strategy, Communication, Roles and Responsibilities, and Change Management. Focusing on these provides structure and alignment with organizational needs. Starting solely with compliance resolves only one area and fails to address the entirety of governance needs.
After starting with core components, build outward by incorporating related frameworks such as Data Architecture, Data Governance, Data Protection and Privacy, Information Governance, and Security.
Think of these as layers of an onion, where each layer supports the others. For example, trustworthy data for AI hinges on robust Data and Information Governance programs.
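The core-first, layer-by-layer rollout described above can be sketched as an ordered plan. The phase names and component groupings here are illustrative, not drawn from any particular framework:

```python
# Hypothetical phased rollout: begin with shared core elements,
# then layer in adjacent and extended governance disciplines.
rollout = [
    ("Core", ["Program Strategy", "Communication",
              "Roles and Responsibilities", "Change Management"]),
    ("Adjacent", ["Data Governance", "Data Protection and Privacy",
                  "Information Governance", "Security"]),
    ("Extended", ["Data Architecture", "Model Management",
                  "Continuous Monitoring"]),
]

def implementation_order(plan):
    """Yield (phase, component) pairs, innermost layer first."""
    for phase, components in plan:
        for component in components:
            yield phase, component

for phase, component in implementation_order(rollout):
    print(f"{phase}: {component}")
```

Expressing the plan this way makes the sequencing explicit: nothing in an outer layer starts before the core it depends on is in place.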
Implementing best practices for AI governance frameworks ensures that organizations develop trustworthy, responsible, and sustainable AI capabilities. These practices emphasize tailoring frameworks to organizational culture, change management maturity, and collaborative capacity.
Strong cultural foundations and collaborative capacity enable smoother integration and foster buy-in across departments. Building on existing structures instead of starting from scratch provides scalability and minimizes disruption. Gradually layering in AI-specific practices creates sustainable governance frameworks aligned with both compliance requirements and organizational growth.
Ultimately, best practices ensure that AI governance frameworks are not only compliant and robust but also contextually relevant and adaptive to future needs.
About the Author:
Todd Henley is a performance-driven information and AI governance leader with over 20 years of experience designing and executing enterprise-class governance, risk, and compliance programs across highly regulated industries. As Founder and Principal of Paperkite.ai, he provides full-time-to-fractional leadership consulting and advisory services that help organizations — whether small, mid-tier, regulated, or open — develop and operationalize Information and AI governance solutions tailored to their unique needs.
Henley’s expertise spans frameworks, policies, risk and compliance assessments, and ethical AI practices, with a proven record of aligning governance strategies to business value while advancing responsible data and AI use. He also serves on the Board of Directors of the AI Freedom Alliance, contributes to the Global Editorial Board of CDO Magazine, and has held senior governance and privacy leadership roles in the banking, utilities, and nonprofit sectors.