Opinion & Analysis

Guardrails for Greatness — 2 Pillars of AI Governance for Scalable Innovation


Written by: Apurva Wadodkar | Senior Manager of Enterprise Data Management COO-ESE-Data Engineering, Insights and Governance

Updated 3:25 PM UTC, Mon May 12, 2025


AI Governance rarely exists in isolation — it thrives on a strong data governance foundation. From my experience implementing AI strategies multiple times, it has consistently been a natural extension of a well-defined, robust data strategy.

I like to approach innovation in steps. Carve out research capacity on the team and incubate projects leveraging promising technologies. Once a technology proves its potential, hit the road, engaging with business partners to gauge interest and uncover real-world opportunities for adoption. Years ago, I did this with predictive modeling, and more recently, I have been doing the same with Generative AI (GenAI).

Data governance is a well-established subject, so I will assume you already have a strong data strategy in place. In this article, we will focus on the next step: building and refining your AI governance standards.


AI Governance Council

An effective AI governance program is built on four key pillars, each requiring dedicated expertise. To oversee them, you establish an AI Governance Council. Before diving into its responsibilities, let’s clarify what the council is not.

The AI Council is not responsible for brainstorming AI use cases. That’s the role of data science teams (Product Manager-Engineering pods). This ensures AI adoption remains democratized, accelerating innovation and experimentation.

So, what is the AI Governance Council’s role? It serves as a guiding body, setting guardrails to enable safe and responsible innovation. It maintains a central repository where all AI programs (both built and purchased) are registered. This serves as the first step in engaging the AI Council and initiating the review process across the four key pillars of governance: Legal, Security, Privacy, and Architecture.
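To make the central repository concrete, here is a minimal sketch of what a program registry could look like. The record fields, status values, and function names are illustrative assumptions, not a prescribed schema; a real registry would likely live in a governed database or workflow tool.

```python
from dataclasses import dataclass, field

# The four governance pillars the AI Council reviews each program against.
PILLARS = ("legal", "security", "privacy", "architecture")


@dataclass
class AIProgram:
    """Illustrative registry entry for one AI program (built or bought)."""
    name: str
    owner: str
    built_or_bought: str  # "built" in-house or "bought" from a vendor
    # Review status per pillar; everything starts as pending.
    reviews: dict = field(default_factory=lambda: {p: "pending" for p in PILLARS})

    def approved(self) -> bool:
        """A program is cleared only when every pillar has signed off."""
        return all(status == "approved" for status in self.reviews.values())


# The central repository: registration is the first step of engagement.
registry: dict = {}


def register(program: AIProgram) -> None:
    registry[program.name] = program
```

Registering a program immediately exposes which pillar reviews are still outstanding, which is the point: the repository is not a catalog after the fact, it is the entry gate to the review process.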

  1. Legal: As AI models and vendors emerge, clear contractual language on IP is essential. While specifics may vary, at a minimum, ensure your company data is protected from the vendor. Work with your Legal Business Partner to make this IP clause a permanent part of your contract template.
  2. Security: The Security pillar ensures your architecture is secure and that data used to train models is protected. Depending on your company, this could range from an on-premise policy to a FedRAMP or private cloud policy. It should align with the same provisions as your data governance.
  3. Privacy: As a best practice, prevent the use of personally identifiable information (PII) for training AI models. Create a privacy checklist to ensure training data can’t be traced back to individuals or entities. If PII is necessary, establish a manual approval process, ensuring that development teams have a legitimate, well-justified reason for its use in AI training.
  4. Architecture: The Architecture pillar maintains a standardized set of tools recommended by the architecture team. It’s vital to regularly test new technologies to keep standards aligned with advancements. This pillar also oversees the commissioning of essential AI tools, such as Amazon SageMaker, Azure subscriptions, AI services, vector databases, and monitoring tools.
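The privacy checklist in pillar 3 can be sketched as a simple gate: columns tagged as PII block training unless a documented manual approval exists. The tag names and approval record here are assumptions for illustration; the real checklist would reflect your company's data classifications.

```python
# Illustrative PII tags; a real taxonomy comes from your data governance program.
PII_TAGS = {"email", "ssn", "phone", "full_name"}


def pii_gate(columns: dict, approvals: set) -> list:
    """Return the columns that block training: tagged as PII and not yet
    covered by a manual approval from the privacy review."""
    return [col for col, tag in columns.items()
            if tag in PII_TAGS and col not in approvals]
```

An empty result means the dataset passes the privacy check; anything returned must go through the manual approval process before training proceeds.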

Data

While I previously mentioned that data governance would not be the focus of this article, there is one critical point worth addressing. You need a dedicated checklist for what data can — and cannot — be used for AI. We have already discussed the use of PII data. Depending on your industry, consider adding other sensitive data types that require special approval before being used in AI training, such as finance data, healthcare data, or proprietary business information.
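A minimal sketch of such a checklist, assuming illustrative category names (the real restricted list depends on your industry and risk appetite):

```python
# Data categories that require special approval before use in AI training.
# These names are placeholders; your industry dictates the actual list.
RESTRICTED = {"pii", "finance", "healthcare", "proprietary"}


def needs_approval(categories: set) -> set:
    """Return the restricted categories present in a candidate dataset,
    i.e., the ones that must be approved before training can begin."""
    return categories & RESTRICTED
```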

Data science teams

Not all AI governance aspects need to be centralized. Data science teams play a crucial role. Each development pod must take responsibility for addressing and brainstorming the following governance aspects for their specific use case:

  • Fairness: Does the training data introduce any bias?
  • Explainability: How will you explain the model to stakeholders to build trust?
  • Maintainability: How frequently will you re-train the model?
  • Metrics: What is the business appetite for metrics such as Accuracy, Recall, Precision, BERTScore, etc.?
  • Monitoring: Which operations teams will be alerted, and what’s the process for addressing production issues?
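The metrics conversation above is easier when teams put concrete numbers in front of the business. As a sketch, a pod might report classification metrics like this (pure Python on binary labels; names are illustrative):

```python
def classification_metrics(y_true: list, y_pred: list) -> dict:
    """Report Accuracy, Precision, and Recall for binary predictions so
    stakeholders can set acceptance thresholds for their use case."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```

Whether the business weighs Recall over Precision (or vice versa) is exactly the appetite question each pod must answer for its own use case.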

Conclusion

A strong AI governance framework rests on the capable shoulders of the centralized AI Governance Council and empowered data science teams. By aligning these two forces, organizations can foster innovation, mitigate risks, and drive significant business impact.
