Opinion & Analysis

Ethical AI Governance — Navigating the Path to Responsible Implementation

Why AI governance must start with ethics

Written by: Tina Salvage | Senior Data & AI Consultant at OminiaDigital

Updated 5:23 PM UTC, November 25, 2024


We are talking about AI governance like it’s a tooling or compliance problem. It isn’t. It’s an ethics problem.

Responsible AI isn’t about controlling models. It’s about making deliberate choices and standing behind the consequences.

But the real risk isn’t whether the model works. It’s that we’re scaling decisions without fully understanding their impact. AI doesn’t just automate. It amplifies. And it amplifies the things you don’t want just as readily as the things you do.

That shift changes what governance needs to do. Ethical AI governance is no longer about frameworks or checklists. It’s about accountability. It’s about who makes decisions, how those decisions are made, and who owns the outcome when they scale.

What are we actually risking with AI?

Most organizations will say bias, privacy, and transparency, and of course they’re not wrong. But the real issue is that AI creates distance between decision and responsibility.

Decisions are made faster, at scale, and often without clear visibility. And when something goes wrong, the question becomes very uncomfortable, very quickly: “Who owns this?” That’s where governance either works or it doesn’t.

What I’m seeing with data privacy – how organizations are making mistakes

AI runs on data and, increasingly, that data is personal, inferred, or sensitive. The ethical challenge isn’t whether we can use it, it’s whether we should.

This is what I’m observing in practice:

  • Data is being reused far beyond its original purpose
  • Assumptions prevail that “anonymized” means safe
  • There’s very little visibility into what training data actually contains

In one case, customer transaction data originally collected for fulfillment and operational reporting was later repurposed to train a recommendation model. On paper, the data had been “anonymized,” with direct identifiers removed.

However, when combined with behavioral patterns, location data, and purchase history, individuals could still be re-identified with a high degree of confidence.

The issue wasn’t malicious use. It was an assumption. The data had quietly shifted from operational use to behavioral modelling without a clear reassessment of consent, purpose, or risk.

Because the model performed well commercially, the underlying privacy risk went largely unchallenged until it was raised during a governance review. By that point, the data was already embedded in the model training process, making remediation significantly more complex.
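To make the re-identification mechanism concrete, here is a minimal sketch of how quasi-identifiers can link an “anonymized” extract back to named individuals. The datasets, columns, and values are hypothetical illustrations, not details from the case above:

    # Minimal sketch: re-identification by linking quasi-identifiers.
    # All data and column names are illustrative.
    import pandas as pd

    # "Anonymized" training extract: direct identifiers removed, but
    # behavioral quasi-identifiers remain.
    training = pd.DataFrame({
        "postcode_area": ["SW1", "M4", "SW1"],
        "top_category":  ["baby", "fitness", "baby"],
        "avg_basket":    [62.10, 18.40, 61.90],
        "visits_per_wk": [3, 1, 3],
    })

    # A second dataset another team (or an attacker) can plausibly obtain,
    # e.g. a loyalty extract that still carries names.
    loyalty = pd.DataFrame({
        "name":          ["A. Patel", "B. Jones"],
        "postcode_area": ["SW1", "M4"],
        "top_category":  ["baby", "fitness"],
        "visits_per_wk": [3, 1],
    })

    # Joining on the quasi-identifiers alone re-attaches identities.
    linked = training.merge(
        loyalty, on=["postcode_area", "top_category", "visits_per_wk"]
    )
    print(linked)  # every "anonymous" row now carries a name again

None of these columns is a direct identifier on its own; the exposure comes from the combination.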

And yet, privacy is still often treated as a downstream compliance check. It shouldn’t be.

What ethical data use looks like in practice is simple, but not easy:

  • Define a clear purpose for data use
  • Give individuals control over what enters training datasets
  • Understand what can be inferred, not just what is collected
  • Ensure ongoing accountability, not one-off approval

Because once data is in a model, you don’t get to take it back.
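One way to make the first point above enforceable, rather than aspirational, is a purpose registry that training pipelines must check before touching a dataset. A minimal sketch; the dataset names and purposes are hypothetical:

    # Sketch of a purpose registry: datasets are tagged with the purposes
    # they were collected for, and training jobs must declare theirs.
    # All names and purposes are illustrative.
    REGISTERED_PURPOSES = {
        "customer_transactions": {"fulfilment", "operational_reporting"},
    }

    def assert_purpose(dataset: str, declared_purpose: str) -> None:
        """Fail fast if a dataset is reused beyond its registered purposes."""
        allowed = REGISTERED_PURPOSES.get(dataset, set())
        if declared_purpose not in allowed:
            raise PermissionError(
                f"{dataset!r} is not approved for {declared_purpose!r}; "
                f"registered purposes: {sorted(allowed)}. "
                "Reassess consent, purpose, and risk before training."
            )

    assert_purpose("customer_transactions", "operational_reporting")  # passes
    assert_purpose("customer_transactions", "behavioural_modelling")  # raises

The recommendation model in the earlier example would have been stopped at exactly this point, before the data was embedded in training.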

AI bias: are we scaling better decisions or just faster ones?

Bias in AI isn’t new. I’ve seen this show up multiple times, and this is where AI ethics becomes real.

Problems happen not because anyone intended harm, but because no one owned the outcome.

In one case, a product prioritization model used for assortment and replenishment decisions consistently favored products with strong historical sales, prioritizing them for visibility, stock allocation, and promotional activity.

On the surface, this looked like good commercial optimization. But what sat underneath was more complex.

Historical sales were heavily influenced by availability, prior promotion, and regional bias in ranging decisions.

Products that had previously been understocked, newly introduced, or targeted at less dominant customer segments were systematically deprioritized.

The model wasn’t identifying the best products. It was reinforcing past decisions, working as designed.

The question was whether the outcome was fair and aligned with what the business actually intended.

Because performance metrics were based on sales uplift and efficiency, the bias wasn’t immediately visible. It only became clear when trading teams questioned why certain ranges never gained traction despite a strategic intent to diversify and expand.

Without that intervention, the model would have continued narrowing the assortment, reducing choice, and embedding a feedback loop where only already-successful products were given the opportunity to succeed.

This wasn’t a model failure. It was an ethical one. Because nobody owned the outcome.
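The feedback loop is easy to reproduce. The toy simulation below gives two products identical underlying demand but allocates visibility by historical sales alone; the numbers are illustrative, not from the case above:

    # Toy simulation: allocating visibility by historical sales alone
    # locks in the past, even when underlying demand is equal.
    import random

    random.seed(0)
    sales = {"established": 100.0, "new_range": 5.0}  # historical sales

    for week in range(20):
        total = sum(sales.values())
        for product, s in sales.items():
            exposure = s / total               # visibility ∝ past sales
            demand = random.uniform(0.8, 1.2)  # equal demand for both
            sales[product] += 50 * exposure * demand

    share = sales["new_range"] / sum(sales.values())
    print(f"new_range share after 20 weeks: {share:.1%}")  # stays marginal

The new range never gains traction, not because customers don’t want it, but because it is never given the exposure to prove they do.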

What you need to do:

  • Test outputs to check they’re actually working to achieve your business intent, not just improving performance metrics
  • Look at your historical data – does it actually reflect opportunity, or just amplify constraint?
  • If you see a potential issue, introduce guardrails to protect new, diverse, or strategic product ranges (one approach is sketched below)
  • Ensure everyone in your business clearly knows who has ownership of decisions, not just model outputs

This doesn’t remove bias completely. But you do decide whether you’re going to optimize for what worked yesterday, or create space for what should work tomorrow.
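As referenced in the guardrail point above, one common mitigation is to reserve a minimum share of visibility for new or strategic ranges regardless of historical sales. A minimal sketch; the product data and the 20% floor are illustrative choices, not a recommendation:

    # Sketch of an exposure guardrail: sales-driven ranking keeps most
    # slots, but a reserved share protects new or strategic ranges.
    def allocate_slots(products, n_slots, strategic_floor=0.2):
        """Allocate visibility slots; strategic items get a guaranteed floor."""
        strategic = [p for p in products if p["strategic"]]
        by_sales = sorted(products, key=lambda p: p["hist_sales"], reverse=True)

        reserved = max(1, int(n_slots * strategic_floor)) if strategic else 0
        picked = strategic[:reserved]
        for p in by_sales:                    # fill the rest by sales rank
            if p not in picked and len(picked) < n_slots:
                picked.append(p)
        return picked

    products = [
        {"name": "bestseller", "hist_sales": 900, "strategic": False},
        {"name": "steady",     "hist_sales": 400, "strategic": False},
        {"name": "new_range",  "hist_sales": 10,  "strategic": True},
    ]
    for p in allocate_slots(products, n_slots=2):
        print(p["name"])                      # -> new_range, bestseller

The design choice is explicit: you deliberately trade a little short-term optimization for the strategic intent the business actually stated.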

AI accountability: Who is actually responsible?

AI strategies fall apart when something fails and no one clearly owns the outcome, because accountability is often unclear. It sits somewhere between the business, data teams, technology, and vendors.

And in that gap, decisions get made, but ownership doesn’t. AI doesn’t remove responsibility. It exposes where it never existed properly in the first place.

What needs to happen:

  • Clear ownership of outcomes, not just systems
  • Defined decision rights
  • Governance that actually makes decisions, not just documents them
  • Transparency from vendors on how their AI works

If no one owns the outcome, ethical AI governance is just theatre.

Transparency: Could you explain this decision tomorrow?

A simple test I often use to assess transparency is to ask: if a regulator, customer, or journalist questioned a decision you made tomorrow, could you clearly explain why it was made? In many organizations, the honest answer is no. 

Not because the technology isn’t understood, but because the end-to-end view doesn’t exist:

  • Where the data came from
  • How the model was trained
  • Why a decision was made

Transparency isn’t about making everything interpretable. It’s about making it defensible. That means documenting models properly, understanding data lineage, and being clear on where decisions come from.

Because “the model said so” is not an answer that holds up anymore. If you can’t explain why a decision was made, it’s an ethical failure.
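One practical way to make decisions defensible is to write a decision record at the moment each automated decision is made: model version, data lineage, inputs, output, and owner. A minimal sketch; every field name and value here is a hypothetical illustration:

    # Sketch of a decision record logged alongside every automated decision,
    # so "why was this decided?" can be answered later. Fields are illustrative.
    import json
    import uuid
    from datetime import datetime, timezone

    def record_decision(model_version, training_data_refs, inputs, output, owner):
        record = {
            "decision_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,       # which model made the call
            "training_data": training_data_refs,  # lineage: what it learned from
            "inputs": inputs,                     # what the model saw
            "output": output,                     # what it decided
            "accountable_owner": owner,           # who stands behind it
        }
        print(json.dumps(record, indent=2))  # in practice: an append-only audit store
        return record

    record_decision(
        model_version="credit-risk-2.3.1",
        training_data_refs=["s3://datasets/applications/2023Q4"],
        inputs={"income_band": "C", "tenure_months": 18},
        output={"decision": "refer_to_human", "score": 0.41},
        owner="head_of_credit_ops",
    )

If a record like this exists for every decision, the regulator-or-journalist test above stops being hypothetical.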

Safety: What happens when AI goes wrong?

AI failure isn’t a hypothetical. It’s a certainty. The real question is whether you’ve designed for it.

Too often, I see organizations focusing on performance (accuracy, speed, and optimization) without asking, “What happens when this fails?” And more importantly, “How quickly can we detect it, stop it, and fix it?”

In one organization, an automated pricing model was deployed to optimize margins in near real-time. 

It performed strongly in stable conditions, adjusting prices based on demand signals, stock levels, and competitor activity.

However, during a period of unexpected supply disruption, the model continued to optimize based on incomplete and rapidly shifting data.

The result was unintended price spikes on essential items, triggering customer complaints and reputational damage. The issue wasn’t the model itself, but the lack of failure design.

There were no thresholds to detect abnormal behavior, no clear escalation route, and no mechanism to intervene quickly.

By the time the issue was identified and corrected manually, the impact had already reached customers. The model had done exactly what it was designed to do, just not what the business needed at that moment.
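Failure design does not have to be sophisticated to be effective. A circuit breaker around the model’s output, with thresholds, escalation, and a kill switch, would have contained this. A minimal sketch with illustrative limits:

    # Sketch of a pricing circuit breaker: detect abnormal output, hold the
    # last safe price, and escalate. All thresholds are illustrative.
    MAX_CHANGE = 0.15    # block price moves larger than ±15% per cycle
    kill_switch = False  # operators can flip this to halt automation entirely

    def apply_price(product, current_price, proposed_price, escalate):
        if kill_switch:
            return current_price              # automation halted
        change = abs(proposed_price - current_price) / current_price
        if change > MAX_CHANGE:
            escalate(f"{product}: proposed move {change:.0%} exceeds threshold")
            return current_price              # hold the last safe price
        return proposed_price

    alerts = []
    price = apply_price("essential_item", 2.00, 3.40, alerts.append)
    print(price, alerts)  # -> 2.0, with an escalation raised instead of a spike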

What a strong AI governance framework looks like in practice:

  • Understanding which use cases are high risk
  • Testing scenarios before deployment
  • Having clear escalation paths
  • Being able to switch things off when needed

If your only plan is that ‘it works’, you’re not owning what happens when it fails.

Are regulations solving this?

Regulation is catching up to how quickly AI is being deployed and the risks it introduces at scale, and that’s a good thing. We now have:

  • EU Artificial Intelligence Act (EU AI Act): A comprehensive, risk-based regulation classifying AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories. Now entering phased implementation (2024–2026), with specific obligations for high-risk and GenAI systems.
  • UK AI Regulatory Framework: A principles-based, sector-led approach (rather than a single AI law), guided by regulators such as the Information Commissioner’s Office and Financial Conduct Authority. Focuses on safety, transparency, fairness, accountability, and contestability.
  • Canada’s Algorithmic Impact Assessment (AIA): Developed by the Treasury Board of Canada Secretariat, this tool assesses the risk level of automated decision systems and defines corresponding governance requirements.
  • Artificial Intelligence and Data Act (AIDA) Canada (proposed): Intended to regulate high-impact AI systems, with a focus on harm prevention, transparency, and accountability. Still evolving but influential in shaping global thinking.
  • U.S. National AI Initiative Act & AI Executive Orders: A combination of federal initiatives promoting AI innovation alongside increasing focus on safety, security, and responsible development, particularly following recent executive orders on AI risk management.
  • NIST AI Risk Management Framework (U.S.): A widely adopted voluntary framework providing guidance on identifying, assessing, and managing AI risks across the lifecycle.
  • Singapore FEAT Principles: Issued by the Monetary Authority of Singapore, focusing on Fairness, Ethics, Accountability, and Transparency in financial services, still one of the most practical industry-led approaches.
  • OECD AI Principles: Globally recognized principles promoting inclusive growth, human-centered values, transparency, robustness, and accountability, adopted by many countries as a foundation.

The common thread across all of these is clear: organizations are expected to understand, document, and justify how their AI systems make decisions and the risks they introduce.

This shifts governance from a compliance exercise to an operational requirement. The question is no longer whether controls exist, but whether they can stand up to scrutiny when something goes wrong.

Regulation, however, doesn’t solve the problem. It sets a baseline. With generative AI evolving rapidly, it will always lag behind reality. Organizations therefore have to decide: are we aiming to comply, or to lead?

So what does ethical AI governance look like in practice?

Ethical AI governance only works when it’s operational. Not a policy. Not a slide. Not a framework sitting in a document. In practice, it looks like:

  • AI integrated into your data governance, not separate from it
  • Clear classification of use cases based on risk (a simple rubric is sketched after this list)
  • Controls across the lifecycle from data to model to decision
  • Continuous monitoring, not one-off validation
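
Risk classification, in particular, does not need to start complicated. A simple rubric that routes use cases to proportionate oversight is enough to begin; the criteria and tiers below are illustrative:

    # Sketch of a use-case risk rubric: route AI use cases to
    # proportionate oversight. Criteria and tiers are illustrative.
    def classify_use_case(affects_individuals, automated_action, sensitive_data):
        score = sum([affects_individuals, automated_action, sensitive_data])
        if score >= 3:
            return "high"    # pre-deployment review, human-in-the-loop, kill switch
        if score == 2:
            return "medium"  # monitoring thresholds, documented ownership
        return "low"         # standard data governance controls

    print(classify_use_case(True, True, True))    # automated pricing -> high
    print(classify_use_case(True, False, False))  # internal dashboard -> low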

And, importantly, that governance includes everything:

  • Internally-built models
  • Vendor tools
  • Third-party data
  • Experimental use cases

Because risk doesn’t care where the AI came from.

Why trust is the outcome that matters

At the end of all of this, privacy, bias, transparency, and safety, the outcome is trust. And that is not built through statements or principles. It’s built through decisions.

As the World Economic Forum defines it, responsible AI is about ensuring fair, transparent, and positive outcomes. But in reality, it comes down to something simpler.

Do people trust the decisions your organization is making with AI? If they don’t, it doesn’t matter how advanced the technology is.

Ethical AI governance is not a framework you implement. It’s a position you take.

Because the question is no longer “Can we build this?” It’s “Should we use it, and are we prepared to own the outcome when we do?”

About the Author:

Tina Salvage is a Senior Data & AI Consultant at OminiaDigital, specializing in data strategy, governance, and the responsible adoption of AI across complex organizations. She brings over a decade of experience across financial services and global enterprises, with deep expertise in data management, regulatory environments, and financial crime compliance.

Tina is known for translating data ambition into practical, scalable operating models. She has a strong track record of driving strategic transformation across business processes, systems, and organizational structures, working closely with executive leadership, business stakeholders, and technology teams to embed lasting change.

Her focus is data, governance, and value for business outcomes. She is passionate about building data and AI foundations that not only meet regulatory expectations but enable organizations to operate more effectively, make better decisions, and unlock commercial opportunity.

Tina’s approach is rooted in people as much as process: bringing clarity to roles, telling the right story to gain buy-in, and creating the conditions for teams to take ownership and thrive.
