Opinion & Analysis

CDOs, If You Remember One Rule for AI Ethics — Make It This One


Written by: Ali Khan | Former SVP, Chief Data Officer at Experian Consumer Services

Updated 2:00 PM UTC, Wed July 2, 2025


The use of AI in the enterprise has moved from aspiration to reality in the last few years. I am referring specifically to the deployment of Large Language Models (LLMs), which have become synonymous with AI in common parlance. Although LLMs represent only one facet of AI – specifically of machine learning, which has been in wide use for decades – no one can deny the growing capability and utility of these constructs across a wide array of use cases, augmenting or replacing human endeavors from the creative to the mundane.

The ubiquity of AI across the enterprise and the world would seem to be inevitable. In 2024, U.S. companies invested over $100 billion in AI, and Nobel Prizes in Physics and Chemistry were awarded to AI pioneers. While there are nation-state-scale initiatives underway, the barriers to the adoption of AI for ordinary people are dropping, and millions of us now use LLMs daily in our lives and work.

AI is here – but are we ethically ready?

While we consider the legal, practical, and broader societal impact of AI, I would submit that the ethical challenge of AI is the single most important conversation in our industry today. The accelerating impact of AI on our work and our lives seems to far outstrip our ability to grapple with the ethical consequences and potential pitfalls of this promising and potentially transformative technological wave.

However, it is not as if AI ethics isn't being widely discussed. Since 2014, there has been an explosion of research interest in this area, with over 1,000 papers written last year alone and the formation of several organizations and conferences devoted to the topic. There is broad recognition that, as AI agents become more autonomous, there is a real risk of divergence from what we consider good behavior.

As Jack Clark, Co-founder of leading AI safety firm Anthropic, ominously but aptly states: “The bigger and more capable an AI system is, the more likely it is to produce outputs that are out of line with our human values.”

Concerns about the fairness and transparency of machine learning are not new. There are well-established methods and toolsets to detect bias on the basis of race, gender, and other factors in areas as diverse as financial services and healthcare – although they are not always applied as diligently and consistently as customers deserve.
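As one hedged example of such a method, the "four-fifths rule" from U.S. employment and fair-lending practice flags any group whose favorable-outcome rate falls below 80% of the best-off group's rate. A minimal sketch in Python (the dataset and column names are illustrative, not drawn from any particular system):

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's favorable-outcome rate to the highest's.

    A value below 0.8 is a common red flag (the "four-fifths rule").
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Illustrative loan-decision data; column names are hypothetical.
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [1,   0,   1,   1,   1,   1,   0,   0],
})
ratio = disparate_impact_ratio(decisions, "gender", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")  # investigate if below 0.8
```

A single ratio is of course no substitute for a full fairness review, but checks of this shape are cheap enough to run on every model and every dataset refresh.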

We find ourselves now in a moment of heightened risk. AI capability and adoption are accelerating so quickly that it is challenging enough to comply with legal and regulatory requirements, let alone consider ethical implications. Yet, that is exactly what we must do if we hope to augment intelligence, rather than amplify bias.

Rethinking data: It’s not just information – it’s identity

So, where do we begin? Machine learning systems, including LLMs, are software that learns from data. Aberrant AI behavior is largely the result of training against datasets that contain human biases, and so we begin with data.

But what is data – in particular data about ourselves? Should we think of it as property, the same way we think of our physical possessions, or more ephemerally, as information that can still be potentially sensitive? This is a tricky question – data feels like property but is handled like information.

The academic literature seems fairly split on this issue, but I would posit that, given the centrality of data in our most critical interactions – health, finance, work, social, and so on – we should treat data neither as ordinary property nor as mere information, but as its own class of special property: the digital imprint of a conscious being.

Rudolf von Jhering, a 19th-century jurist, said the following about the nature of property (it is even more applicable to personal data): “In making the object my own I stamped it with the mark of my own person; whoever attacks it attacks me; the blow struck it strikes me, for I am present in it. Property is but the periphery of my person extended to things.”

The ethical core: Trust, Fairness, and Accountability

Now that we’ve established what data is, let’s discuss the ethical considerations around it. We should not consider this discussion an extension of data management but rather the extension of moral philosophy to the domain of data. This framing is important because while implementing data management best practices and adhering to compliance mandates are essential, they do not go far enough, considering how central data has become to our lives.

As with any moral consideration, when it comes to data we must do the right thing, not only because it is good for our customers or our business (although it is), but simply because it is the right thing. I would submit that this is achieved by maximizing three foundational attributes: Trust, Fairness, and Accountability. Approached and implemented correctly, they form a virtuous circle in which each act reinforces further acts in the circle.

Fig 1: The virtuous circle of Trust, Fairness, and Accountability

Creating an operating framework around these foundational attributes means adopting a proactive, ethics-first approach, which is not merely reactive to legal or compliance challenges. We start and end with Trust.

Trust starts with transparency and user control

Enterprises occupy a position of privilege extended by customers, who share their precious data because they see the enterprise as a trusted repository.

First and foremost, this means securing data from both external and internal threats and ensuring privacy is protected. Second, it means fostering transparency. This includes ensuring that the customer retains ownership of their data: the ability to furnish or delete it as they see fit, and control at all times over how it is leveraged across use cases, with explicit, specific, and revocable rights.
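A minimal sketch of what explicit, specific, and revocable rights might look like in code, assuming a simple in-memory consent store (the record fields and scope names are illustrative, not drawn from any particular standard):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One customer grant: explicit (recorded), specific (scoped), revocable."""
    customer_id: str
    scope: str                      # e.g. "credit_scoring", "marketing"
    granted_at: datetime
    revoked_at: datetime | None = None

    def revoke(self) -> None:
        self.revoked_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.revoked_at is None

def may_use(consents: list[ConsentRecord], customer_id: str, scope: str) -> bool:
    """A use case proceeds only under an active, matching grant."""
    return any(c.active and c.customer_id == customer_id and c.scope == scope
               for c in consents)

# Usage: grant, check, revoke, re-check.
grant = ConsentRecord("cust-123", "credit_scoring", datetime.now(timezone.utc))
consents = [grant]
assert may_use(consents, "cust-123", "credit_scoring")
grant.revoke()
assert not may_use(consents, "cust-123", "credit_scoring")
```

The design point is that every use of data is checked against a specific, revocable grant, rather than against a single blanket consent collected at signup.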

Implementations of Open Finance must be built on a foundation of customer-centric data ownership. Transparency must extend across both the data and its use, as well as the interpretability of model outputs that draw on that data. The latter remains a challenge given “black box” models that are essentially uninterpretable by humans; in those cases, the emphasis shifts to well-established tools and methods for detecting biased outputs.
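Even when a model is a black box, model-agnostic techniques can still indicate which inputs drive its outputs. A minimal sketch using scikit-learn's permutation importance on synthetic data (the dataset and model choice are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular credit dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the fitted model as a black box: we only ever call predict().
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# larger drops mean the opaque model leans harder on that input.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```

If a feature that proxies for a protected attribute shows outsized importance, that is a signal to investigate before the model ever reaches a customer.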

Deepen trust by ensuring fairness

We deepen trust by ensuring fairness. First, this means considering and accounting for all factors that may contribute to unfair outcomes. It begins with the composition of the data team itself – for example, a more diverse team ought to be less likely to concentrate and amplify overlapping biases in dataset selection. Bias must be detected and acted upon as part of the development and operating cycle, and interpretability – a human being able to understand how the model reaches its outputs – must be a requirement rather than an afterthought. A sketch of such an operating-cycle check follows.
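Operationally, this can be as simple as a release gate in the deployment pipeline that fails a model automatically when per-group outcome rates breach an agreed threshold. A minimal sketch, reusing the disparate-impact idea from earlier (the threshold and group names are illustrative):

```python
class FairnessGateError(Exception):
    """Raised to block a release when a fairness check fails."""

def fairness_gate(rates_by_group: dict[str, float], threshold: float = 0.8) -> None:
    """Block deployment if any group's favorable-outcome rate falls
    below `threshold` times the best-off group's rate."""
    best = max(rates_by_group.values())
    for group, rate in rates_by_group.items():
        if rate < threshold * best:
            raise FairnessGateError(
                f"group '{group}' rate {rate:.2f} is below "
                f"{threshold:.0%} of best rate {best:.2f}"
            )

# Usage inside a release pipeline: compute per-group approval rates
# on a holdout set, then gate the deployment step on the check.
try:
    fairness_gate({"group_a": 0.71, "group_b": 0.55})
except FairnessGateError as e:
    print(f"release blocked: {e}")
```

The point is not the specific metric but the placement: the check runs on every release, so fairness regressions surface the same way failing tests do.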

Secondly, there exists the need for human oversight. Humans are and ought to remain the chief arbiters of good and bad behavior on the part of AIs – in particular, judgments around “the greater good” – which may not be immediately obvious. Given the scope and scale of AI deployment, it is going to be increasingly important to augment human oversight by leveraging approaches like Anthropic’s Constitutional AI which uses AI agents to monitor the behaviors of AI systems against a “constitution” of desirable behaviors in line with human values.
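The following is a toy sketch of that monitoring shape only – not Anthropic's actual training method, which critiques and revises model responses against the constitution during training – where `ask_model` is a hypothetical stand-in for a call to any hosted LLM:

```python
CONSTITUTION = [
    "Do not reveal personal data about any individual.",
    "Do not provide advice that discriminates by race or gender.",
]

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to any hosted LLM API."""
    raise NotImplementedError("wire up your provider's client here")

def constitutional_review(candidate_output: str) -> tuple[bool, list[str]]:
    """Ask a reviewer model whether an output violates each principle."""
    violations = []
    for principle in CONSTITUTION:
        verdict = ask_model(
            f"Principle: {principle}\n"
            f"Output under review: {candidate_output}\n"
            "Does the output violate the principle? Answer YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            violations.append(principle)
    return (len(violations) == 0, violations)

# In production, outputs that fail review would be regenerated or
# escalated to a human reviewer rather than returned to the customer.
```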

Maintain Fairness, and therefore Trust, by enabling Accountability

We maintain Fairness, and therefore Trust, by enabling Accountability. First, this means creating an interdisciplinary AI Ethicist role focused on ensuring desirable outcomes from AI systems, monitoring and countering adverse outcomes, and continually advancing the practice of ethically designed AI products.

Secondly, it means that product owners, who are responsible for product quality, safety, profitability, and overall success, are the logical choice to be accountable for the implementation of AI ethics in their products. The ultimate responsibility must rest at the board level, governed alongside security, risk, and other critical threats to customer and corporate wellbeing.

Lead with the Data Golden Rule

How do we actually implement these ethical principles for AI systems in practice? There is a great need for researchers, academics, and practitioners active in the relevant technological and philosophical fields to determine which subsets of human ethics apply to AI systems, and whether there are new and different considerations for autonomous agents.

Taking direct inspiration from Asimov, there is also a need for Laws of Agentics: rules governing and standardizing ethical decision-making between agents created by different companies, ensuring reasonable moral coherence across multi-agent interactions. Ultimately, a new paradigm for ethical AI decision-making will emerge, inspired by existing ethical systems (real and fictional) but uniquely suited to the capabilities and limitations of AI systems.

In the meantime, how do we as data and AI leaders navigate the unfolding AI saga – in this great in-between time – filled with promise and peril?

I first invite my peers to ensure that we practice what we preach – that our own data and AI organizations are built (or re-built) around ethics first and foremost, and with our principles clearly stated in a charter. Secondly, we need to raise the profile of data and AI ethics in a broader societal context, beyond the corporate world or academia. These are amongst the most critical issues of our time with the potential for profound social impact.

As builders of these systems, we are the best placed to lead and moderate discussions on the trade-off between aspiration and practicality when it comes to implementing well-behaved AIs.

Finally, I recommend that when – as will inevitably occur – we do not yet fully understand the ethical implications of a system design decision, we apply “The Data Golden Rule”: ensure that our customers’ data is treated and acted upon by AI systems in the same way we would want our own data to be handled.

While we may for some time debate the most appropriate ethical system for AIs, this golden rule, which calls us to be empathetic and to do unto others as we would have them do unto us, will serve us well in the realm of AI, just as it has across myriad human societies for millennia.

About the Author:

Ali Khan is the former SVP, Chief Data Officer at Experian Consumer Services, where he led the company’s consumer data platform and strategy. At Experian, he built and scaled data platforms and analytics products designed to promote financial health and inclusion for millions of consumers, leading a multidisciplinary team of analysts, architects, engineers, scientists, and product owners.

With over 25 years of experience in data management, primarily in financial services, Khan is a seasoned data leader known for turning data into enterprise-wide value. Prior to Experian, he served as Head of Data and Analytic Platforms at Verizon, where he developed the firm’s Customer Data Hub and a governance platform that supported compliance with the California Consumer Privacy Act (CCPA). His work enabled machine learning-driven insights and actions for more than 100 million customers.

Earlier in his career, Khan held senior data leadership roles at Scholastic and Bank of America. At Verizon, he was also a Squad Leader for the Women of the World (WOW) initiative, mentoring emerging female leaders in technology and data.
