AI — Are You in an Ethically Justifiable Position?

It all started with weak signals around AI.  

For a while, there were plenty of people talking about AI, and it was an interesting conversation, but it hadn’t quite yet taken the world by storm. With the launch of ChatGPT on 30 November 2022, the signals got stronger.

Depending on who you spoke to, the new tech was fun to use, an incredible advancement, or the herald of some sort of apocalypse. Or, for the skeptics, it was simply the latest “bubble.”

Next, high-profile figures like Elon Musk and Rishi Sunak were weighing in. Sunak held the UK’s first AI Safety Summit at Bletchley Park in November 2023, followed by a rather eyebrow-raising interview with Elon Musk. (For a more detailed treatment of this, take a look at my last article: AI — Miracle Innovation or Catastrophic Threat?)

The European Parliament has also become involved, having agreed that it needs to institute laws around the use of AI. Now the EU AI Act, ‘the world’s first comprehensive AI law’, is on the horizon. Then the world’s largest regulator sat up and took notice…

Apple and Disney recently attempted to exclude a shareholder proposal calling on them to increase disclosure around their use of AI. And the U.S. Securities and Exchange Commission (SEC) ruled against it. In its letter to Disney, the SEC declared:

“The Proposal requests that the Company prepare a transparency report that explains the Company’s use of artificial intelligence in its business operations and the board’s role in overseeing its usage, and sets forth any ethical guidelines that the Company has adopted regarding its use of artificial intelligence.”

Apple and Disney had claimed they were within their rights to avoid shareholder votes on their use of AI because this related to “ordinary business operations.”

And the SEC disagreed:

“We are unable to concur in your view that the Company may exclude the Proposal under Rule 14a-8(i)(7). In our view, the Proposal transcends ordinary business matters and does not seek to micromanage the Company.”

There is an increasing recognition that activity around AI represents a material risk to companies, and therefore shareholders have a right to know about it.

In the words of Brandon Rees, Deputy Director of the AFL-CIO’s Office of Investment, Apple and Disney “haven’t even begun to grapple with these ethical issues” around AI.

And they’re not alone. Many organizations are behind the curve.

So AI has quickly turned from an interesting subject into a very specific and tangible issue. And if even giants like Apple and Disney aren’t above accountability around AI, what does this mean for other organizations?

It means that the legitimate and ethical use of AI is set to become the biggest issue of 2024 for businesses. 

It may be starting with the SEC, but it will spread to other regulators and governments. If you’re a listed company, your shareholders will soon start asking about AI. Private equity investors will start thinking the same way. It becomes a question of corporate governance. And, although not-for-profit and government organizations have no shareholders as such, the same governance question will extend to them.

The momentum is building. And accountability around AI is set to become pervasive across all organizations. 

AI = the new GDPR?  

Around AI, we’re witnessing the same snowball effect that we saw with the General Data Protection Regulation (GDPR)…

Some headline cases around data privacy, like those of the Facebook and Equifax data breaches, resulted in the European Parliament’s recognition that a unified approach to data privacy was needed. Quite rapidly, the existing data privacy acts around the world were deemed insufficient. And on 25 May 2018, the GDPR officially came into effect.  

The fines that firms stood to incur as a result of this regulation represented an alarming percentage of their turnover. And many firms have fallen foul of these fines. Best practices around AI will almost certainly be codified by EU law in a similar way. And fines will follow…

If you thought GDPR was a challenge, this is on a similar scale. This is going to be an issue for CEOs, Boards, non-execs, shareholders; in short, anybody who has a vested interest in the governance of an organization.

Let’s not forget investors. To investors, big risk could mean big losses, so they’re going to want proof that organizations have got it right. 

Accumulating AI risk debt and AI ethical debt  

The earlier you start addressing your activities around AI, the more you can reduce your risk and cut the cost of remediation. Because the more algorithms you build now, the more you’ll have to check (and potentially rectify) in a year’s time. This is especially alarming given the accelerating rate of AI development.  

So it’s imperative that you start properly managing your AI activities sooner rather than later, to prevent your technical debt from growing. By putting the problem off, you’re accumulating what I call AI risk debt and AI ethical debt.  

The good news is that you can start to address this issue right away, with “no regrets” work carried out in two steps:

  • First, remediate. Investigate and remediate anything around AI that’s been done to date.  

  • Second, institute what I call ‘AI Ethics by Design’.

This is all about preventing the problem from getting worse and making sure you’re doing the right thing going forward.  

You need to get a framework in place to prepare for this. 

Action 1: Know and keep track of everyone in your organization who is developing AI.

Action 2: Start making sure you have governance in place around their activities.

Action 3: Establish your current AI catalog. You need to know what you’ve got, what you’ve developed, and what it does.
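To make Action 3 concrete, here is a minimal sketch of what an AI catalog could look like in practice. All names here (such as `AISystemRecord` and `AICatalog`) are illustrative assumptions, not a prescribed standard; the point is simply that every AI system gets a named owner, a stated purpose, and a governance-review flag.

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """One entry in the AI catalog: what it is, who owns it, what it does."""
    name: str
    owner: str                 # team or person accountable (Action 1)
    purpose: str               # what the system does (Action 3)
    data_sources: list         # which data it draws on
    risk_reviewed: bool = False  # governance review completed? (Action 2)


class AICatalog:
    """A minimal register of the AI systems in an organization."""

    def __init__(self):
        self._records = {}

    def register(self, record: AISystemRecord) -> None:
        self._records[record.name] = record

    def unreviewed(self) -> list:
        """Names of systems still awaiting a governance/risk review."""
        return [r.name for r in self._records.values() if not r.risk_reviewed]


catalog = AICatalog()
catalog.register(AISystemRecord(
    name="churn-predictor",
    owner="customer-analytics",
    purpose="Predicts customer churn to target retention offers",
    data_sources=["crm", "billing"],
))
print(catalog.unreviewed())  # -> ['churn-predictor']
```

Even a register this simple answers the three questions above, and the `unreviewed()` list gives governance teams an immediate remediation backlog.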

Irrespective of the new regulatory spotlight on AI, these are things you should be doing anyway. If you’re not, you need to be.

This issue is too complex to convey completely in the space of a single article. But my team and I have seen it all before with GDPR. Many people believed there was a technical solution that would do it all; what they got instead was a lot of very expensive, unjustified investment in the area.

Technically speaking, there’s no magic answer to AI. But there is a process you can undertake, which is rooted in proper management, proper oversight, and prevention of AI ethical debt accumulation.

In addressing your activities around AI, you need to consider how to bring your data governance team together with your legal, compliance, and risk teams.

It is advisable to introduce a role into your organization that carries responsibility for, and ownership of, AI ethics. In the same way that many organizations have a Data Privacy Officer, you’re going to need an AI Ethics Officer.

Are you in an ethically justifiable position?  

The ethical implications of AI aren’t simple. So most of your organization’s activity around AI won’t be an objective case of right and wrong answers.  

Nevertheless, it is critical that you have what I call an “ethically justifiable position” on AI. You need to be in a position that is pragmatic and defensible. There needs to be ethical justification for your use of artificial intelligence, which data you use, your consideration of bias, and your deployment of algorithms for decision-making. You need to have investigated and managed the risk, and openly communicated it to shareholders.

References

  • European Parliament, ‘EU AI Act: first regulation on artificial intelligence’, News: European Parliament (2023)
  • Financial Conduct Authority (FCA), ‘Financial watchdog fines Equifax Ltd £11 million for role in one of the largest cyber security breaches in history’, FCA: Press Releases (2023)
  • Heiligenstein, Michael X., ‘Facebook Data Breaches: Full Timeline Through 2023’, Firewall Times (2023)
  • Kerber, Ross, ‘US regulator denies Apple, Disney bids to skip votes on AI’ (2024)
  • Kleinman, Zoe, and Sean Seddon, ‘Elon Musk tells Rishi Sunak AI will put an end to work’, BBC News (2023)
  • Kundaliya, Dev, ‘ICO fines more than tripled this year’, Computing (2022)
  • Rossow, Andrew, ‘The Birth Of GDPR: What Is It And What You Need To Know’, Forbes (2018)
  • Rule 14a-8 Review Team, United States Securities and Exchange Commission, ‘Re: The Walt Disney Company (the “Company”) | Incoming letter dated November 22, 2023’, January 3, 2024
  • Verney, Paul, ‘SEC denies Apple and Disney’s bid to exclude pioneering AI proposal’, Responsible Investor (2024)

About the Author:

Simon Asplen-Taylor is the Founder and CEO of the Data, Analytics, and AI advisory firm DataTick. He is also a bestselling author, having written the Amazon #1 seller Data and Analytics Strategy for Business, which a number of firms are using to guide their data capabilities.

Asplen-Taylor has been named ‘Most Influential London CEO in Data’ and ‘Europe’s Most Influential Data Leader’. He has led some of the largest data-driven transformations in Europe and served as CDO for several FTSE firms.

For over 30 years he has led the data capabilities at organizations such as Bupa, IBM, UBS, and Bank of America Merrill Lynch. Most recently, he led the data transformation of the Lloyd’s of London insurance market.

Asplen-Taylor has a respected record of transforming businesses using Data, Analytics, and AI while delivering significant upside. He also specializes in advising and coaching Executives, Boards, and Data Leaders.

CDO Magazine