Branded Content
Written by: Barr Moses | CEO & Co-founder, Monte Carlo, Oren Yunger | Managing Partner, Notable Capital
Updated 2:00 PM UTC, April 13, 2026

Between the two of us, we talk to dozens of Chief Data Officers (CDOs) every week. And lately, we keep hearing the same uncomfortable question: “What exactly is my job now?”
It’s a fair question. And it deserves a serious answer.
The CDO role was purpose-built for a specific moment in enterprise history. The chief data officer responsibilities that defined the role five years ago (building the tech stack, centralizing governance, etc.) are no longer enough. That moment may be over, but the role doesn’t have to end with it, as long as CDOs are willing to make a genuinely hard pivot.
This piece is our attempt to explain what happened, where things stand, and what the CDOs who are winning are doing differently.
To understand what’s happening to CDOs today, you have to understand the job they were originally hired to do.
In the mid-2010s, enterprises were sitting on enormous amounts of data they couldn’t use. The problem wasn’t a lack of insight; it was infrastructure. Data lived in silos, in legacy systems, in formats no one could query. The boardroom wanted analytics. The business wanted dashboards. And someone needed to build the stack that made all of it possible.
The classic CDO job description centered on a few things: modernize the infrastructure by moving to the cloud and adopting Snowflake, dbt, Fivetran, and other modern data stack tools; build or grow the data team; centralize governance; and deliver trusted data to the lines of business. This is an oversimplification, of course, but countless leaders adopted some version of this playbook.
It was a huge, real, important job. And many CDOs did it exceptionally well.
Then ChatGPT launched.
Generative AI didn’t arrive through the CDO. It arrived bottom up through Engineering and top down through the boardroom. Executives demanded experimentation. Engineers started building. AI-native workflows began proliferating across the organization, largely bypassing the data infrastructure CDOs had spent years building.
This is the painful irony. The modern data stack, the very thing CDOs championed, is now being bypassed. LLMs don’t always run through your Snowflake warehouse. Agents don’t always respect your dbt pipelines. The analytics layer CDOs optimized for is no longer where the action is. The infrastructure AI actually runs on is owned by engineering teams, not data teams: vector databases, model registries, inference endpoints, agent frameworks. CDOs weren’t invited to design it, and most aren’t monitoring it.
We set out to measure just how far this shift has gone. The Monte Carlo 2026 State of AI Reliability report surveyed over 865 data and AI leaders: only 13.4% say AI development is primarily owned by the data team. The plurality says ownership is distributed across different teams or sits with product and engineering, with data as a shared service. In other words, the CDO’s organization is a vendor to the AI builders, not the lead.

And the infrastructure gap is real. Per the same report, 62.2% of leaders say their observability approaches for data and AI systems are either separate with limited integration, largely unmonitored, or nonexistent entirely. Only 18.9% have a unified observability approach across both. The gap is a leadership vacuum, not a tooling problem. Without unified data and AI observability, organizations have no reliable way to monitor AI outputs, measure model performance, or catch failures before they compound, leaving executives making high-stakes AI decisions without the data to back them up.

It’s a theme that comes up again and again in our conversations with CDOs:
“We’ve built something genuinely good. Clean data, solid governance, trusted analytics. And somehow, the AI conversations are happening everywhere, just not with us leading them.”
This isn’t the first time a technology leadership role has been disrupted by the very forces it helped create.
A few years ago, the Chief Information Officer (CIO) was the undisputed captain of the technology ship. As the top executive deciding which technologies and platforms to deploy company-wide, the CIO held one of the most influential functions in the organization. Then four things happened, more or less simultaneously: developers took over infrastructure, business users deployed their own SaaS (what was first called shadow IT), security broke away into its own C-suite function, and a new leader emerged to own the data layer. That new leader was the CDO.
The CIO didn’t disappear. But their sphere of influence shrank considerably. The role, through some turmoil, evolved from “chief innovation officer” to “chief tech buyer” and more recently to “chief tech enabler”: a coach working across the organization rather than a commander issuing mandates from the top.
The CDO is now facing a structurally similar moment. AI is doing to the data layer what cloud and SaaS did to the IT layer. Will CDOs adapt the way the best CIOs did? Or will they find themselves running the equivalent of the IT helpdesk while the real action happens elsewhere? CDOs who make that shift, from mandate-issuer to trusted enabler, will survive it and even thrive. Those who don’t will replay the CIO playbook to its worst ending.
Over the past several months, we’ve been systematically reaching out to data leaders to understand where they stand. We’ve looked across a meaningful sample (100+) of CDOs, and what we found breaks into three distinct stories, not one.
The first group of CDOs is doing the job they were hired to do in 2018. They’re governing the data catalog. They’re managing pipeline health. They’re running Quarterly Business Reviews (QBRs) on data quality metrics. This by itself is important work.
But they’re not in the room where AI decisions are being made. Some have explicitly framed AI governance as a risk to manage, not an opportunity to lead. They’re largely disengaged from the startup and vendor ecosystem where new AI capabilities are being built.
The research reflects this posture. Among data leaders surveyed, only about 31% say they have a clear, documented definition of AI effectiveness. The majority are operating on informal definitions or none at all. When there is no definition of what AI effectiveness means, it’s hard to own it.
But that’s not the whole story. Because every time an AI system produces an output, someone has to answer for whether it was right. And right now, almost no one is.
The second group of CDOs is aware of the shift. They’re experimenting. They’ve stood up a few AI pilots, they’re starting to think about agent governance, and they’re having conversations with their engineering counterparts.
But they’re largely playing catch-up. Their instinct is to fit AI into the frameworks they already have: govern agents like they govern dashboards, evaluate LLMs like they evaluate data vendors. It’s not wrong exactly, but it’s not fast enough.
The data tells this story clearly. Nearly 75% of organizations surveyed say data quality issues have had a moderate to significant impact on AI-driven business outcomes. Yet when something goes wrong, 27% say it takes days, longer than is acceptable, to identify the root cause. That gap, between the stakes and the response time, is exactly where CDOs should be building. Most haven’t yet.
This is the group with the most to gain from moving quickly, and the most to lose from moving slowly. The question is whether awareness is enough to create urgency.
The third group of CDOs has moved with the current. They’re building things. They’re developing real AI engineering fluency. They’ve expanded their roles from governing data to deploying agents into production and owning the outcomes of AI transformations. They’ve transformed their teams to be AI-native.
In most of their organizations, they’re the ones setting the pace, not responding to it. This group is small. But they’re the template. So what exactly are they doing differently?
The CDOs who move fast have a genuine advantage. They understand the data layer more deeply than any engineer, and that understanding is exactly what’s needed to make AI agents reliable in production. Here’s what the leaders are doing:
The organizations that are deploying AI agents into production (customer-facing or corporate environment) are quickly realizing that the data layer is the hard part. That the reliability of what goes in determines the reliability of what comes out. That governance, observability, and accountability can’t be bolted on after the fact.
The research makes the stakes concrete. 61% of data leaders report that their monitoring appeared normal while a critical data issue was actually occurring, yet only 25% have automated monitoring or guardrails in place to catch it. Those two facts sitting next to each other tell the whole story: organizations know their data can silently break AI outputs, but most haven’t built the systems to detect it.
The infrastructure CDOs built over the last decade is not a liability, it’s the foundation that enterprise AI is going to run on. The CDOs who recognize that, and move from maintaining that foundation to extending it into the AI layer, won’t just survive the shift. They’ll define what the role means next.
Monte Carlo works with hundreds of data leaders navigating this shift, where AI is moving fast and the question of who’s accountable for the output is still being answered.
About the Authors:
Barr Moses is CEO & Co-founder of Monte Carlo, the leading data + AI observability platform. Under Barr’s leadership, Monte Carlo has pioneered the data + AI observability category, partnering with the world’s most data-driven organizations to reduce data + AI downtime and improve reliability at scale. The company is backed by leading Silicon Valley investors, including Notable Capital, Accel, ICONIQ Growth, and Redpoint Ventures. She has been named a Top 25 Data Management & Analytics Executive of 2025, a 2024 top Woman in AI by VentureBeat, and a 2023 Datanami Person to Watch.
Oren Yunger is a Managing Partner at Notable Capital, where he focuses on investments in cloud infrastructure, AI, data, cybersecurity and developer solutions. Before joining Notable, Yunger spent over 12 years in the Israeli startup ecosystem, serving as an executive at a public financial institution and at a SaaS startup. He co-founded two prominent investing communities for leading operators at high-growth technology companies: SVCI, a syndicate of Chief Information Security Officers, and InvestInData (IID), an angel collective for data executives.