AI News Bureau
Written by: CDO Magazine
Updated 12:00 PM UTC, April 22, 2026
As enterprises move beyond AI experimentation, a clear divide is emerging between organizations that are scaling AI and those that are stuck in cycles of pilots and fragmented adoption. The difference is not technical capability. It is AI governance and organizational readiness.
For large, regulated financial institutions, this shift is already underway. In this final part of a four-part series, Sanjay Sankolli, an AI and data architect at Truist, speaks with Karan Jain of NayaOne about the operating model shifts required to move from isolated success to enterprise-wide scale.
For Sankolli, scaling AI is not primarily a technology challenge. It is an organizational one. “AI scales when innovation and governance stop being opposing forces. They start operating as one team,” he states.
He outlines three critical shifts that organizations must make.
First, governance needs a single home. “Organizations benefit from a unified AI steering function that brings together your business, your technology, your risk and compliance functions, and your security, where each has a seat at the table and decision-making authority.”
Second, risk and compliance must move from being a late-stage checkpoint to an embedded capability. “They can’t be an afterthought and have to be embedded. Look at risk and compliance as a guardrail, not as the gating criteria.”
Third, and perhaps most difficult, is aligning incentives across teams that traditionally operate with competing priorities.
“Development teams are incentivized by velocity, business teams by value, and risk and compliance teams by regulatory expectations. But these don’t necessarily have to be conflicting priorities.”
Instead, Sankolli argues for shared ownership of outcomes across all functions: “Risk teams have to equally own the velocity and also own the outcomes that business is trying to achieve.”
A recurring challenge raised in the discussion is one many organizations recognize but struggle to quantify: decision latency. Years of layered processes, disconnected workflows, and siloed accountability have created environments where decisions move slowly, often delaying innovation.
Sankolli acknowledges the issue but offers a nuanced perspective: “Some of the decision latency is self-regulating. It allows you to catch some of those risks.”
In traditional models, slower decision-making can act as a natural safeguard. But in an AI-driven environment, this becomes unsustainable.
“The fact that you need to now reduce the decision latency does mandate seeing governance as a guardrail and embedding it from day one.”
The shift is clear. Governance cannot remain external to execution. It must move alongside it.
As organizations rush to deploy AI, Sankolli challenges a common assumption: that success is defined by the volume of AI initiatives. “The institutions that win won’t be the ones with the most AI. It will be the ones whose data, people, and decisions are ready for AI.”
This reframes the conversation entirely. Metrics like the number of deployed models or agents become less relevant than the organization’s ability to integrate AI into how work actually gets done. At the center of this shift is data.
“Data is a foundational capability, not a project. It’s an enterprise asset that needs to be treated as a foundational capability,” Sankolli explains.
He also points to a critical evolution in how intelligence is delivered across the enterprise: “You need to move away from isolated intelligence to intelligence on demand.”
In practical terms, this means enabling real-time, trusted, and actionable decision-making. “Can you get the decision just in time, in a state that you could trust and act on it?”
This is what separates AI as a tool from AI as an operating capability.
Transitioning to AI as an operating capability requires more than technical investment. It demands enterprise-wide alignment. “The organizational commitment has to be there. It has to be a lot bigger than a technical ambition,” Sankolli says.
This commitment must span every layer of the organization: “Your board, senior leadership, business teams, technical teams, and risk and compliance teams all need to be aligned to that common outcome.”
And that outcome is clear: embedding AI into how the organization operates, not treating it as a separate initiative.
For CDOs and technology leaders navigating rapid innovation, vendor overload, and evolving regulation, Sankolli offers pragmatic guidance.
“Organizations increasingly feel pressure to move fast. But make sure your guardrails move with you.”
This requires a layered approach to governance, starting small and evolving based on risk. “Start with a thin governance layer, and then add layers, based on the risk profile.”
On data, his guidance is equally direct: “Make your data foundation the first investment, not an afterthought.”
Ownership is critical here, particularly in complex technology ecosystems that often include external vendors. “The organization has to own the data pipelines, the governance frameworks, and the model monitoring infrastructure to drive trust.”
Without this ownership, reliance on opaque, black-box solutions can introduce uncertainty.
Finally, Sankolli emphasizes trust as the ultimate enabler of speed. “Build trust through transparency, not perfection. Engage everyone early and align their outcomes, and that allows you to move fast.”
“That’s when you can harness significant value out of AI,” Sankolli concludes.
CDO Magazine appreciates Sanjay Sankolli for sharing his insights with our global community.