AI News Bureau

AI and Data Architect at Truist on Why Successful AI Pilots Can Be a False Signal and What It Takes to Scale


Written by: CDO Magazine

Updated 1:13 PM UTC, April 2, 2026

Truist Financial Corporation, one of the largest financial institutions in the U.S., serves millions of customers across retail, commercial, and wealth segments. Operating at this scale means managing vast volumes of data while navigating strict regulatory requirements, complex legacy systems, and deeply embedded business processes.

Within this environment, Sanjay Sankolli, Chief Architect for AI and Data at Truist, works to turn fragmented data ecosystems into decision-ready intelligence. In this first part of a four-part conversation with Karan Jain, Founder and CEO of NayaOne, Sankolli examines why many AI initiatives stall after promising pilots and what it truly takes to operationalize AI in a regulated enterprise.

From isolated intelligence to decision-ready AI

Sankolli’s journey reflects a broader shift happening across the industry, from traditional data transformation to AI-led operating models.

“Organizations need to harness intelligence hidden in siloed data ecosystems, or those islands of automation,” he says.

He explains that enterprises have evolved from traditional BI to machine learning–powered analytics, and now to frontier models and agent-driven systems. But expectations have outpaced organizational readiness.

“It was a natural progression to move from data transformations into AI-led transformation, allowing organizations to move from isolated intelligence to just-in-time intelligence that AI can act on.”

The implication is clear: AI is no longer about insights alone. It is about embedding intelligence directly into decision-making workflows.

A key mistake: Treating AI as a technology project

One of the most consistent failure patterns Sankolli sees is how organizations frame AI initiatives: “Most experiments treat AI as a technology project, not an operating model change.”

This misalignment shows up in three critical ways:

  • Weak data foundations that cannot support production-grade AI
  • Disconnected pilots that fail to reflect real business environments
  • Underestimated regulatory and compliance complexity

As a result, early success in controlled environments rarely translates into enterprise value. “If the pilot ecosystems don’t represent operational realities, that’s a significant issue when you begin your journey from experiments to scale.”

The “Last Mile” problem: Where AI breaks down

While pilots often demonstrate strong performance, the transition to production introduces a completely different set of challenges.

Sankolli outlines a key insight: “Success requires everything around the model — infrastructure, governance, people, and process integrations.”

In practice, this is where most initiatives fail.

What changes from pilot to production?

  • Data reality emerges: Pilot environments rarely reflect the true state of enterprise data.
  • Regulatory scrutiny increases: AI systems face ambiguity and evolving expectations from regulators.
  • Integration complexity explodes: Connecting models into existing systems becomes significantly harder.
  • Ownership becomes unclear: Without defined accountability, execution stalls.
  • Change management is overlooked: Organizational adoption is often treated as an afterthought.

“It’s the state of data that actually derails successful AI solution rollouts.”

Vendor optimization vs. enterprise reality

A critical tension highlighted in the conversation is the mismatch between how vendors design solutions and how enterprises deploy them. “Vendors optimize their solutions for winning the pilot, but success from pilot to production requires everything around the model.”

Vendors typically operate within controlled environments designed to showcase performance. Enterprises, however, must deal with:

  • Legacy infrastructure
  • Regulatory constraints
  • Scale and latency requirements
  • Cross-functional dependencies

This gap creates friction the moment solutions move beyond experimentation.

Regulatory uncertainty is slowing progress

From the NayaOne perspective, Jain highlights a growing trend across financial institutions: “Firms are starting to over-index on regulatory uncertainty — where ‘no’ becomes a safer answer than ‘yes.’”

This defensive posture can stall innovation entirely. However, Jain points to a shift in how organizations are addressing this:

  • Bringing regulators into early-stage evaluations
  • Grounding discussions in actual risks rather than hypotheses
  • Improving shared understanding of where risks actually exist

This approach helps move governance from a blocker to an enabler.

The rise of multi-vendor evaluation models

Another notable shift Jain mentions is how enterprises evaluate AI solutions. Rather than relying on a single vendor, organizations are increasingly testing multiple vendors in parallel.

“Nine out of ten times, they pick a different one, because they’re evaluating multiple vendors against the same problem.”

This approach offers several advantages:

  • Reduces vendor bias
  • Improves comparative insights
  • Exposes different problem-solving approaches
  • Strengthens internal decision-making frameworks

It also shifts power away from vendors and toward enterprise-defined evaluation criteria.


CDO Magazine appreciates Sanjay Sankolli for sharing his insights with our global community.
