AI News Bureau

How to Evaluate AI Solutions: Truist AI and Data Architect Explains the Value Stream Mapping Approach

Written by: CDO Magazine

Updated 11:30 AM UTC, April 15, 2026

As AI innovation accelerates, financial institutions are under growing pressure to evaluate, adopt, and scale new capabilities without compromising AI governance, risk, or regulatory expectations.

Truist Financial Corporation operates across retail banking, wealth management, and commercial services, serving millions of clients. In an environment defined by scale, regulatory complexity, and ongoing digital transformation, AI is now a business-critical priority, requiring a clear AI governance strategy.

But as the market floods with new AI solutions, a key question emerges: how do leaders determine what is truly valuable and worth investing in while staying aligned with governance standards?

In this conversation, Sanjay Sankolli, an AI and Data Architect at Truist, speaks with Karan Jain of NayaOne about how enterprises can evaluate rapidly evolving AI solutions without losing control.

This third part of the series builds on earlier discussions:

Part 1: Why AI initiatives stall and what it takes to operationalize AI in regulated environments

Part 2: Where AI is delivering measurable business impact today

Rethinking AI governance frameworks

One of the biggest misconceptions Sankolli calls out is how organizations approach governance during AI adoption. “The speed at which AI solutions are coming at us, we’ve never seen that before. Every vendor presents solutions with some amount of AI in them.”

In this environment, many organizations default to slowing things down in the name of control. Sankolli argues that instead of positioning governance as a checkpoint that delays progress, it should be embedded into the process from the very beginning.

This shift is foundational. It reflects a broader move toward embedding AI governance principles directly into how innovation is executed.

Not every AI solution deserves evaluation

With an overwhelming number of AI solutions entering the market, Sankolli warns against the instinct to evaluate everything: “There is no need to evaluate every AI solution that’s coming at you. You have to be very intentional.”

That intentionality comes from understanding the business at a deeper level. It requires having “a clear value stream mapped out” and understanding the friction points in the business.

Rather than chasing technology trends, Sankolli suggests anchoring the organizational AI strategy in value stream analysis. This allows organizations to focus only on solutions that directly address real business problems: “Look at solutions that deliver value in your value stream analysis, and begin pilots with the principle of governing early, not late.”

The organizational digital twin: A new standard for AI evaluation

One of the more advanced concepts Sankolli discusses is the idea of creating a digital twin of the organization for AI evaluation. This means building a representation of the enterprise environment, including data assets, regulatory constraints, and operational realities, to test AI solutions before deployment.

“That allows you to evaluate this solution ruthlessly against your environments,” Sankolli says.

This approach shifts evaluation from theoretical to practical. Instead of relying solely on vendor claims or isolated pilots, organizations can simulate real-world impact before committing.

Further, Sankolli draws a sharp distinction between treating AI as a “technology initiative versus an operating model.” Organizations that fail to make this shift risk layering AI onto outdated processes, limiting its impact.

“This innovation is not about just bringing technology in and continuing with your same operating model. It fundamentally questions how you operate your business.” This perspective reframes AI adoption as a business transformation initiative, not an IT project.

AI is ready only when everyone owns it

When it comes to deployment readiness, Sankolli offers a clear and practical definition: “AI is not considered ready when it works. It’s considered ready when everyone agrees they can own it.”

This includes alignment across:

  • Business teams
  • Technology teams
  • Risk and compliance teams

“If they can come out and say they own it, now you have a solution that’s going to truly deliver,” he adds. The implication is significant. Technical success alone is not enough. Organizational alignment is what determines whether AI scales.

The need for early alignment and pre-mortems

A recurring theme in the conversation is the importance of early engagement across stakeholders. Rather than treating business, technology, and risk as competing priorities, Sankolli suggests aligning them around shared outcomes from the start. He also introduces the idea of running pre-mortems instead of relying solely on pilots.

“Run a pre-mortem, not a pilot-mortem, and understand what can go wrong and construct strategies to address it.”

This approach acknowledges uncertainty instead of ignoring it: “You’re not trying to avoid risks; you want to be aware of those risks and manage them effectively.”

By proactively stress-testing failure scenarios, organizations can build confidence across teams.

Transparency as the foundation of trust

Sankolli mentions that another critical factor in building enterprise-wide confidence is transparency. When experimentation is visible and understandable across business, technology, and risk teams, it reduces resistance and uncertainty.

This transparency also enables better decision-making at scale. Additionally, Sankolli highlights a common gap in enterprise AI adoption: procurement decisions made without sufficient evidence.

In a fast-moving AI landscape, buying without evidence is increasingly risky. Instead, he advocates for “evidence-based decisioning,” supported by structured experimentation instead of just pilots.

This also opens the door to more innovative vendor relationships: “This can open the door to more outcome-oriented engagement models.” The goal is to align vendor incentives with real, measurable outcomes.

Institutional readiness for AI adoption

Ultimately, Sankolli frames AI adoption not as a one-time initiative, but as a capability organizations must continuously develop. “It’s extremely critical for organizations to build that institutional muscle and not treat this as a discrete project approval.”

This includes:

  • Value-driven experimentation
  • Embedded governance
  • Cross-functional alignment
  • Evidence-based decision-making

“You need to perfect that institutional muscle to conduct this value-based experimentation at the same speed at which the innovations are coming at you.”

Reference: Why Enterprise AI Adoption Is Slower Than the Technology

CDO Magazine appreciates Sanjay Sankolli for sharing his insights with our global community.
