Data Management
A Deloitte interview with USAWHC CDO Deepak Shah.
Written by: CDO Magazine
Updated 11:35 AM UTC, May 12, 2026
The U.S. Army Western Hemisphere Command operates in an environment where data strategy must support far more than analysis. It must enable disciplined execution, defensible decisions, and trusted coordination across complex teams and responsibilities. In high-accountability settings, strong data foundations often separate isolated success from scalable enterprise capability.
That reality sits at the center of this second installment of a two-part interview series. Deepak Shah, Chief Data Officer at the U.S. Army Western Hemisphere Command, speaks with Adita Karkera, Chief Data Officer for Deloitte’s Government and Public Services, about what it takes to make data and AI delivery repeatable at scale. Their discussion explores why sustainable execution depends on frameworks, playbooks, and runbooks. It also examines why trustworthy AI relies less on hype and more on disciplined data foundations, governance, traceability, and control.
Part 1 of the conversation focused on how meaningful transformation begins with a clear vision, a mission that turns ambition into action, and goals that demonstrate measurable value over time. This second part shifts from strategy to execution. It examines how organizations institutionalize delivery, operationalize trust, and build systems that can scale consistently across the enterprise.
For Shah, one of the clearest tests of leadership is whether strategy can be repeated beyond a single successful effort. He describes this as an area he cares deeply about, because organizations often confuse isolated wins with scalable execution. In his view, the real work of leadership lies in codifying success so it can travel across teams and persist beyond individual effort.
This, he says, is the point where strategy either becomes institutionalized or dissolves into one-off heroics. If delivery depends on a few highly capable individuals improvising their way through challenges, the organization has not truly built a scalable model. What makes delivery repeatable is the discipline of converting proven approaches into frameworks that can be applied consistently across streams and across the enterprise.
Shah breaks this model into three distinct layers. Frameworks establish the operating structure. They define how work enters the system, how it is governed, and how success is measured. Just as importantly, they clarify accountability across what he describes as the first and second lines of defense, embedding ownership with both business and risk.
“Frameworks establish the operating structure. They define how work enters, how work is governed, and how success is measured.”
Playbooks serve a different but complementary function. They capture best practices and convert them into reusable patterns so that teams do not have to keep starting from zero. In Shah’s formulation, one team’s success should not remain local knowledge. It should become enterprise leverage.
Runbooks bring the final level of operational discipline. They provide step-by-step execution guidance that makes work consistent, auditable, transferable, and less dependent on individual expertise. Shah sees them as essential to making sure critical tasks can be carried out reliably by trained operators, rather than relying on the instincts of a handful of specialists.
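The runbook qualities Shah describes, step-by-step execution that is consistent, auditable, and transferable, can be sketched in code. The step names, owners, and the "nightly refresh" scenario below are illustrative assumptions, not artifacts from Shah's organization:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class RunbookStep:
    """One step of a runbook: named, owned, and executable by any trained operator."""
    name: str
    owner: str                  # accountable role, not an individual
    action: Callable[[], bool]  # returns True on success

@dataclass
class Runbook:
    title: str
    steps: List[RunbookStep]
    audit_log: List[str] = field(default_factory=list)

    def execute(self) -> bool:
        """Run steps in order, recording an auditable trail; stop on first failure."""
        for step in self.steps:
            ok = step.action()
            status = "OK" if ok else "FAILED"
            self.audit_log.append(f"{step.name} (owner: {step.owner}): {status}")
            if not ok:
                return False
        return True

# Hypothetical example: a two-step data-refresh runbook
refresh = Runbook(
    title="Nightly reference-data refresh",
    steps=[
        RunbookStep("validate-source-feed", "Data Engineering", lambda: True),
        RunbookStep("publish-to-warehouse", "Data Engineering", lambda: True),
    ],
)
refresh.execute()
print(refresh.audit_log)
```

The point of the structure is the audit trail: because each step carries a name and an accountable owner, execution no longer depends on any one specialist's instincts.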
“Frameworks, playbooks, and runbooks make outcomes predictable, portable, and scalable.”
A key theme in Shah’s thinking is that enterprise delivery should not depend on exceptional improvisation. He repeatedly contrasts disciplined execution with heroics, suggesting that organizations often celebrate the latter when they should be designing for the former.
For him, frameworks, playbooks, and runbooks do more than document processes. They embed accountability and make execution auditable across the enterprise. That is what allows delivery to move away from ad hoc effort and into a model that can be trusted, repeated, and scaled.
“That’s how execution frameworks, playbooks, and runbooks transform delivery from one of heroics into repeatable, scalable enterprise success.”
This distinction also reveals how Shah defines maturity. An organization is not mature because it can solve a hard problem once. It is mature because it can solve it again, in a controlled way, across teams, and without depending on institutional memory that sits only in a few people.
When the conversation turns to AI, Shah is unequivocal: the trust question is fundamentally a data question. In his view, strong data foundations are what make AI trustworthy in practice rather than merely promising in theory.
“Trust in AI ultimately comes down to data.”
He organizes this idea around three practical anchors. The first is visibility. Organizations cannot trust data they cannot see, which makes metadata, tagging, classification, and labeling essential. Those mechanisms tell leaders what data exists, where it resides, and how it can be used. Shah notes that many organizations invest heavily in AI platforms before they have even built a basic inventory of their data, a gap that undermines trust before any model meaningfully scales.
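The visibility anchor can be made concrete with a minimal data-inventory record carrying the metadata Shah lists. The dataset names, locations, and fields below are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CatalogEntry:
    """Minimal inventory record: what data exists, where it resides, how it may be used."""
    dataset: str
    location: str
    classification: str     # e.g., "public", "internal", "restricted"
    tags: List[str]
    approved_uses: List[str]

# Hypothetical inventory with two entries
catalog = [
    CatalogEntry("personnel_roster", "s3://hq-data/rosters/", "restricted",
                 ["hr", "pii"], ["staffing-analysis"]),
    CatalogEntry("logistics_shipments", "s3://hq-data/logistics/", "internal",
                 ["supply-chain"], ["forecasting", "reporting"]),
]

# Leaders can answer "what restricted data do we hold?" only if this inventory exists.
restricted = [e.dataset for e in catalog if e.classification == "restricted"]
print(restricted)  # → ['personnel_roster']
```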
The second anchor is quality and consistency. Shah argues that this is where many initiatives begin to break down. Without curation, standardization, reference alignment, and continuous monitoring, models ingest fragmented data and return inconsistent outputs. That, he suggests, is one of the main reasons AI can appear powerful in a lab setting but disappoint in production.
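The continuous-monitoring side of this anchor can be sketched as simple metrics over incoming records. The field name, reference list, and sample values here are illustrative assumptions:

```python
from typing import Dict, List, Optional

def quality_checks(records: List[Dict[str, Optional[str]]]) -> Dict[str, float]:
    """Toy monitoring metrics: completeness and standardization of one field."""
    total = len(records)
    # Completeness: the field is present and non-empty
    complete = sum(1 for r in records if r.get("unit_code"))
    # Standardization: values must match an agreed reference list (hypothetical)
    reference = {"1AD", "3ID", "XVIII"}
    standardized = sum(1 for r in records if r.get("unit_code") in reference)
    return {
        "completeness": complete / total,
        "standardization": standardized / total,
    }

sample = [
    {"unit_code": "1AD"},
    {"unit_code": "1st Armored"},  # non-standard spelling: flagged by the check
    {"unit_code": None},           # missing value: flagged by both checks
]
metrics = quality_checks(sample)
print(metrics)
```

Fragmented inputs like the non-standard spelling above are exactly what cause a model to return inconsistent outputs in production; checks of this kind surface the problem before the data is ingested.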
The third anchor is accountability and traceability. For Shah, trust requires defensibility. That means lineage, ownership, and clear controls that can answer critical questions about where the data came from and who is accountable for it. In regulated environments, he makes clear, this is not optional.
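Lineage of the kind Shah describes can be sketched as a graph that walks upstream from any dataset to its sources and owners. The dataset and owner names below are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LineageNode:
    """A dataset with its accountable owner and upstream sources."""
    name: str
    owner: str
    sources: List["LineageNode"]

def trace(node: LineageNode, depth: int = 0) -> List[str]:
    """Walk upstream so 'where did this come from, and who owns it?' has an answer."""
    trail = [f"{'  ' * depth}{node.name} (owner: {node.owner})"]
    for src in node.sources:
        trail.extend(trace(src, depth + 1))
    return trail

# Hypothetical three-stage pipeline: raw feed -> curated dataset -> dashboard
raw = LineageNode("raw_sensor_feed", "Field Ops", [])
curated = LineageNode("curated_readiness", "Data Engineering", [raw])
report = LineageNode("readiness_dashboard", "Analytics", [curated])

for line in trace(report):
    print(line)
```

A record like this is what makes an AI output defensible after the fact: every figure on the dashboard traces back to a named source and a named owner.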
“Trust requires defensibility, lineage, ownership, and clear controls.”
He points out that some modern data platforms increasingly embed lineage capabilities into their ecosystems, which can materially improve traceability. But he is careful not to let tooling take center stage.
One of Shah’s most consistent messages is that trust is something engineered through discipline. Governance, controls, frameworks, playbooks, and runbooks all matter because they create the conditions under which AI can remain explainable, measurable, and accountable at enterprise scale.
“But trust is not created by tooling alone. It has to be engineered through a strong data foundation; governance and control complete that foundation.”
He explicitly rejects the idea that this amounts to bureaucracy. In his framing, this discipline is not administrative drag. It is the protective structure that allows an enterprise to scale AI safely. The absence of such structure does not create agility; it creates fragility.
Shah ties these ideas to what he describes as his 15 foundational pillars, emphasizing that they are not theoretical constructs but operational systems through which trust is built.
In his closing comments, Shah returns to a theme that runs across both parts of the conversation:
“AI success is not about using the next model, the next LLM, or the latest AI platform. It is about building the data foundations, governance, and execution discipline that create trust and enable scale over time.”
That statement serves as the clearest summary of his philosophy. Shah encourages organizations to invest deeply in secure, resilient, trusted data foundations, and to align those foundations around real use cases and repeatable execution models. In his view, that is what allows enterprises not only to adopt AI faster, but to do so “safely, defensibly, and sustainably.”
CDO Magazine appreciates Deepak Shah for sharing his insights with our global community.