Verizon CDO Kalyani Sekar explains how the company is modernizing its data stack and aligning teams to deliver trusted AI at scale.
Written by: CDO Magazine Bureau
Updated 12:00 PM UTC, Wed June 25, 2025
Verizon, one of the world’s largest telecommunications companies, serves over 143 million wireless subscribers in the U.S. alone and is a global leader in 5G innovation, network reliability, and enterprise connectivity. As data becomes foundational to its digital transformation, Verizon continues to modernize its infrastructure and strategy to meet the scale and complexity of enterprise AI.
In this third and final installment of the series, Kalyani Sekar, Chief Data Officer at Verizon, joins Yali Sassoon, Co-founder and CTO of Snowplow, to discuss how Verizon is evolving its data and technology stack to support next-generation AI initiatives.
While part 1 focused on embedding data quality, governance, and observability at scale, and part 2 explored how operational data is helping Verizon shift from reactive to proactive decision-making, this conversation dives into the architectural and governance challenges of enabling enterprise-wide AI. Sekar explains why scaling AI is not just about compute power or models but about managing multimodal data, automating pipelines, aligning governance across departments, and ensuring consistent data quality from source to consumption.
Edited Excerpts
Q: You’ve done significant work to scale Verizon’s infrastructure and prepare for AI. Beyond infrastructure, what changes have you made to the technology stack to support your future data and analytics goals?
We’re moving away from siloed platforms that serve individual business units toward a scalable, distributed data ecosystem that supports the entire enterprise. This new architecture enables real-time cross-functional data collaboration, and we’re modernizing to meet the demands of compute, storage, and scale, especially with the high volume, velocity, and variety of data we handle.
As AI evolves, so does the nature of data. We’re no longer just dealing with structured datasets. With deep learning, generative AI, and agentic AI, we now manage multimodal data – images, videos, PDFs, text, and more. Our platforms must store, process, and serve these complex data types with sub-second latency. Additionally, with the rise of knowledge graphs and vector-based data, we need infrastructure that supports advanced formats beyond just numbers and characters.
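To make the vector-based data point concrete: serving embeddings means ranking stored items by similarity to a query vector. The sketch below is a minimal, illustrative version of that idea; the item names, dimensions, and values are invented for the example and do not reflect Verizon's actual stack or any specific vector database.

```python
import math

# Illustrative in-memory "vector store" for multimodal content.
# Item names and 4-dimensional embeddings are made up for this sketch;
# a production system would use a dedicated vector database.
EMBEDDINGS = {
    "invoice.pdf":  [0.9, 0.1, 0.0, 0.2],
    "tower_photo":  [0.1, 0.8, 0.3, 0.0],
    "support_call": [0.2, 0.1, 0.9, 0.1],
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(query, k=2):
    """Rank stored items by cosine similarity to the query embedding."""
    ranked = sorted(EMBEDDINGS,
                    key=lambda name: cosine(query, EMBEDDINGS[name]),
                    reverse=True)
    return ranked[:k]

print(nearest([0.85, 0.15, 0.05, 0.1]))  # most similar item first
```

The same ranking pattern underlies retrieval over images, documents, or call transcripts once each has been embedded into a common vector space.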
Scalability and reliability become critical in this environment. That’s why we’ve industrialized our practices, automating data pipelines to manage unstructured and multimodal data efficiently. We’ve built processing frameworks that integrate continuous data quality monitoring, observability, and lineage management.
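A pipeline quality gate of the kind described above can be sketched in a few lines: validate incoming records against rules, attach lineage metadata to what passes, and quarantine what fails. The field names, rules, and source label below are hypothetical, chosen only to illustrate the pattern of combining quality checks with lineage capture.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical quality rule; real pipelines would load rules from config
# and cover type, range, and referential checks as well.
REQUIRED_FIELDS = {"subscriber_id", "event_type", "timestamp"}

@dataclass
class QualityReport:
    source: str
    checked_at: str
    passed: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

def run_quality_gate(records, source):
    """Split records into passed/rejected and stamp lineage metadata."""
    report = QualityReport(source=source,
                           checked_at=datetime.now(timezone.utc).isoformat())
    for rec in records:
        if REQUIRED_FIELDS <= rec.keys():
            # Record where the data came from and when it was checked.
            rec["_lineage"] = {"source": source, "checked_at": report.checked_at}
            report.passed.append(rec)
        else:
            report.rejected.append(rec)  # quarantined for investigation
    return report

report = run_quality_gate(
    [{"subscriber_id": 1, "event_type": "call", "timestamp": "2025-06-25T12:00:00Z"},
     {"subscriber_id": 2, "event_type": "sms"}],  # missing timestamp
    source="billing_feed",
)
print(len(report.passed), len(report.rejected))  # 1 1
```

Emitting the report as a side output is what makes the pipeline observable: quality metrics can be monitored over time rather than discovered downstream.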
Most importantly, we’re creating business-ready data products that are reusable across the enterprise. These products provide a single version of truth, which is critical for ensuring consistency, trust, and value at scale.
Q: Looking ahead, what are the primary challenges you anticipate in ensuring robust data governance and quality as you scale AI and analytics initiatives?
As a large enterprise, governance at Verizon varies across departments. Each team often has its own governance frameworks, so creating a unified enterprise-wide approach requires deep understanding, careful interpretation, and cross-team collaboration. It’s a balancing act between central consistency and local flexibility.
Being a regulated organization, we also face the complexity of translating new federal, state, and international regulations into actionable data policies. Misalignment or inconsistent governance practices can easily emerge if we’re not vigilant.
From a data quality standpoint, tracing lineage from source to consumption is always challenging, not due to poor practices, but because of the inherent complexity in how data flows across systems. Quality definitions can vary drastically between source systems and consumption layers, creating a need for tight coordination across teams.
Another persistent challenge is keeping data in sync across different reporting environments. When environments drift apart, the result is misinterpretation, confusion, and time-consuming efforts to explain discrepancies. As the data engineering team at the center, we’re constantly navigating these complexities to keep the ecosystem reliable and trusted.
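Catching the drift Sekar describes usually comes down to automated reconciliation: comparing the same metrics across environments and flagging anything that differs beyond a tolerance. The metric names and numbers below are invented for illustration.

```python
# Hypothetical metric totals from two reporting environments.
warehouse_totals = {"active_subs": 143_000_000, "churned": 120_000}
dashboard_totals = {"active_subs": 143_000_000, "churned": 119_500}

def find_discrepancies(a, b, tolerance=0.0):
    """Return metrics whose values differ beyond the relative tolerance."""
    diffs = {}
    for metric in a.keys() & b.keys():  # compare only shared metrics
        if abs(a[metric] - b[metric]) > tolerance * max(abs(a[metric]), 1):
            diffs[metric] = (a[metric], b[metric])
    return diffs

print(find_discrepancies(warehouse_totals, dashboard_totals))
# flags the "churned" mismatch; "active_subs" agrees and is not reported
```

Running such a check on a schedule turns "explaining discrepancies" from a reactive scramble into a routine alert.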
CDO Magazine appreciates Kalyani Sekar for sharing her insights with our global community.