AI News Bureau
Written by: CDO Magazine Bureau
Updated 8:00 AM UTC, Thu May 22, 2025
Naresh Dulam, VP of Software Engineering at JPMorgan Chase & Co., speaks with Or Zabludowski, CEO of Flexor, in a video interview about his role, AI challenges in enterprises, metadata, data freshness and governance for AI success, and unifying structured and unstructured data to unlock AI potential.
Over the course of his career, Dulam has led data-driven initiatives across sectors such as finance and healthcare, primarily tackling complex challenges related to data governance, quality, and scalability.
In his current role, he is dedicated to building practical and resilient data and AI platforms that drive tangible business impact.
Reflecting on the current state of AI adoption in enterprises, Dulam highlights a recurring trend he calls the “shiny object syndrome.” According to him, the phenomenon is nothing new; it has accompanied every major technological shift over the years.
Next, Dulam draws attention to a key issue he believes is holding back many organizations from truly leveraging the power of AI: fragmented data silos and poor data quality.
He emphasizes that while enterprises are often enthusiastic about adopting AI technologies, their excitement can blind them to the foundational work required to make these tools effective. This includes essential tasks like data cleaning, contextual alignment, and establishing robust governance structures.
One of Dulam’s major concerns is the lack of unified data standards across departments, leading to inefficiencies and inconsistencies. “I often see the different teams managing their data sets without unified standards, and this results in discrepancies and inefficiencies in the process,” he says.
Without addressing these foundational data challenges, Dulam warns that enterprises are risking more than just failed projects. There’s a broader risk to trust and belief in the potential of AI itself.
“Without addressing these foundational issues first, enterprises risk putting their AI models into production prematurely.” Dulam cautions that failed implementations may cause stakeholders to question AI’s effectiveness and even revert to older, traditional methods. “Data quality is the main struggle with enterprises today, even with the advanced AI model capabilities available.”
Reflecting on his experience in the field of AI implementation, Dulam recalls a telling example from the telecom industry that underscores the critical importance of data quality in AI success. He describes a customer churn prediction project that began with significant enthusiasm. However, the optimism quickly faded as serious issues emerged post-deployment.
Upon investigation, it became clear that the problems stemmed from poor data quality: the model had been trained on flawed data and was deprived of the contextual depth it needed to make accurate predictions. The consequences were significant, as the AI solution produced incorrect churn predictions, leading to poorly targeted marketing campaigns and, ultimately, customer dissatisfaction.
“This clearly highlights how essential clean and context-rich data is, even if you have the most powerful models out there available,” says Dulam. He adds that as advanced models, including reasoning and multimodal models, continue to evolve, the importance of data quality cannot be overstated.
“If you don’t have the right data to train these models, your AI initiative is not going to be successful.”
Moving forward, Dulam emphasizes a critical but often overlooked factor: data freshness and metadata enrichment. He explains that understanding the validity window of data is vital, especially in regulated industries where rules change frequently.
Dulam further stresses the importance of tracking the temporal validity of datasets, helping ensure that models are trained on information that reflects the current regulatory or operational context. “That enhanced metadata will help you to have the right data available to your model.”
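The idea of tracking temporal validity can be sketched in a few lines. This is a hypothetical illustration, not code from the interview: each record carries a validity window in its metadata, and stale records are filtered out before training so the model only sees data that reflects the current regulatory context. All field names are illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Record:
    payload: dict
    valid_from: date  # start of the window in which this data reflects current rules
    valid_to: date    # end of that window

def fresh_records(records, as_of: date):
    """Keep only records whose validity window covers the as-of date."""
    return [r for r in records if r.valid_from <= as_of <= r.valid_to]

records = [
    Record({"rule": "old_rate"}, date(2020, 1, 1), date(2023, 12, 31)),
    Record({"rule": "new_rate"}, date(2024, 1, 1), date(2026, 12, 31)),
]

# Only the record valid on the as-of date survives the filter.
training_set = fresh_records(records, as_of=date(2025, 5, 22))
```

In practice the validity window would live in a metadata catalog rather than on the record itself, but the filtering step before training is the same.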
When working with unstructured data, says Dulam, it is important to store metadata such as embeddings and build a semantic layer on top of that data. This allows AI agents or models to efficiently access and use the data they need.
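A minimal sketch of what such a semantic layer looks like, under the assumption that each document is stored alongside an embedding and retrieved by similarity. The toy hand-made vectors and document names below are illustrative; in a real system the embeddings would come from a model and live in a vector store.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Each unstructured document is stored with its embedding as metadata.
docs = {
    "churn_policy.txt": [0.9, 0.1, 0.0],
    "billing_faq.txt":  [0.1, 0.9, 0.2],
}

def semantic_search(query_vec, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# An AI agent can now fetch the most relevant document by meaning, not keyword.
result = semantic_search([0.8, 0.2, 0.1])
```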
At the same time, governance should not be overlooked, he notes. Establishing standardized methods for accessing data is essential, and this is where MCP comes into play later in the process. By maintaining consistent metadata and governance rules across the organization, teams can effectively link structured and unstructured data.
Speaking of structured and unstructured data, Dulam highlights the need for a fundamental mindset shift among data leaders. “Data leaders must shift their view of seeing this unstructured data as an isolated asset, and it should be part of the holistic data ecosystems,” he affirms.
Historically, unstructured data such as text, images, and videos has often been ignored or only superficially handled due to a combination of technological limitations and lack of integration strategies, says Dulam. However, he points out that this landscape is changing rapidly.
Thanks to advances such as embeddings, vector databases, and enriched metadata, organizations now have the tools to bring unstructured data into the fold. Furthermore, he mentions “building in a semantic context layer so that the data can be understood by everybody.”
Adding a semantic layer is especially critical for making both structured and unstructured data intelligible and contextually rich. However, Dulam notes that none of this is sustainable without strong governance frameworks that ensure standardization and consistency across data types.
He maintains that the way data is delivered to AI models matters, and governance plays a central role in ensuring that consistency. “Combining this unstructured data with structured data will amplify your AI’s contextual information. So that will help you to unlock the value of both the structured and the unstructured data,” Dulam adds.
Highlighting a growing trend in the data infrastructure space, Dulam points to the rise of vector database solutions. He mentions working with a major vendor developing a unified platform to handle both data types seamlessly.
Wrapping up, Dulam shares a sentiment he believes is foundational to any successful AI initiative. “Your AI is as good as your data. So have a data strategy first, then go for AI.”
CDO Magazine thanks Naresh Dulam for sharing his insights with our global community.