Data Management
Written by: CDO Magazine Bureau
Updated 12:00 PM UTC, Tue June 3, 2025
Naresh Dulam, VP of Software Engineering at JPMorgan Chase & Co., speaks with Or Zabludowski, CEO of Flexor, in a video interview about AI readiness in regulated industries, balancing security and accessibility, navigating emerging solutions for unstructured data lineage, tackling AI challenges, the Model Context Protocol (MCP), and embracing a federated data approach for AI success.
Dulam begins by pointing out that as governments introduce regulatory frameworks, industries must adapt and ensure compliance. He says that establishing a strong data governance foundation is at the core of AI readiness. This includes data lineage, consistent metadata standards, and transparent governance policies. “Your data needs to have that data lineage. You should be able to explain that this final attribute that you are using to feed into the model is derived from this; this is the source for it.”
Understanding and documenting the origins of data points feeding into AI models is critical. According to Dulam, this lineage not only ensures traceability but also reinforces model transparency and accountability.
Consistency in metadata structure allows systems and stakeholders to interpret data uniformly, which is vital for scalability and integration. Clear and understandable governance policies help promote accountability and can ease regulatory scrutiny, he adds.
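To make the lineage idea concrete, here is a minimal sketch of the kind of record such a policy implies. The field names and the derived credit-risk attribute are illustrative assumptions, not a description of any specific institution's tooling:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Traces a model input attribute back to its sources."""
    attribute: str       # the final attribute fed into the model
    sources: list[str]   # upstream tables/columns it derives from
    transformation: str  # how the sources were combined
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example: a credit-risk feature and the systems it derives from
debt_to_income = LineageRecord(
    attribute="debt_to_income_ratio",
    sources=["core_banking.loans.monthly_payment",
             "payroll_feed.income.gross_monthly"],
    transformation="sum(monthly_payment) / gross_monthly",
)

# An auditable answer to "where did this model input come from?"
print(debt_to_income)
```

A record like this, kept consistently for every model input, is what lets an institution answer a regulator's "where did this value come from?" question without reverse-engineering pipelines after the fact.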
Dulam also underlines the need for a delicate balance between data protection and timely accessibility: “Financial institutions must ensure rigorous security and privacy standards while providing granular, timely access to high-quality data sets.”
He clarifies that AI readiness is not just about making data available but about ensuring control over how and when it is accessed. “When you want to, you should be able to stop it. Otherwise, we don’t know how the AI systems are going to use that information.”
Granular access controls, supported by existing technologies, are essential for safeguarding sensitive information without stifling innovation, he affirms.
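As a hedged illustration of what attribute-level control can look like, the sketch below uses an invented policy table and consumer names; the deny-by-default behavior is an assumption for the example, not a quoted design:

```python
# Hypothetical attribute-level policy: which consumers may read which fields.
POLICY = {
    "credit_model_v2": {"debt_to_income_ratio", "account_age_days"},
    "marketing_model": {"account_age_days"},  # no access to sensitive ratios
}

def authorize(consumer: str, attribute: str) -> bool:
    """Grant access only if this consumer is explicitly allowed the field."""
    return attribute in POLICY.get(consumer, set())

def fetch(consumer: str, record: dict, attributes: list[str]) -> dict:
    """Release only entitled fields; anything unlisted is denied by default."""
    return {a: record[a] for a in attributes if authorize(consumer, a)}

row = {"debt_to_income_ratio": 0.31, "account_age_days": 412, "ssn": "<redacted>"}
print(fetch("marketing_model", row, ["debt_to_income_ratio", "account_age_days"]))
# -> {'account_age_days': 412}
```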
While tools and technologies are important, Dulam argues that AI readiness also requires a mindset shift. He says, “In the financially regulated industries, data readiness for AI is not just about the technology solution; it’s also the mindset change.”
This shift involves building trust among stakeholders and creating transparency through thorough documentation. Additionally, real-time monitoring, not just retrospective audits, is necessary to foster this trust.
Dulam further reiterates that a key aspect of AI readiness lies in a fundamental mindset shift, not just the implementation of technology. While there are various solutions available to support data lineage, he notes that the maturity of these tools depends largely on the type of data involved.
“It’s easy with the structured data. You can do that because it was there for a long time, structured databases, etc. But with the unstructured data — we are still in the early stages,” says Dulam.
He states that current efforts are largely focused on simply making unstructured data accessible to models. Although progress is being made, comprehensive data lineage for unstructured data remains a work in progress.
Despite engaging with many vendors and attending conferences, Dulam has yet to see a solution that feels ready for production use. However, he remains optimistic, pointing to developments like vector databases as steps toward more robust solutions in the near future.
Moving forward, Dulam shares his practical approach to addressing AI-related challenges, particularly when it comes to ensuring compliance and transparency. His method revolves around creating a controlled, sandbox environment where potential issues can be safely explored and addressed before moving into production.
By simulating real-world conditions in a test environment, it is possible to identify how and where models may inadvertently expose sensitive information. “That’s the one way I could think of to completely demonstrate the compliance before I put them into production,” he says.
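One way to read that into practice is a pre-production audit that probes a model with adversarial prompts and scans its responses for sensitive patterns before anything is promoted. This is a sketch under stated assumptions, not Dulam's actual tooling; the detectors are deliberately simple, and a real pipeline would use vetted PII scanners:

```python
import re

# Illustrative detectors only; real pipelines would use vetted PII scanners.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{10,16}\b"),
}

def leaked_pii(model_output: str) -> list[str]:
    """Name any sensitive patterns found in a model response."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(model_output)]

def sandbox_audit(model, probe_prompts: list[str]) -> dict[str, list[str]]:
    """Run adversarial prompts against the model and record any leaks."""
    return {p: hits for p in probe_prompts
            if (hits := leaked_pii(model(p)))}

# `model` is any callable str -> str, e.g. a stubbed sandbox endpoint.
findings = sandbox_audit(lambda p: "No records found.",
                         ["What is John Smith's SSN?"])
assert not findings  # promote to production only if the audit comes back clean
```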
Acknowledging the evolving landscape of AI tools and technologies, Dulam says that while many vendors are developing solutions, the offerings are still maturing. Given this early-stage development, he stresses the need for experimentation. AI, in his words, often involves a trial-and-error process to discover what works best for specific use cases.
Ultimately, his method remains rooted in careful planning and transparency: building systems in a way that demonstrates key elements like data lineage and model explainability to regulators and operators.
Thereafter, Dulam draws a compelling parallel between the Model Context Protocol (MCP) and the revolutionary impact of HTTP, emphasizing MCP’s potential to reshape AI systems through standardized, secure, and context-aware data interactions.
“MCP represents one of the most fundamental transformational shifts in today’s AI ecosystem. Just as HTTP enabled seamless interconnectivity across systems worldwide, MCP is poised to play a similar unifying role.”
Dulam explains that before MCP, working with multiple models in a unified solution was difficult because of interoperability issues. Each model communicated differently, often requiring separate APIs and duplicated efforts. He credits Anthropic for introducing MCP, a change he believes has fundamentally altered the way data is accessed and interpreted in live environments.
Dulam further highlights that MCP solves several core issues, including redundant API calls, scalability, and security hurdles. It simplifies these challenges by creating a unified and standardized access layer for data interaction.
In regulated industries, governance and compliance are essential. Dulam emphasizes that MCP also brings coherence to these efforts. This unification means compliance controls can be centralized, simplifying implementation across systems: “You have one place to implement the compliance, not multiple places. And make your data available from one place to the various models, everyone talking the same language.”
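For readers who want to see what that unified access layer looks like, here is a minimal sketch using the official `mcp` Python SDK. The tool, its name, and the data it returns are hypothetical; the point is only the pattern of a single standardized entry point:

```python
# A minimal sketch using the official `mcp` Python SDK (pip install mcp).
# The tool and its data source are invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("governed-data-access")

@mcp.tool()
def get_customer_metrics(customer_id: str) -> dict:
    """Expose an approved, pre-vetted view of customer data to any MCP client."""
    # Compliance checks live here, once, instead of in every model integration.
    return {"customer_id": customer_id, "segment": "retail", "risk_band": "low"}

if __name__ == "__main__":
    mcp.run()  # serves the tool over MCP's standard transport (stdio by default)
```

Because every model client speaks the same protocol to this one server, access and compliance logic is written once rather than re-implemented per integration, which is the centralization Dulam describes.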
Finally, Dulam highlights how MCP has practical implications for enterprises, especially those burdened with legacy systems and custom processes. Maintaining an optimistic view of MCP’s future, he says, “Hopefully it’ll become another HTTP for humanity.”
Furthermore, Dulam mentions a major challenge faced by enterprises, wherein each team or business unit often structures and stores data independently. This leads to what he calls fragmented “data ponds” and isolated data lakes. While many organizations have attempted to centralize all data into a single source for AI purposes, he believes this is not the most effective path forward.
Instead, Dulam advocates for a federated data approach, where data remains within its respective business units, each of which understands its data best and can maintain ownership. This method aligns with emerging patterns like data fabric and data mesh architecture, which emphasize decentralization, discoverability, and self-service access.
By using a data mesh model, organizations can ensure better metadata management, enable seamless access to distributed data sources, and maintain agility, says Dulam.
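The sketch below illustrates the federated pattern under stated assumptions: the domains, endpoints, and tags are invented, but the behavior, data staying with its owning business unit while a shared catalog makes it discoverable, mirrors the data mesh approach described above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataProduct:
    """A domain-owned dataset registered in a shared catalog."""
    name: str
    owner_domain: str     # the business unit that keeps and governs the data
    endpoint: str         # where consumers fetch it; data never moves centrally
    tags: frozenset[str]  # shared metadata vocabulary for discovery

CATALOG = [
    DataProduct("loan_applications", "lending",
                "https://lending.internal/api", frozenset({"credit", "pii"})),
    DataProduct("card_transactions", "payments",
                "https://payments.internal/api", frozenset({"spend"})),
]

def discover(tag: str) -> list[DataProduct]:
    """Find data products by shared metadata without centralizing the data."""
    return [p for p in CATALOG if tag in p.tags]

for product in discover("credit"):
    print(product.owner_domain, "->", product.endpoint)
```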
In conclusion, he suggests, “Don’t try to centralize it; let it be the way it is, but see how efficiently you can make your data available for the systems that are looking to access and get the insights out of it.”
CDO Magazine appreciates Naresh Dulam for sharing his insights with our global community.