AI News Bureau

Prioritize Governance Frameworks from Day One — JPMorgan Chase & Co. VP of Software Engineering


Written by: CDO Magazine Bureau

Updated 12:00 PM UTC, Mon June 23, 2025

Naresh Dulam, VP of Software Engineering at JPMorgan Chase & Co., speaks with Or Zabludowski, CEO of Flexor, in a video interview about governance risks in AI and how to mitigate them, balancing automation with human oversight, the need to keep humans in the loop, the role of feedback loops, and prioritizing governance and infrastructure as the foundation for AI.

Governance risks in AI: Explainability, transparency, and compliance

At the outset, Dulam points out that among the most pressing governance risks is the explainability of AI decisions. According to him, the inability to clearly articulate how an AI model reaches a decision remains a core issue.

Next, Dulam mentions the lack of transparency and compliance challenges. Explaining further, he says, “If you see any agent or model’s output as a black box, it’s very difficult to explain to the regulators or auditors.”

This opacity draws regulatory scrutiny, says Dulam, and the lack of transparency consequently brings regulatory penalties and reputational harm to organizations. “Additionally, these AI-driven automations unintentionally amplify the bias hidden in the historical data because these models are trained on the historical data,” he notes. Dulam cautions that this amplified bias poses serious ethical and compliance challenges.

Mitigating the risks

To address these challenges, Dulam emphasizes the need for proactive governance strategies. He discusses ensuring clarity around how AI decisions are made and advocates for maintaining robust audit trails and taking deliberate steps to address bias.
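To illustrate what such an audit trail might look like in practice, here is a minimal sketch in Python. The record fields, the model name, and the append-only JSONL log are illustrative assumptions, not a design Dulam prescribes.

```python
# Minimal sketch of an audit-trail record for AI decisions. All field names
# (model_version, decision_id, etc.) are hypothetical, for illustration only.
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    model_version: str    # which model produced the decision
    input_summary: dict   # feature or prompt metadata, not raw sensitive data
    output: str           # the model's decision or recommendation
    explanation: str      # human-readable rationale, for regulators and auditors
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, path: str = "audit_log.jsonl") -> None:
    """Append the decision to an append-only JSONL audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    model_version="credit-risk-v3.2",
    input_summary={"income_band": "B", "tenure_years": 4},
    output="approve",
    explanation="Low debt-to-income ratio; stable employment history.",
))
```

Logging the model version and a human-readable rationale alongside each decision is what lets auditors later reconstruct how a given output was produced.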

Reflecting further, Dulam says that the underlying problems are the same with every technology; they just show up differently. Much as the fundamental challenges stay constant while humans evolve, he reminds, technology may change but its core challenges do not.

Balancing automation and oversight: Defining boundaries in AI governance

Drawing from his experience in regulated industries, Dulam offers practical insights about how organizations can approach AI implementation responsibly while maintaining compliance and control.

According to him, achieving balance starts with clarity around when AI should take the lead and when human intervention is necessary. “Striking the right balance involves having clear boundaries. Enterprises should establish automated thresholds when they are designing these systems,” says Dulam.

By setting these automated thresholds, businesses can align both regulatory compliance and operational needs, determining when AI agents can operate independently and when human oversight is essential.

Delving further, Dulam stresses the importance of keeping humans involved in sensitive or high-risk decisions. This approach ensures that while AI can handle routine operations, edge cases and complex scenarios still receive appropriate human judgment.
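As a concrete sketch of such a boundary, the routing function below sends high-risk categories and low-confidence outputs to a human reviewer. The confidence cutoff and risk categories are invented for illustration, since Dulam does not specify any.

```python
# Illustrative automation threshold: the confidence cutoff and risk categories
# below are assumptions for this sketch, not a prescribed policy.
from typing import Literal

AUTO_CONFIDENCE_THRESHOLD = 0.95            # assumed cutoff; tune per use case
HIGH_RISK_CATEGORIES = {"credit_limit_change", "account_closure"}

def route_decision(category: str, confidence: float) -> Literal["automate", "human_review"]:
    """Let the AI act alone only for low-risk, high-confidence cases."""
    if category in HIGH_RISK_CATEGORIES:
        return "human_review"               # regulated or high-stakes: always a human
    if confidence < AUTO_CONFIDENCE_THRESHOLD:
        return "human_review"               # model is unsure: escalate
    return "automate"

print(route_decision("faq_response", 0.98))          # automate
print(route_decision("credit_limit_change", 0.99))   # human_review
```

The point of encoding the boundary explicitly is that it becomes reviewable: compliance teams can inspect and adjust the thresholds rather than inferring them from system behavior.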

The role of feedback loops

Dulam advocates for continuous learning through feedback mechanisms, stating that feedback loops are a great way to improve AI systems over time.
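One way to read this in engineering terms is a loop that captures reviewer corrections and periodically feeds them back into training. The in-memory queue and batch size below are hypothetical details, not something Dulam specifies.

```python
# Minimal sketch of a human-feedback loop. The in-memory queue and batch size
# are illustrative; a production system would persist corrections durably.
from collections import deque

feedback_queue: deque = deque()

def record_feedback(decision_id: str, model_output: str, human_label: str) -> None:
    """Capture cases where a human reviewer corrected the model."""
    if model_output != human_label:
        feedback_queue.append(
            {"decision_id": decision_id, "model": model_output, "human": human_label}
        )

def build_retraining_batch(min_examples: int = 100):
    """Once enough corrections accumulate, hand them to the training pipeline."""
    if len(feedback_queue) >= min_examples:
        batch = list(feedback_queue)
        feedback_queue.clear()
        return batch
    return None
```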

Speaking of where automation should apply, Dulam suggests that full automation is acceptable for low-risk or non-critical use cases, even if occasional errors occur. However, high-stakes environments, especially those governed by strict regulations, must retain a human in the loop.

Moving ahead, Dulam cautions against making broad assumptions about regulation and risk across industries. From an outside perspective, it may appear that most AI use cases are inherently high-risk, but in truth, many are not. While all industries face certain risks when implementing AI, it is ultimately up to business and industry experts to identify where those risk thresholds should be established, notes Dulam.

Laying the foundation for scalable and responsible AI

Thereafter, he highlights the importance of prioritizing governance and infrastructure from the outset when building AI systems. “Prioritize the governance frameworks that address the data quality, the metadata standardization, and the compliance from day one,” Dulam states.

In addition to governance, he points out the need to invest in infrastructure that can support large-scale AI deployments. This includes vector databases and contextual search technologies that effectively handle unstructured data at scale. He stresses that these investments should be made with scalability in mind, “not on the POC level, but at the scale.”
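As a small sketch of that kind of infrastructure, the snippet below indexes document embeddings with FAISS, one commonly used open-source vector search library; the article does not name a specific stack, and the random vectors stand in for a real embedding model.

```python
# Illustrative vector search over unstructured documents. FAISS is one possible
# library choice; the random vectors below stand in for real embeddings.
import numpy as np
import faiss

dim = 384                                   # typical sentence-embedding size
docs = ["policy memo", "audit report", "customer email"]
embeddings = np.random.rand(len(docs), dim).astype("float32")

index = faiss.IndexFlatL2(dim)              # exact search; approximate indexes scale better
index.add(embeddings)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 2)     # two nearest documents
print([docs[i] for i in ids[0]])
```

Dulam’s point about scalability is reflected in the index choice: a flat, exact index is fine for a proof of concept, while production-scale corpora typically require approximate variants.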

Cross-functional collaboration to maintain continuous alignment on AI initiatives also plays a critical role, says Dulam. He adds that as businesses constantly evolve, alignment among stakeholders ensures consistent focus and regulatory clarity.

Wrapping up, Dulam states that AI development should go beyond one-off experiments. Organizations must build AI pilot projects that make it into production, he says, and strong governance is what transforms pilots into sustainable, production-grade systems.

CDO Magazine appreciates Naresh Dulam for sharing his insights with our global community.
