Written by: CDO Magazine Bureau
Updated 11:00 AM UTC, Fri October 31, 2025
As AI continues to evolve across industries, few sectors face the pressure of balancing innovation with compliance as acutely as financial services. Mizuho Americas, the U.S. arm of the Tokyo-based Mizuho Financial Group, stands at the intersection of global finance and technology, managing over $2 trillion in assets globally and offering corporate and investment banking, capital markets, and research services to some of the world’s largest enterprises.
This second installment of the three-part series turns to how Mizuho is embedding responsibility and governance into its AI strategy. JC Lionti, Chief Data and Analytics Officer at Mizuho Americas, joins Amy McNee, Senior Vice President of Solutions Architecture and Technical GTM at Informatica, to explore how the bank is developing a conservative yet forward-looking AI governance framework—one designed to ensure innovation progresses in step with evolving regulatory and ethical expectations.
The first part of the series examined how Mizuho is translating AI potential into practical business value, rethinking GenAI’s role, aligning stakeholders, and redefining the modern data leadership function.
“AI governance is necessary, let’s put it this way,” Lionti begins. “Our regulators made it very clear that they are paying attention as to how we and our peers are managing the risk associated with introducing AI and different models into our decision-making processes.”
Mizuho’s approach has been deliberately conservative. The company began developing its AI governance and risk management frameworks before deploying any large-scale AI-based solutions. Every project, whether traditional machine learning, generative AI, or emerging agentic models, must pass through a rigorous vetting process before even being initiated.
This process, while initially slowing AI adoption, has ultimately built confidence and clarity. “It may have been delayed a little bit when we got started,” he admits, “but I think now we’re reaping the benefit of that discipline.”
To maintain oversight without bureaucratic bottlenecks, Mizuho has established structured yet agile governance bodies.
“We have a dedicated AI forum that specifically looks at new solutions that leverage these technologies,” Lionti explains. The forum operates on a weekly cadence, focusing primarily on risk management and operational assurance. Each session reviews new initiatives, assesses risk evaluations from ongoing projects, and tracks the progress of AI models already in development.
Complementing this is a use case prioritization group, composed of senior leaders who evaluate the strategic relevance and business value of proposed AI initiatives. Both forums operate under the Data Executive Committee, the overarching body that governs all data-related activities across Mizuho Americas.
By integrating governance, prioritization, and execution under a single structure, Mizuho ensures that its AI initiatives are aligned with both regulatory requirements and corporate strategy, without over-engineering processes.
A key principle guiding Mizuho’s AI journey is defining success before a project begins. “Before starting or approving a new solution, we’re very clear as to why we’re doing it, what the value we expect to generate is, and how we’re going to measure it,” Lionti explains.
Each use case has its own success metrics, but the institution prioritizes user engagement and value realization as its core measures. “If this is a customer-facing solution, user traffic is very important,” he notes.
Lionti emphasizes an iterative, checkpoint-driven development model: “We have tollgates as we develop a solution to make sure we’re still proving our case. If it doesn’t work, we cut our losses early.” This pragmatic approach has paid off—”So far, we’ve been pretty successful,” he adds.
Mizuho’s disciplined governance hasn’t come without challenges. The first and perhaps the most significant, according to Lionti, was risk perception. “What risk do these types of solutions introduce, and how do we get people internally comfortable using them?” he says. Early concerns around model accuracy, hallucinations, and unintended bias demanded proactive education and transparency.
“It was a lot of education that had to be done as to how an AI model works, what we need to pay attention to, and how this differs from how we manage model risk today,” Lionti explains. Tailored training sessions, combined with open discussions about regulatory expectations, built trust and fluency across the organization.
Data quality posed another hurdle. Mizuho’s teams had to focus on minimum viable products (MVPs) aligned with available and trustworthy data, iterating as better datasets became available. “How do you go about focusing on an MVP that you can achieve based on what you have? That was pretty key,” he notes.
Finally, developing AI fluency across the enterprise became a strategic necessity. Mizuho launched a series of training programs for targeted teams, alongside company-wide communication campaigns led by senior executives. “There were a lot of communication efforts from very senior leaders saying, ‘Come and take advantage of these free trainings we sponsor at the enterprise level.’”
CDO Magazine thanks JC Lionti for sharing his insights with our global community.