AI News Bureau
Written by: CDO Magazine Bureau
Updated 12:11 PM UTC, Fri April 4, 2025
Prasannal Nithyanandam, Head of Data and Advanced Analytics at Volkswagen Financial Services, speaks with Jack Berkowitz, Chief Data Officer at Securiti, in an engaging video interview. She discusses the organization's AI governance journey, AI risks and their responsible management, the challenges of evaluating vendor solutions, and the centrality of customer trust to responsible AI governance.
Volkswagen Financial Services acts as a global sales driver for the Volkswagen Group’s brands and supports them in strengthening customer loyalty by offering a wide range of mobility services.
Balancing technology with driving business value is what excites Nithyanandam as a data leader. “We are in this cross-section of what this disruptive technology could bring to the table,” she says.
Standing at the intersection of emerging opportunities with generative AI, it was initially critical for Nithyanandam to reflect on what would be an acceptable use of AI. She says that as an organization, it was paramount to equip employees with the right information and guidance.
“So we started with first setting up an AI policy. What acceptable use meant,” says Nithyanandam.
Following the establishment of the policy, the next phase involved creating a dedicated AI working group. The creation of this group went beyond mitigating technology risks—it emphasized the importance of collaboration, she adds. With various functions such as infosec, legal, privacy, compliance, and data science, the group brought together SMEs to address diverse aspects while fostering cross-functional alignment.
Nithyanandam shares that AI governance also strengthened the organization’s data governance foundation. With an existing framework already in place, the goal was to extend its scope beyond data alone. The AI working group helped address that need.
The third step was about “being able to give a framework for the organization.” According to Nithyanandam, establishing the AI policy, rallying cross-functional collaboration, and setting up an AI working group were central to the AI governance journey.
Once the policy and group were established, Nithyanandam focused on gaining visibility into how AI was already being used across the organization.
For her, understanding the current AI landscape included both internal machine learning platforms and automation tools like UiPath, where bots and RPAs are active. The next step was to enable Microsoft Copilot.
With strong interest from both business units and employees seeking productivity gains, the team prioritized internal-facing use cases and employee productivity.
To support this responsibly, Nithyanandam’s team established a risk assessment process based on the NIST framework that included:
A structured questionnaire for each use case
A general risk model categorizing risk as low, medium, or high
“We established a questionnaire and made sure we had a general risk model guidance of low, medium, and high. So we just scored them, and based on whether it’s a low or medium,” she affirms. Based on the resulting score, the team provided a recommendation on whether to approve the usage of a given tool, such as Microsoft Copilot.
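The scoring step Nithyanandam describes can be sketched as a simple weighted rubric. The questions, weights, and thresholds below are illustrative assumptions for the sake of the sketch, not Volkswagen Financial Services’ actual model:

```python
# Illustrative sketch of a low/medium/high AI use-case risk rubric,
# loosely in the style of NIST AI RMF structured questions.
# All questions, weights, and thresholds here are hypothetical.

QUESTIONNAIRE = {
    "processes_personal_data": 3,
    "customer_facing_output": 2,
    "automated_decision_making": 3,
    "uses_third_party_model": 1,
}

def score_use_case(answers: dict) -> str:
    """Sum the weights of all 'yes' answers and map the total to a risk tier."""
    total = sum(weight for question, weight in QUESTIONNAIRE.items()
                if answers.get(question))
    if total <= 2:
        return "low"
    if total <= 5:
        return "medium"
    return "high"

# Example: an internal productivity assistant (a Copilot-style tool)
copilot_case = {
    "processes_personal_data": True,
    "customer_facing_output": False,
    "automated_decision_making": False,
    "uses_third_party_model": True,
}
print(score_use_case(copilot_case))  # -> "medium" (score 3 + 1 = 4)
```

A rubric like this makes the “low or medium” recommendation step auditable: the score, not an individual reviewer’s judgment alone, drives the tier.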
This process marked the beginning of developing key governance building blocks, including processes and procedures tailored to AI use. Rather than creating entirely new systems, Nithyanandam’s approach was to embed AI-specific questions into existing workflows like vendor onboarding and risk assessments.
“We had to retrospect by taking a look at our existing onboarding process and vendor risk assessments. The key question was: how can we embed AI-specific risk questionnaires in the existing process?” she says.
Additionally, Nithyanandam adds, “We want to make sure we’re asking the business and the teams the right questions as they are onboarding a new program or new project.”
The result is a governance model that works both prospectively and retrospectively—scanning ahead to understand new initiatives and “then going back retrospectively to what we have in our four walls, which is a discovery.” She insists on capturing present initiatives and evolving from there.
Nithyanandam also discusses the challenges of evaluating vendor solutions, particularly those hosted on private clouds or delivered as SaaS. Unlike with public cloud platforms such as AWS, GCP, and Azure, assessing these vendor solutions can be difficult because of limited internal visibility.
This limitation reinforces the importance of asking vendors the right questions regarding certifications and evidence of compliance, she explains. During audits, the organization must defend its position and cannot transfer risk to the vendor simply because it’s a third-party product.
“We are also focusing on the right set of questions and evidence that we need to start collecting when we are dealing with vendors,” says Nithyanandam.
According to Nithyanandam, responsible AI governance ultimately comes down to one guiding principle: customer trust. She maintains that every decision around AI and risk needs to be assessed through the lens of customer experience.
This mindset isn’t new, she continues. “So putting them in the center of it, which was already the goal of a data privacy program, only amplifies the need for it even more.”
As regulatory frameworks such as the Colorado AI Act or the Utah AI Act evolve, she sees AI governance following a trajectory similar to that of data privacy laws, which started in California and have since expanded to more than 25 states.
“The regulations are going to get even tougher, and we recognize that there’s a NIST framework to start with the controls that we could start testing and making sure we are complying,” remarks Nithyanandam.
With so many regulatory variations, it’s crucial to evaluate existing tools and identify additional requirements beyond traditional data lineage.
In closing, she emphasizes that it is paramount to ensure controls are maintained and demonstrably effective. However, she cautions, “It’s very expensive to do that manually, and having tools to have the alerts and monitoring will help the organizations to stay on top of it.”
CDO Magazine appreciates Prasannal Nithyanandam for sharing her insights with our global community.