AI News Bureau
Written by: CDO Magazine Bureau
Updated 7:31 PM UTC, Mon November 25, 2024
(US & Canada) JoAnn Stonier, Fellow of Data and AI at Mastercard, speaks with Ramon Chen, Chief Product Officer at Acceldata, in a video interview about data governance in the age of AI, the need for data observability in AI, how the U.S. is catching up with the EU AI Act and other governance policies, her work in AI and data policy, and advice for financial services organizations.
Mastercard Inc. is an American multinational payment services corporation that offers payment transaction processing and related payment services.
Speaking about data governance in the age of AI, Stonier states that at Mastercard, data and AI governance go hand in hand. Since the introduction of the General Data Protection Regulation (GDPR) in the European Union (EU), incorporating data governance has become paramount for complying with other emerging regulations, she adds.
Mastercard initially approached data governance as a series of gates that teams had to pass through during product development and design, integrating governance into the process itself, says Stonier.
With the advent of AI, the firm realized the need to strengthen its review processes because of the risk of data bias in the results. Mastercard relied on AI for fraud algorithms and cybersecurity, which are critical given the firm’s role in payment processing. She adds that AI has increased the speed and efficiency of these processes.
However, the product development and data governance processes still reflected that initial governance approach, and with the arrival of generative AI they all had to be redesigned, says Stonier. She adds that although the firm had begun exploring tools to solidify governance practices before GenAI, an effective tool alone is not sufficient.
A lot of judgment is involved in interpreting outcomes, evaluating the data being used, assessing its sensitivity, and determining the appropriate course of action, says Stonier. Especially when navigating new territory, it is critical to understand that the data used can be highly sensitive or can lead to sensitive outcomes, such as accusing someone of fraud or affecting someone’s credit score.
Stonier says that even if Mastercard builds an algorithm that a bank then applies, it still has to be thought through from a governance perspective. For data governance to be successful, the organization must understand it and have access to it, she notes. According to Stonier, governance allows executives to understand AI and its associated risks in different ways. Whether it is the enterprise architecture team, data scientists, privacy, or legal teams, each sees a different set of risks that, when understood together, makes governance a powerful conversation.
Moving forward, Stonier states that security is one of the critical aspects of GenAI. She cautions that while most firms are putting it to use, bad actors are doing the same. Data observability therefore provides a clearer pathway to understanding processes and the potential entry points bad actors could exploit. A thorough understanding of data governance and data observability can leave organizations better prepared for incursions in which GenAI is used to impersonate legitimate actors.
When asked how the U.S. is keeping up with the EU AI Act, Stonier states that the Act takes a risk-based approach, delegating implementation to individual member states. She believes the U.S. is aligned with this risk-based approach.
U.S. regulators are still figuring out how to implement AI regulations in a way that balances protection with fostering innovation, says Stonier. Unlike the EU’s more structured approach, the U.S. tends to favor innovation and focuses on consumer protection rather than outright banning certain AI applications. This balance aligns with the sectoral regulation model the U.S. uses, particularly in industries like financial services and healthcare, where regulation is understandably stringent because of the stakes involved.
Delving further, Stonier states that U.S. firms also widely adopt NIST standards while taking cues from international efforts such as the report of the U.N.’s high-level advisory body on AI. In sectors like finance and healthcare, these protections are robust, while industries like manufacturing rely on broader consumer product safety laws.
Although the U.S. might appear behind in some ways, it has solid sectoral regulations and strong consumer protection laws that are driving firms toward responsible AI practices, says Stonier. Many U.S. companies are global and must comply with international regulations, often adopting the highest standards. For instance, Mastercard aims to meet the strictest requirements across jurisdictions to ensure trust with customers, regulators, and employees.
Shedding light on her work in AI and data policy, Stonier states that her work tends to have a pragmatic focus. She mentions collaborating with the Data and Trust Alliance to create practical guidance that helps organizations comply with a New York City law regulating the use of AI in HR activities.
Stonier has also worked with the World Economic Forum (WEF) to define and apply the concept of data equity. With the WEF, she has helped organizations understand that achieving data equity requires a series of corrective actions based on their specific data sets and goals.
To support this, Stonier says she helped develop a framework focused on three key pillars: the data itself, the people involved, and the purpose of its use. The framework includes attributes organizations should consider to ensure equitable and responsible use of data.
On top of that, Stonier advises firms on how to think practically about the way policies should shape day-to-day practices as their data and AI work evolves.
Sharing advice for data leaders at financial services organizations, Stonier urges them to recognize that while everyone is learning at the same time, leaders have to lead that learning. She maintains that for financial firms ROI is one of the key metrics, but this is the time to get creative.
Furthermore, Stonier recommends collaborating with the CFO and the CEO to define the organizational strategy for GenAI: what is to be achieved, which differentiators tie to the business strategy, what data is required to fuel it, and what the AI strategy should be.
Financial services is at a pivotal moment, with numerous financial products being tailored to different consumer needs. With AI providing access to customer information, financial firms can create personalized products and services as never before.
Thereafter, Stonier encourages everyone to embrace the change and read everything they can get their hands on. Among her favorites, she mentions Kara Swisher’s podcast and Timothy Snyder’s “Freedom,” and recommends “Artificial Integrity” as an intriguing read.
In conclusion, Stonier states that it is essential for everyone in the field to appreciate what an exciting time it is to be an AI and data professional. She notes that it is still early days, and massive changes are yet to unfold.
CDO Magazine appreciates JoAnn Stonier for sharing her insights with our global community.