VIDEO | Three Arc Advisory Principal: It's Difficult to Build Good AI Models That Are Not Harmful

(US and Canada) Meghan Anzelc, Principal at Three Arc Advisory, speaks with Asha Saxena, Founder and CEO of WLDA.tech, in an International Women’s Month special video interview about improving AI algorithms, biases in large language models, and the beneficial uses of AI. This interview is part of CDO Magazine’s special series celebrating women in data science, machine learning, and analytics.

Discussing how feedback loops can help algorithms identify and correct biases, Anzelc says that it is easy to build artificial intelligence models but difficult to build good ones that are useful and not harmful. She adds that ChatGPT has made large language models broadly accessible, but considerable complexity and thoughtfulness still need to go into how they are used.

Anzelc highlights research by Kieran Snyder, CEO of the augmented writing tool Textio, showing that ChatGPT-generated job descriptions exhibited race, age, and gender bias. Similarly, ChatGPT included gender information when generating performance reviews even though no gender was specified in the prompt. She also mentions instances where low-paid workers were hired to label harmful content so that the ChatGPT model could be trained to produce fewer harmful outputs.

Regarding the beneficial uses of ChatGPT-like models, Anzelc says that while there are many business use cases, having a human in the loop is still critical, and the output must be reviewed thoughtfully. She also mentions uses such as summarizing topics, generating the first draft of an executive summary for a deck, and querying data in Tableau simply by writing a sentence.

CDO Magazine thanks Meghan Anzelc for sharing her insights and data success stories with our global community.
