Rehgan Avon

If your company is not in a highly regulated industry, like pharma or finance, it likely does not have strict quality controls around the ethical design and use of artificial intelligence. Designing, building, and publishing machine learning algorithms is a rather complex process. There may be standards for validating that an algorithm runs reliably and produces accurate results. The problem is that the term “quality” is open to interpretation.

Any team building these algorithms decides whether and where an algorithm should be used after it passes all of the tests the team has designed. The issue is that most organizations do not have tests designed to understand the implications of their algorithms at scale. As highlighted in the book “Weapons of Math Destruction” by Cathy O’Neil, the red flag is not how accurate a model is, but rather how that model is used. For example, you could have an incredibly accurate model that predicts who is most likely to go back to jail. The ethical considerations lie in how that model is used. Does it alert authorities to monitor those individuals more closely? Or does it proactively provide resources for those individuals to lessen the likelihood of recidivism?

A few months ago, I had the opportunity to discuss this topic with criminologist, criminal psychologist and AI ethicist Renée Cummings. She is also the founder of Urban AI, where she helps organizations design inclusive AI, build responsible and trustworthy AI strategies, and develop and govern AI policies. We covered mitigating bias during the data-collection phase and how that significantly affects the quality of algorithms. If the data collected is not fully representative of the individuals who will eventually interact with the algorithm on some level, it will not perform as well for those missing from the data.

We see this in a very high-profile example with facial recognition technology. Joy Buolamwini has been nationally recognized for her work exposing the discrimination within the algorithms used to detect people’s faces. This technology performs significantly worse for people with darker skin tones, and particularly for women. She shares her findings in more detail in her TED talk. Last year, she also discussed this technology’s impact on society at large before Congress. When she dug into this issue further, she found that these algorithms were trained on a dataset of predominantly white males. In addition to the training data, there is also an issue with the hardware: the cameras. Cameras are required to capture someone’s face so that data can be run through the algorithm. If the hardware introduces errors in capturing the data fully, the algorithm has no chance of performing as expected, even if it were trained with representative data.

These points only address the accuracy, or outcome, of the algorithm. The use of the algorithm is also of concern. Autonomous vehicles are equipped with cameras to detect objects and pedestrians they must avoid. Police units are using facial recognition to identify suspects from street cameras. There are serious implications to “getting it wrong.”
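One reason disparities like these go unnoticed is that teams often report a single aggregate accuracy number. A minimal sketch, using entirely made-up data, shows how a model can look acceptable overall while failing badly for an underrepresented group once the results are broken out by subgroup:

```python
# Hypothetical illustration: overall accuracy can hide large per-group gaps.
# All data below is invented for the sketch.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

# (predicted, actual, group) triples from an imagined face-detection test set,
# where the "darker" group is underrepresented in the data
results = [
    (1, 1, "lighter"), (1, 1, "lighter"), (1, 1, "lighter"), (1, 1, "lighter"),
    (1, 1, "lighter"), (1, 1, "lighter"), (1, 1, "lighter"), (0, 1, "lighter"),
    (1, 1, "darker"), (0, 1, "darker"), (0, 1, "darker"), (1, 1, "darker"),
]

# Aggregate score: what a single-number quality test would report
overall = accuracy([(p, a) for p, a, _ in results])

# Disaggregate the same results by group
by_group = {}
for p, a, g in results:
    by_group.setdefault(g, []).append((p, a))

print(f"overall: {overall:.2f}")          # 0.75 -- looks acceptable in aggregate
for g, pairs in by_group.items():
    print(f"{g}: {accuracy(pairs):.2f}")  # 0.88 vs. 0.50 -- the gap only shows here
```

Nothing about this sketch is specific to facial recognition; the point is simply that a quality bar defined only on aggregate accuracy would pass this model, while a disaggregated test would flag it.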

This can quickly become a large-scale issue. A single deployment of an algorithm into widely used software systems can affect a massive portion of our population almost overnight. If we do not rethink the definition of “quality” and the tests these algorithms must pass before they are deployed, they can do significant damage.


Rehgan Avon is the co-founder and principal of Ikonos Analytics, a solutions company that provides organizations with the architecture and framework to advance their analytical capabilities through aligned and incremental improvements. With a background in integrated systems engineering and a strong focus on analytical technology, Avon has worked on architecting solutions and products around operationalizing machine learning models at scale within the large enterprise. Her previous experience has been fueled by a passion for early-stage startups and product development, holding positions as Head of Solutions at Mobikit, Solutions Architect at ModelOp, and Lead Data Engineer at Clarivoy. She also teaches courses in Python, data science, and web development at MBA programs nationally through Cognitir, an education startup.

Avon is the founder and CEO of Women in Analytics, an organization that increases the visibility of women making an impact in the analytics space by providing a platform for women to lead the conversations around advancements in analytical research, development and applications. She remains active and involved in fostering collaboration around emerging analytical methods and technologies. She is also a recipient of Columbus CEO’s Future 50 award.