AI in Action: Trustworthy Explainable-AI or Trustworthy-Explainable AI?


Artificial Intelligence (AI) extracts from data a wide range of reliable information that could not have been sensed or detected by human intelligence (HI) or traditional computational approaches alone.

This large space of reliable information can enhance human confidence and intelligence, helping to establish a robust decision-making environment.

Therefore, adopting AI-driven solutions for our data science problems is a smart decision in the sense that it can help us find better alternative solutions that are latent in the domain. However, we must be cautious about how much we can trust AI systems, how explainable they are, and how easy their explanations are to interpret.

This is where trustworthy explainable AI, with its unique properties of explainability, becomes useful for bridging the gap between AI and HI!

1. AI explains and HI interprets

The “AI in Action” framework consists of artificial intelligence, human intelligence, and efficient and effective communication between them. It can deliver AI-driven solutions for the successful operation of businesses.

AI systems can make decisions, but they need to be transparent, presenting their outcomes and decision-making process in a form interpretable by humans. This enables HI to understand the results, acquire new knowledge, and build confidence.

We also expect an AI system to be fully transparent about its outcomes and decision-making processes, since these are crucial to our AI-driven solutions. Such solutions are expected to contribute significantly to the successful operation of businesses, industries, and financial and educational institutions.

These requirements led to the introduction of explainable AI, which set out to explain the outcomes and decision-making processes of AI models by suppressing the black-box nature of an AI system.

If we want to develop robust AI-driven solutions and adopt explainable AI with confidence, the AI system should be trustworthy-explainable.

In other words, can we fully interpret and trust the explanations of an AI system, acquire new knowledge from its outcomes and decision-making process, and improve our intelligence and confidence so that we can make accurate, reliable, and consistent operational business decisions?

2. Trusting the AI with confidence

“Trustworthy explainable AI” can be described as explainable AI that emphasizes the trustworthiness of an AI system itself. On the other hand, it can also be described as robust explainable AI that rigorously focuses on the trustworthiness of the explanations of an AI system and the enhancements of human interpretability.

Therefore, we can group the explainable AI systems into trustworthy explainable-AI and trustworthy-explainable AI (or trustworthy-explainability AI).

The focus of the current AI systems is the development of trustworthy explainable-AI that can deliver trustworthy decisions. However, what we need is trustworthy-explainable AI that places a high emphasis on explainability, such that the interpreted explanation of the AI’s decision is robust and helps enhance our confidence and intelligence, leading to better business solutions and decisions.

In other words, trustworthy explainable-AI delivers trustworthy decisions, but trustworthy-explainable AI is expected to deliver more robust and comprehensive trustworthy-explainability of the decisions (or outcomes).

Rigorous trustworthy computing on the explainability of an AI model and the interpretability of its explanations is almost nonexistent in current AI systems. Hence, we need to make sure the human interpretation of the explanations produced by current AI systems is well understood for making better business operational decisions.

3. Innovating explainable AI

In the current framework of explainability, methods have been developed as post hoc explainers that can explain the decision-making processes latent inside the black box of an AI system.

The two most popular and widely used explainers are Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). Both are post hoc explainers: they rely only on the outcomes that the AI systems generate and explain nothing beyond those outcomes. Hence, there is a need for us to develop innovative solutions regarding the trustworthiness of the explainability of AI systems.
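To make the post hoc pattern concrete, the sketch below trains an ordinary black-box model and then applies SHAP to attribute individual predictions to input features after the fact. The dataset, model, and parameter choices are illustrative assumptions rather than anything prescribed in this article; LIME follows the same pattern by fitting a local surrogate model around a single prediction.

    # A minimal sketch of post hoc explanation with SHAP, assuming the
    # `shap` and `scikit-learn` packages are installed; the dataset and
    # model below are illustrative stand-ins, not part of this article.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Train an ordinary "black-box" model first; the explainer never changes
    # the training process, it only inspects the fitted model and its outputs.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Post hoc step: SHAP attributes each prediction to the input features
    # after the fact, without re-training or modifying the model.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # Feature attributions for the first test instance: positive values push
    # the prediction above the expected value, negative values push it below.
    contributions = sorted(zip(X.columns, shap_values[0]),
                           key=lambda pair: abs(pair[1]), reverse=True)
    for name, value in contributions[:5]:
        print(f"{name:>6}: {value:+.2f}")

The key point for the discussion here is that the explainer is bolted on after training and sees only the model and its predictions, which is exactly why the trustworthiness of such explanations must be examined separately from the trustworthiness of the model's decisions.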

To innovate explainable AI, we need rigorous computational techniques that prepare AI systems during their development by enhancing their explainability characteristics, such that post hoc explainers can utilize them to generate trustworthy explanations. In this way, trustworthiness is integrated into the system by design.

Additionally, the current explainable AI systems mainly focus on the outcomes of the AI models and their explainability. The innovative idea is the development of a list of possible failures and their explainability, in addition to the original outcome and its explainability. This predicted list of possible failures of operational solutions will allow business leaders to analyze and develop robust operational solutions.

One such approach, negation-based explainable AI (nEGXAI), has recently been published in the AI literature. nEGXAI extends the large range of reliable information of an AI system to a near-infinite solution space at its development phase and analyzes the latent variables that can influence the successes and failures of the possible outcomes, improving explainability and helping to develop a trustworthy-explainable AI system.

I strongly believe that without prioritizing the trustworthiness of the explainability of AI systems in AI-driven solutions, businesses that adopt these less transparent systems face greater risk to their successful operation.

About the Author:

Dr. Shan Suthaharan is a Professor of Computer Science and Graduate Program Director at UNC Greensboro. He has more than 30 years of experience as a director, scientist, author, inventor, consultant, and educator. As an experienced senior scientist, he possesses expertise and skills in developing machine learning and artificial intelligence models, algorithms, and systems for real-world applications, including medical sciences, cybersecurity, high-dimensional systems, and masked language modeling.

He is an innovative director who leads and manages the full cycle of the end-to-end machine learning development process for multiple funded projects, which focus on descriptive learning from structured and unstructured data, feature engineering, predictive modeling and analysis, prescriptive modeling, performance tracking, and documentation. He is also an effective communicator with demonstrated skill in translating complex problems and technical results for peers, healthcare professionals, data scientists, and machine learning engineers.
