What it is, why you need it, and best practices. This guide provides definitions and practical advice to help you understand and establish explainable AI in your organization.
Explainable AI (XAI) refers to a set of techniques and processes that help you understand the rationale behind the output of a machine learning algorithm. With XAI, you can meet regulatory requirements, improve and debug your models, and have more trust in your AI models’ decisions and predictions.
Interest in XAI is accelerating as organizations realize they need insight into the decision-making processes of opaque or “black box” AI models. These insights deliver the five main benefits of explainable AI below:
1) Better decision-making by understanding how to influence predicted outcomes. In the XAI example below, your model has generated likely outcomes regarding customer churn based on your data.
With XAI, you also get interpretable and transparent explanations for the decisions your AI models make. In this case, you see prediction-influencer data that explains these outcomes at the record level, so you know how to influence the predicted outcomes. In this example, the SHAP explainability tool (explained below) tells you that the top six features of your product account for 78% of the influence on customer churn. You can use this insight to change your product or service and reduce churn.
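To make this concrete, here is a minimal sketch of the kind of record-level SHAP analysis described above. It is not the tool from the example: the synthetic data stands in for your customer data, and the model choice is an illustrative assumption.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for customer churn data (your real features
# and churn labels would go here).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer returns per-record SHAP values: each value is one feature's
# contribution (in log-odds) to that record's churn prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Aggregate to a global ranking: mean absolute SHAP value per feature.
influence = np.abs(shap_values).mean(axis=0)
share = influence / influence.sum()
top6 = np.argsort(share)[::-1][:6]
print(f"Top 6 features carry {100 * share[top6].sum():.0f}% of the influence")
```

The same per-record values can also be visualized with shap.summary_plot to show the direction of each feature's influence, not just its magnitude.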
2) Faster AI optimization by monitoring and evaluating your models. In the XAI example below, you have transparency into which model performs best, what the key drivers are, and how accurate the model is.
With black box models, you don’t have this kind of transparency, and when they fail, it can be very difficult to identify why a model did not perform as you expected.
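As a sketch of what that transparency can look like in code, the snippet below compares a few candidate models by cross-validated accuracy. The models and synthetic data are illustrative assumptions, not a prescribed setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for your training data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Candidate models to evaluate side by side (illustrative choices).
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Cross-validated accuracy turns "which model performs best, and how
# accurately" into auditable numbers rather than a black-box claim.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```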
3) Greater trust and reduced bias in your AI systems, because you can check models for fairness and accuracy. XAI explanations show the patterns your model found in your data, which helps your MLOps (machine learning operations) team trace errors and evaluate bias and data integrity.
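Here is a minimal sketch of one such fairness check, demographic parity, which compares positive-prediction rates across groups. The arrays are illustrative, and real bias evaluation involves more metrics than this one.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rate between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Illustrative model predictions and a hypothetical protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_gap(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")  # 0 means equal rates
```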
4) Increased adoption of AI systems as your organization, customers, and partners gain more understanding of and trust in your ML and AutoML systems. Your AI models can then power your predictive, prescriptive, and augmented analytics systems.
5) Regulatory compliance, as the reasoning behind your AI-based decisions can be audited for conformity with the growing slate of laws and regulations. For example, the California Consumer Privacy Act (CCPA) stipulates that your users have the right to know the conclusions your AI systems make about them and the data used to reach those conclusions.
There’s no single, optimal way to explain the outputs of a machine learning algorithm. There are three main approaches to consider: global vs. local, direct vs. post hoc, and data vs. model. Your choice of approach will depend on the requirements of your ML pipeline and on who is consuming the explanations (e.g., a data scientist, a regulator, or a business decision maker).
Global vs. local refers to the scope of the explanation: global explanations describe how your model behaves across the entire dataset, while local explanations account for a single prediction (a code sketch contrasting the two scopes follows this list).
Direct vs. post hoc refers to the way your model is designed to provide explanations: direct models, such as decision trees or linear models, are interpretable by design, while post hoc techniques are applied after training to explain a black box model.
Data vs. model refers to the type of explanation being provided: data explanations describe the patterns and relationships in your training data, while model explanations describe how your model arrives at its decisions.
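As referenced in the first item above, here is a minimal sketch contrasting the global and local scopes, using permutation importance for a dataset-wide ranking and SHAP for a single record. The data and model are illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data and an illustrative model choice.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global scope: rank features by how much shuffling each one hurts
# accuracy across the whole dataset.
glob = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("Global feature ranking:", np.argsort(glob.importances_mean)[::-1])

# Local scope: attribute a single record's prediction to its features.
local = shap.TreeExplainer(model).shap_values(X[:1])
print("Local contributions for record 0:", np.round(local[0], 3))
```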
Getting your XAI to provide explanations that are both accurate and easy to understand involves many challenges.
XAI models can be:
Here are key best practices for implementing explainable AI (XAI) in your organization:
Lastly, you should ensure that your XAI models adhere to the four principles of explainable artificial intelligence as defined by the National Institute of Standards and Technology (NIST):
The specific XAI techniques you employ depend on your problem, the type of AI model you use, and your audience for the explanation. Below are the main XAI techniques used to produce explanations that are both accurate and easy to understand.