Tech & Innovation in Healthcare

Artificial Intelligence:

Break Open the Black Box With Explainable AI

Find out whether XAI could improve critical thinking in medical decision making (MDM).

Many artificial intelligence (AI) models are developed to generate results, but the developers may not know how the models arrived at their conclusions. This creates a black box effect. Explainable AI (XAI), on the other hand, has the potential to show developers how a decision is reached. This could improve trust in AI while also reducing critical thinking in medical decision making.

Tech & Innovation in Healthcare explored XAI and what the technology could mean for healthcare AI development.

Explain XAI Technology

XAI differs from traditional AI in that it provides an explanation of how the algorithm arrived at the generated results. According to IBM, XAI is “a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.” Essentially, XAI does exactly what its name says: The technology describes the AI model being used, along with the anticipated results and any possible biases.

In contrast, traditional AI uses a machine learning (ML) algorithm to generate results. Developers of this type of model might not be sure how the system came to the conclusions it produced, which can make it hard to assess the system’s accuracy.

Through an XAI model’s transparency, developers can gauge the model’s decisions by evaluating its accuracy, impartiality, and outcomes. Organizations can then ensure trustworthiness when developing AI technologies.

Developers use three methods to set up XAI:

  • Prediction precision: This helps ensure the AI model’s accuracy in everyday use. Developers can determine prediction accuracy by comparing the XAI output against the training dataset’s results (see the sketch after this list).
  • Verifiability: Developers use verifiability to limit the ways in which the model’s decisions can be made.
  • Understanding decisions: Educating AI users helps humans understand how the AI makes decisions and why it made them.
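
For the prediction precision step, one common approach is to train a simple, interpretable surrogate model to mimic the black-box model and then measure how closely the surrogate’s predictions match the original model’s output. The following Python sketch illustrates the idea with scikit-learn; the dataset, models, and parameters are illustrative assumptions, not a specific vendor’s XAI pipeline.

  # Minimal sketch: gauging "prediction precision" with an interpretable surrogate.
  # Assumes scikit-learn is installed; dataset and models are illustrative only.
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.metrics import accuracy_score
  from sklearn.model_selection import train_test_split
  from sklearn.tree import DecisionTreeClassifier

  X, y = load_breast_cancer(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  # The "black box": an ensemble whose individual decisions are hard to trace.
  black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

  # The explainable surrogate: a shallow decision tree trained to mimic the black box.
  surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
  surrogate.fit(X_train, black_box.predict(X_train))

  # Fidelity: how often the surrogate agrees with the black box on new data.
  fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))

  # Accuracy: how often the black box matches the ground-truth labels.
  accuracy = accuracy_score(y_test, black_box.predict(X_test))

  print(f"Surrogate fidelity to black box: {fidelity:.2f}")
  print(f"Black-box accuracy on test data: {accuracy:.2f}")

A high fidelity score suggests the simple explanation tracks the black-box model closely; a low score signals that the explanation may not reflect how the model actually behaves.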

The technology is currently in development in several healthcare fields, including evaluating radiology images and examining electronic health records (EHRs). XAI could also help healthcare professionals who are skeptical of the technology’s output understand how it arrived at those results.

Open the Black Box

Since traditional AI delivers results without an explanation of how it arrived at those conclusions, the model is considered a black box. XAI’s transparency could help provide insights into the AI’s development and crack open the black box.

Implementing XAI in healthcare can open the technology’s black box to help ensure trustworthiness and transparency in AI system development. XAI “aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made,” write the researchers in a 2023 survey of XAI in healthcare.
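
One way to crack the box open in practice is post-hoc feature attribution, which estimates how much each input feature contributes to a trained model’s predictions. The sketch below uses permutation importance from scikit-learn as one such technique; the dataset and model are illustrative stand-ins, not drawn from any specific clinical system described in the survey.

  # Minimal sketch: post-hoc explanation of a black-box model via permutation importance.
  # Assumes scikit-learn is installed; dataset and model are illustrative only.
  from sklearn.datasets import load_breast_cancer
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.inspection import permutation_importance
  from sklearn.model_selection import train_test_split

  data = load_breast_cancer()
  X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
  model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

  # Shuffle each feature in turn and measure how much the model's score drops;
  # a large drop means the model leans heavily on that feature.
  result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

  # Report the five features the model depends on most.
  ranked = sorted(zip(data.feature_names, result.importances_mean),
                  key=lambda pair: pair[1], reverse=True)
  for name, score in ranked[:5]:
      print(f"{name}: {score:.3f}")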

AI continues to be an extremely popular avenue for providing care to patients, assisting in diagnoses, developing prescription drugs, evaluating medical imaging, and streamlining revenue cycle management, and using XAI could help improve the technology’s adoption.

“XAI gives reasoning, which will build confidence in the systems. The technology can further validate a clinician’s decision on treatment, making sure the best course of action is taken for patients,” says Jessica Miller, MHA, CPC, CPC-I, subject matter expert at MediCodio in San Ramon, California.

However, as with other new technologies in healthcare, providers are hesitant to adopt the model.

Understand XAI’s Challenges to Adoption

Implementing XAI as a mechanism to assist healthcare providers or develop healthcare tools faces several hurdles to adoption. Security, data quality, and the human factor are the most common challenges standing in the way.

  • Security: XAI can require real-time updates of datasets to ensure the decisions made and the generated results are accurate for current conditions. This real-time access can create openings for unauthorized users or network attacks to reach the algorithms and inject false data.
  • Data quality: As with other AI models in healthcare, high-quality data helps ensure high-quality results. Humans need to avoid introducing biases (explicit or implicit) into the data and also be aware of any biases in the results that could affect personalized care.
  • Human factor: Some people are skeptical of new technologies and may prefer not to adopt them in their profession.

The human factor also includes critical thinking from healthcare professionals. Because XAI is supposed to be more transparent and trustworthy, individuals may place confidence in the technology without doing their due diligence to verify the XAI model’s conclusions.

“I would argue that medicine checks a lot of these boxes, and so it’s not a setting where we want explainability that will turn off critical decision-making skills or engage automation bias,” said Marzyeh Ghassemi, PhD, assistant professor of electrical engineering and computer science and of the Institute for Medical Engineering & Science at Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, on the “Can AI Be Harmful?” episode of the “NEJM AI Grand Rounds” podcast.

Experts continue to agree that there will always be a need for human intervention with AI — or XAI, for that matter. “Regardless of AI’s abilities, human intervention will always be necessary to verify and validate the AI-generated information. AI integrations will continue to expand as the technology becomes more widely accepted and advances, and opportunities will come for individuals experienced in using, training, and modifying these types of programs,” Miller adds.

Stay tuned to Tech & Innovation in Healthcare for more information as AI continues to evolve in healthcare.