Assurance labs can put providers’ minds at ease.

On April 4, 2023, the Coalition for Health AI (CHAI) announced its first blueprint for the responsible and effective use of artificial intelligence (AI) in healthcare. In the document, the organization lays out guidelines for developing and deploying AI systems in ways that ensure equitable, high-quality healthcare.

What Is the Coalition for Health AI?

Founded in 2021, CHAI is a coalition whose members include MITRE, Duke Health, SAS, Stanford Medicine, Change Healthcare, Mayo Clinic, Microsoft, and more. According to the organization’s website, its goal is to “build a consensus-driven framework” by defining core principles and criteria for health AI developers, users, and healthcare organizations, and to boost the credibility of health AI systems.

AI and machine learning (ML) development processes carry inherent biases that stem from the quality and type of information gathered and used to train the systems. By laying out efficiency and fairness aims up front, many healthcare professionals believe these biases can be mitigated so that all patients and healthcare operations can enjoy the benefits of AI in care and treatment.
Understand the Blueprint’s Goals

With the first version of its blueprint, CHAI aims to create guidelines and guardrails for AI tools. According to the organization, the blueprint’s guidelines will help ensure that health AI systems are trustworthy and support equitable, high-quality care.

“It’s important to me, to MITRE, and to CHAI’s mission that we ensure we’re building health AI that is more trustworthy, credible, and transparent,” said Brian Anderson, MD, chief digital health physician at MITRE and co-founder of CHAI, during an April 5, 2023, webinar.

Ensure Equity During AI Development

CHAI accepted public comments on its guidelines until October 2022, but its blueprint is not the only AI guidance document available for review. The White House Office of Science and Technology Policy (OSTP) released its Blueprint for an AI Bill of Rights in October 2022. The AI Bill of Rights contains a framework to guide the design, development, and deployment of automated systems so as to protect the rights of American citizens.

When CHAI’s blueprint is compared to the OSTP’s AI Bill of Rights, the two documents align closely to “[provide] guidance and guardrails for vendors, medical providers, and clinicians to identify and mitigate risks of the use of AI in healthcare,” said Alan E. Mislove, PhD, assistant director for data and democracy, White House OSTP.

The AI Bill of Rights establishes five core principles that “everyone should be entitled to when it comes to AI and automated systems,” Mislove added. These core principles are: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.

At the same time, the National Institute of Standards and Technology (NIST) published its AI Risk Management Framework (RMF) on Jan. 26, 2023. The NIST AI RMF describes four key functions that are critical to designing, developing, and deploying AI systems and tools: map, measure, manage, and govern.

CHAI’s blueprint builds on the guidance established by the OSTP and NIST; furthermore, the organization is collaborating with the National Academy of Medicine (NAM) Leadership Consortium on its AI Code of Conduct. “I think we do understand that we have a rare moment in time to align, and it’s imperative that we do so. [AI] itself is agnostic to the promise and the peril. We shape its destiny, but only if we coalesce, collaborate, and partner at a scale like never before,” said Laura Adams, senior advisor in the Leadership Consortium for NAM.

As AI use has exploded over the past few months, the need to establish guidance has never been more evident. As CHAI developed its framework and listened to the public, the organization recognized the need for assurance standards to ensure fairness and equity.

Use Assurance Labs to Make AI Equitable

The need for quality, equitable data is prevalent throughout AI and ML development. Without data that represents all facets of the patient population, developers cannot build systems that cater to every patient, and healthcare providers will be unable to use the technology accurately for diagnosis, treatment, and care planning. For health AI to be widely successful, the information must be deep and diverse enough to create data models that are useful to everyone. At the same time, the data and the algorithms must be monitored over time to ensure they remain useful and are not causing harm.
“What was discovered was there’s this need for an independent assurance lab that can really evaluate the processes and the tools from the different health systems, the tool developers, and vendors to show and ensure the ethical use of AI,” said Gil Alterovitz, PhD, FACMI, FAMIA, director of the Biomedical Cybernetics Laboratory at Harvard Medical School. Dr. Alterovitz added that an advisory body will be able to ensure health equity across many different health systems, including those that are small, rural, or resource-constrained.

A registry of assurance lab data will help providers, patients, payers, and pharmaceutical companies understand how an algorithm has been tested and validated, whether it is fit for its intended purpose, and whether it will produce useful results.

Acknowledging the need for assurance standards, CHAI aims to go beyond the blueprint by creating an assurance standards guide. This additional tool will establish guidelines for AI-enabled healthcare solutions. The guide will be a “step-by-step breakdown of what considerations developers, users, and others involved should be aware of when designing, developing, implementing, and deploying AI-enabled technologies for health at scale,” said Nicoleta Economou, PhD, director of Algorithm-Based Clinical Decision Support (ABCDS) Oversight at Duke Health.

Stay tuned to Tech & Innovation in Healthcare for more information as AI continues to evolve in healthcare.