Did you know a patient's voice can reveal more about their condition than just how they describe their symptoms? Researchers are using artificial intelligence (AI) to analyze audio recordings of patients to discover vocal biomarkers that can indicate more severe conditions lurking beneath the surface. Learn how healthcare researchers are using AI and vocal biomarkers to aid diagnosis.

What are Vocal Biomarkers?

A biomarker is a measurable factor that healthcare providers can evaluate to diagnose conditions, assess treatment options, and gauge a patient's response to a medication. Biomarkers can act as mileposts or endpoints on the clinical journey. Researchers have discovered that a patient's voice also contains biomarkers that can serve as a noninvasive measure of the patient's condition. Each person's voice has a unique signature: a trait, or combination of traits, that a provider can use to identify a possible condition or gauge its severity.

Researchers are using AI and machine learning (ML) models to analyze audio recordings and detect the subtle changes in a patient's voice that could indicate changes in their health. For example, providers can gauge a patient's mental health by the sound of their voice. Using AI and vocal biomarkers, providers can compare the patient's voice to a trained dataset of recordings to assess the patient's condition even if the patient isn't accurately expressing how they feel. "We can use voice to screen for depression and psychiatric disorders. When you think about people who are depressed, you know that the tone of their voice, the pace of their voice is a lot different," said Yael Bensoussan, MD, MSc, FRCSC, assistant professor of laryngology in the department of otolaryngology at the University of South Florida in Tampa, Florida.
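To make the analysis step concrete, here is a minimal sketch of one acoustic feature such systems measure: fundamental frequency (pitch), estimated via autocorrelation. The function name, parameters, and the synthesized tone are illustrative assumptions, not taken from any published vocal biomarker system, which would use far more robust pitch trackers and many additional features.

```python
import numpy as np

def estimate_pitch(signal: np.ndarray, sample_rate: int,
                   fmin: float = 50.0, fmax: float = 500.0) -> float:
    """Estimate fundamental frequency (Hz) via autocorrelation.

    A simplified stand-in for the pitch trackers used in vocal
    biomarker research; real systems are much more sophisticated.
    """
    # Autocorrelate the zero-mean signal.
    sig = signal - signal.mean()
    corr = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Search only lags that correspond to plausible voice pitch.
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / best_lag

# Synthesize a 200 Hz tone as a stand-in for a voice recording.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
tone = np.sin(2 * np.pi * 200.0 * t)
pitch = estimate_pitch(tone, sr)
print(round(pitch))  # prints 200
```

In practice, changes in pitch, its variability, and dozens of related features over time are what the ML models compare against the trained dataset.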
Below is an overview of the vocal biomarker identification process:

[Figure: overview of the vocal biomarker identification process]

Once the vocal biomarkers are identified, researchers can compare a subject's sample to the training dataset and control values to screen for certain conditions.

Catch CAD With a Noninvasive Study

In a study published in 2022, researchers compared patients' voice recordings to their likelihood of experiencing coronary artery disease (CAD). The researchers found that people with a high voice biomarker score were more than twice as likely to experience significant issues associated with CAD. In the study, researchers tracked 108 subjects who had received an initial coronary angiogram. The subjects used a smartphone app to make three separate 30-second audio recordings, after which the software analyzed the recordings for certain signals. The samples were compared to voice indicators that had previously been identified as relating to higher coronary artery pressure. The AI-based system can analyze more than 80 distinct features of the audio recordings. Researchers gathered six features related to CAD from previous studies and used those features to generate a single score for each subject. After following the subjects for two years, researchers were able to accurately predict CAD outcomes: subjects with higher vocal biomarker scores were more likely to experience severe chest pain or coronary issues that would necessitate a hospital or emergency department visit. The study's lead author, Jaskanwal Deep Singh Sara, MD, who is a cardiology fellow at Mayo Clinic, said in an American College of Cardiology press release, "We're not suggesting that voice analysis technology would replace doctors or replace existing methods of health care delivery, but we think there's a huge opportunity for voice technology to act as an adjunct to existing strategies.
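The idea of collapsing several voice features into a single score can be sketched as follows. The feature names, reference statistics, weights, and threshold below are all hypothetical; the study's actual six CAD-linked features and its scoring method are not described in this article.

```python
def biomarker_score(features, ref_means, ref_stds, weights):
    """Combine several voice features into one composite score.

    Each feature is standardized against reference (control) values,
    and the resulting z-scores are combined as a weighted sum -- one
    simple way to reduce multiple vocal features to a single number.
    """
    score = 0.0
    for name, value in features.items():
        z = (value - ref_means[name]) / ref_stds[name]
        score += weights[name] * z
    return score

# Hypothetical subject measurements and control statistics.
features  = {"jitter": 0.012, "shimmer": 0.08, "pitch_var": 18.0}
ref_means = {"jitter": 0.010, "shimmer": 0.06, "pitch_var": 15.0}
ref_stds  = {"jitter": 0.002, "shimmer": 0.02, "pitch_var": 5.0}
weights   = {"jitter": 0.4,  "shimmer": 0.4,  "pitch_var": 0.2}

score = biomarker_score(features, ref_means, ref_stds, weights)
# Subjects above a chosen threshold would be flagged for follow-up.
flagged = score > 0.5
```

A weighted z-score sum like this is only one plausible design; a real system might instead feed the features into a trained classifier.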
Providing a voice sample is very intuitive and even enjoyable for patients, and it could become a scalable means for us to enhance patient management."

Fund Future AI Research Projects

In September 2022, the National Institutes of Health (NIH) announced the launch of the NIH Common Fund's Bridge to AI (Bridge2AI) program. Depending on the availability of funds, the NIH aims to invest $130 million over four years "to accelerate the widespread use of artificial intelligence by the biomedical and behavioral research communities." The program brings together team members from a wide range of backgrounds and disciplines to build tools and collect data that can be used by AI applications. The project also "will ensure its tools and data do not perpetuate inequities or ethical problems that may occur during data collection and analysis." Possible bias in AI is not a new concept, but it is one the Bridge2AI program aims to prevent. Preventing AI bias, however, comes down to the training dataset the researchers compile. "Really, it's all about the training dataset. If you train a model to diagnose something using 100 males that are White, probably when you present them with a Black female, the accuracy will be wrong," Bensoussan said. As the Bridge2AI program establishes best practices and creates tools to make data AI-ready, it will also generate "a variety of diverse data types ready to be used by the research community for AI analyses," the NIH wrote in a press release. This data will also help improve medical decision-making in critical care settings.
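One simple way to surface the kind of bias Bensoussan describes is to break model accuracy out by demographic group. The sketch below uses toy predictions from a hypothetical screening model; the group labels and numbers are illustrative only, chosen to mirror her example of a model trained on an unrepresentative population.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute model accuracy separately for each demographic group.

    records: iterable of (group, predicted_label, true_label) tuples.
    Large gaps between groups suggest the training set under-
    represents some populations.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, truth in records:
        total[group] += 1
        if pred == truth:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy predictions: the model does well on the group it was trained
# on and poorly on an under-represented group.
records = [
    ("white_male",   1, 1), ("white_male",   0, 0),
    ("white_male",   1, 1), ("white_male",   1, 1),
    ("black_female", 0, 1), ("black_female", 1, 1),
    ("black_female", 0, 1), ("black_female", 0, 0),
]
print(accuracy_by_group(records))
# prints {'white_male': 1.0, 'black_female': 0.5}
```

Auditing per-group performance like this is exactly the kind of practice a program such as Bridge2AI could standardize when curating AI-ready datasets.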