Tech & Innovation in Healthcare

Practice Management:

Heed This Expert Advice When Using AI in Hiring Practices

Know that AI could unintentionally violate federal protections.

Picture this — you have a large healthcare organization that receives hundreds of applications for open positions. Going through an inbox full of resumes can seem daunting. You decide to deploy artificial intelligence (AI) software to sift through the resumes to pick out your ideal candidates and streamline your hiring process. However, this simple time-saving technology could end up hurting your organization if the AI software is improperly implemented.

Both AI stakeholders and the Equal Employment Opportunity Commission (EEOC) are cautioning employers about possible biases hidden within AI software algorithms. Those biases could unintentionally remove resumes from your candidate pool and even violate federal protections under the Americans with Disabilities Act (ADA) or the Civil Rights Act of 1964.

AI then and now: AI has been used for years to simplify myriad processes. You encounter it when you use a search engine, autocorrect on your phone, or spellcheck on your computer. In testimony before the U.S. Senate, OpenAI CEO Sam Altman described an individual with dyslexia who uses a specialized AI tool built on OpenAI technology to navigate email and other written communications for his job, including work that secured a $260,000 grant.

If your practice administers a pre-employment test, like a coding exam, during the hiring process, it's crucial to keep an eye out for potential accommodation requests and to know your responsibilities to offer alternative formats.

Understand Who Is Advocating AI Regulation and Why

With AI technology progressing so rapidly, some federal agencies and other regulators are more closely scrutinizing its use. In fact, some of the developers of the more advanced AI technology are advocating for extensive regulation, including Altman, who appeared before the U.S. Senate to talk about some of the ways AI can benefit people, as well as its various dangers.

You may think your practice couldn’t be implicated in any AI scrutiny related to ADA or Title VII protections, but if you use the technology, you need to check in with your compliance team.

Brush Up on ADA, Title VII

The labor-relevant portions of the ADA prohibit employers and other entities from discriminating based on disability. Additionally, Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin.

Important: The EEOC says that employers can be held liable for AI biases affecting protected populations in the hiring process — even if they did not personally design the AI software or its respective algorithms. This means employers should review how they or their vendors utilize AI technology in their hiring practices, says attorney Kathryn Jones at law firm Hall, Render, Killian, Heath & Lyman, in an online analysis. Additionally, employers should take advantage of the technical assistance the EEOC provides for those that use AI technology in their hiring process, so they can check — and correct — any disparate impacts on protected populations.

Understand the Implications, Recall Four-Fifths Rule

If you’re now wondering about your own practice’s compliance risk when using AI in the selection process, the EEOC suggests considering the four-fifths rule.

Four-fifths rule: In a technical assistance document, the EEOC mentions the “four-fifths rule,” which is how it’s checking to see whether the rate of selection is “substantially” different from one group to another.

Although the EEOC is careful to designate the four-fifths rule as a “rule of thumb” rather than a hard and fast metric, understanding the calculus is important to check your own compliance. “The four-fifths rule … is a general rule of thumb for determining whether the selection rate for one group is ‘substantially’ different than the selection rate of another group. The rule states that one rate is substantially different than another if their ratio is less than four-fifths (or 80 percent),” the EEOC says.
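The arithmetic behind the four-fifths rule is straightforward: divide each group's selection rate by the highest group's selection rate, and flag any ratio below 80 percent. The sketch below illustrates the calculation with hypothetical applicant numbers; it is a simplified illustration of the rule of thumb, not a substitute for the statistical analysis the EEOC describes.

```python
# Illustration of the EEOC's four-fifths "rule of thumb."
# All applicant and selection counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate is less than four-fifths
    (80%) of the highest group's selection rate."""
    top = max(rates.values())
    return {group: (rate / top) < 0.8 for group, rate in rates.items()}

rates = {
    "Group A": selection_rate(48, 80),  # 0.60 selection rate
    "Group B": selection_rate(12, 40),  # 0.30 selection rate
}
flags = four_fifths_flags(rates)
# Group B's ratio is 0.30 / 0.60 = 0.50, which is below 0.80,
# so the screening tool's impact on Group B may warrant review.
```

As the EEOC's own caveat makes clear, a passing ratio does not by itself establish that a selection procedure is lawful; the check is a screening heuristic, not a legal safe harbor.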

Note: “Courts have agreed that use of the four-fifths rule is not always appropriate, especially where it is not a reasonable substitute for a test of statistical significance. As a result, the EEOC might not consider compliance with the rule sufficient to show that a particular selection procedure is lawful under Title VII when the procedure is challenged in a charge of discrimination,” the EEOC warns.

While the four-fifths rule is not a substitute for a test of statistical significance, the EEOC still recommends that practice managers ask AI vendors whether they applied the four-fifths rule when evaluating whether their product might have an adverse impact on a Title VII characteristic.

Remember: Anyone who believes they've experienced employment discrimination can file a charge. “A discrimination charge is an applicant’s or employee’s statement alleging that an employer engaged in employment discrimination and asking the EEOC to help find a remedy under the EEO laws,” the EEOC says. The EEOC may then begin an investigation and can also assist in confidential mediation to pursue resolution.

Takeaway: While AI is more than capable of helping people with disabilities, AI can also be used to screen and eliminate job applicants or candidates who may need accommodations — a violation of the ADA. If your practice is using, or considering using, AI to help with recruitment, keep this important potential EEOC violation risk in mind.