You may be thinking carefully about how your agency uses artificial intelligence in its own operations, but stay alert to AI use by outside providers as well, because it can affect your claims and your compliance status. One cornerstone of federally covered care is that it must be reasonable and medically necessary, and ordered by providers rather than by a program or algorithm. If AI goes awry, you run the risk of violating the False Claims Act, suggest attorneys Brian P. Dunphy and Samantha P. Kingsbury with law firm Mintz, Levin, Cohn, Ferris, Glovsky and Popeo. “AI tools could potentially create risk where providers use those tools to order items or services without appropriate provider involvement and oversight,” Dunphy and Kingsbury explain in an online analysis.
Federal enforcement agencies like the Department of Justice and the HHS Office of Inspector General have mined data for years to help them investigate fraud and abuse in the healthcare industry. Additionally, in its Strategic Plan, OIG notes that it intends to increase its “detection capabilities by leveraging artificial intelligence (AI) and machine learning to better predict the potential for fraud, waste, and abuse.” Whether you’re just dipping your toe into the AI stream or are all in, it’s wise to set parameters, establish compliance guidelines, and keep abreast of regulatory reforms in the space. Find the OIG’s five-year plan at https://oig.hhs.gov/documents/root/7/OIG-Strategic-Plan-2020-2025.pdf.