Tech & Innovation in Healthcare

Reader Questions:

Could Threat Actors Use AI Text Generators for Malicious Means?

Question: I’m impressed by the ways artificial intelligence (AI) is being used in healthcare, and I’ve experimented with a popular AI text generator to see how well it could craft an encounter note from minimal information.

Given the technology’s ease of use and capabilities, are there concerns that threat actors could use AI for malicious means?

New York Subscriber

Answer: As with any new technology, the potential for malicious use exists. A Jan. 17, 2023, analyst note from the Health Sector Cybersecurity Coordination Center (HC3) explores the potential for threat actors to use AI to develop malware.

AI was a novel concept not too long ago, but the technology has now advanced to the point where malicious threat actors can harness its power to develop effective phishing tactics and build destructive malware. That potential poses a serious risk to the healthcare industry. “One of the key factors making AI particularly dangerous for the healthcare sector is the ability of a threat actor to use AI to easily and quickly customize attacks against the healthcare sector,” HC3 explains in the analyst note.

Cybersecurity developers have so far used AI mostly for defensive purposes, employing the technology to automate security tasks and to detect threats, vulnerabilities, and active attacks. However, because threat actors are just as imaginative and resourceful, the cybersecurity community has raised concerns “about the potential for [AI] to be used for the development of malware,” HC3 adds.

One example involves the AI-powered software you mentioned. Leading IT companies have voiced concerns about its potential to disrupt the market, and its capabilities and wide availability could also make it a useful tool for malicious threat actors. Since the software’s release in November 2022, several malware development uses have been identified, including:

  • Crafting credible phishing e-mails.
  • Creating potential ransomware scripts.

At the same time, because AI is still a new and evolving technology, cybersecurity experts have yet to develop mitigations and defenses against AI-generated malicious code, “and it remains unclear if there will ever be ways to specifically prevent AI-generated malware from being successfully used in attacks,” HC3 writes.