GenAI's scary potential for medical disinformation


Wednesday, 15 November, 2023

A team of medical researchers from Flinders University is advocating for increased vigilance around generative AI (GenAI) after witnessing the technology’s alarming potential for whipping up medical disinformation.

The team used this rapidly evolving form of artificial intelligence in a study to test how false information about health and medical issues might be created and spread.

Using generative AI tools for text, image and video creation, the team attempted to create disinformation about vaping and vaccines. They used publicly available platforms such as OpenAI’s GPT Playground for text, and DALL-E 2 and HeyGen for image and video content.

In just over an hour, the researchers produced more than 100 misleading blog posts, 20 deceptive images and a convincing deep-fake video presenting health disinformation. Disturbingly, this video could be adapted into more than 40 languages, amplifying its potential harm.

First author of the study Bradley Menz, a registered pharmacist and Flinders University researcher, said he had serious concerns about the findings, given prior examples of disinformation pandemics that have led to fear, confusion and harm.

“The implications of our findings are clear: society currently stands at the cusp of an AI revolution, yet in its implementation governments must enforce regulations to minimise the risk of malicious use of these tools to mislead the community,” Menz said.

“Our study demonstrates how easy it is to use currently accessible AI tools to generate large volumes of coercive and targeted misleading content on critical health topics, complete with hundreds of fabricated clinician and patient testimonials and fake, yet convincing, attention-grabbing titles.”

Menz suggested that the key pillars of pharmacovigilance — including transparency, surveillance and regulation — could serve as valuable examples for managing these risks and safeguarding public health amid rapidly advancing AI technologies.

Senior author Dr Ashley Hopkins, from the College of Medicine and Public Health, said that there is a clear need for AI developers to collaborate with healthcare professionals to ensure that AI vigilance structures focus on public safety and wellbeing.

“We have proven that when the guardrails of AI tools are insufficient, the ability to rapidly generate diverse and large amounts of convincing disinformation is profound. Now there is an urgent need for transparent processes to monitor, report and patch issues in AI tools,” Hopkins said.

The paper, ‘Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance’, by Bradley D Menz, Natansh D Modi, Michael J Sorich and Ashley M Hopkins, was published in JAMA Internal Medicine.

Image credit: iStock.com/monsitj

