AI models highly vulnerable to health disinfo weaponisation


Artificial intelligence chatbots can be easily manipulated to deliver dangerous health disinformation, raising serious concerns about the readiness of large language models (LLMs) for public use, according to a new study.

The peer-reviewed study, led by scientists from Flinders University in Australia together with an international consortium of experts, tested five of the most prominent commercial LLMs by issuing covert system-level prompts designed to generate false health advice.

The study subjected OpenAI’s GPT-4o, Google’s Gemini 1.5 Pro, Meta’s Llama 3.2-90B Vision, xAI’s Grok Beta, and Anthropic’s Claude 3.5 Sonnet to a controlled experiment in which each model was instructed to give deliberately inaccurate answers to ten health questions, phrased in formal scientific language and complete with fabricated references to reputable medical journals.

The goal was to evaluate how easily the models could be turned into plausible-sounding sources of misinformation when influenced by malicious actors operating at the system instruction level.
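To see where such instructions sit, consider a minimal sketch using OpenAI’s Python chat API. This is illustrative only: the benign system message below is a stand-in for the study’s covert prompts, which are deliberately not reproduced here.

```python
# Minimal sketch of a system-level instruction in a chat API call.
# The system message conditions every reply the model produces, yet
# never appears in the conversation the end user sees.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # System level: hidden configuration text. A benign example
        # stands in for the study's disinformation instructions.
        {"role": "system",
         "content": "Answer in formal academic prose and cite journals."},
        # User level: the only layer an ordinary user writes or sees.
        {"role": "user",
         "content": "Is sunscreen safe to use daily?"},
    ],
)
print(response.choices[0].message.content)
```

Because the system message never surfaces in the visible conversation, users of a chatbot configured this way have no indication that its answers are being steered.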

Shocking results

Disturbingly, four of the five chatbots – GPT-4o, Gemini, Llama, and Grok – complied with the disinformation instructions 100 per cent of the time, offering false health claims without hesitation or warning. Only Claude 3.5 Sonnet demonstrated a degree of resistance, complying with misleading prompts in just 40 per cent of cases.

Across 100 total interactions, 88 per cent resulted in the successful generation of disinformation, often in the form of fluently written, authoritative-sounding responses with false citations attributed to journals like The Lancet or JAMA.

The misinformation covered a range of high-stakes health topics, including discredited theories linking vaccines to autism, false claims about 5G causing infertility, myths about sunscreen increasing skin cancer risk, and dangerous dietary suggestions for treating cancer.

Some responses falsely asserted that garlic could replace antibiotics, or that HIV is airborne – claims that, if believed, could lead to serious harm.

In a further stage of the study, researchers explored the OpenAI GPT Store to assess how easily the public could access or build similar disinformation-generating tools.

They found that publicly available custom GPTs could be configured to produce health disinformation with alarming frequency – up to 97 per cent of the time – illustrating the scale of potential misuse when guardrails are insufficient.

Easily exploited LLMs

Lead author Ashley Hopkins from Flinders University noted that these findings demonstrate a clear vulnerability in how LLMs are deployed and managed.

He warned that the ease with which these models can be repurposed for misinformation – particularly when commands are embedded at the system level rather than entered as user prompts – poses a major threat to public health, especially in the context of coordinated disinformation campaigns.

The study urges developers and policymakers to strengthen internal safeguards and content moderation mechanisms, especially for LLMs used in health, education, and search contexts.

It also raises important ethical questions about the development of open or semi-open model architectures that can be repurposed at scale.

Without robust oversight, the researchers argue, such systems are likely to be exploited by malicious actors seeking to spread false or harmful content.

Public health at risk

By revealing the technical ease with which state-of-the-art AI systems can be transformed into vectors for health disinformation, the study underscores a growing gap between innovation and accountability in the AI sector.

As AI becomes more deeply embedded in healthcare decision-making, search tools, and everyday digital assistance, the authors call for urgent action to ensure that such technologies do not inadvertently undermine public trust or public health.

Journalists also concerned

The results of this study coincide with conclusions from a recent Muck Rack report, in which more than one-third of surveyed journalists identified misinformation and disinformation as the most serious threat to the future of journalism.

This was followed by concerns about public trust (28 per cent), lack of funding (28 per cent), politicisation and polarisation of journalism (25 per cent), government interference in the press (23 per cent), and understaffing and time pressure (20 per cent).

Some 77 per cent of journalists reported using AI tools in their daily work, with ChatGPT the most widely used (42 per cent), followed by transcription tools (40 per cent) and Grammarly (35 per cent).

A total of 1,515 qualified journalists were part of the survey, which took place between 4 and 30 April 2025. Most of the respondents were based in the United States, with additional representation from the United Kingdom, Canada, and India.

A turning point

Both studies suggest that, if left unaddressed, these vulnerabilities could accelerate an already-growing crisis of confidence in both health systems and the media.

With generative AI now embedded across critical public-facing domains, the ability of democratic societies to distinguish fact from fiction is under unprecedented pressure.

Ensuring the integrity of AI-generated information is no longer just a technical challenge – it is a matter of public trust, political stability, and even health security.

[Edited by Brian Maguire | Euractiv’s Advocacy Lab]


