
Kacper T Gradon, associate professor
Department of Cybersecurity, Faculty of Electronics and Information Technology, Warsaw University of Technology, Warsaw, Poland
k.gradon@ucl.ac.uk

Urgent measures must be taken to protect the public and hold developers to account

The notion of generative artificial intelligence (AI) has recently dominated public discourse.1 Generative AI uses machine learning to create new data (typically text, images, audio, and video). Its models are trained on vast datasets, and unsupervised learning allows them to identify patterns and associations within the data, enabling them to generate output when prompted with a natural language description of the user’s desired outcome.2

The implications of generative AI (both positive and negative) occupy a prominent place in academic debate and have become a key topic of cross disciplinary reflection, linking areas seemingly distant from information technologies, such as medicine, security sciences, fine arts, psychology, engineering, cybersecurity, ethics, linguistics, and philosophy.3 It is difficult to find a specialty that ignores the potential impact of generative AI on the functioning of individuals and social groups, or humanity in general.4

The linked paper by Menz and colleagues (doi:10.1136/bmj-2023-078538) exemplifies an important approach to the consequences of the proliferation of generative AI, acknowledging the opportunities associated with emerging technologies while also recognising the substantial risks.5

In their study, Menz and colleagues focused on the potential of generative AI’s large language model (LLM) technology to produce high quality, persuasive disinformation that can have a profound and dangerous impact on the health decisions of a targeted audience. The authors reviewed the capabilities of the most prominent LLM based generative AI applications to generate disinformation. They described techniques (fictionalisation, role playing, and characterisation) that can circumvent the applications’ built-in safeguards, enabling the creation of highly realistic yet false and misleading content.

Additionally, the authors assessed risk mitigation mechanisms offered by the technology developers and their transparency about the possible abuse of their applications. They highlighted serious challenges related to the lack of any viable and implementable standards requiring technology developers to provide adequate safeguards to prevent their tools from being weaponised by malicious actors to produce and propagate health disinformation.

Disinformation, especially in AI enhanced form, is an increasingly pressing threat, considered detrimental to democratic societies6 and a substantial challenge to national security.7 It is seen as the leading cybersecurity hazard for businesses, governments, the media, and society as a whole.8 Likewise, the destructive properties of disinformation are evident in medicine and public health, where unverified, false, misleading, and fabricated information can severely affect the health related decisions and behaviours of patients, as acknowledged by the World Health Organization and infodemiology scholars.9

Studies indicate that disinformation spreads to users faster and has a broader and deeper influence than accurate information.10 Such content can have catastrophic consequences when targeted at vulnerable groups, such as patients with cancer who search online for a “second opinion” and fall prey to manipulation, conspiracy theories, and “alternative truths.”11 Menz and colleagues’ study will raise awareness among all relevant stakeholders about the devastating impact that generative AI enhanced medical disinformation can have on patients and their treatment choices.

Importantly, Menz and colleagues highlight another problem arising alongside the abuse of generative AI tools by malicious actors: the conspicuous lack of responsibility taken by technology developers for the potential harm caused by their products. The technology itself is “beyond good and evil,” but it always has the potential to be hijacked, recalibrated, and weaponised.12 It is the responsibility of developers and deployers to build effective safeguards into their products to prevent, prohibit, or mitigate the threats associated with misuse and malicious exploitation.13

The need for responsible and ethical implementation of generative AI, so that its potential for harm is minimised, must be recognised and acted on by the engineers of LLMs,14 and safeguards must be continually improved, especially in areas such as health information, where the consequences of abuse are greatest. One of the more disturbing aspects of Menz and colleagues’ study is the indifference, lack of transparency, and unresponsiveness of generative AI companies in dealing with the vulnerabilities of their own inventions.

The rapid advance in generative AI technologies (including the potential for deep fake impersonation in AI generated audio and video material15) requires a comprehensive approach to ensure responsible and ethical use. Stricter regulations are vital to reduce the spread of disinformation, and developers should be held accountable for underestimating the potential for malicious actors to misuse their products. Transparency must be promoted, and technological safeguards, strong safety standards, and clear communication policies developed and enforced. These measures must be informed by rapid and comprehensive discussions among lawyers, ethicists, public health experts, IT developers, and patients. Such collaborative efforts would ensure that generative AI is secure by design and help prevent the generation of disinformation, particularly in the critical domain of public health.

