Safe use of large language models in medicine

Creating and complying with clear legal requirements

23-Aug-2024

In "The Lancet Digital Health", researchers led by Professors Stephen Gilbert and Jakob N. Kather from the Else Kröner Fresenius Center for Digital Health at Technische Universität Dresden provide an overview of the strengths and weaknesses as well as the regulatory challenges of current healthcare applications based on large language models (LLMs). They call for a framework that does justice to the capabilities and limitations of these AI applications and emphasize the urgent need to enforce existing regulations. A continued hesitant approach on the part of the authorities not only endangers users, but also the potential of future LLM applications in medicine.

Opportunities and risks of LLM-based healthcare applications

Applications based on artificial intelligence (AI) have become an indispensable part of medicine. Large language models such as GPT-4 offer a wide range of support options in the diagnosis, care and support of patients. At the same time, there are concerns about their safety and regulation. Their results often vary, are difficult to interpret and carry the risk of fabricated statements (hallucinations). The approval of LLM-based applications as medical devices under US and EU law therefore poses challenges for the relevant authorities. Despite the risks of misdiagnosis or unverified medical advice, such applications are already available on the market. Since the introduction of large language models, developers such as Google, Meta, OpenAI and Microsoft have continuously improved them and introduced new models, and their performance in medical applications is improving as well.

"LLMs have the potential to transform healthcare and are becoming increasingly important in the areas of cancer diagnosis and treatment, as well as screening, remote care, documentation and medical decision support. They offer great potential, but also harbour risks," says Prof. Dr. med. Jakob N. Kather, Professor of Clinical Artificial Intelligence at the EKFZ for Digital Health at TU Dresden and oncologist at the Carl Gustav Carus University Hospital in Dresden. Intensive research is still being carried out to determine whether the advantages or disadvantages outweigh the disadvantages in medical applications. Aside from the many possibilities and opportunities, the researchers clearly point out in their publication that there are still legal and ethical issues, particularly with regard to data protection, the protection of intellectual property and the problem of gender and racial prejudice.

Approval as medical devices required

As soon as applications offer laypersons medical advice on the diagnosis or treatment of illnesses, they are considered medical devices under EU and US law and as such require corresponding approval. While compliance with these regulations was relatively straightforward for previous, narrowly defined applications, the versatility of LLMs poses regulatory challenges for the authorities. Despite these legal ambiguities, such applications are available on the market unregulated and without approval from the competent authorities. The researchers make it clear that the authorities have a duty to enforce the applicable rules. At the same time, they should ensure that appropriate frameworks are developed for testing and regulating health applications based on large language models.

"These technologies already exist on the market and we need to consider two fundamental things to ensure the safe development of such applications. Firstly, appropriate methods are needed to evaluate these new technologies. Secondly, the applicable legal requirements must be enforced against unsafe LLM applications on the market. This is essential if we want to use these medical AI applications safely in the future," says Prof. Dr. Stephen Gilbert, Professor of Medical Device Regulatory Science at the EKFZ for Digital Health at TU Dresden.

Original publication

Oscar Freyer, Isabella Catharina Wiest, Jakob Nikolas Kather, Stephen Gilbert: "A future role for health applications of large language models depends on regulators enforcing safety standards"; The Lancet Digital Health, 2024.
