In an era when generative AI is creating both positive and negative impacts, using AI ethically is crucial. Accordingly, the World Health Organization (WHO) has issued comprehensive guidance on the ethics and governance of large multi-modal models (LMMs), the technology at the core of generative AI, with a particular focus on health.
LMMs can accept diverse data inputs, such as text, video, and images, and generate correspondingly varied outputs. Generative AI chatbots built on this technology, such as ChatGPT and Bard, have seen rapid and widespread adoption.
While acknowledging the potential benefits of LMMs in areas such as diagnosis, patient guidance, administrative tasks, medical education, and scientific research, the WHO underscores associated risks, including the potential for generating false or biased information that could adversely impact health decision-making.
The guidance highlights five key applications of LMMs in healthcare, ranging from clinical care to scientific research.
“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Dr. Jeremy Farrar, WHO Chief Scientist. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”
LMMs can be used, first, for diagnosis and responding to patient queries in clinical settings; second, for patient-led exploration of symptoms and treatment options; third, in administrative roles, such as documenting and summarizing patient visits in electronic health records; fourth, in medical and nursing education, offering trainees simulated patient encounters; and fifth, in scientific research and drug development, including the identification of new compounds.
However, the guidance also documents significant risks related to data quality and bias, potential misinformation, and broader health-system challenges. LMMs may foster "automation bias" among healthcare professionals, potentially leading to overlooked errors or the improper delegation of critical decisions to AI systems. The WHO calls for collaboration among governments, tech companies, healthcare providers, patients, and civil society at all stages of LMM development and deployment.
The key recommendations to tackle these issues include governments investing in public infrastructure for LMM development, employing laws and regulations to ensure ethical AI use in healthcare, assigning regulatory agencies for LMM assessment, and implementing post-release auditing by third parties.
Developers, in turn, are urged to engage diverse stakeholders in the design process, ensure LMMs perform defined tasks accurately, and anticipate potential secondary outcomes so that AI is applied responsibly in healthcare.