The emergence of large language models (LLMs) like ChatGPT and Google’s Med-PaLM presages a monumental artificial intelligence revolution in health care. By generating notes, filling out forms, aiding diagnoses and more, they could greatly assist physicians and transform patient experiences. However, experts warn that reliance on secretive, proprietary corporate models risks severely undercutting medical accountability, privacy and safety. They advocate instead for collaborative, open consortium development of transparent AI tailored for health.
Public Excitement Over ChatGPT’s Potential
The November 30, 2022 public release of ChatGPT by leading AI lab OpenAI sparked tremendous excitement about integrating similar natural language processing technology into medicine. Doctors began imagining how they could use ChatGPT and other LLMs to boost the efficiency of clinical workflows. Patients became hopeful that conditions could be diagnosed more quickly.
Behind the scenes, Big Tech and health systems started making big moves to capitalize on the hype:
- Microsoft began collaborating with electronic health records provider Epic to integrate LLMs into workflows at University of California San Diego Health and Stanford Medicine. This allows nuanced study of implementation challenges before wider rollout.
- Google announced extensive partnerships with prestigious institutions like the Mayo Clinic to develop its Med-PaLM model.
- Amazon Web Services launched its HealthScribe clinical documentation tool to automatically generate charts from patient encounters.
- Promising health AI start-up Hippocratic AI raised $50 million to further develop an LLM specifically for medicine.
Concerns Around Reliance on Closed Models
However, various experts have sounded alarms about health systems and government regulators embracing proprietary models like GPT-3.5 and GPT-4, which underlie ChatGPT, without rigorous scrutiny. Handing unchecked power over medicine to opaque corporate interests could severely undermine patient privacy, equity and care.
For one, services could abruptly stop if deemed unprofitable, jeopardizing treatment continuity. The collapse of once-hyped Babylon Health, which filed for bankruptcy after being valued in the billions, illustrates the risks of prioritizing profits over patient needs with AI. Strict corporate secrecy also prevents proper external evaluation of critical factors like model safety, accuracy and real-world usefulness.
Additional concerns abound about medical AI. LLMs’ tendency to produce false but convincing outputs, known as “hallucinations,” remains poorly understood. Potentially expensive retraining is required when foundational knowledge becomes outdated as scientific understanding evolves. Patient privacy violations may result if models reconstitute and leak sensitive data from their training sets. Discriminatory biases around gender, race and socioeconomic status also require vigilant safeguarding.
A Vision for Open Consortium Models
To truly enhance medicine, AI systems need grounding in ethical patient-centered values, not commercial incentives. That requires transparency.
As such, experts advocate that health institutions worldwide band together in an open consortium for developing medical LLMs. Resources could be jointly pooled to create base models trained on publicly available data. Members could then privately fine-tune these models with their own patient information to create customized, locally optimized LLMs aligned with region-specific regulations.
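To make the split between shared pretraining and private adaptation concrete, the sketch below shows one common way an institution might locally fine-tune a consortium base model with parameter-efficient adapters (LoRA), using the Hugging Face transformers, peft and datasets libraries. The model name, data file and hyperparameters are hypothetical placeholders; the consortium proposal does not prescribe any specific toolchain.

```python
# Illustrative sketch: local fine-tuning of a shared base model with LoRA.
# BASE_MODEL and LOCAL_DATA are hypothetical placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "consortium/medical-base-llm"      # hypothetical shared base model
LOCAL_DATA = "local_deidentified_notes.jsonl"   # hypothetical on-premise dataset

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Attach small LoRA adapter weights so only a fraction of parameters are
# trained locally; the shared base model itself stays frozen.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_config)

# Institution-private records never leave the local environment.
dataset = load_dataset("json", data_files=LOCAL_DATA, split="train")

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="local-medical-adapter",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Only the small adapter is saved; it can be audited or withdrawn
# without touching the shared base model.
model.save_pretrained("local-medical-adapter")
```

Keeping the adaptation in a small adapter rather than retraining the whole model is one plausible way to reconcile shared infrastructure with local data control, though the consortium could equally choose full fine-tuning or other methods.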
If initiated, involved institutions like Britain’s National Health Service and hospital networks in Europe and the United States could remain firmly at the helm of medical innovation rather than ceding control to Big Tech firms.
Benefits of a Collective Open Approach
This collaborative framework for medical AI confers multiple important advantages:
- Uniform testing of model versions across consortium partners enables robust joint evaluation of factors like reliability and real-world effectiveness that would be difficult for any single company to manage alone.
- Local data control makes adhering to country and institutional privacy, consent and other regulations easier compared to centralized models. Access could be revoked immediately if issues emerge.
- Efficient integration with widely used electronic records systems like Epic and Cerner is far simpler without needing to build proprietary pipelines.
- Collective learning about best practices prevents institutions from individually duplicating work on challenges such as optimizing model performance and designing user interfaces.
Addressing Key Questions
Admittedly, doubts exist about whether enough computational resources are available outside the tech giants to compete. But these experts highlight how health systems actually hold more valuable assets overall: troves of real-world patient data well suited for training medical AI. International collaboration could leverage this strength.
However, strict governance is imperative to uphold ethics and privacy. Binding legal guidelines must govern allowable data usage and prevent leakage or misuse. Careful review processes prior to deployment can catch performance issues or demographic biases, and monitoring must continue post-deployment to catch emerging problems in real time.
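As one small illustration of what a pre-deployment review might include, the sketch below compares a model’s accuracy across demographic groups in a labeled evaluation set and flags groups falling well below the overall average. The file name, column names and threshold are hypothetical; real review processes would be far broader than a single accuracy breakdown.

```python
# Illustrative pre-deployment check: accuracy broken down by demographic group.
# File name, column names and the gap threshold are hypothetical placeholders.
import pandas as pd

GAP_THRESHOLD = 0.05  # flag groups more than 5 percentage points below overall

# Each row: the model's answer, the reference answer, and patient demographics.
eval_df = pd.read_csv("evaluation_results.csv")
eval_df["correct"] = eval_df["model_answer"] == eval_df["reference_answer"]

overall_accuracy = eval_df["correct"].mean()
print(f"Overall accuracy: {overall_accuracy:.3f}")

for attribute in ["sex", "ethnicity", "insurance_status"]:
    by_group = eval_df.groupby(attribute)["correct"].agg(["mean", "count"])
    for group, row in by_group.iterrows():
        gap = overall_accuracy - row["mean"]
        flag = "  <-- review before deployment" if gap > GAP_THRESHOLD else ""
        print(f"{attribute}={group}: accuracy {row['mean']:.3f} "
              f"(n={int(row['count'])}){flag}")
```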
Various data-sharing efforts provide helpful templates for unifying access while preventing abuse, including the UK Biobank and the de-identified MIMIC ICU database. With time and trial-and-error experience, best practices will emerge.
Next Steps Toward Ethical Open AI
Rather than immediately embracing secretive proprietary AI, experts strongly advise that health institutions pool resources to jointly develop transparent models tailored for medicine. Strict open consortium governance can harness AI’s tremendous potential for improving care while upholding ethics. Prioritizing profits over patients risks jeopardizing progress; with collaboration, AI could truly elevate medicine’s capacity to help people.