The World Health Organization (WHO) has raised concerns about the potential dangers of introducing artificial intelligence (AI)-based healthcare technologies in lower-income countries.
With the explosive growth of large multi-modal models (LMMs) like ChatGPT, the agency called for ethical guidelines and international cooperation to prevent these powerful tools from exacerbating existing health inequalities.
The organization cautioned against allowing technology companies and wealthy nations alone to shape the development and deployment of AI models. Models that are not trained on data from under-resourced settings may serve those populations poorly, entrenching existing inequities and biases, the WHO said.
In a media briefing on Thursday, Alain Labrique, the WHO’s director for digital health and innovation, warned that this leap forward in technology could amplify social inequalities worldwide.
“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” Mr Labrique said.
The WHO’s guidelines, issued as advice to member states, aim to ensure that the explosive growth of LMMs promotes and protects public health rather than undermining it.
Jeremy Farrar, the WHO’s chief scientist, acknowledged the potential of generative AI technologies to improve healthcare but stressed the importance of identifying and addressing associated risks.
“Race to the bottom”
These concerns stem from the breakneck pace at which LMMs are being adopted in medicine. These AI models, capable of generating text, videos, and images, are finding uses in tasks like filling out forms, writing clinical notes, and even aiding diagnoses.
While potentially game-changing, the WHO warned of a “race to the bottom” in which companies rush to release applications, potentially compromising their efficacy and safety. The report also flags the risk of “model collapse,” a scenario in which LMMs trained on inaccurate or false information contribute to cycles of disinformation.
The WHO report additionally warns of the potential for “industrial capture” of LMM development, with major companies crowding out universities and governments in AI research.
To avoid such pitfalls, the organization emphasized the need for robust safeguards through:
- Global Cooperation: Governments from all nations must collaborate on crafting effective regulations for LMM development and use. This collaborative approach aims to prevent tech giants from wielding undue influence and to ensure equitable access to this powerful technology.
- Inclusive Development: Civil society groups and healthcare recipients deserve a seat at the table. Their participation in LMM development and oversight is crucial to prevent bias and ensure these tools truly serve the needs of all communities.
- Independent Audits: Rigorous, post-release audits by independent third parties are essential to assess data security, human rights implications, and overall effectiveness of deployed LMMs.
- Ethics Training: Embedding mandatory ethics training for LMM developers, similar to what medics receive, is crucial to instill responsible development practices and prevent unintended consequences.
- Transparency and Accountability: Early registration of algorithms, alongside the publication of negative results, can combat publication bias and hype, encouraging responsible research and development.