In a significant move, the World Health Organization (WHO) has published a comprehensive document outlining crucial regulatory considerations for the use of artificial intelligence (AI) in healthcare. This publication underscores the necessity of ensuring the safety and effectiveness of AI systems, expediting access to these systems for those in need, and promoting dialogue among various stakeholders, including developers, regulatory bodies, manufacturers, healthcare professionals, and patients.
The rapid growth of healthcare data and advances in analytical techniques, spanning machine learning, logic-based approaches, and statistical methods, have the potential to reshape the healthcare sector through AI tools. WHO recognizes the transformative potential of AI in improving health outcomes, strengthening clinical trials, sharpening medical diagnostics and treatments, facilitating self-care, and enabling person-centered healthcare. Notably, AI could be particularly valuable in regions with limited access to medical specialists, aiding in the interpretation of medical images such as retinal scans and radiographs, among other applications.
Nonetheless, AI technologies, including large language models, are often deployed rapidly without a full understanding of their potential impacts, which can benefit or harm end-users such as healthcare professionals and patients. When AI systems use healthcare data, they may gain access to sensitive personal information, so robust legal and regulatory frameworks are vital to protect privacy, security, and data integrity. The WHO publication aims to assist in establishing and maintaining such frameworks.
Dr. Tedros Adhanom Ghebreyesus, WHO Director-General, highlights the dual nature of AI in healthcare: it holds great promise while also presenting ethical concerns, cybersecurity threats, and risks of bias and misinformation. He notes that the new guidance will support countries in regulating AI effectively, harnessing its potential for applications such as cancer treatment and tuberculosis detection while minimizing the associated risks.
To address the growing need for responsible AI health technology management, the publication outlines six key areas for the regulation of AI in healthcare:
- Transparency and Documentation: Complete documentation of the product lifecycle and development processes to foster trust.
- Risk Management: Comprehensive consideration of factors like ‘intended use,’ ‘continuous learning,’ human intervention, model training, and cybersecurity threats.
- External Validation: Externally validating data and clearly articulating AI’s intended use to help assure safety and facilitate regulation.
- Data Quality Commitment: Rigorous pre-release system evaluation to prevent the amplification of biases and errors.
- Privacy and Data Protection: Understanding the scope of jurisdiction and consent requirements of regulations such as Europe’s GDPR and the United States’ HIPAA to safeguard privacy and data protection.
- Collaboration: Encouraging cooperation between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners to ensure continued regulatory compliance throughout the lifecycle of products and services.
AI systems are intricate, relying not only on their code but also on the data they are trained on, which is often drawn from clinical settings and user interactions. A key risk addressed by these regulations is that AI systems can amplify biases present in training data, leading to inaccurate or unreliable outputs. The regulations therefore call for attributes such as gender, race, and ethnicity to be accurately reported in training data, and for datasets to be made intentionally representative.
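As a concrete illustration of what accurately reported attributes and intentionally representative datasets might look like in practice, the sketch below audits subgroup shares in a training set against reference population figures and flags under-represented or unreported groups. This is a minimal, hypothetical example: the field names, reference shares, and tolerance threshold are illustrative assumptions, not anything prescribed by the WHO document.

```python
# Hypothetical sketch: auditing the demographic make-up of a training dataset
# against reference population shares. All field names, thresholds, and the
# reference figures below are illustrative assumptions, not WHO-specified values.
from collections import Counter

def representation_report(records, attribute, reference_shares, tolerance=0.05):
    """Compare the share of each subgroup in `records` to `reference_shares`.

    records          -- list of dicts, one per training example
    attribute        -- demographic field to audit, e.g. "sex" or "ethnicity"
    reference_shares -- expected population share per subgroup (0..1)
    tolerance        -- allowed absolute deviation before a group is flagged
    """
    counts = Counter(r.get(attribute, "unreported") for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "flagged": abs(observed - expected) > tolerance,
        }
    # Missing attribute values are themselves a transparency problem worth surfacing.
    unreported = counts.get("unreported", 0) / total if total else 0.0
    report["unreported"] = {"observed": round(unreported, 3)}
    return report

if __name__ == "__main__":
    # Toy dataset: in practice this would be the model's actual training records.
    data = [{"sex": "female"}] * 30 + [{"sex": "male"}] * 65 + [{}] * 5
    print(representation_report(data, "sex", {"female": 0.5, "male": 0.5}))
```

A real audit would be run per attribute before release, with flagged groups triggering targeted data collection or re-sampling rather than a simple pass/fail.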
This comprehensive document from WHO aims to provide fundamental principles that governments and regulatory authorities can use to develop new guidelines or adapt existing ones for AI at the national or regional levels, effectively balancing the promise and potential pitfalls of AI in healthcare.