ICMR Release Guidelines for AI Use in the Health Sector | 23 Mar 2023
For Prelims: ICMR, Artificial Intelligence.
For Mains: Ethical Guidelines for Use of AI in the Healthcare Sector, Challenges with use of AI in Healthcare.
Why in News?
Recently, the Indian Council of Medical Research (ICMR) issued a guiding document, “The Ethical Guidelines for Application of AI in Biomedical Research and Healthcare”, which outlines 10 key patient-centric ethical principles for Artificial Intelligence (AI) application in the health sector.
- Diagnosis and screening, therapeutics, preventive treatments, clinical decision-making, public health surveillance, complex data analysis, predicting disease outcomes, behavioral and mental healthcare and health management systems are among the recognized applications of AI in healthcare.
What are the 10 Guiding Principles?
- Accountability and Liability Principle: It underlines the importance of regular internal and external audits to ensure optimum functioning of AI systems, the results of which must be made available to the public.
- Autonomy Principle: It ensures human oversight of the functioning and performance of the AI system. Before initiating any process, it is also critical to obtain the consent of the patient, who must be informed of the physical, psychological and social risks involved.
- Data Privacy Principle: It mandates that AI-based technology should ensure privacy and personal data protection at all stages of development and deployment.
- Collaboration Principle: This principle encourages interdisciplinary, international collaboration and assistance involving different stakeholders.
- Safety and Risk Minimization Principle: This principle aims at preventing “unintended or deliberate misuse”, requires anonymized data delinked from global technology to avoid cyber-attacks, and calls for a favorable benefit-risk assessment by an ethical committee, among a host of other safeguards.
- Accessibility, Equity and Inclusiveness Principle: This principle acknowledges that the deployment of AI technology assumes widespread availability of appropriate infrastructure and thus aims to bridge the digital divide.
- Data Optimization Principle: It recognizes that poor data quality and inappropriate or inadequate data representation may lead to biases, discrimination, errors and suboptimal functioning of the AI technology.
- Non-Discrimination and Fairness Principle: To avoid biases and inaccuracies in algorithms and to ensure quality, AI technologies should be designed for universal usage.
- Trustworthiness Principle: To use AI effectively, clinicians and healthcare providers need a simple, systematic and trustworthy way to test the validity and reliability of AI technologies. In addition to providing accurate analysis of health data, a trustworthy AI-based solution should also be lawful, ethical, reliable and valid.
Note: India has a host of frameworks that marry technological advances with healthcare. These include the Digital Health Authority for leveraging Digital Health Technologies under the National Health Policy (2017), the Digital Information Security in Healthcare Act (DISHA) 2018 and the Medical Device Rules, 2017.
Conclusion:
AI cannot be held accountable for the decisions it makes, so an ethically sound policy framework is essential to guide the development of AI technologies and their application in healthcare. Further, as AI technologies are increasingly applied in clinical decision-making, it is important to have processes that determine accountability in case of errors, so as to safeguard and protect patients.