Artificial Intelligence (AI) has found significant applications across many sectors, including healthcare. Recently, the Indian Council of Medical Research (ICMR) published a document titled “Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare”. The guideline outlines 10 fundamental patient-centric ethical principles for the application of AI in the health sector, covering areas such as diagnosis and screening, therapeutics, preventive treatments, and more.
Principle 1: Accountability and Liability
Accountability and liability are emphasized in the management of AI systems in healthcare. Regular internal and external audits of these systems are critical to ensure they function optimally. Furthermore, these audit reports should be made publicly available.
Principle 2: Autonomy
The autonomy principle ensures human oversight of AI systems’ functioning and performance. In addition, obtaining informed consent from the patient before initiating any process is vital, and patients must be informed about potential physical, psychological, and social risks.
Principle 3: Data Privacy
Data privacy is another key consideration. AI technologies should ensure privacy and protection of personal data at every stage of development and deployment.
Principle 4: Collaboration
The collaboration principle encourages interdisciplinary and international cooperation involving different stakeholders, fostering a collaborative environment for the use of AI in healthcare.
Principle 5: Safety and Risk Minimization
Safety and risk minimization are aimed at preventing “unintended or deliberate misuse” of AI. This principle includes using anonymized data to mitigate the risk of cyber-attacks, along with a favorable benefit-risk assessment by an ethics committee.
Principle 6: Accessibility, Equity, and Inclusiveness
The principle of accessibility, equity, and inclusiveness acknowledges that AI technology’s deployment assumes widespread availability of suitable infrastructure and thus aims to bridge the digital divide.
Principle 7: Data Optimization
The effectiveness of AI technology can be hampered by poor data quality and by inappropriate or inadequate data representation, which may lead to biases, discrimination, and errors. The data optimization principle therefore emphasizes the use of high-quality, representative data.
Principle 8: Non-Discrimination and Fairness
AI technologies should be designed for universal use, avoiding biases and inaccuracies in the algorithms and ensuring quality AI solutions.
Principle 9: Trustworthiness
Clinicians and healthcare providers need a simple, systematic, and trustworthy way to test the validity and reliability of AI technologies. A trustworthy AI-based solution should also be lawful, ethical, reliable, and valid, in addition to providing accurate analysis of health data.
India’s Frameworks for Technology and Healthcare
India has multiple frameworks facilitating the integration of technological advances with healthcare. These include the Digital Health Authority, the Digital Information Security in Healthcare Act (DISHA) 2018, and the Medical Device Rules, 2017.
The Importance of an Ethically Sound Policy Framework
An ethically sound policy framework is crucial for guiding the development and application of AI technologies in healthcare, since AI itself cannot be held accountable for its decisions. As AI technologies continue to advance and become integrated into clinical decision-making, it is therefore imperative to establish processes that assign accountability in case of errors, to ensure safeguarding and protection.