The Ministry of Electronics and Information Technology (MeitY) in India released new governance guidelines for Artificial Intelligence (AI) in 2025. These guidelines aim to balance innovation with accountability and safety, promoting AI growth while managing risks specific to India's context. They come ahead of the India–AI Impact Summit 2026, the first global AI summit to be held in the Global South.
Key Principles and Approach
The guidelines emphasise the principle of 'Do No Harm'. They promote innovation sandboxes and adaptive risk mitigation. India plans an agile, sector-specific regulatory approach. Existing laws such as the IT Act and the Digital Personal Data Protection Act will be updated rather than replaced. The government rejects the need for an immediate standalone AI law, focusing instead on targeted amendments and voluntary frameworks.
Infrastructure and Capacity Building
Expanding access to data and computing resources is vital. Subsidised GPUs and India-specific datasets will be made available through platforms like AIKosh. Integration of AI with Digital Public Infrastructure such as Aadhaar and Unified Payments Interface is encouraged. The government plans incentives like tax rebates and AI-linked loans to support MSMEs. AI literacy and training will be scaled up for citizens, public servants, and law enforcement to build technical capacity nationwide.
Risk Mitigation and Accountability
An India-specific risk assessment framework will address local realities. Techno-legal measures, such as embedding privacy and fairness rules into system design, are recommended. A graded liability regime will assign responsibility based on function and risk levels. Organisations must implement grievance redressal systems, transparency reporting, and self-certification mechanisms. These steps aim to secure AI's benefits while managing its harms.
Content Authentication and Legal Amendments
The guidelines underscore the need for content authentication to combat deepfakes and synthetic media. Proposed amendments to the IT Rules require social media platforms to label AI-generated content clearly. Platforms must verify user declarations about synthetic content using automated tools. Failure to comply may result in loss of legal immunity for third-party content. This aims to curb misinformation and protect users.
Institutional Framework
A whole-of-government approach will govern AI. The AI Governance Group (AIGG) will lead policy coordination. It will be supported by the Technology & Policy Expert Committee (TPEC) and the AI Safety Institute (AISI). These bodies will oversee implementation, monitor risks, and advise on emerging challenges.
Concerns Over Official Use of AI
Internal debates focus on risks arising when government officials use AI tools. Issues include data privacy, inference risks, and exposure of strategic information. Questions arise over AI tools analysing sensitive queries from bureaucrats and policymakers. There are also concerns about dependence on foreign AI platforms and protecting official systems from external data exploitation.
Guideline Development Process
The guidelines were drafted by a high-level committee chaired by Prof. Balaraman Ravindran of IIT Madras. The process included extensive consultations and public feedback. A second committee refined the report to keep pace with fast-evolving AI technologies.
Questions for UPSC:
- Critically discuss the challenges and opportunities in regulating emerging technologies like Artificial Intelligence in India.
- Analyse the role of Digital Public Infrastructure in promoting inclusive technological growth and its impact on socio-economic development.
- Examine the ethical and legal implications of synthetic media proliferation and the measures to counter misinformation in digital platforms.
- Evaluate the risks and benefits of government use of Artificial Intelligence systems in public administration, and suggest safeguards to protect data privacy and national security.
Answer Hints:
1. Critically discuss the challenges and opportunities in regulating emerging technologies like Artificial Intelligence in India.
- Balancing innovation with accountability – encouraging growth while managing risks.
- Use of existing laws (IT Act, Digital Personal Data Protection Act) with targeted amendments instead of standalone AI law.
- Sector-specific, agile regulation to adapt to diverse AI applications and rapid tech evolution.
- Challenges include content authentication, deepfakes, data privacy, and liability assignment.
- Opportunities in promoting AI-driven economic growth, MSME adoption, and global leadership (India–AI Impact Summit 2026).
- Inclusion of voluntary frameworks, techno-legal measures, and risk assessment tailored to local context.
2. Analyse the role of Digital Public Infrastructure in promoting inclusive technological growth and its impact on socio-economic development.
- Integration of AI with DPI like Aadhaar and UPI enhances accessibility and efficiency.
- DPI provides foundational platforms enabling MSMEs and citizens to leverage AI tools.
- Subsidised compute resources and India-specific datasets via platforms like AIKosh support innovation.
- Incentives such as tax rebates and AI-linked loans encourage private sector adoption.
- Capacity building and AI literacy programs ensure wider participation across urban and rural areas.
- Overall, DPI drives inclusive growth by bridging digital divides and boosting economic participation.
3. Examine the ethical and legal implications of synthetic media proliferation and the measures to counter misinformation in digital platforms.
- Synthetic media (deepfakes, AI-generated content) risks misinformation, erosion of trust, and manipulation.
- Ethical concerns include privacy violations, consent, and potential harm to individuals and society.
- Legal measures proposed include mandatory AI content labelling on social media platforms.
- Platforms must verify user declarations with automated tools and display clear labels on synthetic content.
- Failure to comply risks losing legal immunity for third-party content, increasing platform accountability.
- International cooperation and techno-legal frameworks are crucial for effective mitigation.
4. Evaluate the risks and benefits of government use of Artificial Intelligence systems in public administration, and suggest safeguards to protect data privacy and national security.
- Benefits – improved efficiency, data-driven policymaking, optimized resource allocation (e.g., CCTV management).
- Risks – data privacy breaches, inference of sensitive strategic information, and potential misuse of official data.
- Concerns over reliance on foreign AI platforms risking data sovereignty and exposure of confidential information.
- Safeguards include strict data governance, anonymization, access controls, and use of indigenous AI solutions.
- Implementation of risk assessment frameworks and continuous monitoring to detect misuse or leaks.
- Development of internal AI policies and training for officials to ensure responsible use.
