India’s AI Regulation Dilemma

As Artificial Intelligence systems rapidly move from niche tools to mass consumer products, governments are grappling with a central question: how far should the state go in protecting citizens from AI-driven harms without stifling innovation? India’s current approach relies on adapting existing digital, financial and data protection laws. Recent moves by China, however, have exposed a regulatory gap in India’s framework — especially around consumer safety and psychological harms.

How India Currently Regulates AI

India does not yet have a standalone AI law. Instead, it regulates AI indirectly through a patchwork of existing regimes. The Ministry of Electronics and Information Technology (MeitY) uses the IT Act and IT Rules to impose due diligence obligations on digital platforms, including curbs on deepfakes and fraud and mandatory labelling of “synthetically generated” content.

In parallel, sectoral regulators have stepped in where risks are clearer. The Reserve Bank of India (RBI) has issued supervisory expectations on model risk management in credit and developed the FREE-AI (Framework for Responsible and Ethical Enablement of Artificial Intelligence) framework to govern responsible AI use in finance. The Securities and Exchange Board of India (SEBI) has pushed regulated entities to clarify accountability when deploying AI tools. Privacy and data protection rules add another layer, focusing on data misuse rather than product safety.

How China’s Draft Rules Change the Conversation

China’s latest draft rules go further by articulating a consumer safety regime rooted in the state’s duty of care. These rules target emotionally interactive AI services and would require companies to warn users against excessive use and to intervene when their systems detect signs of extreme emotional states.

The logic is that psychological dependence and emotional manipulation are harms that general content regulation does not capture. However, the approach is also intrusive: asking providers to identify users’ emotional states risks incentivising deeper, more intimate monitoring of individuals. This trade-off between protection and surveillance is at the heart of the debate.

India’s Incomplete but Less Intrusive Posture

Compared to China, India’s regulatory stance is less invasive but also less complete. By banking on existing laws, India regulates adjacent risks — fraud, misinformation, financial instability — without explicitly defining a duty of care for AI product safety. Psychological harms, addictive design, and emotional manipulation remain largely unaddressed in regulatory terms.

Moreover, MeitY’s interventions have been largely reactive, responding to visible harms like deepfakes rather than proactively shaping how AI systems should behave in sensitive contexts.

The Frontier Model Gap and the ‘Regulate First’ Risk

India hosts a vast ecosystem of AI adopters but remains far behind the U.S. and China in building frontier AI models. In this context, a “regulate first, build later” strategy carries risks. Overly prescriptive rules could entrench dependence on foreign models while domestic capability remains underdeveloped.

Unlike jurisdictions with strong frontier model capacity, India cannot assume that global technology trajectories will adjust to its regulatory preferences. This makes sequencing crucial: building capacity and governing use must proceed together.

Building Capability Without Paralysis

On the supply side, India’s priorities are relatively clear. Improving access to computational resources, upskilling the workforce, expanding public procurement of AI solutions, and translating research into industrial applications are essential steps. Equally important is avoiding paralysis by consensus — prolonged, over-inclusive rulemaking that delays action and deepens dependence on external technologies.

Regulating Use Without Choking Innovation

On the demand side, India can regulate downstream deployment more assertively without micromanaging upstream model development. Instead of requiring companies to monitor users’ emotions, regulators could impose obligations on firms deploying AI in high-risk contexts — healthcare, finance, education, or mental health.

Such obligations could include incident reporting, risk assessments, and clear response protocols when AI systems behave unpredictably or cause harm. These can be layered onto existing consumer protection and privacy frameworks, preserving innovation while clarifying accountability.

What to Note for Prelims?

  • India has no standalone AI law; regulation is indirect and sectoral.
  • MeitY uses IT Rules to address deepfakes and synthetic content.
  • RBI’s FREE-AI framework governs AI use in finance.
  • China’s draft AI rules target emotionally interactive services.

What to Note for Mains?

  • Critically examine India’s reliance on existing laws for AI governance.
  • Discuss the trade-offs between consumer protection and surveillance in AI regulation.
  • Analyse the risks of regulating AI before building domestic frontier capability.
  • Suggest a balanced framework focusing on downstream accountability rather than upstream control.

The Strategic Choice Before India

India’s challenge is not to replicate China’s intrusive safeguards or the U.S.’s market-led minimalism, but to chart a path that reflects its capabilities and constraints. By strengthening domestic AI capacity while setting clear rules for high-risk use cases, India can govern how AI is used within its markets — without assuming that global technology will slow down or realign on its behalf.
