India’s AI governance guidelines have won early praise for striking a careful balance — projecting the country as a responsible, inclusive leader that prefers voluntary guardrails over heavy-handed regulation. The approach resonates across the Global South. But once the applause fades, a more difficult question emerges: is India building the foundations to create global-scale AI, or merely optimising its deployment?
Why India’s AI governance tone matters
India’s framework consciously avoids the regulatory overreach seen elsewhere. It emphasises innovation, inclusion, and scalability, signalling that AI governance need not come at the cost of growth. This posture is especially attractive to emerging economies wary of compliance-heavy regimes that privilege incumbents.
Yet governance tone alone does not build capability. The real test lies in whether policy enables India to move up the AI value chain — from applications to foundational systems.
The digital public infrastructure advantage
India’s strongest asset is its digital public infrastructure (DPI). Platforms such as Aadhaar, UPI, DigiLocker, Bhashini, and the Account Aggregator framework provide developers with ready-made rails for identity, payments, document verification, language translation, and consent-based data sharing.
No other country offers such tightly integrated, population-scale infrastructure. As a result, India is one of the best places globally to deploy AI for citizen services, fintech, health delivery, and local-language applications.
The missing depth in AI creation
Despite this deployment strength, India remains thin on foundational AI creation. The first bottleneck is legal uncertainty. India’s Copyright Act has not been updated to clarify whether publicly available data can be used for AI training. There is no explicit text-and-data mining exception.
This ambiguity discourages full-scale model training. Many developers either fine-tune foreign open models or move compute and experimentation offshore. The result is technological dependence — applications built in India, but core intelligence owned elsewhere.
Liability uncertainty and risk aversion
A second constraint is unresolved liability. If an AI system causes harm, responsibility could lie with the model developer, the deploying enterprise, or the hosting platform. India’s current guidelines acknowledge the issue but postpone resolution.
This uncertainty matters. Smaller firms and start-ups, particularly in regulated sectors like finance and healthcare, are unlikely to invest in foundational AI when legal exposure is unclear. Innovation gravitates towards safer, downstream use cases.
Research talent without research infrastructure
India has deep academic talent in AI, but limited access to reliable, high-end compute. The AIRAWAT programme is promising, but access remains opaque. Documentation is sparse, onboarding is slow, and approval-driven processes hinder rapid experimentation.
In contrast, countries racing ahead in AI are pairing talent with easily accessible public compute infrastructure, allowing universities and start-ups to train models without procedural friction.
What other countries are doing differently
Globally, AI leaders are not just regulating — they are enabling. The UAE has backed Falcon, an open-source language model, with visible state support. Singapore is shaping rules on explainability and user redress. The EU, despite stringent regulation, provides legal certainty and research carve-outs. The US continues to rely on a broad fair-use doctrine that gives developers room to experiment.
China’s DeepSeek demonstrated how quickly models can advance on cost and performance, but also how trust, openness, and governance determine global adoption. Capability without credibility does not scale.
The risk of long-term dependency
India’s current advantage lies in deployment. But the further one moves up the stack — into model training, risk calibration, and core system design — the thinner policy support becomes. Over time, this creates dependency on foreign models, foreign licensing terms, and foreign design choices.
This is not just a technology concern. It affects strategic autonomy, resilience, and competitiveness. Public services and citizen-facing systems increasingly rest on AI logic. If that logic is imported, so are its assumptions and constraints.
Targeted policy fixes, not sweeping laws
India does not need a grand new AI law to change course. What it needs are targeted interventions:
- Explicit legal clarity allowing use of publicly available data for AI training.
- Safe-harbour provisions for AI developers, similar to those for internet intermediaries.
- Transparent, low-friction access to AIRAWAT and shared compute infrastructure.
- Regulatory sandboxes for AI in finance, health, and governance.
- A lightweight certification regime for fairness, transparency, and robustness, linked to DPI use.
Each of these steps lowers risk without choking innovation.
A second chapter for India’s AI strategy
India’s AI governance framework reflects caution, balance, and inclusivity — all valuable traits. But the next phase must focus on creation, not just protection. India already has talent, data, market scale, and public infrastructure. What it lacks is policy clarity that converts these ingredients into sovereign capability.
The Global South is watching closely. If India can demonstrate that inclusive governance and production-ready AI can coexist, it will not just lead by example; it will define a model others can adopt.
What to note for Prelims?
- Digital Public Infrastructure (DPI) and its components.
- AIRAWAT initiative.
- Difference between AI deployment and AI model creation.
- Text-and-data mining exceptions.
What to note for Mains?
- Trade-offs between light-touch AI regulation and capability building.
- Role of DPI in India’s AI strategy.
- Legal and liability challenges in AI governance.
- Risks of technological dependency in foundational AI.
- Policy measures to strengthen India’s AI sovereignty.
