Artificial Intelligence (AI) remains the most transformative technology shaping the world in 2025. Its rapid growth offers vast benefits but also presents risks. Governments and organisations worldwide are grappling with how to regulate AI effectively to balance innovation and safety.
Recent Developments in AI Risks
AI systems have caused serious harms, ranging from financial market instability to physical violence. In 2010, automated trading algorithms triggered a "flash crash" that temporarily wiped out nearly $1 trillion in market value within minutes. Autonomous AI-powered drones have reportedly carried out lethal attacks without human oversight, and AI-driven targeting in conflict zones has raised ethical and legal concerns over civilian casualties. Biases embedded in AI systems also perpetuate social inequalities, as seen in judicial risk-assessment and hiring tools that discriminate against minorities and women.
Impact of AI Bias and Corporate Control
AI often inherits biases from its creators and training data, amplifying prejudice under a guise of objectivity. Examples include unfair sentencing algorithms in US courts and discriminatory credit card limits. A handful of large US tech companies dominate AI resources, creating geopolitical risks and widening the digital divide. Facebook's algorithm played a role in inciting violence against the Rohingya minority by promoting hateful content for engagement and profit, underscoring the ethical pitfalls of profit-driven AI.
EU Artificial Intelligence Act
The EU leads global AI regulation with its Artificial Intelligence Act, which entered into force in 2024 and applies progressively through 2026. It adopts a risk-based approach, banning applications deemed to pose unacceptable risk, such as manipulative social scoring and certain forms of biometric surveillance. High-risk AI systems require continuous monitoring and human oversight, while generative AI systems like ChatGPT must disclose AI-generated content and prevent harmful outputs. The Act also mandates transparency and non-discrimination, influencing AI governance well beyond Europe. The European Artificial Intelligence Board oversees consistent enforcement across member states.
Global Regulatory Responses
The US favours a lighter regulatory touch, focusing on safety standards. In 2025, the Trump administration reversed previous AI policies and criticised the EU Act as harmful to innovation and trade. China unveiled a Global AI Governance Action Plan promoting international cooperation, though its implementation remains opaque. Brazil is preparing its own risk-based AI law. India has no dedicated AI law but has issued strategic frameworks and launched initiatives such as the IndiaAI Safety Institute. The Government of India also released a draft AI Regulation Bill in 2025, mirroring EU principles while aiming for clearer accountability and compensation mechanisms.
Challenges and Future Directions
Current AI regulations face criticism for either stifling innovation or lacking enforcement clarity. The EU's proposed AI Liability Directive, intended to complement the Act, was withdrawn, leaving gaps in victim compensation. India has an opportunity to build a more balanced system with clear liability rules, context-driven risk assessments, and a national AI governance body. Aligning copyright and data-use laws with AI regulation is equally essential, and global cooperation remains crucial to address AI's transnational impacts while respecting national priorities.
Questions for UPSC:
- Critically analyse the risks and benefits of Artificial Intelligence in modern society with suitable examples.
- Explain the significance of the European Union Artificial Intelligence Act and its impact on global AI governance.
- What are the challenges in regulating emerging technologies like AI? How can India balance innovation and ethical concerns in its AI policies?
- Examine the role of corporate control in shaping AI development and discuss its geopolitical implications with reference to the digital divide.
