In recent news, the European Parliament reached a preliminary agreement on a newly drafted version of the Artificial Intelligence Act. The legislation primarily aims to regulate advanced AI systems such as OpenAI’s ChatGPT. Initially drawn up in 2021, the Act is intended to enhance transparency, build trust, and bring accountability to AI systems. It also seeks to mitigate potential risks to the safety, health, fundamental rights, and democratic values of the European Union (EU).
What is the EU’s Artificial Intelligence Act?
The Act defines AI as software that generates outputs such as content, decisions, recommendations, and predictions. It restricts the use of AI technologies categorized as high-risk, including real-time facial and biometric identification systems in public spaces, social scoring of citizens, subliminal techniques designed to influence behavior, and the technological exploitation of vulnerable people.
The Act’s Focus and Objective
The Act focuses on AI systems that could harm people’s health, safety, or fundamental rights. Before high-risk AI systems reach the market, they must undergo strict reviews to ensure transparency, explainability, and human oversight. Lower-risk systems, such as spam filters or video games, face fewer requirements. The regulation seeks to balance promoting AI uptake with mitigating or preventing the harms associated with certain uses of the technology.
The Global Importance of Regulating Artificial Intelligence
The use of artificial intelligence is growing rapidly as the technology advances, but this growth brings increased risks and uncertainties. Some AI tools, termed “black box” systems, are so complex that even their creators cannot fully explain how they work. Problems have already emerged, such as mistaken arrests caused by facial recognition software and unfair treatment resulting from biases built into AI systems.
Existing Global AI Governance
In India, the National Institution for Transforming India (NITI Aayog) has issued guidelines on AI, including the National Strategy for Artificial Intelligence and the Responsible AI for All report. The United Kingdom has adopted a light-touch approach, urging sector regulators to apply existing regulations to AI. The US released a Blueprint for an AI Bill of Rights (AIBoR), highlighting the harms of AI to economic and civil rights and outlining five principles for mitigating them. Meanwhile, China in 2022 introduced some of the world’s first nationally binding regulations targeting specific types of algorithms and AI.
Looking Ahead: The Way Forward
Regulating artificial intelligence requires an understandable regulatory framework that identifies which AI capabilities are susceptible to misuse. Policymakers should prioritize data privacy, integrity, and security while ensuring businesses can access the data they need. They should also enforce explainability to eliminate black-box approaches, promoting transparency and enabling businesses to understand the rationale behind every decision. Effective regulation requires a healthy balance between the scope of the rules and the vocabulary used, along with input from various stakeholders, including industry experts and businesses.
Artificial Intelligence in the UPSC Civil Services Examination
Previous years’ UPSC Civil Services Examination papers have featured questions on the capabilities and limitations of artificial intelligence, indicating the importance of this topic for aspiring civil servants. Questions have covered capabilities attributed to AI, such as reducing electricity consumption in industrial units, generating creative stories or songs, diagnosing diseases, converting text to speech, and wirelessly transmitting electrical energy.