Artificial Intelligence (AI) has increasingly become a central topic in global governance discussions. A notable development in this realm is the launch of the Hiroshima AI Process (HAP), initiated at the G7 Summit held in Hiroshima, Japan. Slated for completion in December 2023, the HAP represents a significant stride towards regulating AI and establishing inclusive and democratic AI governance. The G7 Leaders’ Communiqué underscored the importance of trustworthy AI that aligns with shared democratic principles.
About the Hiroshima AI Process
The HAP’s primary objective is to foster international dialogue on inclusive AI governance and interoperability, with the aim of arriving at a shared vision of trustworthy AI. The HAP recognizes the increasing prominence of Generative AI (GAI) across countries and sectors and stresses the need to explore the opportunities and challenges it brings.
In its operation, the HAP collaborates with global organizations such as the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI). The HAP’s objective is to govern AI in a way that upholds democratic values, ensures fairness and accountability, promotes transparency, and prioritizes safety. It aims to establish procedures that encourage open, inclusive, and fair discussion and decision-making on AI.
Potential Challenges and Outcomes of the Hiroshima AI Process
The HAP faces hurdles because G7 countries have adopted different approaches to regulating AI risks. It aims to build a common understanding on crucial regulatory issues while averting outright discord. By engaging multiple stakeholders, the HAP endeavors to formulate a balanced approach to AI governance, taking into account diverse perspectives and maintaining harmony among G7 nations.
The HAP’s future could pan out in three ways: the G7 countries could move towards divergent regulations grounded in shared norms, principles, and values; the process could succumb to differing views among the G7 nations and fail to deliver meaningful solutions; or it could produce a mixed outcome, resolving some issues while falling short on others.
Resolving Intellectual Property Rights Issues with GAI Through the Hiroshima AI Process
Presently, the relationship between AI and Intellectual Property Rights (IPR) is nebulous, leading to conflicting interpretations and legal decisions across jurisdictions. The HAP could help establish clear rules and principles regarding AI and IPR, assisting the G7 countries in reaching a consensus. One prominent issue it could address is the application of the “Fair Use” doctrine, which permits certain uses of copyrighted material, such as teaching, research, and criticism, without the copyright owner’s permission.
Current Global AI Governance
AI governance varies greatly across countries. India has issued policy documents such as the National Strategy for Artificial Intelligence and the Responsible AI for All report, focusing on social and economic inclusion, innovation, and trustworthiness. The US released a Blueprint for an AI Bill of Rights in 2022, outlining the harms of AI and principles for mitigating them. In 2022, China introduced some of the world’s first nationally binding regulations targeting specific types of algorithms and AI. The EU reached a preliminary agreement on a new draft of the Artificial Intelligence Act in May 2023, with the aim of bringing transparency, trust, and accountability to AI.
The Way Forward
Non-G7 nations also have opportunities to initiate similar processes and influence global AI governance. Governments should take proactive steps to create open-source AI risk profiles, set up controlled research environments for testing high-risk AI models, promote explainable AI, define intervention scenarios, and maintain vigilance. It is crucial to establish a simple regulatory framework that defines AI’s capabilities, identifies potential areas for misuse, and ensures data privacy, integrity, and security while guaranteeing data access for businesses. Policymakers should strive to balance the scope of regulation with understandable language, seeking input from various stakeholders. This approach will contribute to effective AI regulations that promote responsible AI deployment.