Governance
The Hiroshima AI Process for Global AI Governance
- 15 Jun 2023
For Prelims: The Hiroshima AI Process, Global AI Governance, Generative AI, G-7, OECD, GPAI, IPR.
For Mains: The Hiroshima AI Process for Global AI Governance.
Why in News?
Recently, the annual G7 Summit held in Hiroshima, Japan, initiated the Hiroshima AI Process (HAP), which is likely to conclude by December 2023, signaling a significant step towards regulating Artificial Intelligence (AI).
- The G7 Leaders' Communiqué recognized the importance of inclusive AI governance and set forth a vision of trustworthy AI aligned with shared democratic values.
What is the Hiroshima AI Process?
- About:
- The HAP aims to facilitate international discussions on inclusive AI governance and interoperability to achieve a common vision and goal of trustworthy AI.
- It recognizes the growing prominence of Generative AI (GAI) across countries and sectors and emphasizes the need to address the opportunities and challenges associated with it.
- Working:
- The HAP will operate in cooperation with international organizations such as the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI).
- Objectives:
- The HAP aims to govern AI in a way that upholds democratic values, ensures fairness and accountability, promotes transparency, and prioritizes the safety of AI technologies.
- It seeks to establish procedures that encourage openness, inclusivity, and fairness in AI-related discussions and decision-making processes.
What are the Potential Challenges and Outcomes?
- The HAP faces challenges due to differing approaches among G7 countries in regulating AI risks. However, it aims to facilitate a common understanding of important regulatory issues while preventing complete discord.
- By involving multiple stakeholders, the HAP strives to find a balanced approach to AI governance that considers diverse perspectives and maintains harmony among G7 countries.
- For now, there are three ways in which the HAP can play out:
- It may enable the G7 countries to move towards divergent national regulations that are nonetheless based on shared norms, principles, and guiding values.
- It may become overwhelmed by divergent views among the G7 countries and fail to deliver any meaningful solution.
- It may deliver a mixed outcome, with some convergence on solutions to certain issues but no common ground on many others.
How can the HAP Resolve the Issue of IPR in Relation to GAI?
- Currently, there is ambiguity regarding the relationship between AI and IPR (Intellectual Property Rights), leading to conflicting interpretations and legal decisions in different jurisdictions.
- The HAP can contribute by establishing clear rules and principles regarding AI and IPR, helping the G7 countries reach a consensus on this matter.
- One specific area that can be addressed is the application of the "Fair Use" doctrine, which permits certain activities such as teaching, research, and criticism without seeking permission from the copyright owner.
- However, whether using copyrighted material in machine learning qualifies as fair use is a subject of debate.
- By developing a common guideline for G7 countries, the HAP can clarify when the use of copyrighted materials in machine learning datasets qualifies as fair use, subject to certain conditions. It can also distinguish between the use of copyrighted materials for machine learning specifically and other AI-related uses.
- Such efforts can significantly impact the global discourse and practices surrounding the intersection of AI and intellectual property rights.
How is AI Currently Governed Globally?
- India:
- NITI Aayog has issued guiding documents on AI issues, such as the National Strategy for Artificial Intelligence and the Responsible AI for All report.
- These emphasise social and economic inclusion, innovation, and trustworthiness.
- US:
- The US released a Blueprint for an AI Bill of Rights (AIBoR) in 2022, outlining the harms of AI to economic and civil rights and laying down five principles for mitigating these harms.
- Instead of a horizontal approach like the EU's, the Blueprint endorses a sector-specific approach to AI governance, with policy interventions for individual sectors such as health, labour, and education, leaving it to the respective federal agencies to draw up their own plans.
- China:
- In 2022, China came out with some of the world’s first nationally binding regulations targeting specific types of algorithms and AI.
- It enacted a law to regulate recommendation algorithms with a focus on how they disseminate information.
- EU:
- In May 2023, the European Parliament reached a Preliminary Agreement on a new draft of the Artificial Intelligence Act, which aims to regulate systems like OpenAI's ChatGPT.
- The legislation was drafted in 2021 with the aim of bringing transparency, trust, and accountability to AI and creating a framework to mitigate risks to the safety, health, Fundamental Rights, and democratic values of the EU.
Way Forward
- Non-G7 countries also have the opportunity to launch similar processes to influence global AI governance. This shows that AI governance has become a global issue, with more complexity and debates expected in the future.
- In this context, the Indian government should take proactive steps by creating an open-source AI risk profile, setting up controlled research environments for testing high-risk AI models, promoting explainable AI, defining intervention scenarios, and maintaining vigilance.
- It is important to establish a simple regulatory framework that defines AI capabilities and identifies areas prone to misuse. Prioritizing data privacy, integrity, and security while ensuring data access for businesses is crucial.
- Enforcing mandatory explainability in AI systems will enhance transparency and help businesses understand the reasoning behind decisions.
- Policymakers should strive to strike a balance between the scope of regulation and the language used, seeking input from various stakeholders, including industry experts and businesses. This way forward will contribute to effective AI regulations that address concerns and promote responsible AI deployment.
UPSC Civil Services Examination, Previous Year Questions (PYQs)
Q. With the present state of development, Artificial Intelligence can effectively do which of the following? (2020)
1. Bring down electricity consumption in industrial units
2. Create meaningful short stories and songs
3. Disease diagnosis
4. Text-to-Speech Conversion
5. Wireless transmission of electrical energy
Select the correct answer using the code given below:
(a) 1, 2, 3 and 5 only
(b) 1, 3 and 4 only
(c) 2, 4 and 5 only
(d) 1, 2, 3, 4 and 5
Ans: (b)