Artificial intelligence (AI) is one of the most powerful and transformative technologies of our time. It has the potential to improve many areas of life, including health, education, security, and the economy. But it also brings significant challenges and risks: ethical dilemmas, legal questions, social impacts, and security concerns. It is therefore essential that AI be developed and used in a way that is trustworthy, responsible, and respectful of human values.
This was the central message of the joint statement issued by the leaders of the Group of Seven (G7) on Saturday, May 20, 2023. Meeting in Hiroshima, Japan, they agreed that AI is a strategic technology that can help solve global problems and advance the United Nations Sustainable Development Goals. They also acknowledged that the rules and frameworks for governing AI have not kept pace with the technology's rapid growth.
The Need for International Technical Standards
The G7 leaders agreed on the importance of developing and adopting international technical standards for trustworthy AI. These standards should reflect a shared vision and democratic values, and should respect human rights, privacy, and autonomy. The goal is to ensure that AI systems are accurate, reliable, safe, and fair to everyone.
The leaders also expressed concern about generative AI, technology that can produce realistic content such as text, images, audio, and video. They stressed the need to understand this fast-moving field and its potential effects on society, pointing to ChatGPT, a chatbot that uses generative AI to hold realistic conversations with users. In March 2023, Elon Musk and a group of AI experts published an open letter warning of the risks posed by more advanced generative AI systems, including disinformation, manipulation, and cyberattacks.
To address these concerns, the G7 leaders announced the creation of a ministerial forum, the "Hiroshima AI Process," before the end of the year to discuss issues related to generative AI. They also plan to work with industry, academia, civil society, and international organizations to develop solutions.
The Different Approaches and Perspectives
The statement also reflects the differing approaches advanced economies take to regulating AI. The European Union (EU), which participates in the G7 as a guest, has led the way in proposing binding legislation. Its draft AI Act, first proposed in April 2021, aims to create rules for trustworthy, human-centric AI in Europe. The Act would impose strict requirements on high-risk applications such as facial recognition and biometric identification, mandate transparency and accountability for all AI systems, and set penalties for non-compliance.
The United States (US), by contrast, has taken a more cautious and flexible approach. President Joe Biden has not yet announced a definitive policy or strategy on AI regulation, saying it remains to be seen whether AI is dangerous, while supporting investment in AI research and development to compete with China. Some US lawmakers and experts have called for stronger oversight, particularly of generative AI. Sam Altman, CEO of OpenAI, the organization behind ChatGPT, suggested that the US should consider licensing and testing requirements for the development of AI models.
Japan, host of this year's G7 summit, has been more openly supportive of AI development and adoption, pledging to promote the use of AI in the public sector and industry while monitoring its risks. Prime Minister Fumio Kishida has emphasized addressing both the potential and the risks of the technology, and Japan has launched initiatives to promote international cooperation on AI governance.
The Contrast with China’s Policy
These Western approaches contrast with China's. China, which is not a G7 member, is pursuing an ambitious plan to become a global leader in AI, investing heavily in research and development and deploying the technology in areas such as surveillance, the military, and education. It has also been accused of using AI to violate human rights, notably against the Uyghur minority in Xinjiang.
In April 2023, China's cyberspace regulator proposed draft rules to control generative AI services and ensure their outputs align with the country's socialist values. Providers would need to obtain licenses and complete security assessments before launching generative AI services, and would be barred from using the technology to produce illegal, harmful, or misleading content. Users, in turn, would have to register under their real identities.
Although the G7 leaders did not mention China directly, their statement implied a shared intent to counter China's influence and challenge in AI: they pledged to promote a fair and open digital economy for AI development, support trustworthy AI in lower-income countries, and deepen cooperation with other regions and countries.
Advancing Trustworthy AI: Key Outcomes of the G7 Summit
The G7 summit in Hiroshima was an important step toward a shared understanding and vision for trustworthy AI among the world's leading democracies. The leaders agreed on the need to create and follow international technical standards, discussed the opportunities and challenges of generative AI, acknowledged that their regulatory approaches differ from one another's and from China's, and announced a ministerial forum to continue the discussion with other stakeholders.
The summit also underscored the importance and urgency of ensuring that AI is developed and used in ways that benefit humanity and society. As the technology becomes more advanced and widespread, it must follow ethical principles and values, respect human rights and dignity, and be transparent, accountable, reliable, and safe, causing no harm or discrimination. These are the key attributes of trustworthy AI that the G7 leaders endorsed.
Trustworthy AI is not only a technical or regulatory matter; it is also political and social. Achieving it requires cooperation and dialogue among governments, businesses, universities, non-profit organizations, and international bodies, along with public awareness and education and the involvement and empowerment of users. Trustworthy AI is a shared responsibility and a common goal that we should all work toward.