The Group of Seven (G7) countries are close to agreeing on a code of conduct for artificial intelligence (AI) developers. The code comprises 11 principles designed to promote the worldwide use of safe, secure and trustworthy AI. It addresses the potential risks associated with AI and proposes measures to mitigate them, providing voluntary guidance for organizations working on advanced AI systems.
Code of conduct objectives
The code proposes that companies publicly disclose information about the capabilities, limitations, appropriate uses and potential misuses of their AI systems. This year’s G7 summit took place in Hiroshima, Japan, in May. It addressed topics such as emerging technologies, digital infrastructure and responsible AI governance.
The European Union is pursuing its own framework with the Artificial Intelligence Act (AI Act), on which the European Parliament adopted its negotiating position in June. In addition, OpenAI has announced its intention to create a team dedicated to assessing the risks associated with advanced AI models.
Main points of the code of conduct
The code of conduct comprises 11 principles; its key commitments include the following:
- Security: developers need to ensure that their AI systems are safe and protected against attacks, errors and other security issues.
- Transparency: companies should be transparent about the technologies they use, as well as their internal data and processes.
- Ethics: the use of AI must comply with strict ethical standards; in particular, it must not violate human rights.
- Impartiality: AI designers must ensure that their systems are fair and impartial, avoiding possible bias and discrimination.
- Human control: users must retain a degree of control over the decisions made by AI systems. They must be able to intervene if necessary.
- Responsibility: developers and companies must take responsibility for the consequences of using their AI systems.
Roles of the players involved
Within the framework of the code of conduct, each of the players involved has a specific role:
- Governments: they must put in place appropriate regulations to oversee the use of AI. The aim is to enable responsible development.
- Companies: they are invited to implement the recommendations of the code of conduct and to make a public commitment to respect its ethical principles.
- Developers: they must design AI systems aligned with the code of conduct guidelines, ensuring in particular that they are safe, transparent and fair.
The importance of this code for the future of AI
This code represents a crucial step in the global governance of AI. It is vital to agree on common standards to guide the development and implementation of this technology. The fact that the G7 is addressing this issue shows that developed countries are taking the challenges posed by AI seriously. They are ready to cooperate to ensure its sustainable and responsible development.
The limits of the code of conduct
Despite its strengths, the code of conduct has notable limitations:
- Its voluntary nature: as things stand, companies are not obliged to follow the code’s recommendations.
- Lack of control: without an independent supervisory body to check that the code is being implemented, companies may tend to prioritize their economic interests over ethical principles.
Conclusion
The G7 code of conduct is a strong signal in favor of ethical and responsible AI. It will be essential to keep strengthening cooperation between the countries and players involved, since the ultimate objective is the effective implementation of these principles across the artificial intelligence industry.