Shaping Global AI Governance: Enhancements and Next Steps for the G7 Hiroshima AI Process


Introduction

On May 2, 2024, Japanese Prime Minister Kishida Fumio announced the launch of the Hiroshima AI Process Friends Group at the Meeting of the Council at Ministerial Level (MCM) of the Organisation for Economic Co-operation and Development (OECD). This initiative, supported by 49 countries and regions, primarily OECD members, aims to advance cooperation toward global access to safe, secure, and trustworthy generative artificial intelligence (AI). The group supports the implementation of the international guiding principles and code of conduct stipulated in the Hiroshima AI Process Comprehensive Policy Framework (Comprehensive Framework). Endorsed by the G7 digital and tech ministers on December 1, 2023, the Comprehensive Framework was the first policy package the democratic leaders of the G7 agreed upon to effectively steward the principles of human-centered AI design, safeguard individual rights, and enhance systems of trust. The framework sends a promising signal of international alignment on the responsible development of AI—momentum that only increases with the Friends Group's support and involvement. Notably, the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (HCOC), established within the Comprehensive Framework, builds upon and aligns closely with existing policies in all G7 nations.

However, as the G7 has stated that the principles are living documents, there is vast potential yet to be realized, as well as significant questions lying ahead: How does the Hiroshima AI Process (HAIP) contribute to achieving interoperability of international rules on advanced AI models? How can it add value beyond other international collaborations on AI governance, such as the Bletchley Declaration by Countries Attending the AI Safety Summit? How can the G7, as a democratic referent, leverage its position as a leading advocate for responsible AI to encourage broader adoption of its governance principles, even in regions with differing political or cultural contexts?

To answer these questions, this report (1) provides a brief overview of the history of AI governance and relevant instances of international cooperation; (2) analyzes the structure and content of the HAIP, with specific focus on the HCOC; (3) examines how the HCOC fits into the international tapestry of AI governance, particularly within the context of G7 nations, and how it can foster regulatory interoperability on advanced AI systems; and (4) identifies and discusses prospective areas of focus for the future development of the HCOC.

AI Governance: A Historical Overview and International Initiatives


A Short History of AI Governance

Following the deep-learning breakthroughs of the early 2010s, AI adoption surged across a myriad of industries and sectors. This rapid integration process brought to light a multitude of potential risks. From fatal accidents involving autonomous vehicles to discriminatory hiring practices by AI algorithms, the real-world consequences of AI development have become increasingly evident. Furthermore, manipulation of financial markets by algorithmic trading and the spread of misinformation on social media platforms highlight the broader societal concerns surrounding AI.

Fueled by growing awareness of AI risks in the mid-2010s, national governments (including G7 members), international organizations, tech companies, and nonprofits launched a wave of policy and principle publications. Prominent examples include the European Union’s 2019 Ethics Guidelines for Trustworthy AI, the Recommendation of the Council on Artificial Intelligence by the OECD in 2019 (updated in 2024), and the Recommendation on the Ethics of Artificial Intelligence by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 2021. These publications emphasized pairing AI development with core values such as human rights, democracy, and sustainability as well as key principles including fairness, privacy, safety, security, transparency, and accountability.

While fundamental values and AI principles provide a crucial foundation, translating them into implementable standards for AI systems remains a challenge. Addressing this challenge requires concrete and material guidance. Various initiatives have been undertaken at different levels to bridge this gap. At the national level, examples include the AI Risk Management Framework (RMF) published by the National Institute of Standards and Technology (NIST) in the United States in January 2023 and Japan’s AI Guidelines for Business. On a supra-national scale, leading examples include the 2023 AI Safety Summit’s Emerging Processes for Frontier AI Safety and the G7’s HCOC—the focus of this report. Additionally, nongovernmental organizations such as the International Organization for Standardization (ISO) have contributed by issuing international standards on AI governance. The AI management system standard ISO/IEC 42001 was published in December 2023. The Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems (HUDERAF), proposed by the Alan Turing Institute to the Council of Europe’s Ad Hoc Committee on Artificial Intelligence, is another significant example of AI risk management and stakeholder engagement. Collectively, these diverse approaches underscore the ongoing efforts to transform abstract AI principles into a practical and implementable reality.


Despite this common direction, many published guidelines and principles for responsible AI development lack legally binding force, making them examples of “soft law.” While compliance with these documents helps companies with risk prevention strategies and forward-looking accountability measures, there are no guarantees or enforceability measures to ensure adherence to these standards. Thus, to advance stronger commitment to AI governance—in particular, addressing AI systems that pose high risks—there has been an active movement to introduce regulations with legally binding force. For instance, the European Commission introduced the draft AI Act in 2021 (subsequently approved by the European Parliament in March 2024), focusing most of its compliance requirements on high-risk systems and even banning certain systems when the risks they present are deemed unacceptable. Similarly, in 2022, Canada presented a legislative proposal, the Artificial Intelligence and Data Act (AIDA), which focuses on establishing compliance requirements for high-impact AI applications. The United States has also seen a surge in legislative activity targeted at AI, with nearly 80 draft bills introduced, over 30 of which specifically address risk mitigation in AI applications.

The 2023 boom in foundation models presents a new layer of complexity to the already challenging landscape of AI governance. While conventional AI has faced issues such as limited explainability, diverse stakeholders, and rapid evolution, foundation models expand the scope and reach of these challenges. The application of these systems in countless contexts and their ease of operation create an even more intricate risk environment. As a result, there has been a surge in global efforts to establish rules and foster international cooperation around advanced AI systems. The EU AI Act, for example, recently incorporated provisions specifically related to “general purpose AI” systems. Japan’s Liberal Democratic Party proposed the concept note for the Basic Law for the Promotion of Responsible AI in February 2024, which targets advanced foundation AI models with significant societal impact. Similarly, the Chinese government implemented the Interim Measures for the Administration of Generative Artificial Intelligence Services in August 2023. Figure 1 shows the overall structure of AI governance and key documents related to each layer of governance.

[Figure 1: The overall structure of AI governance and key documents related to each layer]

The brief history of AI governance is characterized by a complex and multidimensional balancing act between innovation and regulation, set against rapidly advancing technology and the integration of multivector interests—encompassing the technology industry, the general public, and regulators. While these groups may have differing priorities, there is also growing recognition of the need for collaboration. Responses to AI risks have evolved: nations and international bodies initially relied on soft-law principles and public-private collaborative efforts, whereas the current momentum is toward binding legislative action addressing AI. While the European Union’s AI Act and Canada’s proposed AIDA encompass regulations that span various industries, Japan, the United Kingdom, and the United States have indicated a policy direction that focuses on industry-specific AI regulations or on powerful foundation models. Nonetheless, the regulatory emphasis in all of these instances is primarily on high-risk AI, aiming to strike an appropriate balance between fostering technological development and ensuring safety. G7 countries, in particular, find common ground in core principles such as human rights and democratic values, grounding their AI governance strategies in transparency, explainability, and bias prevention and forming a common, unified foundation for responsible AI development.

Advancing International Collaboration

As countries make progress with AI rulemaking within their borders, international cooperation is also advancing. The G7 is one of the most impactful forums for such international coordination. During the May 2023 summit, G7 leaders committed to establishing the HAIP by the end of the year to foster collaborative policy development on generative AI. Within six months, the G7 digital and tech ministers had delivered the Comprehensive Framework. This framework prioritizes proactive risk management and governance, transparency, and accountability across the AI life cycle. Additionally, it emphasizes anchoring AI development in human rights and democratic values while fostering the use of advanced AI for tackling global challenges such as climate change, healthcare, and education.

The Bletchley Declaration, emerging from the AI Safety Summit held in the United Kingdom in November 2023, stands as another significant milestone in international AI collaboration. The declaration addresses crucial aspects of AI governance, such as the protection of human rights, transparency, explainability, fairness, accountability, human oversight, bias mitigation, and privacy and data protection. Additionally, it highlights the risks associated with manipulating or generating deceptive content. The declaration was endorsed by 29 countries and regions; its signatories encompass not only G7 and OECD nations but also partners from the Middle East, Africa, South America, Asia, and, notably, China.

The United Nations is also active in forming an international AI governance body. In December 2023, the UN AI Advisory Body issued the interim report Governing AI for Humanity. The document addresses a comprehensive set of considerations and actions necessary for governing AI in a manner that benefits humanity as a whole. The report proposes a series of guiding principles and institutional functions aimed at establishing an international governance framework for AI. These include principles such as inclusivity, public interest, and the importance of aligning AI governance with data governance and promoting a data commons. Institutional functions highlighted in the report include assessing the future directions and implications of AI; developing and harmonizing standards, safety, and risk management frameworks; and facilitating the development, deployment, and use of AI for economic and societal benefit through international multistakeholder cooperation.

In March 2024, the Council of Europe’s Committee on Artificial Intelligence introduced the Draft Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law (AI Treaty), a groundbreaking treaty on AI governance that sets a high bar for responsible AI development. The AI Treaty emphasizes the obligation of signatory nations (parties to the convention) to proactively ensure AI activities are aligned with human rights, democratic integrity, and the rule of law. The treaty advocates for protective measures governing the AI life cycle—including accountability and transparency—and introduces comprehensive risk management frameworks. Furthermore, it calls for robust remedies and procedural safeguards against rights violations, promotes rigorous risk and impact assessments, and delineates duties for international cooperation and implementation, focusing on nondiscrimination and rights protection.

Nations participating in these initiatives vary. Figure 2 maps the structural involvement of various jurisdictions in the above-mentioned international processes.

[Figure 2: Jurisdictions’ structural involvement in major international AI governance initiatives]

Figure 2 shows why and how the G7 HAIP has significance in global rulemaking on advanced AI systems. First, the G7 nations participate in all the significant initiatives mentioned previously—namely the AI Safety Summit, the UN AI Advisory Body, and the AI Treaty. Second, the G7 represents a group of nations with significant economic, regulatory, and technological impact and leadership. In 2023, the GDP of the G7 countries (excluding the European Union, which is a nonenumerated member) accounted for approximately 26.4 percent of the global total. Moreover, most global companies developing advanced AI systems are based in one of the G7 member countries. Establishing interoperable rules for advanced AI systems in these countries is crucial to avoid duplicate compliance costs and to facilitate innovation on a global scale. Third, the G7 is a group of democratic nations, unlike more inclusive bodies such as the United Nations or the AI Safety Summit. This shared commitment to democratic principles facilitates a focus on common values, equipping the HAIP to serve as a key foundation not just for safety, but also for realizing fundamental values such as human rights, democracy, and the rule of law in the development and implementation of advanced AI systems.

Analyzing the Hiroshima AI Process Comprehensive Framework


Structure of the Comprehensive Framework

In response to the rapid development and global spread of advanced AI, the G7 nations launched the HAIP in May 2023 under Japan’s presidency. This international forum aims to establish common ground for responsible AI development and use. It focuses on fostering safe, secure, and trustworthy AI by addressing key ethical issues, promoting collaboration on research and development, and encouraging international standards for a future where humanity benefits from AI advancements. Although the HAIP focuses on governance of advanced AI systems, the Comprehensive Framework avoids a rigid definition of this technology by providing open terminology in the form of “the most advanced AI systems, including the most advanced foundation models and generative AI systems.” This flexibility likely reflects a desire to adapt to future advancements in AI performance, functionalities, and deployment landscapes.

The Comprehensive Framework consists of four elements (see Figure 3). First, the OECD’s report G7 Hiroshima Process on Generative Artificial Intelligence serves as a background analysis of the opportunities and risks of advanced AI systems. Second, the Hiroshima Process International Guiding Principles for All AI Actors (HIGP) set out 12 general principles for designing, developing, deploying, providing, and using advanced AI systems, without detailed guidance. Third, the HCOC provides a set of detailed instructions for the developers of advanced AI systems, covering 11 of the 12 general principles of the HIGP. Finally, project-based cooperation on AI includes international collaborations in areas such as content authentication and the labeling of AI-generated content.

[Figure 3: The four elements of the Hiroshima AI Process Comprehensive Policy Framework]

The following section summarizes the contents of the HIGP and HCOC.

Contents of the Hiroshima Process International Guiding Principles

The HIGP is a comprehensive set of values and best practices promoting responsible development and use of advanced AI on a global scale. It consists of 12 core principles that serve as a foundation for responsible AI governance. These principles closely mirror the values and approaches that G7 nations are already exploring or have drafted and implemented within their individual AI governance frameworks. The analysis here suggests that the 12 principles may be divided into the following three groups (see Table 1):

  1. Risk management and governance: recommended actions to assess and mitigate risks associated with AI systems, ensuring they are reduced to a level that relevant stakeholders deem acceptable
  2. Stakeholder engagement: recommended actions to ensure clear communication with and accountability to all relevant stakeholders
  3. Ethical and societal considerations: recommended actions to ensure the development, deployment, and usage of AI are in alignment with ethical standards and societal values
[Table 1: The 12 HIGP principles grouped into the three categories above]

Overview of the Code of Conduct

Building on 11 of the HIGP’s 12 core principles (excluding the principle on trustworthy and responsible use of advanced AI), the HCOC translates these principles into a more specific code of practice for organizations developing and deploying advanced AI systems. The HCOC provides a comprehensive road map for AI processes and risk mitigation, outlining general actionable items on risk management and governance, stakeholder engagement, and ethical considerations.

Risk Management and Governance

The HCOC emphasizes in items 1, 2, 5, 6, 7, and 11 the importance of comprehensive risk management for organizations developing advanced AI across the life cycle of development and implementation. These practices include the following:

  • Risk identification and mitigation: implementing rigorous testing throughout the AI life cycle, such as red-teaming, to identify and address potential safety, security, and trustworthiness issues
  • Vulnerability and misuse management after deployment: postdeployment monitoring for vulnerabilities and misuse, with an emphasis on enabling third-party and user vulnerability reporting, possibly via bounty systems
  • Governance and risk management: creating transparency about organizations’ governance and risk management policies and regularly updating users on privacy and mitigation measures
  • Security investments: implementing robust security measures throughout the AI life cycle to protect critical system components against threats
  • Content authentication: developing content authentication methods (e.g., watermarking) to help users identify AI-generated content
  • Data quality, personal data, and intellectual property protection: prioritizing data integrity, addressing bias in AI, upholding privacy and respecting intellectual property, and encouraging alignment with relevant legal standards


Stakeholder Engagement

The HCOC highlights in items 3 and 4 the critical role of transparency and multistakeholder engagement:

  • Transparency and accountability: emphasizing public transparency for organizations developing advanced AI, including reporting on both the capabilities of AI systems and their limitations
  • Responsible information sharing: encouraging organizations to share information on potential risks, incidents, and best practices across industry and with governments, academia, and the public


Ethical and Societal Considerations

The HCOC establishes in items 8, 9, and 10 a series of parameters to ensure AI is developed and deployed within the boundaries of human rights and democracy to address global challenges:

  • Research prioritization for societal safety: emphasizing collaborative research to advance AI safety, security, and trustworthiness, focusing on key risks such as upholding democratic values, respecting human rights, and protecting vulnerable groups
  • AI for global challenges: prioritizing development of advanced AI systems to address global challenges such as climate change, health, and education, aligning with the UN Sustainable Development Goals
  • International technical standards: encouraging contribution to the development and use of international technical standards, including practices to promote transparency by allowing users to identify AI-generated content (e.g., watermarking), testing methodologies, and cybersecurity policies


A detailed summary of the HCOC is presented in Table 2.

[Table 2: Detailed summary of the HCOC]

The Potential of the Hiroshima Code of Conduct: Toward Interoperable Frameworks for Advanced AI Systems

The HCOC, as articulated in the Comprehensive Framework, serves as a pivotal instrument to enhance interoperability between various AI governance frameworks. But how compatible is the HCOC with the regulatory frameworks of G7 members? What are the mechanisms or functionalities that make this interoperability possible? First, the HCOC (and similar voluntary codes of conduct) operates as a potent, nonbinding form of “common guidance.” Although not legally enforceable, the gravitas and direction of the document wield significant practical influence. The document can shape compliance behaviors, either as good corporate governance standards or as forward-looking risk mitigation strategies in anticipation of further regulation; serve as a reference in private contracts; and even factor into civil or tort liability decisions. Second, the HCOC can be integrated into a jurisdiction’s regulatory framework. G7 nations are poised to either introduce new regulations or revise existing structures of AI governance. This opens a window to integrate the HCOC principles into new regulatory waves—an opportunity only enhanced by the international alignment represented by the Hiroshima AI Process Friends Group. If these regulations draw upon the HCOC—whether by reference, content consistency, or formal incorporation—they will facilitate and increase regulatory interoperability as well as international cohesion, integrating an AI governance framework that safeguards human rights, democracy, and the rule of law.

This section explores the space the HCOC holds within the G7 regulatory context and how it can foster interoperability between the regulations of different G7 jurisdictions on advanced AI systems. First, the section examines the current state of AI regulation within each G7 member state. This analysis assesses the compatibility between the HCOC principles and existing frameworks. Notably, a significant overlap already exists between the core elements of the G7 nations’ regulatory documents and the HCOC. Second, building on this compatibility, the section explores various avenues for integrating the HCOC into the regulatory frameworks of G7 member states. By exploring these options, the section identifies the most effective means of leveraging the HCOC to achieve interoperability in G7 AI governance.

Status of AI Governance in the G7 and HCOC as Common Guidance

The HCOC serves as a central reference point in the evolving global landscape of AI governance. This section provides insight into how the HCOC aligns with existing frameworks in G7 jurisdictions, including Canada, the European Union, Japan, the United Kingdom, and the United States. For each jurisdiction, the section gives a brief overview of its regulatory status, identifies the documents that most closely align with the HCOC’s structure and functionality, and evaluates their compatibility with the HCOC’s content. A summary of this analysis is shown in the annex.


  1. Canada: Canada is in the process of formulating a comprehensive regulatory framework for AI under Bill C-27, known as AIDA. This legislation prioritizes risk mitigation for “high-impact” AI systems. Additionally, Canada has published a Voluntary Code of Conduct for Responsible Development and Management of Advanced Generative AI Systems, offering nonbinding guidelines for AI industry stakeholders.
  2. European Union: The European Union has been at the forefront of AI regulation with the AI Act, passed in March 2024. This legislation sets a robust and comprehensive framework for trustworthy AI development and implementation, emphasizing a risk-based regulatory approach. The AI Act mandates the development of codes of practice to guide its implementation, ensuring alignment with international standards as well as evolving technology and market trends.
  3. Japan: Japan’s approach to AI governance emphasizes maximizing the positive societal impacts of AI and capitalizing on a risk-based and agile governance model. Taking a sector-specific approach, Japan seeks to promote AI implementation through regulatory reforms tailored to specific industries and markets, such as transportation, finance, and medical devices. This strategy includes updating more than 10,000 regulations or ordinances that require “analog” compliance methods, including requirements for paper documents, on-site periodic inspections, and dedicated in-person staffing. In addition, Japan launched the AI Guidelines for Business as a voluntary AI risk management tool. The principles for advanced AI systems established in the HIGP are directly integrated into these guidelines, following Japan’s presidency of the G7 during the HAIP Comprehensive Framework drafting process.
  4. United Kingdom: The United Kingdom is developing a decentralized regulatory approach focusing on sector-specific guidelines, a pro-innovation stance, and public-private collaboration through specialized AI institutions. While the United Kingdom is not currently enforcing a comprehensive AI law or drafting a central code of conduct, it emphasizes traditional AI governance principles such as safety, security, transparency, and fairness to inform its sector-driven regulations. The UK Department for Science, Innovation and Technology also published a practical guidance code in the form of the Emerging Processes for Frontier AI Safety, ahead of the UK AI Safety Summit, where 28 nations and the European Union were signatories of the Bletchley Declaration.
  5. United States: The United States has adopted a decentralized, multitiered regulatory strategy for AI governance, with specialized agencies overseeing sector-specific regulations. Key initiatives include the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directs agencies to formulate regulations addressing specific industries; the RMF, developed by NIST to provide guidelines for risk assessment and management; the White House’s Blueprint for an AI Bill of Rights, outlining foundational principles for AI development; and the White House’s nonbinding voluntary commitments for ensuring safe, secure, and trustworthy AI endorsed by companies such as Amazon, Anthropic, Google, Inflection, Meta, Microsoft, Nvidia, and OpenAI, among others.
     

Achieving and Enhancing Regulatory Interoperability: The HCOC as a Reference Point for AI Governance Development

Despite sharing common principles and core values, the AI governance landscape across the G7 is complex and multifaceted. The European Union has instituted robust and comprehensive regulations through its AI Act, and Canada is in the process of developing similar hard-law frameworks. Conversely, the United States, Japan, and the United Kingdom lean toward sector-specific and lighter-touch regulatory approaches. As such, the G7 nations exhibit a patchwork of regulations regarding AI use. This regulatory inconsistency creates challenges for global businesses, forcing them to navigate complex legal landscapes and varying rights and obligations across these key markets. The HCOC holds promise as a unifying mechanism, bridging these regulatory disparities and promoting interoperability.


The HCOC may be integrated into national regulations across G7 countries through various means, such as direct legal referencing or recognition, content integration, and leveraging or materially harmonizing specific aspects of regulatory developments. Pathways for integration into the regulatory frameworks of the G7 jurisdictions include the following:

  • Canada: Overall, Canada’s voluntary code of conduct specifically, and its regulatory trajectory generally, demonstrate alignment with the international conversation on ethical AI development and with the HCOC’s principles. As AIDA evolves, it presents the potential to translate these principles into enforceable regulations, further solidifying Canada’s commitment to responsible AI advancement. Because AIDA may include advanced AI systems within its regulatory scope, this upcoming law opens a clear possibility to find common ground with the HCOC’s principles and functionality.
  • European Union: The EU AI Act mandates the development of codes of practice to formalize its implementation. These codes of practice are poised to address practical aspects of responsible AI development, aligning with the HCOC’s focus. Furthermore, the European Union acknowledges the influence of international standards in shaping these codes of practice, presenting an opportunity to materially integrate or formally reference the HCOC in the EU AI governance framework.
  • Japan: In February 2024, the Liberal Democratic Party proposed the concept note for the Basic Law for the Promotion of Responsible AI. The proposed legislation specifically targets advanced foundational AI models with significant societal impact. It requires model developers to adhere to seven key measures, including third-party vulnerability checks and the disclosure of model specifications. The requirements align with the voluntary commitments U.S. companies have made to the White House. The HCOC could serve as a valuable reference point for implementation of these key measures, especially considering that the HCOC principles are already integrated into Japan’s AI Guidelines for Business.
  • United Kingdom: Besides leading international discussions on AI governance through initiatives such as the Bletchley Declaration, the United Kingdom is proactively formulating its own AI governance framework. According to A Pro-innovation Approach to AI Regulation, the UK government is undertaking technical policy analysis on the regulation and life-cycle accountability of capable general purpose systems. The nation has also committed to updating the Emerging Processes for Frontier AI Safety, which is highly compatible with the HCOC, by the end of 2024. To build its AI regulatory and deployment capacity, the United Kingdom is opting for collaborative public-private development through institutions such as the Digital Regulation Cooperation Forum and the AI Safety Institute. Considering the current institutional inertia and the stalled progress on its code of practice on AI and intellectual property, the United Kingdom might leverage the HCOC and its international scope to inform potential regulatory initiatives.
  • United States: The United States is in a period of active development of its AI governance frameworks. The AI executive order has directed multiple agencies to deliver sector-specific guidance publications, and there are more than 80 draft bills addressing AI, with over 30 focused on risk mitigation. Notably, after releasing RMF 1.0 in January 2023, NIST established the Generative AI Public Working Group to spearhead development of a cross-sectoral AI RMF profile for managing the risks of generative AI models and systems. The HCOC’s emphasis on responsible risk management and governance aligns seamlessly with the United States’ principles-based trajectory and could fit into proposed risk mitigation legislation, positioning the HCOC as a crucial reference in shaping AI regulatory policy in the United States.
     

Hiroshima Code of Conduct 2.0: Next Steps toward a More Harmonized AI Governance Framework

The current landscape of AI governance is jurisdictionally fragmented, with national regulations creating a patchwork of requirements for developers of advanced AI systems and providing differing rights to their users. However, as this report has analyzed, the HCOC holds significant potential to enhance interoperability among the governance frameworks of G7 members. It also serves as a model for the wider international community, namely the Hiroshima AI Process Friends Group. Nonetheless, the current HCOC lacks the specificity to serve as useful material guidance. Future discussions among G7 leaders should focus on how the HCOC may be updated to ensure interoperability of rules for advanced AI systems across not only G7 countries but also the global community, serving as a benchmark for upholding values such as human rights, democracy, and the rule of law. This section highlights key considerations to be addressed in future updates of the HCOC, in alignment with its structure: (1) terminology and definitional interoperability, (2) risk management and governance, (3) stakeholder engagement, (4) ethical and societal considerations, and (5) further areas for exploration not contained within the current HCOC.


Terminology and Definitions: Indexing a Common Vocabulary

The HCOC can serve as a foundation for consistent definitions, and a common methodology for identifying key terms, in the governance of advanced AI systems, facilitating smoother regulatory implementation across jurisdictions. Steps toward terminological consensus include the following:

  • Bridge the terminology gap: The HCOC can endorse consistent definitions for streamlined regulatory implementation across jurisdictions, fostering a common understanding of critical concepts. This could be achieved by including a glossary of key terms with clear, agreed-upon definitions or by establishing methodologies for identifying and classifying AI systems based on factors relevant to risk assessment. By establishing a common language, the HCOC can ease communication, regulatory certainty, and business-sector collaboration across borders. Underscoring the importance of a shared language around AI, the European Union and the United States are jointly developing a list of 65 key terms “essential to understanding risk-based approaches to AI.” Notably, even where common terminology has been developed (e.g., through the U.S.-EU Trade and Technology Council, the OECD, or the ISO), the definition of advanced AI systems remains unclear, leaving open the question of which criteria (e.g., floating point operations, quality and size of the data set, or input and output modalities) should be used to identify advanced AI systems—a gap across jurisdictions that the HCOC can bridge, as sketched below.
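
To make the definitional gap concrete, the following is a minimal sketch, in Python, of how a jurisdiction might operationalize a compute-based criterion for identifying advanced AI systems. The 10^25 floating point operation threshold mirrors the EU AI Act's presumption of systemic risk for general purpose AI models; everything else here (the profile fields, the example models) is a hypothetical placeholder rather than an agreed G7 methodology.

```python
from dataclasses import dataclass

# Illustrative criterion only. The 10**25 FLOP threshold mirrors the EU AI
# Act's presumption of systemic risk for general purpose AI models; other
# fields and values here are hypothetical placeholders.
FLOP_THRESHOLD = 10**25

@dataclass
class ModelProfile:
    name: str
    training_flops: float         # cumulative compute used to train the model
    modalities: tuple[str, ...]   # e.g., ("text", "image")

def is_advanced_ai_system(model: ModelProfile) -> bool:
    """Classify a model under a single, compute-based criterion."""
    return model.training_flops >= FLOP_THRESHOLD

frontier = ModelProfile("example-frontier-model", 3e25, ("text", "image"))
smaller = ModelProfile("example-small-model", 5e22, ("text",))
print(is_advanced_ai_system(frontier))  # True
print(is_advanced_ai_system(smaller))   # False
```

A shared HCOC vocabulary would settle which such criteria apply and where the thresholds sit, so that a system classified as "advanced" in one G7 jurisdiction is classified the same way in the others.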
     

Risk Management and Governance: Building a Common and Robust Framework

Effective risk management stands as a cornerstone of the responsible development of advanced AI systems. The HCOC can significantly contribute to this endeavor by advocating for shared principles and best practices. Steps toward risk management cohesion across jurisdictions include the following:

  • Identify and share security risks, particularly systemic risks: The HCOC can enhance its interoperability contribution by explicitly listing and addressing security risks, particularly those with systemic consequences. This can be achieved through a two-pronged approach. First, the HCOC can integrate a comprehensive list of typical AI risks common to advanced AI systems, such as AI hallucinations (generating inaccurate outputs), fake content generation (deepfakes), intellectual property infringement (copyrighted content integration in data sets), job market transformations due to automation, the environmental impact of AI systems, bias amplification based on training data, and privacy concerns, among others. Case studies can be implemented through “project-based cooperation,” which constitutes the fourth element of the Comprehensive Framework. Second, the HCOC can establish a risk assessment framework to categorize AI systems based on their potential for harm. This framework could leverage existing models such as the EU AI Act’s categorization of “general purpose AI models with systemic risk.” By prioritizing systems with the greatest potential for systemic issues, the HCOC can provide a clearer road map for identifying, understanding, and mitigating various risks.
  • Enhance clarity in the risk management process: The HCOC can encourage the development of standardized risk management policies tailored to specific AI applications. Future drafting can reference or draw insights from established risk management frameworks, such as ISO/IEC 42001:2023 or NIST’s RMF—especially the generative AI profile of the RMF being developed by the Generative AI Public Working Group, slated for public review in April 2024. Additionally, policies can incorporate learnings from other reputable sources to enhance clarity and comprehensiveness.
  • Develop standard data governance, risk management, and information security policies: Standardized policies for risk management, information security, and data governance—addressing how data are obtained, used, and stored—should receive clear and unified focus in the HCOC. Their development can leverage established frameworks such as ISO/IEC 27001 and ISO/IEC 27002 or NIST’s Cybersecurity Framework, which provide a structured foundation adaptable to the unique risk landscape of advanced AI system development.
  • Implement content authentication mechanisms: The HCOC can list reliable content authentication and provenance mechanisms that enable users to identify the originators of content, or establish common labeling mechanisms that help users recognize when AI has generated any specified content (see the sketch following this list). These contributions could be based on input from the HAIP’s project-based cooperation. Authentication mechanisms can safeguard against misinformation and uphold democratic values and human rights by verifying data sources and outputs. However, it is imperative to balance these efforts with the protection of individual privacy, ensuring authentication processes do not compromise personal data. This balance is key to maintaining public trust and promoting the responsible use of AI technologies.
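
To illustrate the content authentication item above, the following is a minimal Python sketch of attaching and verifying a signed provenance manifest for AI-generated content. It is a simplified stand-in for real provenance standards such as C2PA: the shared-secret signing, field names, and labels are illustrative assumptions, not an HCOC specification.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key"  # hypothetical; real systems use public-key signatures

def attach_provenance(content: str, generator: str) -> dict:
    """Wrap AI-generated content with a signed provenance manifest."""
    manifest = {
        "generator": generator,  # e.g., the model or organization name
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "ai_generated": True,    # the disclosure label itself
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "provenance": manifest}

def verify_provenance(record: dict) -> bool:
    """Check that the content matches its manifest and the signature is intact."""
    manifest = dict(record["provenance"])
    signature = manifest.pop("signature")
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    content_ok = (
        manifest["content_sha256"]
        == hashlib.sha256(record["content"].encode()).hexdigest()
    )
    return content_ok and hmac.compare_digest(signature, expected)

record = attach_provenance("An AI-written paragraph.", "example-model-v1")
assert verify_provenance(record)
```

Even this toy version shows the interoperability payoff: once the manifest format is common, any downstream user or platform can verify that a piece of content is labeled as AI generated and has not been altered since labeling.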
     

Stakeholder Engagement: Fostering Transparency and Accountability

Building trust in AI necessitates robust stakeholder engagement. A transparent and accountable AI development process fosters public confidence and encourages information sharing. Future pathways for stakeholder engagement include the following:

  • Establish standardized formats for transparency reports: The HCOC can promote adoption of standardized formats for transparency reports. By consolidating best practices and identifying common risks, the HCOC can offer a template for companies to self-assess and disclose relevant information consistently across jurisdictions. A potential model for standardized transparency reporting is the UK Algorithmic Transparency Recording Standard. Standardization would give companies uniform international disclosure criteria, enhancing cohesive cross-border reporting and auditing consistency as well as allowing the public to better understand the development and operation of AI systems.
  • Define clear formats for incident sharing: Encouraging adoption of clear incident-sharing formats can facilitate the exchange of information about security breaches, biases, or unintended consequences observed in deployed AI systems (a minimal example follows this list). This collaborative approach to sharing and learning from incidents enables stakeholders to develop effective mitigation strategies, ultimately enhancing the safety and reliability of AI technologies.
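
To show what a clear incident-sharing format might look like in practice, below is a minimal Python sketch of a structured incident record. The schema and field names are hypothetical assumptions for illustration; an actual G7 format would be negotiated among stakeholders and could align with efforts such as the OECD's AI incident monitoring work.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncidentReport:
    """Hypothetical shared format for reporting AI incidents across borders.

    Field names are illustrative assumptions, not an agreed G7 schema.
    """
    system_name: str
    developer: str
    incident_type: str   # e.g., "security breach", "bias", "unintended output"
    severity: str        # e.g., "low", "medium", "high", "systemic"
    description: str
    affected_parties: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

report = AIIncidentReport(
    system_name="example-model-v1",
    developer="Example Lab",
    incident_type="unintended output",
    severity="medium",
    description="Model produced fabricated citations in a legal-drafting tool.",
    mitigations=["added retrieval grounding", "updated system prompt"],
)
print(json.dumps(asdict(report), indent=2))  # machine-readable, shareable record
```

Because every organization would emit the same fields, regulators and peer developers could aggregate and compare incidents across jurisdictions rather than reconciling incompatible reports.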
     

Ethical and Societal Considerations: Upholding the Rule of Law, Human Rights, and Core Democratic Values

The G7, a group of leading democracies, has a unique opportunity to shape the global conversation around responsible AI development. The HCOC, as an initiative stemming from this group, can play a crucial role in ensuring AI development aligns with the ethical and societal considerations that underpin democratic values and secure human rights in AI development and implementation. Potential pathways to prioritizing these principles include the following:

  • Reinforce the primacy of the rule of law, human rights, and democratic principles: The HCOC already champions these values and emphasizes human-centric design. However, there is room for further enhancement and substantiation for practical application. For instance, the HCOC could enhance its guidance on how organizations should foster research and AI development that prioritizes the protection of fairness, privacy, and intellectual property rights while also tackling global challenges such as climate change, health, and education. Rather than providing detailed descriptions itself, the HCOC could make reference to other international agreements or widely recognized standards. Furthermore, the HCOC could strengthen democratic principles and the rule of law by highlighting due safeguards for freedom of expression, ensuring AI does not suppress dissent or impose undue restrictions on information access, guaranteeing a right to remedy for individuals adversely affected by AI, and promoting transparency and accountability in AI decisionmaking processes. Enhancing human-centricity could involve advocating for effective oversight in high-risk applications, providing individuals with explanations of AI-driven decisions affecting them, and promoting inclusive design that caters to the diverse needs and perspectives of various populations to ensure equitable AI benefits.
     

Further Areas for Exploration

The HCOC can play a key role in exploring several critical areas for further development in responsible AI:

  • Acknowledge special considerations for government use of AI: The HCOC can play a pivotal role in delineating special considerations for government use of AI, ensuring governmental powers in AI deployment are appropriately circumscribed and limited. Drawing inspiration from the AI Treaty and leveraging principles from the OECD Declaration on Government Access to Personal Data Held by Private Sector Entities, the HCOC can establish clear guidelines that emphasize due process in developing and deploying advanced AI systems by the public sector, such as legal basis, legitimate aims, oversight, and redress, in addition to building upon and reinforcing shared foundational AI governance principles such as privacy, transparency, and accountability. By aligning with these values, the HCOC can become a democratic referent, and governments can leverage the power of AI responsibly while mitigating potential risks and fostering public trust.
  • Borrow best practices and harmonize regulatory approaches: The HCOC can explore the potential for incorporating best practices from various jurisdictions’ regulations. This could involve elements such as certification mechanisms, robust oversight mechanisms, and iterative audit controls.
  • Certification mechanisms: The HCOC can establish a framework for certification and registration mechanisms for high-risk advanced AI systems. Such a system could ensure rigorous evaluation throughout the life cycle of high-risk advanced AI systems, from pre-market integrative assessments to ongoing post-market analyses and compliance reviews. The HCOC could define risk categories and establish criteria for when certification is needed.
  • Oversight methodologies: The HCOC can emphasize the importance of effective oversight in AI systems to mitigate potential harm and address incidents effectively. In some cases, human involvement in critical AI processes is necessary, while in other cases machines can detect risks much faster and more precisely than humans. The HCOC could propose guidelines addressing when to prioritize human judgment and intervention, especially in high-risk AI applications, ensuring a balance between the enhanced efficiency of automation and the fairness and legitimacy of human oversight.
  • Audit mechanisms: The HCOC can extend procedural cohesion beyond AI implementation by establishing common processes for iterative audits, ensuring continuous monitoring and evaluation of AI systems’ compliance with established principles and guidelines. By considering and potentially adapting existing frameworks, such as the UK Guidance on the AI Auditing Framework, the HCOC can equip organizations with practical tools for ongoing evaluations. These iterative audits would allow for continuous improvement and ensure AI systems remain aligned with responsible development principles throughout their life cycle.
  • Establish means for redress: The HCOC could expand discussions about redress for harms caused by advanced AI systems. This could involve exploring access to remedies and explanations for individuals affected by AI decisions in areas ranging from copyright and intellectual property to judicial processes. As AI plays a growing role in judicial decisionmaking, for example, developing specific appeal mechanisms for harms caused by AI content may become crucial. The HCOC could encourage developers and deployers of advanced AI systems to provide appropriate dispute resolution mechanisms to users and harmed parties. Furthermore, to make victim relief more effective, G7 members could discuss shifting the burden of proof of damages or causal links and establishing accessible, fast, and low-cost dispute resolution mechanisms for damages caused by advanced AI systems.
  • Foster shared responsibility in the AI ecosystem: The HCOC addresses developers of advanced AI systems only. However, its scope could expand in the future to other actors within the AI value chain, such as deployers and users of advanced AI systems. In addition, it is important to examine how to distribute responsibility and liability among stakeholders, ensuring all parties are accountable for their respective roles in potential harms.
     

By focusing on these key areas, the HCOC can evolve into a powerful tool for facilitating a more cohesive and effective approach to AI governance on a global scale. The HCOC’s dynamic nature positions it to bridge the gap between diverse national frameworks, fostering a future of responsible AI development for the G7 nations and beyond.

Conclusion

The G7 nations’ endorsement of the HIGP and the HCOC, supported by more than 40 countries through the Hiroshima AI Process Friends Group, marks a significant milestone in international AI governance. This agreement by leading democratic economies signifies a strong international commitment to fostering human-centered AI development that safeguards individual rights and bolsters trust in AI systems. The weight and influence of this international collaboration on the global stage imbues this agreement with particular impact and significance, holding the potential to shape the future of AI governance worldwide.

However, for the promise of the Comprehensive Framework to be fully realized, its key practical instrument, the HCOC, requires further development. While the HCOC, as this report reveals, significantly aligns with the trajectory of existing G7 policies, it currently lacks the material specificity to provide truly effective guidance for practical implementation. Moving forward, it is crucial to engage in substantive discussions on enhancing the HCOC in several key areas. These areas include the following:

  • Coordinating a common vocabulary: A unified understanding of key terms and definitions is essential for ensuring consistent interpretation of AI terms across borders.
  • Developing robust risk management frameworks and risk-based categorization: The HCOC should provide clear guidance on assessing and mitigating risks associated with advanced AI systems throughout the entire AI life cycle, from pre-market duties to post-market updates.
  • Promoting harmonized stakeholder engagement: The HCOC can play a valuable role in encouraging cohesive approaches to stakeholder engagement and developing consistent transparency standards.
  • Strengthening democratic and human rights principles: The HCOC should provide more concrete and actionable steps for upholding democratic values and safeguarding human rights in the context of AI development and deployment.
  • Pursuing further areas for discussion: The HCOC’s potential extends beyond its current scope. The G7 can leverage this collaborative document to explore critical areas such as developing special considerations for government AI use, harmonizing regulatory practices (e.g., certification mechanisms, oversight methodologies, and audit mechanisms), fostering shared responsibility within the AI ecosystem, and establishing redress mechanisms for AI harms.


By addressing these crucial areas, the HCOC has the potential to evolve into a truly robust and impactful instrument for global AI governance. A strengthened HCOC can serve as a valuable reference point not only for G7 nations but also for a broader international audience seeking to navigate the complexities of responsible AI development and deployment. This international alignment can help ensure the power of AI is harnessed for the benefit of all while mitigating potential risks and upholding core human values.

Hiroki Habuka is a senior associate (non-resident) of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS) in Washington, D.C., and a research professor at Kyoto University Graduate School of Law. David U. Socol de la Osa is an assistant professor at the Hitotsubashi Institute for Advanced Study and Graduate School of Law at Hitotsubashi University.

This report is made possible through generous support from Microsoft.
