Japan’s Agile AI Governance in Action: Fostering a Global Nexus Through Pluralistic Interoperability
Introduction
The year 2025 has marked a major transformation in global trends concerning artificial intelligence (AI) risks and regulations. Beginning with the “DeepSeek shock” out of China in January and the AI Action Summit held in Paris in February, there has been a growing recognition that AI is an agenda item not only for product safety but also for national security and economic competitiveness. In the United States, America’s AI Action Plan of July declared the establishment of “unquestioned and unchallenged global technological dominance” in AI as a paramount national security imperative, emphasizing the easing of regulations that could pose barriers to achieving this goal. Meanwhile, the European Union, in its AI Continent Action Plan, stated the need to shape the future of AI in a way that “enhances [its] competitiveness,” clarifying its policy to promote the smooth application and simplification of its comprehensive AI Act.
Amid these dynamic shifts in global AI policy, Japan’s position has remained consistent and stable. Under the slogan to become the “world’s most AI-friendly country,” Japan’s approach is to implement policies that provide maximum support for the utilization of AI, based on its existing legal framework, while incorporating an agile and multistakeholder process. This approach was given concrete form with the passage of the Act on Promotion of Research, Development, and Utilization of AI-Related Technology (hereinafter, the AI Promotion Act) in May 2025, which came into full effect on September 1. Furthermore, regulatory reform is being advanced at a rapid pace in many areas to ensure that existing rules do not become a burden to the development and implementation of AI. This report will introduce recent developments in Japan’s AI governance policy, compare them with the latest trends in the United States and Europe, and explore the potential for Japan to assume the role of a trusted nexus in the often-fragmented formation of global AI governance.
The Foundation of Japan’s AI Strategy: The AI Promotion Act
1. Basic Principles of the AI Promotion Act
The AI Promotion Act, which was established in May 2025 and fully came into effect in September 2025, is a law designed to support the development and use of AI, not to restrict it. In light of AI’s importance as a technology from the perspectives of both economic development and national security, the act establishes four basic principles. It stipulates that Japan shall: (1) enhance its AI research and development capabilities and international competitiveness; (2) comprehensively and systematically promote initiatives by stakeholders at every stage, from research and development (R&D) to social implementation; (3) implement measures such as ensuring transparency regarding the risks posed by AI; and (4) play a leading role in international cooperation.
2. Structure of the AI Promotion Act
The structure of the AI Promotion Act is best understood as a framework for running a Plan-Do-Check-Act (PDCA) cycle throughout society to achieve the basic principles outlined above. This approach differs markedly in nature from comprehensive regulations targeting businesses, such as the EU AI Act.
- Plan: The AI Strategic Headquarters (established September 1), headed by the prime minister, will propose the AI Basic Plan, which will then be finalized by a Cabinet decision (expected within 2025).
- Do (Measures): In accordance with the basic principles, various actors have defined responsibilities.
- Research and development institutions are responsible for promoting and disseminating AI research and developing human resources.
- Businesses are responsible for improving business efficiency and creating new industries by utilizing AI.
- Citizens are responsible for fostering understanding of and interest in AI.
The act also stipulates that these private-sector entities should cooperate with measures implemented by national and local governments. However, these are legally nonbinding “duties to endeavor” or requests; no penalties are imposed for noncompliance. Nevertheless, if actions result in a violation of existing laws—such as the Act on the Protection of Personal Information, the Copyright Act, or various safety regulations—penalties will be imposed under those respective laws.
On the other hand, the responsibilities of the national and local governments are defined as formulating and implementing the necessary measures to promote AI research, development, and utilization. National government measures are to be carried out in collaboration with multiple stakeholders and include legislative or financial actions to advance R&D, develop and promote the shared use of data centers and datasets, establish guidelines in line with international norms, and secure human resources and promote education. The act also declares that the national government itself will actively utilize AI to improve the efficiency of its administrative affairs.
- Check (Evaluation): The national government will: (1) gather information on domestic and international trends in AI R&D and utilization; (2) analyze cases where rights and interests have been infringed upon by fraudulent or inappropriate AI R&D and utilization; and (3) consider risk countermeasures based on these analyses. In essence, the government is granted the authority to investigate the latest trends in AI and related risk cases and countermeasures. For investigations into advanced AI models, the reporting framework from the G7 Hiroshima AI Process could potentially be utilized.
- Act (Improvement): Based on the results of the aforementioned investigations, the national government may amend newly established laws, guidelines, the AI Basic Plan, or other policy measures. Furthermore, the act empowers the government to provide guidance, advice, information, or other necessary measures for businesses based on investigation findings. When advising companies, the government will avoid placing heavy burdens on them or demanding too much information, while always respecting their trade secrets and intellectual property.
The AI Promotion Act also stipulates that Japan shall actively participate in international cooperation and the formulation of norms related to the promotion of AI. This anticipates contributions through various international forums, including the G7 Hiroshima AI Process, as well as the Organisation for Economic Co-operation and Development (OECD) and the network of AI safety institutes (AISIs).
3. Implementation Status and Future Schedule
As described above, the AI Promotion Act establishes only the basic principles and a general framework; concrete measures will be formulated in the future.
The act came into full effect on September 1, 2025. As a next step, the AI Basic Plan is scheduled to be formulated within 2025.
Furthermore, guidelines concerning the R&D and utilization of AI are also slated for publication in 2025, though their specific content has not yet been revealed. It is worth noting that guidance for business operators on risk management already exists in the form of the AI Guidelines for Business, and guidance on contracts between businesses is available in the Checklist for Contracts on the Use and Development of AI.
Removing Barriers to Innovation: Japan’s Forefront of Regulatory Reform
As described above, Japan is taking an approach of comprehensively promoting AI while addressing specific, concrete risks by interpreting and applying existing laws and regulations and updating them, as necessary. While the specifics of AI risks and current laws were covered in a previous CSIS report, this section will explain the most recent significant developments.
1. Elimination of “Analog Regulations”
Many conventional regulations contained clauses that mandated human-based compliance to achieve a certain level of safety, such as mandatory visual inspections, periodic inspection and maintenance obligations, and mandatory on-site presence. The Japanese government has targeted these “analog regulations” for comprehensive reform since 2021, viewing them as impediments to the implementation of digital technologies and the broader digitalization of society. As a result, by the end of May 2025, revisions to approximately 98 percent (7,983 out of 8,162) of the targeted laws, regulations, and administrative circulars were completed. It is fair to say that the analog regulations mentioned above have now been almost entirely eliminated. Furthermore, the government has indicated a policy of conducting strict reviews to ensure that future policymaking does not hinder the utilization of AI, digital, and other technologies (a process known as Digital Legal Review).
However, while these regulatory reforms clarified that AI systems may be used in place of humans, they do not specify what kind of AI system is acceptable as a substitute. They also leave unclear the question of who is liable if a compliance violation occurs due to a malfunction of such an AI system. Without clarity on these points, it will likely be difficult for businesses to use AI systems for compliance purposes.
To resolve these issues, it is crucial to first specify the risk levels required by laws and regulations. Building upon that, establishing a framework for evaluating the safety of AI systems is essential. In this context, the vision paper published by Japan’s AI Safety Institute in July 2025 sets the development of an AI safety evaluation framework as a strategic goal. Key elements of this goal include establishing methods for AI system conformity assessment and for data quality management. The paper also indicates that, in terms of specific sectors, the institute will focus particularly on healthcare and robotics. The government’s policy, therefore, is to move forward with the development and implementation of evaluation methods for AI system safety, keeping specific use cases in mind.
2. Easing of Data Subject Consent Requirements in the Act on the Protection of Personal Information
Japan’s Act on the Protection of Personal Information (APPI) generally obligates businesses to obtain the consent of the data subject when providing personal information to a third party or when acquiring sensitive personal information. However, it has been pointed out that uniformly requiring consent even in cases where the risk to the individual is low creates a problem, making it difficult to analyze data held across multiple businesses. Therefore, an amendment to the APPI is being considered by the Personal Information Protection Commission (PPC) that would allow the third-party transfer of personal information and the acquisition of sensitive personal information without the individual’s consent in cases where the risk is objectively low. Specifically, this would apply when it is guaranteed that the information will be used only for the creation of statistical information—a category that includes AI development.
This legislative attempt to rebalance the protection of data subjects with the promotion of data utilization is reminiscent of Japan’s 2018 amendment to the Copyright Act. The revised Copyright Act stipulates that using copyrighted works for AI development without human enjoyment of the work’s expression does not constitute copyright infringement unless it unfairly harms the copyright holder’s interests. This change has significantly contributed to the promotion of AI development in Japan. Similarly, if the easing of consent requirements for personal data is realized, it would be a major advantage for AI development.
However, the question of who will guarantee, and by what means, that this personal information is truly used only for creating statistical information requires more detailed consideration. According to a document from the PPC, several measures have been suggested. These include: (1) the public disclosure of facts regarding the data provider, the recipient, and the nature of the statistical analysis; (2) a written agreement between the provider and recipient stating that the data is being provided solely for statistical purposes; and (3) obligating the data recipient to not use the data for any other purpose. Even with these suggestions, further examination of the governance and monitoring required to ensure their effectiveness will be necessary.
Leading by Example: The “Government AI” Initiative and Public Sector Adoption
The Japanese government has declared that it will not only promote AI utilization by the private sector but also actively leverage AI itself. A policy has been outlined in which the Digital Agency will develop a “Government AI” to enable the use of AI across all ministries and agencies.
To ensure proper governance for the introduction of AI in government, the Guidelines for the Procurement and Utilization of Generative AI for the Evolution and Innovation of Public Administration have been established. These guidelines stipulate the appointment of a Chief AI Officer in each ministry and agency to oversee the adoption and promotion of generative AI and to manage governance and risk. They also define a process whereby the Digital Agency’s Advisory Board for Advanced AI Utilization will provide advice and support for high-risk AI applications. Furthermore, the guidelines provide practical tools such as flowcharts for assessing the risks of specific AI systems and checklists for risk assessment items and contract clauses, which also serve as important reference materials for AI risk management in the private sector.
Defining the Nexus Role: How Japan Navigates U.S. and EU Strategies
How can the Japanese approach described above be positioned in contrast to the AI governance policies of the United States and the European Union? The following section will compare Japan’s policies with the contents of the AI Action Plan in the United States (July 2025) and the AI Continent Action Plan in the European Union (April 2025).
1. Common Ground Among Japan, the United States, and the European Union
A comparison of the AI governance policies of Japan, the United States, and the European Union reveals the following commonalities.
First, all three position AI as a top priority of their national strategies and are powerfully promoting R&D and human resource development. Japan has outlined a policy for government-led infrastructure development and talent cultivation, aiming to become the “world’s most AI-friendly country.” The United States, under the banner of “winning the AI race,” is focused on accelerating innovation. The European Union, through its “AI Continent” concept, is advancing research and skills development in an integrated manner with initiatives like GenAI4EU and the establishment of an AI Skills Academy. It is crucial to understand that modern AI governance is not solely for protecting citizens from new risks; rather, it serves as a driver for actively accepting certain risks in order to promote the proactive development and utilization of AI.
Second, all three have demonstrated a policy of their governments taking the lead in utilizing AI to improve administrative services and encourage AI adoption across their nations. Given the currently low rate of AI adoption in the country, Japan has committed to its “government [taking] the lead in using AI.” The United States has a specific action plan to accelerate the adoption of AI in government, and the European Union has set AI “adoption by the public sector” as a clear target within its Apply AI Strategy. In all three cases, these governments are attempting to spearhead AI adoption through government procurement and public services. In this context, the risk assessment criteria used by governments for AI procurement will likely become an important reference document for evaluating AI governance in private sector services as well.
A third common feature is the promotion of R&D on scientific methods for risk evaluation and the push for their standardization to realize safe and trustworthy AI. Japan’s policy is to use its AI Safety Institute as a hub for research on safety evaluation and to engage in international standardization activities at forums like the International Organization for Standardization. The United States is also building an AI evaluation ecosystem, centered around the National Institute of Standards and Technology (NIST). To support the smooth implementation of its AI Act, the European Union is promoting the establishment of AI testing and experimentation facilities and the development of harmonized European standards.
Fourth, all parties place a strong emphasis on public-private partnerships that link government with industry and academia. Japan aims to establish “risk governance through the cooperation of public and private sectors,” where the government acts as a control tower while respecting the fundamental autonomy of businesses. The United States also supports private sector-led innovation through mechanisms such as NIST consortia. The European Union has clarified its plans to advance large-scale projects, such as “AI Gigafactories,” through public-private collaboration.
In light of the points made, the policies of Japan, the United States, and the European Union share much common ground. On the other hand, there are also important differences. The following sections will compare the distinctive commonalities and points of divergence between Japan and the United States, and between Japan and the European Union.
2. Comparison with the United States: Hegemony or Interoperability?
In addition to the commonalities mentioned above, Japan and the United States share a similar regulatory approach. Both favor sector-specific and use-case-based regulation tailored to risk, rather than being bound by a cross-sectoral, comprehensive law. They both seek to address the risks posed by AI by leveraging existing legal and regulatory frameworks in specific fields such as healthcare, finance, and autonomous driving.
On the other hand, there are differences in their policies that reflect their specific underlying national strategies. The AI Action Plan for the United States explicitly sets competition among nations and the establishment of technological dominance as its clear goals. This is based on a competitive worldview that positions AI as a strategic asset capable of altering the geopolitical power balance.
In contrast, Japan’s policy prioritizes fostering public trust in AI systems. It is an industrial policy-focused and cooperative approach that aims to promote social implementation by creating a safe and secure environment for the development and use of AI systems, which in turn is expected to boost innovation.
Their stances on intervention in the output of AI systems are also different. The U.S. Action Plan interferes with the content of AI output itself, aiming to actively eliminate specific content—which is sometimes criticized as “Woke AI.” Specifically, it calls for the removal of references to diversity, equity, and inclusion and climate change from the NIST AI Risk Management Framework and wants AI procured by the government to pursue “objective truth.” The United States then aims to promote these value-laden AI standards to its allies, along with its AI stack exports. This can be described as a strategy to universalize the values of the current administration as a global standard.
Japan’s approach is significantly different. While Japanese policy documents speak of fundamental values such as the rule of law, human rights, democracy, diversity, and fairness, they make no mention of content intervention to exclude specific political or social ideologies from AI. Instead, Japan centers its international strategy on the G7’s Hiroshima AI Process with the stated goal of ensuring interoperability that presupposes the diversity not only of systems but also of the values of each country. This approach seeks to form a cooperative international order where different values can coexist and interlink, rather than attempt to propagate a specific set of values globally. In short, whereas the United States aims for the export of its own values, Japan aims for interoperability based on the premise of diverse values. This clearly illustrates the fundamental ideological difference between the two countries regarding international order and norm-setting.
3. Comparison with the European Union: How to Position the Relationship Between Humans and AI
When comparing the policies of Japan and the European Union, the most straightforward difference is the question of whether to regulate AI comprehensively. Unlike the European Union, Japan takes the approach of updating existing regulations for each sector. A more detailed analysis of this divergence reveals two essential points of difference: (1) whether regulations on AI systems intervene in human evaluation and emotions, and (2) whether there are legal mandates for specific AI governance processes.
Regarding (1), the EU AI Act treats applications that involve the evaluation of humans or intervention in their inner state—such as in human resources evaluations, credit scoring, and emotion inference—as high-risk applications and subjects them to additional regulations (Annex III). In contrast, Japan handles these issues within the scope of existing labor laws and financial regulations without imposing special obligations on AI. Underlying this, one can discern a cultural difference: Whereas the European Union positions AI strictly as a human tool and thus perceives a risk in AI evaluating humans, Japan is more accepting of a horizontal relationship between humans and AI. The fact that the European Union mandates human oversight for high-risk AI, while Japan sets no such special obligation, can be seen as a manifestation of this difference in their views on AI.
Regarding (2), the EU AI Act obligates providers of high-risk AI systems to establish risk management systems, create technical documentation, ensure transparency, and establish quality management systems. While these elements themselves are largely in common with Japan’s AI Guidelines for Business, in Japan’s case, these processes are merely guidance for businesses to reference. Failure to comply with them does not immediately constitute a legal violation. In other words, the key difference is that in the European Union, law enforcement can be triggered by the failure to follow a specified process. In Japan, such processes are something businesses should undertake voluntarily, and the ultimate decision on regulatory enforcement is based on whether a harmful outcome has occurred.
These differences are based on cultural perspectives of the relationship between AI, robots, and humans, as well as on different approaches to regulatory policy; neither is inherently more correct or superior. What is crucial is to ensure the interoperability of rules between nations that take such different approaches. Indeed, through the Hiroshima AI Process, Japan succeeded in consolidating the common approaches of G7 countries on AI governance into 12 guiding principles and 11 codes of conduct. Going forward, contributing to the creation of rules and standards that are both rooted in the values of each country and as interoperable as possible—through forums such as the OECD-led Hiroshima AI Process reporting framework, the network of AI safety institutes, and the development of international standards—will be the key to forming a pluralistic and innovation-friendly international order for the AI era.
Conclusion
In 2025, the global trend surrounding AI significantly shifted from regulation toward the enhancement of competitiveness. Amid this geopolitical dynamism, Japan is building a unique position under the consistent banner of becoming the “most AI-friendly country in the world.” Its approach is characterized by rulemaking through guidelines and standards within the framework of existing laws, conducted through a multistakeholder process, and an emphasis on an agile process of continuous evaluation and updates. The newly enacted AI Promotion Act can be understood as a mechanism to ensure the entire government can respond swiftly to changes in AI technology and risks, enabling the Cabinet Office to serve as a “control tower” for risk assessment and the consideration of countermeasures.
This Japanese policy holds significant meaning for the formation of global rules and norms on AI. Japan, advocating for an agile policymaking process that presupposes constant technological and social change, does not aim to export a set of “universal” values to the international community. Instead, it pursues interoperability based on the premise that countries have different values and best practices. The establishment of the Guiding Principles and Codes of Conduct within the Hiroshima AI Process and the subsequent implementation of its monitoring process can be seen as a successful example of this interoperability approach. This cooperative stance, which assumes the coexistence of diverse values and systems, suggests the potential for Japan to serve as a trusted nexus in the often-fragmented landscape of global AI rulemaking. Japan’s approach could serve as a constructive model for achieving a more pluralistic and innovation-friendly AI governance framework.
Hiroki Habuka is a non-resident fellow with the Wadhwani AI Center at the Center for Strategic and International Studies in Washington, D.C.
This report is made possible by general support to CSIS. No direct sponsorship contributed to this report.