New Government Policy Shows Japan Favors a Light Touch for AI Regulation


In early 2024, it seemed clear among the major developed economies that significant new regulatory frameworks for artificial intelligence (AI) were imminent. In the United States, the Biden administration had issued a sweeping AI executive order in October 2023, and congressional leaders were working toward comprehensive AI legislation for 2024. Meanwhile, the European Union was preparing to pass the EU AI Act, which it ultimately did in May 2024.

Japan was also a participant in this prevailing trend. Two key publications from the first half of 2024 strongly signaled that Japan was heading toward new legislation aimed at regulating AI technology more comprehensively: in February, a concept paper from the ruling Liberal Democratic Party, and in May, a white paper by the Japanese Cabinet Office’s AI Strategy team. Both documents recommended the introduction of new regulations for large-scale foundational models. All signs suggested that Japan was moving in parallel with its allies toward establishing a strengthened regulatory structure for AI.

By the end of 2024, however, the prospects for tough AI regulation in the United States and Europe had changed significantly. Not only did the United States fail to pass AI legislation, but U.S. voters also reelected President Donald Trump, who fulfilled a campaign promise to repeal the Biden administration’s AI executive order on the first day of his second term in office. The European Union, for its part, is still working to implement the AI Act, but influential documents such as the Draghi report on EU competitiveness suggest widespread concerns in the European Union that its AI regulatory efforts may have gone too far and stifled innovation. These concerns have been carried forward in the European Commission’s white paper “A Competitiveness Compass for the EU,” which emphasizes the necessity of achieving simpler, lighter, and faster regulation. At the AI Action Summit, European Commission President Ursula von der Leyen committed to reducing bureaucratic hurdles. Meanwhile, the French government is reportedly working to ensure that implementation of the AI Act is more focused on promoting innovation and less focused on regulating potential harms than drafters of the legislation anticipated.

Japan, like its U.S. and EU allies, is hitting the brakes on the race to regulate AI. On February 4, 2025, the Japanese government’s Cabinet Office released the interim report of its AI Policy Study Group (henceforth “the Interim Report,” or “the Report”), which outlined a very different vision for AI regulation than the country’s two reports from the first half of the previous year. This CSIS white paper outlines the direction of Japan’s AI regulatory approach in 2025, based on the contents of the Interim Report, while also incorporating Japan’s response to the so-called DeepSeek Shock. A summary of the Interim Report is provided in the Appendix for reference.

Maintaining a Sector-Specific Approach to AI Regulation

The AI Policy Study Group is an expert committee established under the Cabinet Office; it serves as the central body overseeing Japan’s AI policy development as a whole, covering both regulatory and promotional policies. In contrast to the ambitious regulatory trends observed during the first half of 2024, the Interim Report published by the AI Policy Study Group in February 2025 adopts a markedly more cautious stance. The Interim Report underscores Japan’s preference for relying on existing sector-specific laws rather than imposing sweeping AI-specific regulations, in accordance with the principle of technological neutrality. It also highlights the importance of voluntary risk-mitigation initiatives undertaken by businesses, while committing the government to continuous monitoring of emerging risks and necessary countermeasures. As part of this effort, the Report suggests new legislation to establish a government strategic leadership body that would collect information necessary for policymaking and facilitate cooperation in sharing information about major incidents, but without legal sanctions. This marked departure from earlier proposals reflects the complex nature of AI risks, the current limitations in assessing the safety of advanced AI models, and the broader trend of regulatory easing observed under the new Trump administration—whose repeal of the Biden-era AI executive order set a clear and early precedent. Additionally, the outcome of the October 2024 general election in Japan, in which the ruling Liberal Democratic Party lost its majority, has resulted in a fragmented Diet, making the advancement of ambitious legislative reforms a formidable challenge.

AI systems are designed to make sophisticated inferences and decisions based on large datasets and numerous parameters using statistical and probabilistic methods. In this sense, the dangers posed by AI are less about fundamentally introducing new types of risks and more about amplifying existing risks. From this perspective, the stance outlined in the Interim Report, which places greater reliance on existing legal frameworks and voluntary industry measures, appears reasonable and consistent with Japan’s established policy approach.

Balancing Business-Led AI Governance and Strategic Government Leadership

Judging from the Interim Report, the Japanese government appears likely to rely on businesses’ voluntary commitments to address AI risks under existing laws. However, simply delegating risk management does not mean that all businesses will immediately be able to address risks appropriately. In particular, startup companies—which are key drivers of innovation—often lack sufficient resources to dedicate to safety and governance.

Therefore, this white paper proposes that the government’s newly established strategic leadership body take the initiative in reducing the cost and complexity of responsibly implementing AI across society. This would be accomplished by continuously producing outputs such as clarifications of the interpretation of laws in specific areas, guidance on methods for evaluating AI safety, updates to the AI Guidelines for Business (a risk management framework), and contractual guidelines for entities across the supply chain. In fact, the Japanese government pursued similar approaches in 2024, proposing guidance on the interpretation of existing laws as well as directions for new legislative measures in specific areas such as copyright and other intellectual property rights, personal information, and countering disinformation and misinformation.

The Key to Success: Transparency and Effective Guidance

If the Japanese government does successfully transition to continuously producing such AI-related outputs, it should move away from closed, non-transparent discussions—like those of the current AI Policy Study Group—and instead adopt a transparent multi-stakeholder process that brings together knowledge from a wide range of fields.

Further, it is essential that, in principle, requests for information from private businesses not be conducted for the purpose of criticizing companies; instead, they should aim to gather best practices and enable proactive evaluations. Sensitive information should be handled cautiously, such as by limiting its disclosure to specific recipients, so as to ensure that the innovation and economic growth incentives of AI developers and providers are preserved. The information collection mechanism should be operated in a way that avoids creating a situation in which businesses that cooperate are criticized based on the content they disclose, while those that refuse to cooperate face no scrutiny—a “no good deed goes unpunished” scenario.

Japan’s Response to the DeepSeek Shock

Finally, it is worth touching upon Japan’s response to the DeepSeek Shock. The emergence of DeepSeek—a high-performance, small-scale, and low-cost AI model from China—garnered significant attention in Japan. Interestingly, the Interim Report was drafted and released for public comment before the DeepSeek Shock occurred in January 2025, with only the final version published in February after the event. However, in the end, the DeepSeek Shock had little impact on the content of the Interim Report; the Report had already rejected the idea of regulating only large-scale foundational models (an approach initially proposed in early 2024). Moreover, the Report emphasized that, even if future regulations were considered, risk assessments should be conducted based on actual risks rather than on model size.

Of course, concerns regarding national security risks associated with Chinese-developed AI models remain. For example, during Diet deliberations, it became a point of contention that DeepSeek described the Senkaku Islands—territory claimed by both Japan and China—as “China’s inherent territory, both historically and under international law.” In response, Prime Minister Ishiba adhered to the policy direction outlined in the Interim Report, stating that the government would accelerate preparations for legislation that authorizes the government to first issue administrative guidance and, if deemed insufficient, take stronger measures against AI risks. In addition, on February 6, 2025, the Japanese government issued an advisory to government ministries and agencies regarding the use of DeepSeek. This notice primarily highlighted that data acquired by DeepSeek is stored on servers in China and thus subject to Chinese legal jurisdiction. However, beyond this, it largely reiterated existing guidance issued in 2023 regarding the use of generative AI by government agencies, such as prohibiting the entry of sensitive information into AI prompts and requiring agencies to consult the National Center of Incident Readiness and Strategy for Cybersecurity and the Digital Agency in cases involving systems used for national security and public safety operations, as well as those handling highly confidential information.

As of now, Japan has no specific restrictions on the use of DeepSeek by private entities. In fact, the country’s private sector has largely welcomed the emergence of DeepSeek, owing not only to its high performance, small scale, and low cost, but also to its open-weight nature. Many tech companies have already begun developing and offering their own fine-tuned versions of DeepSeek’s models tailored to their specific needs. Moreover, concerns over national security biases and the absence of built-in safeguards can be substantially mitigated through additional fine-tuning conducted by these companies.

Of course, this does not mean that DeepSeek and its derivative models are exempt from regulation. As analyzed in detail in the table in the Interim Report (see this paper’s Appendix), existing laws already regulate the manipulation of information through disinformation and misinformation as well as the unauthorized use of acquired data. Furthermore, the new legislation hinted at in the Interim Report is designed to enable the government to collect information on potential legal violations and on best practices for risk mitigation, ensuring that emerging AI risks are effectively addressed within Japan’s regulatory framework.

Looking Ahead: Japan’s AI Policy in a Shifting Landscape

As the global outlook for AI governance becomes increasingly uncertain in 2025, Japan’s role in contributing to international rulemaking through frameworks such as the G7 and the Organisation for Economic Co-operation and Development (OECD) will grow in importance. It will be a critical test for Japan to design and implement legal systems, under a constructive public-private partnership, that can manage various risks to an acceptable level while maximizing the benefits that AI brings.

Hiroki Habuka is a non-resident fellow with the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS).

This report is made possible by general support to CSIS. No direct sponsorship contributed to this report.

Please consult the PDF for the appendix.
