Unpacking Japan’s AI Policy with Hiroki Habuka

Photo: Yuichiro Chino/Getty Images
This transcript is from a CSIS event hosted on March 21, 2025. Watch the full video here.
Gregory C. Allen: Good morning. I’m Gregory Allen, the director of the Wadhwani AI Center here at the Center for Strategic and International Studies. Today we’ve got a fabulous discussion ahead, “Unpacking Japan’s AI Policy” with Professor Hiroki Habuka. Hiroki, thanks so much for joining us today.
Hiroki Habuka: Thank you, Greg. Thank you very much for inviting me today.
Mr. Allen: And Hiroki, you are a non-resident senior associate here in the Wadhwani AI Center, where you’ve published a series of fabulous reports unpacking all aspects of Japanese AI policy. But it’s worth pointing out that in addition to your work at CSIS you’re also a research professor at the Kyoto University Graduate School of Law. And previously, you were the deputy director for global digital governance at the Japanese government’s Ministry of Economy, Trade, and Industry, where you led several projects on digital governance including artificial intelligence, privacy protection, data security, and digital platform regulation. And more recently, you just became the representative director of the AI Governance Association of Japan, which is a really important body in the Japanese AI ecosystem.
So that is an incredible set of experiences. The perfect reason why we wanted to have you on today, to unpack what’s going on in Japan as well as your most recent report. But before we get into Japan’s evolving approach to AI policy, could you just say a few words about your background and how you developed your expertise in AI policy?
Mr. Habuka: Thank you very much for the kind introduction.
So you already covered most of my career, but just to add a few things. First of all, I’m a lawyer qualified in Japan and New York state. And while I was studying in the States, on the West Coast, I saw the amazing pace of development of digital technologies. That is why I became interested in the policy questions emerging in a digitalized society, and why I went to the Ministry of Economy, Trade, and Industry. Eventually I wrote several white papers on so-called agile governance. Agile governance is the idea that, in a rapidly evolving and increasingly complex world, our entire social system should be updated continuously in a flexible manner. And today the agile governance concept has become a foundation for Japan’s digital policy, including the AI governance policies which we are discussing today.
Mr. Allen: And I think you’re underselling yourself a bit. I mean, those white papers really were the foundation of Japanese AI governance policy, which is why we’re so grateful to have you here. I wanted to ask you about the AI Governance Association of Japan, where you’re the representative director. What is this body? Who are its members? And what does it do?
Mr. Habuka: The AI Governance Association of Japan is composed of nearly 100 private companies, mainly from Japan but also from the U.S., Europe, and Asia. We are working together to share best practices and develop policy recommendations for responsible AI governance, because nowadays the government alone cannot handle everything. So input from the private sector is really important, and that is what we are doing in the association.
Mr. Allen: That’s terrific. So now we’ve talked a little bit about you. And your story is interwoven with the history of AI policy in Japan. But sort of help us understand – before we get to the present day, help us understand the past. Like, what is the history of Japan’s AI governance efforts? What was going on a decade ago? What was going on five years ago? How did we get to where we are now?
Mr. Habuka: So the Japanese AI governance policy actually has not dramatically changed since around 2016.
So we have mainly three pillars for AI governance. The first is promoting the development and use of AI across society. The second is taking a sector-specific regulatory approach rather than a holistic approach. The third is pursuing an agile, multi-stakeholder governance model rather than a top-down, command-and-control type of governance.
So if I may, maybe I can just briefly wrap up the story based on those three pillars.
Mr. Allen: Mmm hmm.
Mr. Habuka: Yeah. So the first pillar, as I said, is the promotion of AI. A major milestone in this policy came in 2016, almost 10 years ago now, when the Japanese government introduced the concept of Society 5.0.
The Society 5.0 vision is of a human-centered society where a high degree of integration between cyberspace and physical space can promote economic development and also solve societal problems.
Now, while economic growth is certainly a major factor behind the push for AI, Japan’s approach is also shaped by some unique societal and cultural factors. On the societal side, our population is dramatically aging, so the challenges are an aging population and labor shortages.
This demographic shift has made AI an essential tool for maintaining productivity and sustaining people’s lives. As a cultural background, Japan also has a very robot-friendly culture, so it has long embraced the idea of robots coexisting or living with humans.
Mr. Allen: Yes. The last time I was in Tokyo I went to the Museum of Robotics which was a great deal of fun for me.
Mr. Habuka: Oh, wow. Good. Yeah.
Mr. Allen: Yeah.
Mr. Habuka: Exactly. That’s the point. So we are already familiar with robots as, like, friends of ours. And you know, this concept has been a common phenomenon in Japanese animation, manga, and literature – and the robot museum as well. Maybe it’s rooted in ancient animistic beliefs, where spirits exist not only in humans but also in animals, trees, or even in stones. So why not in computers?
For those reasons, Japanese society and Japanese policymakers have strongly pushed the adoption of AI. So that was the first pillar.
The second pillar is a sector-specific approach rather than a holistic one. Rather than introducing a single, one-size-fits-all AI law, Japan regulates AI through existing legal frameworks within each industry.
The rationale is that AI operates by analyzing large data sets and making decisions based on statistical and probabilistic methods. Therefore, the risks it presents are not necessarily new; rather, AI often amplifies existing risks.
So in this context, Japan has introduced regulatory reforms across multiple sectors, including autonomous driving, AI-powered medical devices, and AI-based credit assessments. Japan now allows AI-driven compliance across many regulated industries, which is a significant step toward AI adoption.
And last but not least, the third pillar is agility and multi-stakeholder cooperation, based on a strong reliance on soft law, meaning guidance or standards that have no legally binding power. The importance of soft law was explicitly recognized in a 2021 white paper published by METI, the Ministry of Economy, Trade, and Industry.
So the white paper stated that, due to difficulties in keeping up with the speed and complexity of AI innovation, the government should provide nonbinding guidance to support private-sector initiatives. METI also said that such guidance should be based on multi-stakeholder dialogue and continuously updated to keep pace with AI advancements. So this is–
Mr. Allen: So this soft law approach, the nonbinding guidance, in one sense it is voluntary. But can adherence to the guidance provide benefits for companies, either in terms of what their customers are demanding from them or, alternatively perhaps, their liability risk and exposure to lawsuits? Does it come up in those areas?
Mr. Habuka: Exactly, yeah. So how to connect soft law – I mean, guidance or standards – with hard law, meaning law with sanctions? It’s a big question, because sometimes standards have legal effect. For example, under the EU AI Act, if companies comply with the European harmonized standards, then they are presumed to be compliant with the AI Act. Likewise, under Japanese law some standards or guidance are somehow connected with hard law. But so far, most guidance and standards are just reference materials. Even so, they help companies be responsible in their AI adoption. Or, sometimes following the soft law will reduce the risk of a civil suit against the company.
Mr. Allen: That’s great. So we’ve talked a lot about the what and the when evolution of AI policy in Japan. Let’s talk about the who. So who are the key government institutions that shape Japan’s AI policy and AI governance approach?
Mr. Habuka: Oh, this is a very difficult question, because there are a lot of agencies related to AI policy. And this is, of course, because AI is now adopted in all sectors. So at the top we have the Cabinet Office. The government has recently published a new bill, which maybe I can touch on later. The new bill proposes the creation of an AI Strategy Headquarters, chaired by the prime minister. This clearly signals how seriously Japan is taking AI, aiming to ensure a coordinated national strategy from the highest level of government – because the Cabinet Office is, you know, the highest organization in the Japanese government.
On the other hand, up to now Japanese AI policy has been led primarily by two ministries. One is the Ministry of Economy, Trade, and Industry, or METI, and the other is the Ministry of Internal Affairs and Communications, or MIC. Both ministries jointly issued the AI Guidelines for Business last year, which serve as a comprehensive risk management framework for private companies. On top of that, METI’s focus is mainly on promoting AI, especially in the business sector. METI recently published a checklist for AI-related contracts, helping businesses appropriately allocate risks and benefits between AI developers and users.
And under METI there is another very important institution, which we already briefly mentioned: the AI Safety Institute, or ASI. ASI has released guidance on topics such as AI model evaluation and red-teaming methodology, which tests whether an AI model is safe against cyberattacks. The ASI has also published guidance on data quality management. These documents are often published in collaboration with international counterparts. So these tools are not meant to restrict AI, but rather to facilitate the safe and responsible use of AI.
On the other hand, MIC focuses more on the ethical and societal dimensions of AI. MIC has played a leading role in formulating the human-centric AI principles, which emphasize values such as human dignity, fairness, and sustainability. MIC also represents Japan in international forums like the G-7 and the OECD, and it took the lead in the negotiations of the G-7 Hiroshima AI Process, a very impactful international collaboration for responsible AI governance.
There is another key player, the Digital Agency. The Digital Agency’s primary mission is to digitize government services and build digital infrastructure, but it also contributes significantly to policy reform. For example, it eliminated the so-called analog regulations – removing requirements like paper documentation duties, full-time on-site stationing duties, or visual inspection duties. The Digital Agency repealed all those so-called analog duties and made it possible for companies to comply using AI and data instead.
On top of that, there are several sector-specific regulators – for example, the Personal Information Protection Commission, which obviously takes care of privacy and data, or the Agency for Cultural Affairs, which takes care of copyright regulation. Japan also has a very unique copyright law that promotes the use of AI. And for the financial sector, the Financial Services Agency, or FSA, is actively encouraging the responsible use of AI. The FSA just recently issued a discussion paper analyzing AI applications and governance in the finance industry. After all, it’s truly a whole-of-government agenda, you know.
Mr. Allen: This is really – this is really interesting. So if I – if I understand you correctly, you sort of have at the top the prime minister’s Cabinet Office.
Mr. Habuka: Yeah. Yeah.
Mr. Allen: And then you have sort of two cross-sectoral drivers, which are METI and MIC.
Mr. Habuka: Yeah.
Mr. Allen: And then you also have all the sector-specific functions like financial regulation – earlier you mentioned autonomous driving and AI medical devices, you know, those types of things. Presumably, the relevant government ministry for each of those specific areas is doing various tasks related to AI governance in its area. Is that correct?
Mr. Habuka: Yes, correct. And in addition to METI and MIC, the Digital Agency is another player, which currently takes care of digitalization. The Cabinet Office actually hasn’t had a clear mandate for coordinating all of this AI policy. That is why the recently published interim report, as well as the recently published new bill, provides that the Cabinet Office will now establish the AI Strategy Headquarters.
Mr. Allen: I see. So we’ll get to that bill momentarily, but let’s start with the interim report.
So this came out in early February. Japan’s AI Policy Study Group released an interim report which provides recommendations for the country’s approach to AI regulation. So what’s in this report? And how does it compare to what Japanese policymakers had previously said last year and earlier?
Mr. Habuka: Yeah. Sure. So maybe it’s better to start by providing some background. Even though I said that Japan’s basic policy was to leverage existing regulations, in early 2024 – just a year ago – it seemed that Japan was on track to introduce new regulations targeting powerful AI models. You remember 2023. That was the year when ChatGPT, Gemini, and other generative AI hit the big boom. And –
Mr. Allen: And there was that letter signed by the CEOs of many companies talking about the extinction of humanity from AI. That, of course, got a lot of minds focused around the world, including at the U.K. AI Safety Summit, on the sort of catastrophic risk potential of frontier AI – not sort of everyday AI, but of the absolute most advanced, you know, what might be coming down in the future.
So Japan, like many countries around the world, was saying perhaps we will deviate from our sector-specific approach specifically in the case of frontier AI models. And you’re saying in early 2024 it was looking like they were going to do that.
Mr. Habuka: Exactly. So as I remember, the U.K. AI Safety Summit was November 2023.
Mr. Allen: Yes.
Mr. Habuka: And there was the U.S. executive order – I think Biden’s executive order was issued in October 2023. So that was a time when regulators in all countries were interested in regulating those powerful models, and Japan was on the same track.
So, basically, two key documents signaled stronger regulation of powerful models. One was a concept paper issued by the Liberal Democratic Party, which was the majority and ruling party in the Diet; the other was a white paper from the Cabinet Office issued in May. Both papers recommended regulatory measures for large-scale foundation models, similar to other jurisdictions like the EU and the U.S., where stronger AI oversight was gaining momentum.
However, as I said, in February this year – one year later – the Japanese government’s AI Policy Study Group, which is an expert committee under the Cabinet Office, issued an interim report that laid out an apparently different approach. While earlier policy papers leaned toward more stringent oversight of advanced AI models, the interim report takes a more cautious stance. Instead of creating new AI-specific regulations, it reinforces Japan’s sector-based approach, meaning that existing laws will continue to govern AI risks within their respective domains. The report also emphasizes voluntary risk-mitigation efforts by businesses, while ensuring that the government closely monitors emerging AI risks and takes action when necessary.
One of its key proposals was the creation of a government-led strategic leadership body – not for enforcement, but to collect information and foster coordination on AI-related risks and incidents. Unlike earlier proposals, the interim report does not include legal sanctions; rather, it aims to enhance government oversight without stifling innovation. The interim report also suggests that Japan’s AI governance model will largely rely on businesses’ voluntary initiatives, as I already mentioned.
However, leaving risk management entirely in the hands of businesses does not guarantee that all companies will be able to address these challenges effectively. This is particularly true for startups, which, despite being key drivers of AI innovation, often lack the resources needed for robust safety and governance measures.
So, to bridge this gap, the report proposes a more active role for the government – not through strict regulation, but by establishing a strategic leadership body that will guide businesses and lower the barriers to responsible AI adoption.
Mr. Allen: Can you elaborate on that point? So you’re sort of saying that the large companies have significant compliance resources and significant self-safety-evaluation resources, but maybe smaller companies, startups, don’t necessarily have these capacities internally. So what services, exactly, is the Japanese government body going to provide that would lower the barriers for startups to having, you know, stronger and more effective safety measures?
Mr. Habuka: Yeah, that’s a great question. So one of the key functions of the strategic body will be detecting and identifying where we are going to need guidance or clarified interpretation of existing regulations. The government has already issued a lot of guidance of that kind – for example, on intellectual property, privacy, or disinformation risks, et cetera. So what we need is not extra regulation but clarification of how existing laws apply, plus some reference materials, especially to support startups and SMEs in implementing responsible AI governance at low cost.
Mr. Allen: I see. That makes a lot of sense.
So you, through CSIS, just published a terrific report about, you know, this shift in Japan’s posture towards AI regulation – some things being consistent, like the sector-specific approach, but this pivot specifically in relation to frontier AI models. I encourage, of course, everyone to go read that report, which is fabulous, but is there anything else that you wanted to highlight surrounding what we were just talking about, that came out of that report?
Mr. Habuka: Sure. So you mentioned the policy pivot, but actually I want to emphasize that the interim report is aligned with Japan’s traditional AI governance policy.
Mr. Allen: Yeah. It’s more like a pivot that failed to complete – it seemed like it was going to be a pivot, rather than actually being one. It seemed like policy was going to change, but in fact it just continued. And this report sort of cements that consistency.
Mr. Habuka: Yeah, exactly. So, if anything, the 2024 Liberal Democratic Party paper and the subsequent white paper were the exception. The 2024 movement was likely influenced by policy developments in the EU, the U.S., and other allies. With that in mind, the shift in policy reflects several key factors. The first is that AI risks are highly complex, and there are still challenges in identifying and assessing the safety of advanced models. That is one of the reasons why we have not yet reached the conclusion that new advanced models should be regulated.
The second reason is that the global regulatory landscape is evolving. The U.S., for example, moved away from AI regulation under the new administration – Mr. Trump repealed the Biden executive order on the very day he came into office. At the same time, concerns in the European Union that overregulation could hinder innovation have led to calls for a more balanced approach, as we saw in the EU’s Competitiveness Compass report issued at the end of January, which really worries about overregulation in the EU. We also saw this shift at the AI Action Summit in Paris, held in February, where global leaders were happier to collaborate on AI opportunities than on AI risks.
And third –
Mr. Allen: That’s terrific. So – oh, sorry. Finish your thought.
Mr. Habuka: Sorry. So third, but not least, Japan’s domestic political landscape has also changed. The general election in October 2024 resulted in a fragmented Diet, making it more difficult to push through major legislative reforms. Given the circumstances, Japan is opting for a more flexible, business-led AI governance model rather than introducing sweeping new regulations at this moment.
Mr. Allen: Amazing. So in addition to the report, we also have a bill – the Bill on the Promotion of Research and Development and the Utilization of AI-Related Technologies – which I understand has been approved by the government and submitted to the Diet, Japan’s national legislature. So what does this bill say? And, if passed, what will be its implications for Japan’s AI ecosystem?
Mr. Habuka: Yeah. That’s a great point. So, first of all, let me talk about the status of the bill. The bill was submitted by the Cabinet to the Diet, but at this moment we are not 100 percent sure whether it will pass. So far, the opposition parties have not expressed objections to the AI bill. However, given that the ruling party, the LDP, now holds a minority in the parliament, comprehensive legislative deliberations are taking longer than usual. As a result, it remains uncertain whether the AI bill will pass during this parliamentary session.
So let me introduce the contents of the bill. As its name indicates, the bill is designed to promote AI research, development, and utilization, while also ensuring that potential risks are properly addressed. But the main focus is simply to promote AI; it’s not a regulation. Specifically, it sets out basic principles for AI to be provided by the government, creates an AI Basic Plan, and establishes a new AI Strategy Headquarters chaired by the prime minister. These measures primarily target government institutions rather than private businesses, and the bill does not impose legally binding obligations or penalties on companies.
One unique aspect of the bill is that it actively encourages businesses to leverage AI. The provision actually states that companies should strive to enhance efficiency, drive innovation, and contribute to new industries by using AI. So –
Mr. Allen: And does that come – obviously, they’re saying, Japanese businesses, we want you to adopt AI. We want you to lead in AI. But does that come with any other kind of promotion incentives – you know, subsidies, or access to government resources or other things?
So, you know, what is the mechanism by which they’re trying to actually drive increased AI adoption?
Mr. Habuka: Great point.
So the Japanese government has already provided several subsidy and support programs for startups and big companies deploying or developing AI systems. But under this new bill there is no mention of specific policy measures, because those policy packages will be set out in the AI Basic Plan, which will be created under this new bill if it passes.
Mr. Allen: I see. So if this bill is passed there will be an additional action taken, called the AI Basic Plan, and that is the one that might have financial incentives or other kinds of supporting contributions from the government to encourage AI adoption.
Mr. Habuka: Exactly. Correct.
So this bill only provides a framework for a new policy package to come, with no specific mention of the actual plans. Yeah, that is one point. Also, the bill says that businesses are expected to cooperate with government-led AI initiatives. And this explicit legal encouragement – I would say it’s a kind of official encouragement for AI adoption – highlights –
Mr. Allen: So I think – yeah. In the United States many – you know, many times the government will encourage businesses to do something and businesses will say OK, but where’s the money –
Mr. Habuka: Yeah.
Mr. Allen: – and if there’s no money I’m not going to do that.
But I think in Japan actually these kinds of government encouragements are often quite effective in driving private-sector cooperation and adherence. Is that – do I have it correct?
Mr. Habuka: Yeah, it’s correct. And sometimes we worry that companies are maybe too compliant and –
Mr. Allen: Too compliant. That’s not a problem we have in the United States. (Laughter.)
Mr. Habuka: There’s a difference – a cultural difference. Interestingly, in the interim report there is a sentence saying that, given that Japanese companies may be too compliant, new regulation could have too strong a chilling effect on companies, and that is why we shouldn’t jump directly into new AI regulation.
Mr. Allen: And it also sort of speaks to why Japan is so comfortable with a soft law approach because historically Japanese companies really do get on board with soft law approaches to an extent that is greater, perhaps, than other countries or regions.
Mr. Habuka: Yeah, that’s correct. So the good side is we are very good at following rules. We are very honest and diligent. But on the other hand, given the speed of change in the AI field, it is often impossible for regulations to prescribe everything.
Mr. Allen: Right.
Mr. Habuka: And in this situation, if you cannot take any action without clear rules, then it’s a problem.
Mr. Allen: I see.
Mr. Habuka: Yeah. In fact, a lot of companies were looking for good guidance, if not regulation, to decide for themselves whether they can adopt this technology or not. The Japanese government is also aware of that. That is why it is very active in publishing guidance, standards, best practices, et cetera.
Mr. Allen: Great. So I want to shift the conversation now to Japan’s role in international AI dialogues and conversations.
You know, Japan was the president of the G-7 in 2023, which turned out to be a really surprisingly impactful year for international AI governance through the G-7, which as a body had not previously played such a significant role but under Japanese leadership did so. So, now that we’re in 2025, what role does Japan play in international AI policy? And how could this new perspective influence global governance?
Mr. Habuka: Yeah. So maybe let me briefly explain about what Hiroshima AI process is.
Mr. Allen: Yes. And this came out of the G-7 2023, right? Yeah.
Mr. Habuka: Yeah. Yeah, yeah, yeah. So, you know, 2023 was the year when we were all shocked by the emergence of generative AI. And Japan was, I would say, luckily the chair of the G-7. That is why Japan led the establishment of the so-called Hiroshima AI Process, which aimed to realize the responsible adoption of generative AI systems. In the end, the members agreed on 12 guiding principles and an 11-point code of conduct for advanced AI systems. And my previous research – which you can find on the CSIS website – found that these principles are well aligned with the existing AI policies of each country. Now over 50 countries and regions have expressed support for the spirit of the Hiroshima AI Process.
And this year, in February 2025, the OECD launched a global framework to monitor how the Hiroshima AI guidance is being implemented by companies. Recently we have seen some developments that may appear to signal a slowdown in global AI rulemaking, as I mentioned briefly – such as President Trump’s repeal of the AI executive order, and the U.S. and the U.K. choosing not to sign the joint statement at the Paris AI Action Summit. However, I do not believe these events undermine the broader trend of international harmonization and collaboration in AI governance that Japan has promoted.
Why? Because, at the heart of the matter, while all countries are aware that they should accelerate the development and deployment of AI systems, they are facing the same fundamental question: regulations cannot keep pace with the speed of technologies or the diversity of social values. Therefore, the real issue is who will fill the gap between regulation and operation. If I may simplify things, the U.S. relies on a market-led approach while the EU takes a rulemaking approach. Japan’s model is different from both.
So rather than imposing strict regulations or leaving everything to the market, Japan respects voluntary initiatives by businesses, and the government actively supports those efforts through nonbinding, flexible guidance. At the same time, Japan places great importance on collecting and sharing best practices and information on significant risks to ensure appropriate checks and balances in the system. In this context – I didn’t mention this in answer to your earlier question, but the new bill also provides that the government has the capacity to collect information from private parties, both best practices and significant risk cases – I mean, incidents.
And based on the knowledge collected from private parties, the government will learn what kind of guidance is needed and how it can collaborate with private stakeholders, or maybe NGOs, or academia, et cetera. So this kind of agile ecosystem of AI governance is what Japan is pursuing.
Mr. Allen: That’s great. And Japan is also a part of the Network of AI Safety Institutes. Now the U.K. has rebranded theirs as the AI Security Institute. Japan has an AI Safety Institute. So can you talk a little bit about where Japan’s AI Safety Institute is and the role that it plays in the Japanese ecosystem, and then, you know, what you see as the future of Japan’s participation in this international network?
Mr. Habuka: Yeah. So ASI, the AI Safety Institute of Japan, was established under METI, the Ministry of Economy, Trade, and Industry. That means ASI is an AI promotion organization rather than an AI regulator. And the Japanese ASI has been working closely with the International Network of AI Safety Institutes to develop guidance on a range of important topics, including AI model evaluation methods, red-teaming methodologies, and data quality management.
Mr. Allen: Can I ask you? You said that because it’s under METI it has an AI promotion function. And that’s just because METI’s overall goal is to drive economic growth and that sort of thing. It’s not principally a regulatory-focused kind of a body. And what is the mechanism by which, you know, the AI Safety Institute contributes to accelerated AI adoption or accelerated AI growth? Because some people would sort of say that safety and innovation, there’s a trade-off. Or, safety and adoption, there’s a trade-off. But by putting the ASI under METI, it’s clear that Japan doesn’t see that. They see that safety actually can be a mechanism for driving accelerated adoption. Can you just elaborate on the thinking of the Japanese government there?
Mr. Habuka: Sure. At least for Japanese companies, safety is one of the most important things. Japanese products and services are famous for their safety. That has been a core value of our society and business sectors. And when we face new technologies – especially technologies like AI, which are black box in nature and also quite complex and fast-changing – most companies and business leaders worry about how to control, or at least mitigate, the emerging risks. But since there are a lot of things to consider, not only the technological aspects but also the organizational and maybe regulatory aspects, it's just too much for business leaders to understand everything, absorb everything, and make decisions on risk taking.
Mr. Allen: I see. So is it fair to say that the Japanese government’s thesis here is that concerns about the risks of AI are – and concerns about the reliability of AI are slowing adoption by Japanese businesses?
Mr. Habuka: Yeah.
Mr. Allen: So if there’s something the Japanese government can do to accelerate improvements in safety, accelerate improvements in reliability, then that will actually accelerate adoption of AI rather than slow it down or hinder it? Do I have it correctly?
Mr. Habuka: Exactly. And our society – Japanese society is not like U.S. society, where you can just go to court if you have – if you caused any problems. Japanese businesses are really reluctant to go to court. We prefer ex ante agreement, or maybe settlement. And in that kind of culture, it's important to have, to some extent, an agreed framework or guidance for responsible innovation, rather than just making things happen and going to court afterwards.
Mr. Allen: Great. And so now I want to shift gears a little bit because, of course, the U.K. has rebranded their Safety Institute as the Security Institute and has talked about a lot of the national security implications of AI. Here in the United States, the national security domain was really the focus of AI policymaking for many years, including with the creation of the National Security Commission on Artificial Intelligence. And so I’m curious how Japanese policymakers see the national security implications of AI, whether that’s with the intersection of AI and military capabilities or the competitive risks of China’s rise in AI, as exemplified by products like DeepSeek. How are all of these trends and developments seen in Japan?
Mr. Habuka: Yeah. The interim report has actually identified certain risks associated with national security, such as the facilitation and advancement of CBRN-related development –
Mr. Allen: Can you just – for those who might not be familiar, what is CBRN?
Mr. Habuka: It’s chemical, biological, radiological, and nuclear. I mean, those kinds of really –
Mr. Allen: Weapons of mass destruction type risks, yeah.
Mr. Habuka: Yeah, yeah. So access to those kinds of information has become much easier compared with before, when we didn’t have any generative AIs. And now it’s also relatively easy to create some kinds of computer viruses, or even to design real viruses. Those kinds of risks are described as national security threats in the interim report.
And these concerns will be further addressed in the upcoming AI Basic Plan, which will be issued, as we discussed, by the newly proposed AI Strategy Headquarters. The strategy is likely to include measures for responding to national security risks posed by AI. However, at this stage the government has not yet provided any concrete details on how those risks will be addressed. So let’s see how those kinds of national security threats are handled in the near future.
And you also mentioned the DeepSeek shock, so let me turn to that topic. Very interestingly, that big shock – the emergence of a high-performance, small-scale, and low-cost AI model developed in China – happened in January 2025. And the interim report was drafted and released for public consultation before the DeepSeek shock, while the final version was published in February, after the DeepSeek shock. Despite this timing, DeepSeek’s emergence had almost no impact on the content of the interim report. Actually, the report had already dismissed the idea of regulating AI models based solely on their size.
Mr. Allen: Mmm.
Mr. Habuka: Yeah. It reinforced the principle of risk-based regulation, meaning that any future AI rule should focus on actual risks rather than technical specifications such as model size.
And of course, concerns over national security risks remain, particularly regarding Chinese AI models. For example, a key issue raised in Diet deliberations was that DeepSeek reportedly described the Senkaku Islands, which belong to Japan, as China’s inherent territory, aligning with Beijing’s official stance. So in response, in the Diet, Prime Minister Ishiba reaffirmed the approach outlined in the interim report, stating that Japan would move forward with legislation allowing the government to issue administrative guidance first, followed by stricter measures if necessary.
Mr. Allen: That’s great. Thank you so much for that overview.
And I want to shift gears one more time, which is around the private sector AI ecosystem in Japan. So how are – you know, what are Japanese companies doing in AI? Are there any companies that stand out to you as particularly noteworthy? To what extent is there partnership between Japanese companies and American firms? To what extent is there Japanese firms leading with their own internal innovation? You know, how would you characterize Japan’s AI private sector ecosystem?
Mr. Habuka: So in an aging society like Japan where the labor force is steadily shrinking, AI innovation and adoption have become national priorities to help fill workforce gaps and also drive economic growth. AI is actually expected to bring transformation across a wide range of sectors, from manufacturing and health care to agriculture and, of course, entertainment. And its impact on the Japanese economy could be profound.
For example, according to one estimate, generative AI alone could contribute up to 1 trillion U.S. dollars in productivity gains in Japan. In the manufacturing sector, efficiency improvements of 20 to 30 percent are anticipated, and Japan’s particular strength in manufacturing and robotics makes these areas especially promising. In agriculture as well, the introduction of AI and robotics is seen as a way to address labor shortages and improve productivity. In service industries like finance, retail, and logistics, companies are now using AI in various services. And in the entertainment industry as well, such as animation and gaming, AI is beginning to play a critical role in creating new forms of content.
And of course, the Japanese government and companies are very happy to collaborate with U.S. companies. Already, Mr. Son Masayoshi has announced a huge investment in the U.S. AI ecosystem, and it is not only for the American companies, of course, but also for us, Japanese companies, to utilize cutting-edge technologies, services, and infrastructure in the U.S., and to learn from U.S. best practices to realize more trustworthy and also higher-technology AI systems.
Mr. Allen: Terrific.
So we’re three months into 2025. What are you looking ahead on the – for the remainder of the year? What are the major milestones that you’re expecting when it comes to AI policy in Japan?
Mr. Habuka: Yeah. So I think, so far, we don’t have any specific plan for new regulatory systems or a new kind of promotion plan for the AI ecosystem. A lot of that information would be described under the AI Basic Plan, which, if the bill is passed, will be –
Mr. Allen: And this bill that’s under consideration, if it is passed or if it is voted down, when might that take place?
Mr. Habuka: I think it is going to be by the end of this year, so we still have a long way to go. But –
Mr. Allen: Ah. So this bill could be negotiated and changed for quite some time.
Mr. Habuka: Yeah. Yeah.
Mr. Allen: Mmm hmm, great. And so then, after that, if they pass that bill, then that would create the process for creating what is called the AI Basic Plan, which presumably would take place in 2026, then. Is that right?
Mr. Habuka: Yeah. That is right.
Mr. Allen: Wow. That’s going to take a long time, and AI’s going to change quite a bit over the next two years.
Mr. Habuka: Exactly. That’s the point. But even before the Basic Plan – I mean, without the Basic Plan, we have already implemented a lot of promotion policies for the AI ecosystem. For example, METI has launched a new project called GENIAC, which supports really good startups and also big companies that are developing cutting-edge generative AI, letting them utilize the latest computing infrastructure at an extremely cheap cost, or almost free. So those companies who pass the selection for that program have a huge advantage in developing big, high-quality AI models.
So likewise, the government can issue supporting packages like that even before the Basic Plan. And on the regulatory side, we already have a lot of regulators who can enforce existing laws regardless of whether AI is involved or not. So if you breach, for example, the Privacy Protection Act, you will of course face enforcement by the Privacy Protection Commission. But at the same time, it is important to keep the dialogue going between enforcers or regulators and the private sector. And that is actually why the AI Governance Association and various regulators are always continuing our dialogue.
Mr. Allen: Yes. And something you said just reminded me of something that I wanted to ask you. Which is, earlier in our conversation you talked about how Japan’s copyright regulatory approach is quite unique in the international system, specifically in relation to AI. Can you unpack that a bit? What is special about Japan’s approach to copyright in the AI era?
Mr. Habuka: Yeah. So in 2018, as I remember, we amended the Copyright Act, which basically says that you can use copyrighted content without the copyright holder’s consent if you use the data only for machine training – that is, without human enjoyment of the work – and if it doesn’t materially harm the copyright holder’s interests. So it’s a kind of complicated –
Mr. Allen: So this is interesting. So you could, for example, train an AI system on every single Disney movie. And you could then have it learn the animation style of Disney movies. And then you could perhaps ask it to generate content in the style of a Disney movie. But you could perhaps not ask it to reproduce a specific Disney movie that is copyrighted, or something to that effect?
Mr. Habuka: Oh, yeah. That is actually a tricky question, because if you use only Disney copyrighted works – say, Mickey Mouse content – to output a character which is similar to Mickey Mouse, then it could violate the Copyright Act, because it might materially harm the copyright holder’s –
Mr. Allen: I see. But if you’re learning it – if you’re using it as training data just to learn, you know, the principles of good animation and drawing, rather than generating copyrighted characters, then that is probably legal? Which could be distinct, depending on, you know, what happens in other countries?
Mr. Habuka: Yeah. Correct.
Mr. Allen: That’s very interesting. And so maybe – for that very reason, maybe some American companies will start moving their largest training runs to Japan, just to take advantage of that copyright regulation. (Laughs.)
Mr. Habuka: Yeah. You’re all welcome.
Mr. Allen: OK. Well, Professor Habuka, I think we’re going to have to stop the conversation there because we’re coming up on time.
Mr. Habuka: Oh my god, time flies.
Mr. Allen: Yeah, time flies. But let me thank you so much for sharing your insights with CSIS and with the audience out there watching today. Let me just say one more time, for the audience, that Professor Habuka has published many papers through CSIS which are all really high-quality overviews of the major policy developments in Japan on AI. And so I encourage everyone to go to CSIS.org and read his work there.
Professor Habuka, thank you, again, for being with us.
Mr. Habuka: Thank you very much, Greg. And thank you very much, our audience. I’ve really enjoyed it. And have a good day.
Mr. Allen: Thank you.
Mr. Habuka: Thank you.
Mr. Allen: So this concludes our event on “Unpacking Japanese AI Policy” with Professor Habuka. Thank you so much for taking the time to watch.
(END.)