Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency

Artificial intelligence (AI) is making significant changes to our businesses and daily lives. While AI offers powerful solutions to societal problems, its unpredictability, lack of explainability, and tendency to reflect or amplify biases in data raise concerns about privacy, security, fairness, and even democracy. In response, governments, international organizations, and research institutes around the world began publishing a series of principles for human-centric AI in the late 2010s.[1]

What began as broad principles is now transforming into more specific regulations. In 2021, the European Commission published the draft Artificial Intelligence Act, which classifies AI systems into four risk levels and prescribes corresponding obligations, including enhanced security, transparency, and accountability measures. In the United States, the Algorithmic Accountability Act of 2022 was introduced in both houses of Congress in February 2022. In June 2022, Canada proposed the Artificial Intelligence and Data Act (AIDA), under which risk management and information disclosure for high-impact AI systems would become mandatory.

While some regulation of AI is necessary to prevent threats to fundamental values, there is a concern that the burden of compliance and the ambiguity of regulatory requirements may stifle innovation. In addition, regulatory fragmentation would impose serious costs not only on businesses but also on society. How to address AI’s risks while accelerating beneficial innovation and adoption is one of the most difficult challenges for policymakers, including Group of Seven (G7) leaders.

During the 2023 G7 summit in Japan, digital ministers are expected to discuss the human-centric approach to AI, which may cover regulatory or nonregulatory policy tools. Given Japan’s role as host country, its approach to AI regulation may considerably influence consensus-building among global leaders. This paper analyzes the key trends in Japan’s AI regulation and discusses what arguments could be made at the G7 summit.

To summarize, Japan has developed and revised AI-related regulations with the goal of maximizing AI’s positive impact on society rather than suppressing it on the basis of overestimated risks. The emphasis is on a risk-based, agile, and multistakeholder process rather than one-size-fits-all obligations or prohibitions. Japan’s approach provides important insights into global trends in AI regulation.

Japan’s AI Regulations

Basic Principles

In 2019, the Japanese government published the Social Principles of Human-Centric AI (Social Principles) as principles for implementing AI in society. The Social Principles set forth three basic philosophies: human dignity, diversity and inclusion, and sustainability. It is important to note that the goal of the Social Principles is not to restrict the use of AI in order to protect these principles but rather to realize them through AI. This corresponds to the structure of the Organization for Economic Cooperation and Development's (OECD) AI Principles, whose first principle is to achieve “inclusive growth, sustainable development, and well-being” through AI.

To achieve these goals, the Social Principles set forth seven principles surrounding AI: (1) human-centric; (2) education/literacy; (3) privacy protection; (4) ensuring security; (5) fair competition; (6) fairness, accountability, and transparency; and (7) innovation. Notably, these include not only protective elements such as privacy and security but also principles that encourage the active use of AI, such as education, fair competition, and innovation.

Japan’s AI regulatory policy is based on these Social Principles. Its AI regulations can be classified into two categories (in this paper, “regulation” refers not only to hard law but also to soft law, such as nonbinding guidelines and standards):

  1. Regulation on AI: Regulations to manage risks associated with AI.
  2. Regulation for AI: Regulatory reform to promote the implementation of AI.

As outlined below, Japan takes a risk-based and soft-law approach to regulation on AI while actively advancing legislative reform from the perspective of regulation for AI.

Regulation on AI

Binding Regulations

Japan has no regulations that generally constrain the use of AI. According to the AI Governance in Japan Ver. 1.1 report published by the Ministry of Economy, Trade and Industry (METI) in July 2021—which comprehensively describes Japan’s AI regulatory policy (the AI Governance Report)—such “legally-binding horizontal requirements for AI systems are deemed unnecessary at the moment.” This is because regulation has difficulty keeping up with the speed and complexity of AI innovation; prescriptive, static, and detailed rules in this context could stifle innovation. Therefore, the report concludes that the government should respect companies’ voluntary efforts for AI governance while providing nonbinding guidance to support or steer those efforts. Such guidance should be based on multistakeholder dialogue and be updated continuously and in a timely manner. This approach, called “agile governance,” is Japan’s basic approach to digital governance.

Looking at sector-specific regulations, none prohibit the use of AI per se; rather, they require businesses to take appropriate measures and disclose information about risks. For example, the Digital Platform Transparency Act imposes requirements on large online malls, app stores, and digital advertising businesses to ensure transparency and fairness in transactions with business users, including the disclosure of the key factors determining their search rankings.[2] The Financial Instruments and Exchange Act requires businesses engaging in algorithmic high-speed trading to register with the government, establish a risk management system, and maintain transaction records. From the viewpoint of fair competition, the Japan Fair Trade Commission analyzed the potential risks of cartels and unfair trade practices conducted by algorithms and concluded that most issues could be covered by the existing Antimonopoly Act.

Other Relevant Laws

There are some laws that do not directly regulate AI systems but remain relevant to AI’s development and use. The Act on the Protection of Personal Information (APPI) sets out the key mandatory obligations for organizations that collect, use, or transfer personal information. The latest amendment to the APPI, which came into effect in 2022, introduced the concept of pseudonymized personal data.[3] Since the obligations for handling pseudonymized data are less onerous than those for personal information, this new concept is expected to encourage businesses to use more data for AI development.
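
To make the pseudonymization concept concrete, the sketch below shows one common technique: replacing direct identifiers with salted hashes so that a record can no longer identify a person without additional information. This is a minimal, hypothetical Python illustration; the record fields, salt handling, and function are invented for this example, and the APPI’s legal definition of pseudonymized personal data and its required safeguards go beyond what a short snippet can capture.

```python
import hashlib
import secrets

# Illustrative sketch only: one common pseudonymization technique, not the
# APPI's legal standard. Field names and salt handling are hypothetical.
SALT = secrets.token_hex(16)  # in practice, stored separately from the data

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with a salted hash and drop contact details."""
    out = dict(record)
    out["person_id"] = hashlib.sha256((SALT + record["name"]).encode()).hexdigest()
    for identifier in ("name", "email", "phone"):
        out.pop(identifier, None)  # remove fields that directly identify a person
    return out

record = {"name": "Taro Yamada", "email": "taro@example.com",
          "phone": "090-0000-0000", "age": 42, "purchases": 13}
print(pseudonymize(record))  # analytic fields remain; direct identifiers do not
```

The same stable "person_id" appears for the same individual across records, which is what keeps pseudonymized data useful for training AI models while reducing privacy risk.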

If an AI causes damage to a third party, the developer or operator of the AI may be liable in tort under civil law if it is negligent. However, it is difficult to determine who is negligent in each situation because AI output is unpredictable and the causes of the output are difficult to identify.[4] The Product Liability Act reduces the victim’s burden of proof when claiming tort liability, but the act only covers damages arising from tangible objects. Therefore, it may apply to the hardware in which the AI is installed but not to the AI program itself.

There are other relevant regulations and laws that aim to encourage the development and deployment of AI, which will be introduced in the “Regulation for AI” section.

Guidance for Private Parties

As mentioned above, no regulations in Japan directly prohibit the use of AI. However, an operator may be held liable in tort or under product liability law if an accident occurs due to an AI system. In addition, there have been cases—mainly in the area of privacy—where AI projects were abandoned due to social criticism, not necessarily because they violated existing regulations. Anticipating these needs, the government provides various tools to help companies voluntarily implement appropriate AI governance measures.

METI’s Governance Guidelines for Implementation of AI Principles summarizes the action targets for implementing the Social Principles and how to achieve them with specific examples. It explains processes to establish and update an AI governance structure in collaboration with stakeholders according to an agile governance framework.

Several guidelines have been published for the protection and utilization of data. The Guidebook on Corporate Governance for Privacy in Digital Transformation and the Guidebook for Utilization of Camera Images, both jointly developed by METI and the Ministry of Internal Affairs and Communications (MIC), explain how to handle personal data, not only by complying with the APPI but also by taking appropriate measures based on communication with stakeholders.

To promote fair contracts for AI development and data transfer, METI published the Contract Guidelines on Utilization of AI and Data. These guidelines explain key legal issues when entering into contracts for data transfer or AI development, with actual model clauses.

Voluntary Initiatives by Businesses

As governments publish guidance on AI and data governance, some private companies are beginning to take a proactive approach to AI governance. Fujitsu published a practice guide showing the procedures for conducting an AI Ethics Impact Assessment and released examples of its application to representative cases. Sony established the Sony Group AI Ethics Guidelines and has incorporated them into its quality management system. NEC follows the NEC Group AI and Human Rights Principles, which are implemented by its Digital Trust Business Strategy Department.

Tools by Research Institutes

Japanese research institutions also provide various tools to promote AI governance. The National Institute of Advanced Industrial Science and Technology (AIST), administered by METI, provides the Machine Learning Quality Management Guideline, which establishes quality benchmark standards for machine learning-based products or services. It also provides procedural guidance for achieving quality through development process management and system evaluations.

As another example, the Institute for Future Initiatives at the University of Tokyo developed the Risk Chain Model to structure risk factors for AI and is conducting case studies in cooperation with private companies.

Regulation for AI

While the regulation on AI aspect discussed above often draws the most attention, the regulation for AI aspect is equally important to maximizing AI’s positive impact on society. Legislators in Japan—based on appropriate consideration of the risks involved—have used regulatory reform to promote the use of AI in a variety of contexts.

Regulatory Reform by Sector

In 2020, the revised Road Traffic Act and Road Transport Vehicle Act came into force, allowing Level 3 automated driving (i.e., conditional automation) on public roads. In 2021, Honda became the first manufacturer to provide a legally approved Level 3 car. A new amendment that allows Level 4 automated driving (i.e., high automation) will come into effect on April 1, 2023.

In the financial sector, the Installment Sales Act was revised in 2020 to enable a “certified comprehensive credit purchase intermediary” to determine credit amounts using data and AI. Previously, credit card companies had to use a statutory formula taking into account annual income, family structure, and other factors when assessing credit amounts.

For plant safety, a “Super Certified Operator” system was established in 2017 under the High Pressure Gas Safety Act. Ordinarily, plant operators must stop operations and conduct safety inspections once a year, but operators certified as having advanced safety technology utilizing AI and drones (Super Certified Operators) are allowed to conduct safety inspections without interrupting operations for periods of up to eight years.

The Copyright Act was amended in 2017 to promote the use of data in machine learning. The amendment clarified that downloading or processing data through the internet or other means to develop AI models is not an infringement of copyright. In addition, the 2019 amendment to the Unfair Competition Prevention Act protects shared data with limited access, which typically entails data sets sold for a fee. The unauthorized acquisition or misuse of such data is subject to claims for injunction or damages. These unique provisions will help AI developers use more data for AI learning while protecting the appropriate interests of data holders.

Comprehensive Regulatory Reform—Digital Rincho

Many conventional regulations require a human to conduct visual inspections or to be stationed at a business site. AI systems should be able to replace such human involvement to some extent. To tackle this, the Digital Rincho (“Rincho” means an ad hoc commission) was established under the cabinet in November 2021. The Digital Rincho aims to comprehensively revise regulations that hinder the use of digital technologies as a means of achieving regulatory compliance. Approximately 5,000 regulations mandating analog methods and non-AI technologies are targets of the review, including requirements for written documents, on-site inspections, periodic inspections, and full-time on-site staffing.

Summary

On the regulation on AI side, Japan has taken the approach of respecting companies’ voluntary governance and providing nonbinding guidelines to support it, while imposing transparency obligations on some large digital platforms. On the regulation for AI side, Japan is pursuing regulatory reforms that allow AI to be used for positive social impact and for achieving regulatory objectives. However, it remains to be seen what kinds of AI systems will actually meet the requirements of these reformed regulations. Such requirements should be considered in light of global standards, which is one reason international cooperation on AI regulation is needed.

Japan’s Leadership toward International Collaboration on AI Governance

Differences and Commonalities Among G7 Countries

As mentioned in the introduction, AI regulation is one of the most challenging topics for G7 leaders. So far, the approaches of G7 countries seem to fall into two groups.

The first group takes a “holistic and hard-law-based” approach, which sets forth obligations—such as governance, transparency, and security requirements, at least for high-risk AI—with significant sanctions for violations (Category 1). France, Germany, and Italy, where the EU AI Act would apply, fall into this group. Canada, which is proposing AIDA, is also included in this category. The second group takes a “sector-specific and soft-law-based” approach, which seeks to promote appropriate AI governance through nonbinding guidance (rather than through comprehensive AI regulation) while requiring transparency and data protection in some sectors (Category 2). Japan and the United Kingdom fall into this category. The United States is also in this group at the moment but may move closer to Category 1 if the Algorithmic Accountability Act or a similar bill is adopted in Congress.

Considering this gap, the G7 discussion will likely focus on what kind of collaboration can be achieved among countries that take different approaches.

When each nation’s draft regulations (Category 1) and published guidance (Category 2) are closely examined, many similarities emerge; the main exception is whether they are legally binding.

First, in both categories, the goals to be achieved through AI governance are harmonized with the OECD AI Principles (and related principles, such as the G20 AI Principles), including transparency and explainability, privacy, fairness, security and safety, and accountability. Both categories also recommend (or require) providing appropriate information to stakeholders, conducting impact assessments, and keeping records of AI operations, at least for certain categories of high-risk AI.

Second, in Category 1 countries as well as Category 2 countries, specific regulatory requirements are not yet described in detail, which leaves considerable room for AI service providers to deliberate good conduct among themselves. Examples of questions they might tackle include: How can they explain the logic of algorithms accurately and in plain language? How will they evaluate risks that cannot be easily quantified? What technical methods are available to ensure fairness, safety, or privacy (one example is sketched below)? How will remediation mechanisms and the ultimate allocation of responsibility be designed in the event of an accident? These details will need to be designed and updated by each AI system provider or user.
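
To make one of these questions concrete, the sketch below shows a minimal Python check of demographic parity, one common (and contested) fairness metric: the gap between groups’ positive-prediction rates. The predictions, group labels, and setting are invented for illustration and are not drawn from any guideline discussed here; real fairness evaluation involves choosing among many competing metrics.

```python
from typing import Sequence

def demographic_parity_gap(preds: Sequence[int], groups: Sequence[str]) -> float:
    """Largest difference in positive-prediction rates across groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions (1 = approve) for groups "A" and "B"
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A gap near zero indicates similar approval rates across groups; deciding what gap is acceptable, and whether demographic parity is even the right metric, is exactly the kind of judgment the text says providers must deliberate.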

The implementation of such AI governance is not yet legally mandatory for AI service providers whose business is related only to Category 2 countries. However, at least for AI operators with large market and social impact, considering and implementing appropriate governance and risk-mitigating measures for AI will be essential; otherwise, the market and society will not accept their services.

Given this situation, it is important for not only the AI service providers who practice AI governance but also the regulators who evaluate the appropriateness of AI governance, as well as market participants and individuals, to have a common understanding of good AI governance practice. Further, given that AI will have both domestic and global impact, it would be desirable for such a common understanding to be developed and adopted by like-minded countries.

Possible Steps for Collaboration

There is a strong case for countries to take concrete steps toward international cooperation, and such efforts have already begun in various forums. In December 2022, the EU-U.S. Trade and Technology Council (TTC) working group on AI standards released the Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, calling for greater international cooperation on AI standards. The OECD provides tools and knowledge bases, including the Framework for the Classification of AI Systems. Standardization work is also ongoing within the International Organization for Standardization (ISO), notably in the subcommittee on artificial intelligence (ISO/IEC JTC 1/SC 42). These initiatives are still at the roadmap stage and require various processes before they are actually implemented. The following are possible future steps in international collaboration.

A relatively easy step would be the sharing of AI incidents and best practices among different countries. Like regulations in all other areas, AI regulations need to be implemented based on concrete necessity and proportionality, rather than being deduced from abstract concepts. Therefore, sharing actual examples of what risks have been caused by AI in what areas—and what technical, organizational, and social methods have been effective in overcoming them—will be an important decisionmaking tool for policymakers. For example, the Global Partnership on Artificial Intelligence (GPAI), a multistakeholder initiative housed at the OECD that aims to bridge the gap between theory and practice on AI, is analyzing best practices for the use of climate change data and the use of privacy enhancement technologies. Japan is serving as chair of GPAI in 2022–2023, contributing to this international development of best practices.

Where such best practices can be generalized, international standards could be the next step. Standards would provide AI service providers with insights on good AI governance practices, clarify regulatory content in Category 1 countries, and serve as a basis for responsibility and social evaluation in Category 2 (and also Category 1) countries. For example, the abovementioned TTC agreed to advance standardization for (1) shared terminologies and taxonomies, (2) tools for trustworthy AI and risk management, and (3) the monitoring and measuring of AI risks. Within ISO, Japan has contributed to SC 42 by convening several working groups, including “AI-enabled health informatics” (Joint Working Group 3) and “use cases and applications” (Working Group 4).

A more ambitious attempt would be to achieve cross-border interoperability on AI governance. In other words, a mechanism could be introduced whereby a certification (e.g., security certification, type certification) or process (e.g., AI impact assessment, privacy impact assessment) required under regulation or contract in one country can also be used in another country. Although it is premature to discuss the specifics of interoperability at this time since the AI regulations of each country have not yet been adopted, it would be beneficial to promote the case sharing and standardization described above with a view to achieving interoperability in the future.

Toward Agile AI Governance

International cooperation in the form of sharing best practices, establishing shared standards, and ensuring future interoperability may appear to follow the typical pattern repeated in various fields in the past. However, AI governance deserves special attention in at least two respects.

First, sufficient AI governance cannot be achieved solely through intergovernmental cooperation. Given the technical complexity of AI systems, as well as the magnitude of AI’s impact on human autonomy and the economy (in both positive and negative ways), multistakeholder collaboration is essential. Stakeholders include not only experts in technology, law, economics, and management but also individuals and communities as the ultimate beneficiaries of AI governance.

Second, given the speed at which AI technologies evolve, AI governance methods need to be agile, continuously evaluated, and updated. Updating means not only revising existing laws and guidance but also adjusting the structure of the regulatory system itself to actual needs, including how much should be addressed by law and what guidance is needed to tackle actual problems.

The Japanese government has named this multistakeholder and flexible governance process “agile governance” and has positioned it as a fundamental policy for a digitalized society. METI summarizes the overarching concept in three reports published in 2020, 2021, and 2022. Japan’s Governance Guidelines for Implementation of AI Principles and the Guidebook on Corporate Governance for Privacy in Digital Transformation introduced in the previous section are also based on this concept. In addition, the Digital Rincho, the comprehensive regulatory reform body mentioned above, has adopted agile governance as one of its key foundations.

Because of its clear and consistent vision for AI governance, its successful AI regulatory reforms, and its businesses’ various initiatives to create good governance practices and contribute to standard setting, Japan is well positioned to move G7 collaboration on good AI governance forward.

Hiroki Habuka is a non-resident fellow with the AI Governance Project at the Center for Strategic and International Studies.

This report is made possible by general support to CSIS. No external sponsorship contributed to this report.

Please consult PDF for references.
