After the Virginia AI Bill Was Vetoed, What’s Next for State-Level AI Legislation?


On March 24, 2025, Virginia’s Republican Governor Glenn Youngkin vetoed House Bill 2094 (VA HB2094), Virginia’s artificial intelligence (AI) bill. The bill had been passed by a narrow majority of 21 to 19 votes in Virginia’s Senate on February 19. Before being vetoed, it was poised to become the United States’ second horizontal state AI bill, following Colorado state law Senate Bill 24-205 (CO SB24-205, or the Colorado Act), which was signed into law on May 17, 2024.

But VA HB2094 emerged under very different political conditions than its predecessor. Indeed, less than one month before its adoption, the newly inaugurated President Donald Trump released his own Executive Order (EO) on AI, just three days after repealing President Biden’s AI EO, which until then had formed most of the basis of U.S. AI policy. As its title suggests (Removing Barriers to American Leadership in Artificial Intelligence), President Trump’s EO prioritizes innovation and cutting red tape.

Against this background, it was not surprising that VA HB2094 faced stark criticism from the outset. Among its opponents, industry associations such as the U.S. Chamber of Commerce and Chamber of Progress claimed it would create too many obstacles and too much uncertainty, particularly for small businesses. Even some of the consumer organizations that supported the intent of the bill denounced the many loopholes in the text. Some of the staunchest opposition came from Republican-leaning think tanks such as the R Street Institute, which claimed it “builds on the failed regulatory model that the European Union (EU) uses to regulate digital technology and now AI.”

Was VA HB2094 really an Americanized EU AI Act? While there are superficial similarities, these are hugely outweighed by differences. This article provides an in-depth comparison between VA HB2094 and the EU AI Act: although the Virginia bill appears dead, a postmortem analysis still offers useful insights into the potential future of the state-level AI bills currently under consideration across the United States, such as Connecticut SB2, New York SBS1169, or New Mexico HB60.
 

An Analysis of Virginia’s AI Bill: Similarities to and Differences from the EU AI Act

A surface-level analysis of VA HB2094’s text reveals some similarities with the EU AI Act. However, these are mostly limited to borrowed definitions and concepts, and the differences in obligations and enforcement are far more substantial. The VA bill is, after all, only 6,000 words, while the EU AI Act reaches almost 90,000. The five most significant similarities and differences are discussed below.

Similarity 1: Definitions

The first clear similarity between VA HB2094 and the EU AI Act arises in the definition of “AI system.” This is not due to one copying the other, but rather to both having borrowed the definition from the Organization for Economic Co-operation and Development (OECD), a definition which was approved during the previous Trump administration in May 2019 and revised in November 2023. However, while the EU AI Act uses all elements of the OECD definition, VA HB2094 (and the Colorado bill) leaves out an important element: “Different AI systems vary in their levels of autonomy and adaptiveness after deployment.” Though this is a small omission, it could have significant consequences by expanding regulatory scope to include a broader range of simpler systems that might not be strictly AI, but would fall into the wider category of automated decisionmaking systems. In short, the Colorado and Virginia bills risk (or would have risked) exposing more companies to more regulatory burden—even in cases where they are using decades-old technology, not cutting-edge AI.

Both VA HB2094 and CO SB24-205 express that a core characteristic of AI systems is the ability to infer. This is also reflected in the EU AI Act: “The definition should be based on key characteristics of AI systems that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations. A key characteristic of AI systems is their capability to infer.”

However, the EU AI Act differs from the state laws by going on to explain that a system with no autonomy is not considered AI. This is further elaborated in the (nonbinding) guidelines put forth by the EU Commission in February 2025 on how to interpret the definition: “Some systems have the capacity to infer in a narrow manner but may nevertheless fall outside of the scope of the AI system definition because of their limited capacity to analyse patterns and adjust autonomously their output.”

The guidelines then go on to list examples of such narrower systems: systems for improving mathematical optimization, basic data processing, systems based on classical heuristics, or simple prediction systems. While the guidelines have sparked criticism in Europe for being too narrow, the opposite can be said of the AI definition in both the Virginia bill and the Colorado Act. Indeed, these bills would both include all of the systems listed above, which the EU AI Act might exclude. The concept of autonomy is key to ensuring that only more capable (and therefore, presumably, more risky) AI techniques are covered by law.

Another notable similarity in definitions pertains to “general-purpose AI models.” In this case, all core elements of the EU AI Act definition are taken on board in the Virginia bill: the display of significant generality, the capability to perform a wide range of distinct tasks, and the possibility of integration into a variety of downstream applications. Like the EU AI Act, VA HB2094 also specifies that the bill does not cover models used for research and development activities before they are released on the market.

Similarity 2: Risk-Based Approach and Classification

Both the EU AI Act and VA HB2094 take a risk-based approach, meaning that regulatory scope is based on AI use cases and that regulatory requirements differ according to the use case risk category. Using a risk-based approach does not mean that the VA bill is excessively regulatory; indeed, industry has consistently maintained that AI regulation should be risk-based. It is also worth specifying that, while the EU AI Act introduces prohibited uses of AI, VA HB2094 only focuses on high-risk AI systems.

The more meaningful similarity with the AI Act lies in the classification criteria VA HB2094 uses to mark an AI system as “high-risk.” As the Virginia bill’s obligations apply only to developers and deployers of high-risk artificial intelligence, determining which AI systems are considered high-risk is key to defining the scope of the bill. VA HB2094 defines them as “specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.” It is worth noting that this mention of “autonomy,” without a definition of the concept, does not resolve the risk of an overly broad definition of AI system described above. Even more interesting, though, is the passage describing what is not a high-risk AI system. Here the bill lists the exact same four conditions as the EU AI Act, providing that an AI system should not be considered high-risk when it is intended to

  • perform a narrow procedural task;
  • improve the result of a previously completed human activity;
  • detect decisionmaking patterns or any deviations from them; or
  • perform a preparatory task to a consequential decision.
     

This is where VA HB2094 is most similar to the EU AI Act: in defining what is not high-risk. The bill goes even further than the Colorado Act, which only lists the first two of those conditions, meaning that more AI systems would have been excluded from VA HB2094 than from the Colorado Act.

Some similarities between the EU AI Act and VA HB2094 can also be found in their discussion of which use cases are considered high-risk for AI involvement. In the EU AI Act, these use cases are listed in two distinct annexes (one for regulated products, listing 20 types of products, and one for areas where fundamental rights are at stake, containing eight areas and more than 40 use cases). Conversely, in the Virginia bill, high-risk systems are strictly linked to the concept of a “consequential decision,” meaning any decision that has a “material legal, or similarly significant effect” on consumers. The list in this case consists of only nine use cases, ranging from parole status to access to housing or employment. Some of these coincide with EU use cases (e.g., access to education, employment, financial services, healthcare, housing, or insurance), while others only partially overlap (e.g., access to parole or to a legal service).

It is also worth noting that the concept of “consequential decision” around which the bill revolves largely mirrors a concept taken from another piece of EU legislation, the EU General Data Protection Regulation (GDPR), whose article on automated decisionmaking starts by stating: “The data subject shall have the right not to be subject to a decision based solely on automated processing . . . which produces legal effects concerning him or her or similarly significantly affects him or her.” The European Union itself did not resort to this concept in formulating its AI Act, although there were several attempts to include it during negotiations. And yet, a U.S. bill from a Republican-leaning state would have taken on board a concept typical of the GDPR (and its predecessors), despite this framework being one of the regulations Vice President JD Vance criticized during the AI Action Summit in Paris.

In sum, the Virginia bill notably repeats the EU AI Act’s risk-based approach, but it defines far fewer use cases as meeting the high-risk definition.

Similarity 3: Requirement to Label AI-Generated Content

An interesting similarity concerns generative AI, for which VA HB2094 provides a definition (the EU AI Act does not). Both texts mandate that AI developers (called “providers” under the EU AI Act) ensure that synthetic content is marked as such. Even the specifications for artistic content are the same, and the exceptions to this obligation are very similar: It does not apply to text generated to inform the public on matters of public interest, to systems that perform an assistive function in standard editing, or to systems used by law enforcement. In this area, the Virginia bill’s drafters clearly appear to have cribbed from the EU AI Act.

Similarity 4: Role of Standards

A key concept that the EU AI Act uses and the Virginia bill replicates is the reference to standards as providing a presumption of conformity to the operators implementing them. In short, the bill says: If providers abide by one of these (mostly industry-led) standards, the authorities will treat them as fully compliant with the relevant parts of the law. Just like the Colorado Act, VA HB2094 specifically mentions the AI Risk Management Framework (RMF) developed by the National Institute of Standards and Technology (NIST); ISO/IEC 42001, jointly developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC); and other equivalent standards. However, VA HB2094 goes further than the Colorado Act in stating that “high-risk AI systems that are in conformity with the latest version of [such standards] . . . shall be presumed to be in conformity with related requirements set out in this section.”

This is arguably another business-friendly measure, as referring to standards and giving them near-legal value allows companies to follow industry best practices and protocols of their choice as a simplified path to compliance. The key difference from the EU AI Act, in this case, is that the standards mentioned can cover only a very small number (if any) of the requirements the EU AI Act introduces, as will be explained further in this article. By contrast, VA HB2094 uses industry standards as a compliance mechanism for most of the main aspects of the bill.

Similarity 5: Substantial Modification

A concept that can also be found in the EU AI Act is that of “substantial modification” of an AI system or a general-purpose AI model. This pertains to what degree of change triggers new risks and perhaps new regulatory requirements. In cases where the changes are performed by a different actor than the original AI system developer, this concept is critical to assess who faces what legal obligations.

As a starting point, VA HB2094 adds the qualifier “intentional” to the definition of substantial modification. Intentionality is normally an industry-friendly concept, as intention must be proven in court, which is not at all easy for consumers bringing suit to do. In VA HB2094, a substantial modification must be deliberate and result in a new risk of algorithmic discrimination. This therefore excludes any simple modification that does not lead to a new risk, as well as the normal effects of the AI system’s learning process that are not predetermined by the developer.

According to the rest of the bill’s text, a substantial modification triggers new obligations: if it is made by the developer, then the developer must update its documentation, while if it is made by the deployer, then the latter must comply with the same obligations as the developer. In other words, in this case, the deployer becomes a developer for the purpose of the law. Similarly, the EU AI Act defines a substantial modification as a change that both “is not foreseen or planned” (i.e., was not included in the original conformity assessment) and affects the system’s compliance. As in VA HB2094, this does not include predetermined changes or changes “occurring to the algorithm and the performance of AI systems which continue to learn.”

In the EU AI Act, similar to VA HB2094, a substantial modification triggers obligations for any operator (be it a provider, a deployer or, in this case, an importer, distributor, or any other third party). However, the circumstances that count as a “substantial modification” are formulated slightly differently from those in VA HB2094: first, any modification to a system that is already high-risk; and second, a modification of the intended purpose of an AI system (including a general-purpose AI system) that was not high-risk but that makes it high-risk. The EU AI Act continues by explaining that, in these cases (or in case an operator places the same AI system under its own trademark), the operator concerned becomes a provider and must comply with all related obligations.

Difference 1: Product Safety

Despite a few similarities in concepts, the differences between VA HB2094 and the EU AI Act are much more significant. The main difference is that the EU AI Act is focused on product safety, unlike VA HB2094, which is a law on algorithmic decisionmaking; in other words, the EU AI Act regulates both products and organizations, whereas the VA bill regulates only organizations. This means that while the EU AI Act mandates that every AI system (which is considered a “product”) must itself be compliant with the law, under VA HB2094 only developers and deployers have to comply, not the way the AI system itself is built. The difference is a massive one in scope and regulatory burden. For example, if a company develops 10 different high-risk AI systems, all 10 of them need to meet the technical requirements mandated by the EU AI Act: All 10 need to enable log recording, have their own technical documentation, be built with specific data governance measures, be accurate and robust, meet cybersecurity requirements, and so on. On top of that, the company also has several obligations with which it must comply. In contrast, under VA HB2094, that same company would only have to comply with developer obligations (see below).

Difference 2: Allocation and Scope of Responsibilities

A second major difference, also stemming from the differing nature of the two laws, is that whereas the EU AI Act places most of the burden on providers (“developers” in VA HB2094 terminology), the Virginia bill places a higher burden on deployers using high-risk AI. Proof of this is that both developers and deployers are incentivized to make use of the same standards (as mentioned above, NIST RMF, ISO/IEC 42001, or equivalent) to prove compliance with most obligations, and deployers also have additional disclosure obligations on top of that.

To be clear, the EU AI Act foresees the possibility of using standards only for providers (who carry by far the most responsibilities). Figure 1 shows how the balance of burdens between providers/developers and deployers differs between the EU AI Act and VA HB2094. Under the EU AI Act, providers must comply with nine requirements (for the AI system) and ten obligations (for themselves), against four obligations for deployers; under VA HB2094, developers have only two obligations, whereas deployers have three.

[Figure 1: Requirements and obligations for providers/developers and deployers under the EU AI Act and VA HB2094]

Giving deployers more responsibilities than developers was an option that was also discussed during negotiations for the EU AI Act. Typically, stakeholders advocating for this option belonged to industry, in particular big tech companies, whose argument at the time was that deployers were best suited to assess the typical socio-technical impacts in a deployment context. For big tech companies (who would most often be providers), this was a convenient argument to obtain fewer responsibilities. However, the proposal also found sympathizers in nongovernmental organizations advocating for human rights, as well as among the center-left parties on the EU negotiating team, and the proposal ultimately resulted in the addition of a fundamental rights impact assessment to the obligations of (some) deployers. Notwithstanding this light addition, in the EU AI Act, the rules remain mostly focused on providers and the way they develop high-risk AI systems.

VA HB2094, in contrast, seems to put more of the burden on whoever uses the AI system. Indeed, in addition to what developers have to do, deployers must also produce an AI impact assessment and inform consumers that they are using a high-risk AI system to make a consequential decision. If the decision adversely affects the consumer, the deployer must also provide the consumer with an explanation (a right that is also introduced in the EU AI Act) and the right to appeal the decision (not present in the EU AI Act).

In short, the EU AI Act is far tougher on big tech AI providers, whereas the Virginia bill would have focused more of its regulatory burden on deployers. Given that most deployers will be organizations like schools, small companies, police stations, and so on, other states drafting similar AI bills should carefully consider the balance of burden and impact. Imagine, for instance, a public school trying to implement the highly technical fifty-page ISO/IEC 42001 document.

Difference 3: Extent of Obligations

In addition to being fewer in number than those in the EU AI Act, the obligations contained in VA HB2094 are lighter in scope. As explained, the bill refers to standards such as the NIST RMF or ISO/IEC 42001 as providing a presumption of conformity. Conversely, in the context of the EU AI Act, ISO/IEC 42001 can only be seen as partially covering the obligation to implement a quality management system (number 10 in the first column of Figure 1). Indeed, the European standardization bodies tasked with drafting the future “harmonized” standards for the EU AI Act are currently working to augment that standard with specific additions, and members of the relevant standardization bodies told CSIS that they are not even sure that taking ISO/IEC 42001 as the basis for that single obligation will work.

The bulk of obligations for both developers and deployers in VA HB2094 seems to be about disclosure of information: information about the management system, information to consumers, and information about AI-generated content. Also, only developers and deployers are even mentioned in the bill, whereas the EU AI Act introduces responsibilities for other operators in the value chain: authorized representatives of providers, importers, and distributors. These operators mostly have to check that the AI systems they distribute or import are compliant with the EU AI Act and possess the necessary CE marking (the EU conformity label) and technical documentation. In short, the EU AI Act requires far more obligations from a far more diverse set of actors.

Difference 4: Exemptions

The Virginia bill has two major sections providing exemptions to various AI use cases and sectors.

Non-High-Risk Systems

First, after specifying the criteria under which systems are not considered high-risk (see Similarity 2), the bill lists 19 further systems that are excluded by default from its obligations, from video games to antivirus software. The bill also exempts autonomous vehicles, which are considered high-risk in the EU AI Act.

Exempted Sectors

In addition to the list of exempted systems, VA HB2094 also elaborates exempted sectors and actions. In some cases, these resemble the exemptions in the EU AI Act, such as in the case of financial services. This specific case could be considered a similarity, in that both VA HB2094 and the EU AI Act largely refer to existing sectoral regulations instead of adding too many new rules on top of them.

Exempted Actions

VA HB2094 goes much further than the EU AI Act by stating that nothing in the bill should be construed as restricting a developer’s or deployer’s ability to perform certain actions. While some of these actions can be considered legitimate, such as cooperating with law enforcement authorities in good faith during an investigation, others can lead to very broad interpretation and therefore possible loopholes, such as, “Take any action that is in the public interest in the areas of public health, community health, or population health, but solely to the extent that such action is subject to suitable and specific measures to safeguard the public,” or, “Perform internal operations that are reasonably aligned with the expectations of the consumer or reasonably anticipated based on the consumer’s existing relationship with the developer or deployer.”

These provisions leave extremely broad room for interpretation as to what constitutes operations “reasonably aligned with the expectations of the consumer” in the latter example, or “any action that is in the public interest” in the former.

The extent of the overall exemptions leads to the conclusion that, in practice, even fewer systems than one might have expected from the mere nine high-risk use cases would have been covered by the Virginia bill. This is another reason why the bill seems much more industry-friendly than the EU AI Act.

Difference 5: Enforcement

The last major difference from the EU AI Act relates to enforcement. VA HB2094 entrusts the attorney general (AG) with the overall enforcement of the rules. Conversely, the EU AI Act relies on an already codified system of national market surveillance authorities and certification bodies. Market surveillance authorities have extensive investigative and corrective powers, from showing up unannounced at a company’s premises to withdrawing an AI system from the EU market. EU member states have the flexibility to appoint any number of these authorities from among preexisting supervisory authorities of different kinds (telecoms, privacy, cybersecurity, traditional market surveillance for regulated products, etc.) or to establish new ones. On top of this supervisory system, the national judicial authorities remain accessible for further issues of consumer harm or for appeals against a market surveillance authority’s decision adversely affecting a company. This is, therefore, a visibly more complex and layered enforcement system than simply entrusting overall enforcement to the AG, as VA HB2094 does.

Directly related to this is another major difference between the two texts: the size of financial penalties. Whereas the EU AI Act foresees fines of up to €15 million for violations of operators’ high-risk obligations, in VA HB2094 the fines vary between $1,000 and $10,000, and this is for intentional violations, with an even smaller fine foreseen for nonintentional ones. One does not need a calculator to see how different the consequences are for operators violating the respective rules: the EU maximum alone is roughly 1,500 times the Virginia cap. A maximum penalty of $10,000 is a minuscule amount for a company such as OpenAI; even when taking into account repeat violations, the disproportion between EU and Virginia fines appears enormous.
 

Recommendations for State Legislators

After analyzing the main similarities and differences between the EU AI Act and the now-defunct VA HB2094, two main recommendations emerge to address problems that could arise in the application of future bills, should they become law.

First, state lawmakers should use the entire revised version of the OECD definition of “AI system.” This would avoid unintentionally covering too many less-sophisticated systems, while at the same time fully aligning with international terminology (the OECD and EU AI Act in particular). Alternatively, the concept of autonomy should clearly be defined in the language of the bill, in order to avoid confusion when defining high-risk AI systems.

Second, state lawmakers should carefully consider the balance of responsibilities between developers and deployers. Deployers are usually smaller, less-experienced actors in relation to AI systems, with fewer resources than tech companies. Giving them more obligations than developers—or even suggesting the same standards for both—seems out of proportion. Rather, working on simplified compliance for AI management and impact assessments would be a welcome relief for deployers while still ensuring that they deploy any high-risk AI system responsibly and with due care.

Conclusion

Unsurprisingly, Governor Youngkin decided to veto VA HB2094. He did so not because the bill was a Virginian version of the EU AI Act; as this paper has shown, it was not. The true reason lies in the state’s Republican leadership and the current stark antiregulation trend led by the Trump administration.

Indeed, the main similarities between the bill and the EU AI Act are largely conceptual. Moreover, they largely end up having a pro-business impact, in particular the presumption of conformity and the criteria by which an AI system is deemed not to be high-risk. But the much broader exemptions, the very few obligations, and the extremely weak fines in VA HB2094 make it substantially different from the EU AI Act and reinforce the impression that the bill would have been far more industry-friendly than the EU AI Act.

Whatever the fate of the state bills on algorithmic discrimination currently on the table (such as Connecticut SB2, New York SBS1169, or New Mexico HB60), it needs to be made clear that for the most part these propose only very light obligations compared to the EU AI Act. Stating the opposite and sounding the alarm for companies is simply factually incorrect. Rather, observers should recognize that the United States, including at the state level, currently prefers to actively pursue AI development and adoption, setting out very light responsibilities for companies and minimal protections for citizens. That is a legitimate choice, but it is the opposite of what Europe strove to do with its AI regulation.

Laura Caroli is the senior fellow of the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS).

This report is made possible by general support to CSIS. No direct sponsorship contributed to this report.