Adopting AI in the Workplace: The EEOC's Approach to Governance
Photo: Andrew Harrer/Bloomberg via Getty Images
This transcript is from a CSIS event hosted on May 30, 2024.
Gregory C. Allen: Good afternoon. I’m Gregory Allen, the director of the Wadhwani Center for AI and Advanced Technologies here at the Center for Strategic and International Studies, CSIS.
Today we’ve got a conversation about one of the hottest topics in AI regulation and governance, which is the role of AI in the workplace and employment. And to discuss this important issue, we’ve got somebody who is exquisitely well-positioned to talk about these topics, which is Keith Sonderling, a commissioner at the Equal Employment Opportunity Commission, one of the premier civil rights law organizations in the United States government.
Commissioner Sonderling, thank you so much for joining me today.
Keith Sonderling: Thank you for having me.
Mr. Allen: We’re going to launch into the meat of what exactly is going on in terms of the use of AI and its implications for employment law. But before we do that, I want to hear a little bit about you and how you came to be interested in these issues. So how did you come to focus on AI and employment law?
Commissioner Sonderling: Well, you know, first of all, thank you again for having me. It’s so nice to be here in a different area where somebody with my expertise doesn’t normally get to speak to you. But as you’ll hear in this discussion, you know, it’s all coming together.
So I’m Keith Sonderling, commissioner of the United States Equal Employment Opportunity Commission. Before being at the EEOC, I was a labor employment lawyer. And then I joined the government in 2017 at the U.S. Department of Labor, and then was confirmed to the EEOC in September of 2020, where I currently serve as a commissioner.
And you know, you say EEOC, well, what is that? What is that agency? So we are a civil rights law enforcement agency responsible for enforcing all workplace discrimination laws. So our mission is to prevent and remedy employment discrimination, which is the law enforcement side, but also to promote equal employment opportunity in the workplace. So when you think of the big-ticket items that impact the workforce, that impact HR departments, whether it be the MeToo movement; diversity, equity, and inclusion; pay; religious discrimination; age discrimination; all those issues, that’s my agency and those are the laws we enforce.
And you know, as you alluded to in the intro – which I appreciate – you know, I say we are the premier global civil rights agency for the workforce. You know, our agency was born out of the civil rights movement in the 1960s. We came – we’re part of Title VII of the Civil Rights Act. So we have a very, very important mission, which is to ensure that workers are able to enter and thrive in the workforce and not be discriminated against.
And now you must be saying, well, what in the world does that have to do with technology? (Laughs.)
Mr. Allen: Yeah.
Commissioner Sonderling: So, you know, when I got to the EEOC, there was just always something going on. Especially in the middle of COVID, we were dealing with COVID vaccines and exemptions to that and return to office.
Mr. Allen: And then return to work – yeah, of course.
Commissioner Sonderling: Everything that impacts, you know, HR departments. And it’s so easy at these agencies, you know, to be distracted by what’s going on in the news. If you look at the EEOC, the issues that have been going on in employment discrimination, you had the MeToo movement. All of the resources and energy need to go there. Then you had issues with the women’s soccer team pay getting global attention, so then we have to focus on pay equity. Then COVID and accommodations. Well, I wanted to say, well, what is the future? You know, what are the big issues coming forward in HR in the workforce?
And that’s when I learned about AI, and that’s when I learned about technology. And as a labor and employment lawyer myself, I didn’t really know much about it. So, like many people, I thought, well, this must be about robots replacing human workers. This must be that dystopian vision of robot armies doing the work in factories. And you know, that’s not going to really impact the entire workforce because our work impacts all industries in the entire U.S. workforce.
But then when I dove into it, I really learned it’s about much more than just replacing humans in factories; it’s about using AI to actually make employment decisions. And what I found was that there was technology out there being used from A to Z of the employment relationship. So think about everything about how you interact with your employer: from the very beginning of making your resume; for an employer, making a job description, advertising, seeking resumes, reviewing the resumes, determining who to interview; to actually conducting the interview by AI; to actually then making the decision to hire the person and what to pay the person; then, when they get hired, determining where in the organization they should sit and who they may work best with; to determining what assignments they’re going to get; to determining how they’re doing through performance management. And there’s even software out there that will tell employees that they’re fired. And this wasn’t some futuristic, you know, potential use of AI; it is being used now across the board.
Mr. Allen: So really every function of an HR department in a company someone out there is either using or claiming to use AI to assist or perform that function. And of course, that’s where your agency gets involved because if any of those functions – hiring, firing, promoting, demoting, you know, offering training resources or other resources – if any of those things are done in a way that is discriminatory, the EEOC cares and wants to know about it, right?
Commissioner Sonderling: That’s what we care about.
Mr. Allen: Yeah.
Commissioner Sonderling: And here’s the crux of all of this, and why this technology’s being developed and deployed at this rate: the technology promises to do it better than humans. And what has been the biggest problem within HR?
Mr. Allen: Biased humans.
Commissioner Sonderling: Biased humans, exactly.
Mr. Allen: Yes.
Commissioner Sonderling: And you know, this is where AI comes into play, saying, well, we can design, develop, and deploy AI in a fashion without bias, and you’ll have better employment decision-making. You know, so many employers want to move to a skills-based approach to hiring – moving away from resumes, moving away from some of the traditional ways people entered the workforce – and actually get to the underlying skill that’s necessary for that job and that location. And AI has the promise to do that by removing some of the usual issues with human hiring – the things a human would see.
You know, in my world there are longstanding studies about how bias exists – obviously, because, you know, in the last two years my agency’s collected over $1.2 billion from employers who were violating these laws, and we got 80,000 new cases this past fiscal year. So the problem is out there. And some of the very basic problems can be solved by AI. Like, you know, when a male and a female resume are submitted for the same job, the male is more likely to get selected. African Americans and Asian Americans who whiten their resumes by deleting any references to their race or where they’re from are more likely to get selected. So those are just a lot of long, very simple –
Mr. Allen: That’s the – that’s the background context of the U.S. hiring market before AI, or is this after?
Commissioner Sonderling: Yes. There are very, very simple, longstanding issues with entering the workforce. And also, you know, when you walk into an interview and you see somebody, you see everything about that person that the EEOC has said you are not allowed to make an employment decision on – their sex, their national origin, potentially their religion, whether they’re disabled, whether they’re pregnant. And there have been so many people who have been unable to even get past the first round of interviews because that bias has come into play, because humans have seen that. And that’s where AI – and I qualify this every time, and this is really important for this discussion – if it’s properly designed and carefully used, can actually help eliminate that bias. By how? By designing the AI to not look at any of those indications in a resume that may show a protected class – that may show whether they’re male or female, their age, their national origin – and actually look to what their skills are, without any of those other factors.
You know, a lot of companies, especially for their first round, are using chatbots – or, you know, you’re using an app on your phone to do the first round of interviews – using natural language processing to actually look at the words you’re using to see if you have the skills to do the job. What does that eliminate? Seeing the person. You know, seeing if they’re male or female. Seeing what the color of their skin is. Seeing if they’re disabled. And it actually allows them to get into the workforce based upon how they respond to the questions. And that’s how you get to a nonbiased, nondiscriminatory hiring process – if it’s carefully designed and if it’s properly used. But for each and every use of AI, if you flip that – if it’s not properly used, if it’s not carefully designed – these tools can have those factors play an unlawful role in the hiring decision and scale discrimination far beyond what any human being could.
So that’s why we really need to break down what it means to have these tools actually not have bias, and to have that approach where they’re going to allow individuals to get into the workforce and not be discriminated against the way they potentially were by humans.
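[Illustration: a minimal sketch, in Python, of the design point above – dropping protected-class fields before a screening step scores a candidate on job-relevant skills. The field names, skill list, and scoring rule are hypothetical, not any vendor’s actual method.]

```python
# Hypothetical sketch: remove fields that could reveal a protected class
# before an applicant record ever reaches a screening model, so scoring
# can only be based on job-relevant skills.

PROTECTED_FIELDS = {"name", "age", "gender", "national_origin",
                    "religion", "disability_status", "photo_url"}

def redact(applicant: dict) -> dict:
    """Return a copy of the record with protected-class fields removed."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED_FIELDS}

def skill_score(applicant: dict, required_skills: set) -> float:
    """Fraction of the required skills present in the redacted record."""
    skills = set(applicant.get("skills", []))
    return len(skills & required_skills) / len(required_skills)

candidate = {
    "name": "Jane Doe",           # dropped before scoring
    "gender": "female",           # dropped before scoring
    "skills": ["python", "sql"],  # job-relevant, kept
}
print(skill_score(redact(candidate), {"python", "sql", "etl"}))  # ~0.67
```

[Dropping explicit fields is only a first step: proxies, such as a women’s college on a resume, can still leak protected information – which is why the bias testing discussed later in this conversation still matters.]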
Mr. Allen: So this is interesting because we’ve been hearing for many years that one of the greatest concerns about artificial intelligence and machine learning is that it can be based upon biased data, and therefore can be a part of making biased decisions. But what you’re saying is you also envision a future in which AI can be used to reduce bias. So it’s not that you’re against AI here, but it’s that you’re trying to caution the folks who are building these AI HR software tools and the folks who are deploying or using these AI HR software tools to actually ensure that they’re done the right way.
Now, is there a scenario – you’ve walked us through a few scenarios of what this looks like when perhaps it goes wrong. Is there something that you understand to be the case where this is already being done well? Is there sort of an example that you can – you can point to, or are we still in the early stages here?
Commissioner Sonderling: Well, we’re still in the early stages. There’s a lot of companies slowly starting to integrate these programs knowing about some of the potential issues.
Mr. Allen: Because I’ve heard from companies before that they say they are reluctant to incorporate artificial intelligence in some circumstances because they don’t exactly know how to do due diligence on whether or not the AI system is actually introducing bias. And I think the EEOC in this regard is seeking to issue guidance or other measures to try and make it a little bit easier for companies to understand.
Commissioner Sonderling: Absolutely. And this is where it gets complicated, and this is where I’ve been trying to simplify it, saying, you know, at the end of the day all these AI tools are doing is either making an employment decision or assisting employers in making an employment decision. And there’s only a finite number of employment decisions out there – hiring, firing, promotion, wages, training, benefits, right? And corporations are already making those decisions, whether they’re using AI or whether they’re using humans. And they already have practices, policies, and procedures on how to make those decisions. And I think a lot of the confusion in the market is saying, well, now that we’re using AI, all of that goes out the window because we don’t understand how algorithms work.
And that’s where I’ve been trying to shift the narrative, saying, well, you don’t have to be, you know, an expert in machine learning and artificial intelligence to understand how these tools are going to work. Because, as you know, at the federal government, our investigators know employment decisions, and they know how to look to see if an employment decision was lawful or not. And that’s the same way we’re going to judge an employment decision, whether it’s made by a human or whether it’s made by a robot. But I’ve been arguing, to your point, if you flip it, this actually may allow employment decisions to be made in a more transparent and defensible way.
Mr. Allen: That’s super interesting, and there’s one thing I want to touch on there that I think is really important for folks to understand. There’s a lot of debate out there right now about whether or not this new technology demands new regulations. And I think what you’re pointing out here is that the toolbox that you were given literally by the Civil Rights Act in the 1960s –
Commissioner Sonderling: 1964.
Mr. Allen: – is a pretty great toolbox and it’s technology agnostic.
Commissioner Sonderling: Yes.
Mr. Allen: Discrimination that’s bad with one tool is still bad discrimination with a new tool.
Commissioner Sonderling: And that’s what we can’t lose sight of. And you know, at our agency we have very strong civil rights laws that we enforce, whether it’s preventing age discrimination, disability discrimination, you name it. And again, those apply to an employment decision, and that’s all an employer can do with these AI tools. At the end of the day, there’s going to be an employment decision, and whether there was bias within that decision is what we’re going to look for. And people are losing sight of that – and we’ll get into whether we need new laws and, you know, some of the proposals out there. But at the end of the day, you know, our laws may be old, but they’re not outdated. And they’re going to apply to employers whether you use AI or not.
And that’s what, really, we’ve been trying to do at the EEOC: reminding the developers, the investors, the users, and then, ultimately, the consumers – the consumers in our space are the employees and applicants – that all the civil rights protections that have been applying to all other types of employment decisions apply here. Now, that might not be as exciting as, you know, hey, we need a new framework around this, but those are the confines we’re in. And it’s very easy for us to say, well, here are our longstanding employment laws, and here’s how they’re going to apply to a worker with disabilities who needs to be able to use these tools – how are they going to use these tools, you know, with an accommodation in light of their disability, and how are these tools going to ensure that they’re not screened out based upon their disability or, you know, any other type of area.
Because if these tools are looking to make an assessment – are you answering these questions properly? are we looking for these skills? – well, that’s just like any other kind of assessment an employer does. Now it’s scaled, and it’s using technology, and it’s become faster and more efficient than ever before, but at the end of the day we’re going to be looking at: What are you asking the algorithm to do? What skill are you looking for? And is that relevant to the job at hand, or is it causing discrimination? That’s the case for all employment law, and that’s the case whether you’re using AI or using a test with a pencil and paper, and that doesn’t change.
And that’s where we’re trying to bring it back to, so people can actually use these systems. If not, they’re going to be stuck with what they have been doing – using humans to make these decisions – and not moving forward with technology when they already have that framework from the law and their own governance in place when it comes to HR decisions.
Mr. Allen: So in terms of regulatory authorities, you said that the set of authorities and the toolbox that you’ve got is old but it’s not outdated; that’s really good. But what about – what does need to change in the case of the EEOC? So in 2021, EEOC Chair Charlotte Burrows launched this new initiative around artificial intelligence and, you know, laid out a series of steps that the EEOC wanted to take in order to get ahead of a lot of these issues. So if the authorities aren’t what needs to change, you know, what did need to change? And how much progress have you made?
Commissioner Sonderling: Well, whether the authorities need to change, you know, obviously, I’m in the executive branch and that’s up to Congress, and if they’re going to give us more tools or more money to –
Mr. Allen: You’re probably not going to complain, yeah.
Commissioner Sonderling: – enforce AI, you know, we will do what we’re told. And this is just the confines of what we’re dealing with.
So what can we do within that space? And we can, you know, have public hearings. We can get input from various stakeholders. You know, I personally meet with everyone involved in this equation, whether it’s the investors – you know, the investors don’t want to invest in products that are going to violate civil rights. The developers, you know, who largely don’t come from the HR space because they’re computer scientists that can develop this –
Mr. Allen: More of the tech world.
Commissioner Sonderling: The tech world. They don’t want to develop a product that’s going to violate civil rights. And then you have, in the middle, the employers who need to buy the product. You know, they don’t want to buy a product that’s going to violate civil rights. And then, ultimately, the employees who are being subjected to this. So it’s a bunch of different stakeholders who all speak different languages, and many of them – especially on the frontend side – have never dealt with the EEOC before, have never played in this employment world before. But technology is really, you know, front and center now there.
So as part of this initiative, we’re really trying to talk to everyone, to meet with everyone, to go to places we’ve never been before, because this technology in HR is industry-agnostic. It’s going to impact every single industry across the board that has employees, across the world. So it’s really about getting that kind of input from all these different stakeholders and being able to put out guidance – whether it’s, like I said earlier, how disabled workers are going to be able to use these tools, or how to do bias audits to ensure that these tools don’t discriminate, again, based upon our standards from the 1970s.
So that’s what we’ve really been trying to do, is saying here’s all the different kinds of employment decisions you can make – which there’s a finite amount of – and here’s how the EEOC looks at them, how humans have been making them, and here’s how technology’s going to impact that. And here’s those added layers which you need to be looking at to do it in a transparent, fair, and nondiscriminatory way. And that’s what we’ve really been leading on before, you know, the executive order, before a lot of these other parts of the government have started talking about this, just because it’s so impactful.
Mr. Allen: Yeah.
Commissioner Sonderling: And I’m really passionate about this because, unlike other uses of AI which I know you discuss – making shipping routes faster, you know, making widgets quicker – here you’re dealing with people’s livelihoods. You’re dealing with their ability to enter and thrive in the workforce and provide for their families. So if they’re discriminated against by an algorithm at scale based upon a protected characteristic, it could really have very harmful implications for people’s lives. So we have to be very careful in making sure that employers can use this in a fair and transparent way, and actually get to where all employers want to be: hiring people for the right reasons and not based on the wrong reasons.
Mr. Allen: So a lot of – you know, you talked about all these different stakeholders, whether it’s the tech developers, the HR offices, or the employees, who are all sort of asking: What does this mean for me? How is this actually going to work? We have these old but good laws; how are they actually going to be implemented? How are they going to be interpreted by the EEOC or by a judge in a courtroom, potentially?
So could you just walk us through, like, an example here? Who is responsible if AI is used in a discriminatory employment decision? Like, how does liability work for these kind of tools?
Commissioner Sonderling: You know, it’s interesting in our – in our space, because unlike other areas, you know, the vendors have very limited to no liability in this space. And that’s –
Mr. Allen: Interesting.
Commissioner Sonderling: That’s an area that’s still being tested. It’s still so novel. So, you know, we may see changes in that. But –
Mr. Allen: So whereas, you know, an automaker, if they make a car and the car has a part break because of a manufacturing defect, the manufacturer might be the one liable in that story. But what you’re saying is in AI software the vendor who is providing the software might not be liable and the person who is deploying that software – making HR decisions with that software – might be the one who’s liable.
Commissioner Sonderling: 100 percent, and let me explain why it’s different. Because under the U.S. civil rights laws in employment, only three parties can make an employment decision: companies, unions, and staffing agencies. That’s it. That’s our –
Mr. Allen: That’s the universe. That’s the whole universe, yeah.
Commissioner Sonderling: That’s our universe. That’s our world. So, you know, if you make a discriminatory employment decision and you’re saying, well, I don’t know, we hired a software vendor to make that decision for us – and we don’t know what data they used, we don’t know what algorithm they used – you know, from our perspective none of that matters, because under the law only your company can hire or fire somebody. Only your company can discriminate against somebody, demote somebody, or not pay them fairly. And that’s what’s so different in the employment space, and that’s why there needs to be so much more diligence from the people who are using and buying and developing these systems, because that liability is going to rest with the employer and nobody else. And that’s just the way employment law is in the United States.
Mr. Allen: So if an employer, you know, sees some marketing, sees an ad for some kind of AI HR software and it has, like, a little checkmark that says bias free, it is on the employer to know whether or not that claim is true and under what circumstances specifically that is true.
Commissioner Sonderling: A hundred percent, and let me tell you how specific it is. You know, a lot of the vendors will do pre-deployment audit testing, which is a great thing to do, and they’ll show how their tool has less bias than humans making that same kind of employment decision. But from the EEOC’s perspective, we only care how that tool is being used on that one job description, in that one part of the country, for that one specific use – and think about how many thousands of job descriptions a company has – to see if it has a discriminatory impact. So if you’re using an AI tool to recruit, we’re going to look at what the applicant flow is, the diversity of that applicant flow, and then what skill you’re asking that AI to look for for that specific job, and whether it’s relevant to the job or causing discrimination. And you have to do your own test to see if there’s bias there or not, per job. So you see how complex it is and how different that is from other software.
Mr. Allen: You’re talking a lot about tests, right? And the vendors maybe have their own tests. Is the EEOC sort of saying these are the tests you need to run? Or how is that going to work from a guidance perspective?
Commissioner Sonderling: Well, it’s really interesting. And this is – you know, when you talk about the global impact that my agency is having in this space, in 1978 the EEOC – way before the AI technology that we’re seeing now – came out with the Uniform Guidelines on Employee Selection Procedures, which has really been the gold standard for testing employment discrimination, you know, for disparate impact – to see if a neutral job qualification is actually creating discrimination against one group. It gave us what’s known as the four-fifths rule, and it has generally been, you know, the global standard for testing disparate impact.
So, you know, post-Industrial Revolution, a lot of employers were giving employee assessment tests to see the skills the employees have, and if there was discrimination, there was a way to test for it. But like I said earlier, now these tools are gamifying a lot of those longstanding tests. They can be done online. You can do them on your phone. And before, a lot of these tests were just reserved for corporate executives –
Mr. Allen: For folks who are not familiar with the world of employment law, can you just give me an example of what one of these tests would be in the old world?
Commissioner Sonderling: Oh, yeah. Just, you know, like –
Mr. Allen: Pick a job, any job.
Commissioner Sonderling: Yeah. Like, for an executive, you know, there are a lot of standardized executive employment assessment tests where they’ll, say, measure your risk tolerance – they’ll give you a real-life work scenario and see if you’re going to be risk-averse or if you are going to be aggressive – just some really basic industrial-organizational psychology. There’s a whole Ph.D. profession, I/O psychologists, that design and deploy these tests to assess skills and characteristics. So for a CEO job you’re going to want to see if that person has certain leadership skills, which are based upon these tests. And, you know, they’re very specific to that job, where somebody would say, in a higher-level position, you know, this is certainly a necessary characteristic. But as AI essentially democratizes a lot of these tests across the board, applying a one-size-fits-all test – because it’s cheaper and faster and you can do it on your phone – to an entry-level position, for a skill that’s not necessary, may cause discrimination because it’s not relevant to the job and it may impact a certain protected characteristic.
So, again, you see here the common theme. These tools are just taking a lot of the long-standing principles and having algorithms and machine learning, you know, actually apply them, but the results are going to be pretty boring – pretty much the same as they’ve always been for employment decisions. But my point is that there is a testing method to be able to test for employment discrimination using assessments and tools, and that’s what a lot of these AI vendors are starting to do, you know, in the aggregate – but for us it’s only specific to that employee.
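[Illustration: the four-fifths rule compares each group’s selection rate to the highest group’s rate; a ratio below 0.8 is generally regarded as evidence of adverse impact. A minimal sketch of the arithmetic in Python, with hypothetical applicant and hire counts.]

```python
# Four-fifths (80 percent) rule from the EEOC's 1978 Uniform Guidelines
# on Employee Selection Procedures: a selection rate for any group that
# is less than 4/5 of the highest group's rate is generally regarded as
# evidence of adverse impact. Applicant and hire counts are hypothetical.

applicants = {"men": 200, "women": 150}
hires = {"men": 40, "women": 18}

# Selection rate = hires / applicants for each group.
rates = {group: hires[group] / applicants[group] for group in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")

# men:   selection rate 20.0%, impact ratio 1.00 -> ok
# women: selection rate 12.0%, impact ratio 0.60 -> potential adverse impact
```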
Mr. Allen: So the FAA, the Federal Aviation Administration, will actually certify the software that goes into commercial aircraft. I’m assuming the EEOC is not going to get into the business of certifying AI HR software. So how can a vendor, or somebody who’s actually going to deploy AI, actually have confidence that the system they’re going to buy is going to comply with the laws that the EEOC is responsible for enforcing? What would they need to do to know that they’re going to be on the right side of the law?
Commissioner Sonderling: Well, there are a lot of different things they can be doing. One is really knowing that that liability rests with them – really questioning the vendor and pushing back: yes, you can show us a test of how your product works in the aggregate, but how is it going to work on this job description, in this part of the country, for the skills that we think are relevant to the job and that we believe are industry standard? Show us, adding these skills in for our workforce, whether your AI tool is going to find the right candidates or whether it’s going to discriminate. And then also, you know, I really talk about the training that’s necessary and who has access to these tools within your organization, and that’s something you could work on with a vendor. Saying, like – you’re going to come in and train these certain employees who have EEO training, who have HR experience, because –
Mr. Allen: And the EEOC is now running AI-related HR training, if I’m not mistaken, for employers.
Commissioner Sonderling: Yeah. We have a lot of outreach we do that brings a lot of our guidance to light and how it works related to this. But let me just, you know, tell you the biggest issue with this. You know, for an HR manager or hiring manager with bias – you know, before, if they didn’t want to hire somebody – let’s say they didn’t want to hire any females for the job, which, of course, is illegal – the stat was, before AI technology, that six and a half seconds was how long a talent acquisition professional would look at your resume. So: I have bias, I have to go through these, and I want to discriminate – “Oh, that’s a female-sounding name; you know, they went to a women’s college; it must be a female; put it in the trash.” That takes time to discriminate. With these tools, with AI, you know, in .7 seconds you can have hundreds of thousands of applicants – all that data in front of you – and with a few clicks you could eliminate all those workers from the applicant pool based upon a protected characteristic – in this example, women – (snaps finger) – and that’s discrimination at a scale we’ve never seen before, having access to all that data. And that’s a fear. We’re now seeing all this information in real time that the AI can pull from our employee database, pull from these applicants – information that you’re not allowed to make an employment decision on – that a bad actor within your organization can use, at scale.
And that’s really where it gets into the internal governance as well – making sure you’re working with the vendors so that only certain people have access to it and they can’t use those tools improperly. And that’s where the intentional discrimination part comes in. You know, so many of the abstract theories about AI discrimination are based upon employment data where you have more male resumes than female resumes, and the computer is automatically going to pick the males over the females because it thinks that that’s the number one indicator – just because there are more men. Some of these are very simple data-bias examples.
Mr. Allen: Yeah, I think on this point, just to talk about some of the issues: there’s the problem of intentional discrimination, which is what you just described – somebody who tells their AI software, hypothetically, just don’t let any women through –
Commissioner Sonderling: Don’t show this ad to anyone who’s not of this age range.
Mr. Allen: Yeah. And an alternative, you know, path to discrimination would be unintentional discrimination.
Commissioner Sonderling: Correct.
Mr. Allen: And a way that that could work is, for example, if historically women were not given many opportunities to get an education in a certain field – just say, hypothetically, you know, computer science did not do a great job recruiting women into universities in the ’70s. Well, then, your data pool of who are my best computer scientists over the past few decades will have this sort of systematic absence – at least on a statistical basis, perhaps not on an absolute basis – of highly skilled women. So this is a historical bias that is present in the data, and if you want to eliminate that discrimination, you need to have a conscious willingness to say: I’m looking at this applicant and the qualifications that he or she brings to bear; I’m not thinking about this data set from the ’70s that has systematically biased, you know, who might be available in the workforce. And that’s an inadvertent bias case, but it’s one that machine learning and these statistical data-based approaches can oftentimes fall prey to. And the EEOC – what I’ve heard consistently from leadership – is that the use of AI is no excuse, you know, when it comes to discrimination.
Commissioner Sonderling: Whether you intend to discriminate or not, you’re going to be liable for discrimination. That really goes to the point of working with a vendor, and that goes to the design of it, right? So, you know, that’s a perfect example of, you know, men being more dominant in the data set. You know, what is the data set for us? The applicant pool, your current employees. And how do we design the programs to completely exclude male versus female and find the underlying skill, which is what these AI tools –
Mr. Allen: It is actually a signal of being able to perform the job.
Commissioner Sonderling: – which is relevant to the job and making a lawful hiring decision – and be able to pull that out of there, and then, you know, identify what that skill is, and then go look for that skill, or up-skill our current employees, or re-skill our employees, or go find where those, you know, employees are. And I think that’s really the next level of this. And, you know, you’ve heard some of the horror stories with the data set discrimination, which you alluded to – and I agree, you’re liable for that as an employer. So how are the tools going to be designed to actually find what those skills are and not discriminate? That’s really the tricky part right now.
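[Illustration: a toy example of the historical-data problem discussed above. Even with gender excluded as a feature, a proxy that correlates with gender can reproduce the old bias if a screener imitates skewed historical outcomes. All records and feature names below are invented.]

```python
# Toy illustration: gender is excluded as a feature, but a proxy that
# correlates with it ("womens_college") still leaks it. A screener that
# imitates these historical outcomes inherits the gap, even though the
# proxy says nothing about ability to do the job. All data is invented.

historical = [
    # (has_degree, womens_college, hired) -- skewed historical outcomes
    (1, 0, 1), (1, 0, 1), (1, 0, 1), (1, 0, 0),
    (1, 1, 0), (1, 1, 0), (1, 1, 1), (1, 1, 0),
]

def hire_rate(records, feature_index, value):
    """Historical hire rate among records where the feature equals value."""
    subset = [r for r in records if r[feature_index] == value]
    return sum(r[2] for r in subset) / len(subset)

# Everyone is equally qualified on the real signal (has_degree is 1),
# yet the historical outcomes differ sharply along the proxy:
print("hire rate, no womens_college:", hire_rate(historical, 1, 0))  # 0.75
print("hire rate, womens_college:   ", hire_rate(historical, 1, 1))  # 0.25

# A model trained to imitate these outcomes would inherit the 0.75-vs-0.25
# gap -- which is why the per-job bias testing discussed above matters.
```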
Mr. Allen: So you’re issuing a lot of guidance, you’re providing training to employers or to the law firms that might counsel employers on these types of topics. But as you alluded to earlier, there’s also an international dimension to all of this. A lot of EEOC guidance not only has relevance to the United States, where it’s directly about enforcing the law, but also abroad, where people are looking to the EEOC for inspiration as to how they should write their own guidance or how they should design their own laws. So what can you tell us about how the EEOC fits into the global landscape of AI regulation? I know you’ve spent some time trying to assess how other countries are approaching AI and HR law.
Commissioner Sonderling: It’s a global issue, especially in the workforce. These products, you know, are largely being developed here in the United States, but they’re being used across the world, and they’re being used across the world by global employers, no matter what country you’re in. So we’re starting to see other countries and state capitals also diving into this equation and wanting to regulate in the HR space – and, as you know, the EU designating it as high risk –
Mr. Allen: As part of the EU AI Act, right?
Commissioner Sonderling: – as part of the EU AI Act, saying it’s high risk. So because it’s high risk, you’re going to have to meet all these additional requirements. New York City has a local law saying that if you’re going to use AI to make employment decisions for hiring and promotions, here are the additional requirements. And what are those additional requirements starting to be? Pre-deployment testing and yearly bias audit testing. OK. So whether you’re in the EU or in New York, California, Colorado – all these states that are starting to come out with their own requirements, even though federal law doesn’t require you to do pre-deployment testing for any kind of employment decision tool – how are we going to do that? And you know, working with colleagues in Brussels and across the world: OK, now that you’re requiring employment discrimination bias auditing for using a high-risk tool in HR, what’s the standard, whether it’s New York or Brussels? Well, let’s look to the EEOC in the United States, because, you know, they created the standard, like I said earlier, in the 1970s.
So, you see, it all comes back. My whole point is that, as sophisticated as these tools are getting, you know, the results are still governed by a lot of these long-standing principles to see if there’s bias in that decision. And I argue that AI can make this much more transparent – again, caveat: designed properly, used properly. Because look at how employment decisions are made now, and how federal agencies or employment agencies across the world measure bias: somebody is accused of employment discrimination, our investigators show up, and we say, did you discriminate against this person? And very rarely does somebody say, of course I discriminated against this person, right? So you have to do, you know, subpoenas, depositions, investigations to see, you know, is there evidence of intentional discrimination, or was it a policy that caused discrimination?
So right now we’re left with the black box of the human brain, which is very complicated and very rarely admits to these issues. Using AI in a transparent way, employers can say: here’s how we selected the vendors, here’s how we use them, here are the skills we asked them to look at, here’s how we worked with the vendor to do pre-deployment audits and yearly bias testing; we saw there was some bias, we corrected it – and you have a contemporaneous record showing how you used these tools properly. And you can only do that by doing pre-deployment audits and yearly bias audits, as the job requirements change and as the applicant pools change, because that’s going to change the variables of the output. I think that’s really where a lot of countries and states are going by requiring that; even though federal law doesn’t require it, you’re still using our standards. And if you’re doing those audits, you could potentially prevent discrimination before it ever occurs, which we could never do with humans making the decisions.
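[Illustration: a minimal sketch of what such a contemporaneous audit record might look like – one entry per bias audit, per job, capturing the vendor, the skills requested, the measured impact ratios, and any correction. Every field name and value here is hypothetical.]

```python
# Hypothetical sketch of a contemporaneous audit trail: one record per
# bias audit, per job requisition, so an employer can show when a tool
# was tested, what the impact ratios were, and what was corrected.

import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class BiasAuditRecord:
    job_id: str
    audit_date: str
    audit_type: str      # "pre-deployment" or "periodic"
    vendor: str
    skills_requested: list
    impact_ratios: dict  # group -> ratio vs. highest group's rate
    corrective_action: str = "none"

    def flags(self, threshold: float = 0.8) -> list:
        """Groups whose impact ratio falls below the four-fifths line."""
        return [g for g, r in self.impact_ratios.items() if r < threshold]

record = BiasAuditRecord(
    job_id="REQ-1042",                       # hypothetical requisition
    audit_date=str(date(2024, 5, 1)),
    audit_type="pre-deployment",
    vendor="ExampleVendor",                  # hypothetical vendor
    skills_requested=["python", "sql"],
    impact_ratios={"men": 1.0, "women": 0.72},
    corrective_action="re-weighted skill criteria; re-tested at 0.91",
)
print(record.flags())                        # ['women']
print(json.dumps(asdict(record), indent=2))  # the contemporaneous record
```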
Mr. Allen: That’s fascinating, and I think this is incredibly important, right? So you’ve got the EU AI Act and, as you mentioned, New York, which are requiring pre-deployment testing and yearly bias audits. The EEOC is only requiring that the decisions not be discriminatory. So one is regulating the inputs and the process related to the inputs; you are regulating only the outputs, in terms of the employment decisions. But what you’re saying – and I believe what you just said – is that the way a company can show the outputs are nondiscriminatory, when an EEOC investigator comes to them and says, did you discriminate, is that pre-deployment testing and those yearly bias audits are going to be the critical defense, right? That’s what they present to an EEOC investigator to say: we did not discriminate, and here’s how we know and can show that. So am I correct in saying that the United States and the European Union, for example, might end up in a pretty similar place here?
Commissioner Sonderling: Well, you know, let me just address that: states and foreign governments are requiring you to do that. The EEOC doesn’t require you to do that.
Mr. Allen: Right.
Commissioner Sonderling: But we encourage you to do it, and if you look at our guidance at the EEOC or the OFCCP at DOL, which deals with federal contractors in this space, we encourage those pre-deployment audits because if you can do something to prevent discrimination before it occurs, then we don’t need to show up.
Mr. Allen: Yeah. (Laughs.)
Commissioner Sonderling: You know, put us out of a job if you’re going to use these tools properly, because then nobody’s being discriminated against if you’re dealing with it yourself. And that’s why I’ve been a huge proponent personally of self-governance and self-audits in this space – which, again, may be forced upon you by certain jurisdictions, but even under federal law, if you’re doing that now in advance without being required to, you know, you can see what the outcomes are and you can adjust them to ensure that there’s no bias, and I think that’s different than anything we’ve ever seen before. And that’s really where a really good use of the AI can come in, and also having that contemporaneous record. And by doing that at the federal level, you’re already going to be in advance compliance with all that’s coming down from all these different governments, you know, across the United States and across the world. I think that’s where a lot of it’s going, but it’s about building that internal program.
Mr. Allen: This is fascinating. So the United States, because of the jurisdiction of the EEOC, doesn’t require these types of pre-deployment testing or yearly bias audits, but it’s encouraging them. And, you know, if you properly use pre-deployment testing and these audits, it could be a critical defense in the event, hypothetically, that an employer is sued related to AI-based HR discrimination. But I think the other part of this that’s so interesting is it’s a great argument for international collaboration. You know, think about the many American companies that want to do business obviously in the United States but also around the world, in places like the European Union. If the standards for what constitutes robust pre-deployment testing and what constitutes a robust yearly bias audit are aligned – if we define terms the same way, if we understand the relevance of certain procedures the same way – that can really lower the cost of doing business for American companies abroad.
Commissioner Sonderling: Absolutely. And again, you’re in compliance, you know, with the law – that’s all we can ask for – and it also allows you to sleep at night knowing that the tools you bought are being used properly.
Let me just give you one other point – the realities of the situation, right? As you know, at the EEOC, you know, in the United States, if you want to sue your employer, whether you work for a private company, state or local government, or the federal government, you have to come to our agency first. So we have –
Mr. Allen: This is if you’re the employee and you want to sue.
Commissioner Sonderling: Employee, correct. We have a huge jurisdiction and have to deal with almost every case of employment discrimination in the U.S.
Mr. Allen: How many cases a year are we talking?
Commissioner Sonderling: Last year, in actual charges that we report in the private sector, it was around 83,000.
Mr. Allen: 83,000 – so this is a massive clearinghouse for every single one of these.
Commissioner Sonderling: 83,000, not including the federal government cases and then we have state and local agencies that also do some of the cases for us. So, you know, you could imagine it’s more than that.
Mr. Allen: It’s a big number.
Commissioner Sonderling: It’s a big number. There’s a lot going on. We got 700,000-plus inquiries of employment discrimination. So, you know, we’re busy. So now, you know, say we have two cases of algorithmic discrimination – assuming, of course, you can prove it and show it. We go to one employer who says: hold on – here’s the vendor we used, here are the questions we asked them, here’s how we worked with them, here are the policies, procedures, and statements we used, here’s how we amended all of our employment policies to include AI, here’s the pre-deployment testing we did, here’s how we certified that what we’re asking the AI to do was based upon industry standards and our own standards, and here are the results of those tests, and here is why we felt comfortable using it – because it was less discriminatory than other means.
Versus the second employer we go to that says, I don’t know – go ask the vendor. They promised us perfect diversity, equity, and inclusion if we spent millions of dollars on this software and we basically outsourced all of our hiring to this algorithm. Go deal with them.
Who’s going to be in a better position, right? The company that did all that governing, that really tried to ensure that these tools were being used properly. And that’s a mindset shift in how corporations are buying software, right? It’s so different in this space, and that’s why this conversation we’re having is so impactful – because, as you quickly picked up on, just doing all that puts you in a much better position, knowing you can use these tools as best as you can.
Look, there’s always going to be some potential for discrimination but you can really mitigate that and allow these tools to be used properly and then actually get to that skills-based approach to hiring which is where everyone really wants to be because that’s what the law requires.
Mr. Allen: So for big employers who have a million employees or a hundred thousand employees, and have a major HR department that is familiar with these kinds of things, and maybe even have a software safety review department, you know – I understand all of this.
What about for small businesses who in many cases are used to just saying, the vendor said it was great? You know, how should they think about this story?
Commissioner Sonderling: This is such a great point, and one I really am passionate about, and there’s so much fear here for, you know, small and medium-sized businesses. As you know, these tools are being scaled down for small and medium-sized businesses – the same ones they’ll sell to a Fortune 500 company – and they’re saying: OK, we get it. We need governance. We need testing. But we can’t afford that. We could barely afford to –
Mr. Allen: We might not even have an HR department.
Commissioner Sonderling: Right. We don’t even have an HR department. We’re not a global, you know, company that has ethicists and I/O psychologists to do this.
And what I’d like to point out is, you know what the large companies, the big tech companies out there, are all doing? They’re being very transparent about this, and they’re putting out a lot of policies and procedures on how they’re using AI within their organizations, both as employers and as developers.
If you look at the White House with its Blueprint for an AI Bill of Rights, the executive order, the OMB requirements, right – within the federal government, within private organizations, they’re putting all that information out for free. Also, organizations like the OECD, the United Nations, and the World Economic Forum are putting out very broad principles on AI.
And then you’re getting all these different trade associations, from the Chamber of Commerce, you know, to other smaller groups for whatever industry you’re in, giving very specific principles for the specific use. So in employment, there are a lot of resources and checklists that organizations have put out. And whether you’re using it in finance, housing – all these specialty groups are putting out principles for free on the internet.
So whether it’s a massive tech company giving you global AI ethical principles online for free that you can copy and paste and integrate within your organization, or the specific use within HR – you know, there are a lot of D.C. groups that have put out very specific checklists of what to ask vendors and how to implement it within your own organization.
It’s out there for free on the internet. So we’re going to get to that point, you know, where you can build your own AI governance program – and I’m not talking about a global company with hundreds of thousands of workers, you know, that has these huge departments.
But for small and medium-sized businesses, there are so many tested resources out there for whatever you want to use AI for. Integrate those, copy and paste them – because you know what? At the end of the day, it’s better than having nothing.
Mr. Allen: And is there an opportunity for third parties to get involved here? The software vendor says their stuff is great because, of course, the company is going to say their stuff is great. But as a buyer, you know, you need to be cautious and have some skepticism when a vendor tells you that all their stuff is great.
But if a third party ran some kind of testing and said, well, you only have five employees and you probably don’t have great competence in auditing AI HR software, but we do this for millions of companies around the world – trust our opinion. Is that kind of thing useful in an EEOC context, or not appropriate?
Commissioner Sonderling: Well, you know, once New York City’s Local Law 144 put an independent-auditor requirement on there – which, you know, was controversial in itself, and you’re seeing other state proposals being vague about whether or not the auditor needs that independence – it started to create a new industry of AI auditors. And now a lot of these AI auditors are actually using AI to audit AI –
Mr. Allen: Oh, goodness.
Commissioner Sonderling: – in that sense. So we’re getting –
Mr. Allen: Interesting approach.
Commissioner Sonderling: We’re getting into that world as well. But there are a lot of resources out there now – you know, this whole AI auditing industry starting to come online, scaled from smaller, individualized uses up to much larger uses as well.
But, look, whether you’re doing an audit with an independent party or whether you’re trying to figure it out yourself, from the EEOC’s perspective, look at what you’re doing. You’re building trust within your organization – first, for your applicants and current employees – that you’re trying to use these tools properly. Even if you’re just copying these big tech statements within a small business, it sends the signal that you care about compliance with the law.
And, again, if the EEOC shows up, you show: hey, we really tried. We tried to get this right. This is complicated. We don’t have the resources of a Fortune 500 company, but we, you know, used these models out there. Not everyone’s going to do that, and for all the employers –
Mr. Allen: That could be mitigating circumstances, perhaps, in the –
Commissioner Sonderling: For all the employers that don’t do that and, again, just point to the vendor – it’s an easier case for us to go after the one that didn’t do any of that versus trying to prove that all the work you did was wrong, right?

So that’s the mindset everyone needs. It’s better to engage. It’s better to just try to build your own governance principles for whatever you want to use AI for, across the board, than to do nothing – and you can be doing that now, based on these old laws.
Mr. Allen: Yeah. So I want to come back to diplomacy and international engagement in one moment. But before I just want to talk about the technology progress and whether or not it changes anything.
The EEOC launched this initiative on AI in 2021. The generative AI revolution really hit its stride in 2022. So we’ve gone from, you know, machine learning data analysis to AI text generation. Does that change anything important from your perspective, or are the same initiatives continuing and still relevant?
Commissioner Sonderling: So it’s the same and totally different.
Mr. Allen: OK.
Commissioner Sonderling: So it’s the same in that, look, some of the bias issues that existed with, you know, machine learning – old-school AI, before generative AI – exist here. So if you’re using generative AI to create –
Mr. Allen: To write a job description.
Commissioner Sonderling: – a job description or a performance review, you have no idea what bias you’re baking in. You have no idea what, you know, data set it’s drawing from – it may be taking historical bias from somebody else’s organization and suddenly putting it into yours. And we don’t care; you put that bias in there, and somebody in the organization –
Mr. Allen: So if somebody uses an online, you know, generative AI large language model like ChatGPT or Claude or whatever, they could be subjecting themselves right then and there to an AI discrimination-related EEOC investigation?
Commissioner Sonderling: From a company you never knew existed, from a data set you never knew existed – and it’s yours, and now you’re liable for whatever data set the LLM gave you.
But I think – you know, you talk about the differences now, and, again, getting back to the same issues we deal with at the EEOC, it’s the generative AI implementation. You know, since generative AI came out, all these companies want to implement generative AI.
We all hear the stats about how many jobs are going to be displaced by AI, how certain knowledge-worker industries are going to be completely gone because generative AI can write movie scripts better, right – can do a lawyer’s job or an accountant’s job faster and better.
So what we’re seeing is companies rushing to implement it because they want the efficiency. They want their employees to be 80 percent more efficient – you know, all these crazy stats that are coming out on how generative AI can impact the workforce.
But what are the issues that the EEOC cares about there? Go ahead and implement it and make your employees more efficient. But, you know, what we’re starting to see is a lot of employees distrusting their employers’ use of AI and generative AI, and we’re starting to see a lot of mental health and anxiety claims in the workplace – which we also govern – related to the implementation of this.
So, say you’ve been at a company for 30 years and you’ve been the best performer, and suddenly now you have to become a prompt engineer. Well, you’re really forcing me to quit, right? Or, you know, I’m intimidated about using these programs, or I have a disability and I’m not comfortable using this. And it’s creating a lot of fear in the workplace, and a lot of employees don’t trust that their employers aren’t just going to be training their robot replacements – which is leading to potential claims of discrimination, you know, based upon a whole host of issues: Well, I’m really an older worker and you’re making this so complicated – you’re trying to push me out. I’m a disabled worker and you’re not giving me more time. Or, across generations, we’re anxious that we’re all going to be laid off once generative AI is there.
So there are a lot of these basic, long-standing HR principles that we can’t lose sight of just because technology is being implemented. And that goes back, again, to the trust that’s needed to implement these programs.
Forget about the law for a moment, right? Just think about the trust within your organization needed to actually use these programs properly – so that employees can actually use them without fear of being terminated, and without fear of not being able to use the generative AI to actually get to those results that generative AI has the promise to deliver.
So that’s how you have to message it, and that goes back to those principles that are widely available on the internet about how you can build that trust: here’s how we’re going to be using the AI, here’s our transparency, here’s our explainability – a lot of these buzzwords I know you know. It’s just getting out there and making it specific to each use that you’re going to be implementing.
Mr. Allen: Amazing. So the technology here is, obviously, not going to stop changing. It continues to make incredible progress, and EEOC and organizations around the world are trying to keep up with this, recognize what doesn’t need to change and what might need to change in terms of guidance or standards.
So to what extent do you think it’s appropriate for us to pursue, you know, global standards or consensus on these standards – ethics, governance, guidance, et cetera – and then what are your sort of next steps on improving international collaboration and cooperation on this issue?
Commissioner Sonderling: Well, I think we are getting to the point where there needs to be some sort of global standard when it comes to using AI specifically in the workplace. And I say that because, you know, the companies that are really going to be using these at scale, right, are operating in so many different countries with so many complex, different employment laws. When you start getting into the global employment law scene, some things we require U.S. employers to do here that are not controversial – like collecting some racial or ethnicity data from employees – would be illegal and carry criminal penalties in another country, right?
So, you know, that’s the fine balance. But I think in the structure of what you’re starting to see, which is being led by a lot of NGOs, within the employment sphere, here are at least the basic principles that you’re going to have: whether it’s ensuring employees understand that they’re being subjected to an algorithm – which is controversial in itself, because there’s no requirement now for U.S. employers to tell an applicant how they’re actually being graded to see if they’re getting the job.
But, really, where I see the consensus coming around is that pre-deployment and yearly testing doesn’t seem to be as controversial as it was – with, really, the EU AI Act leading the way, and now you’re seeing other states within the U.S., like Colorado recently, and even California’s proposal, saying that’s what we need to do.
If it’s dealing with somebody’s livelihood – if it’s dealing with finance, housing, employment – no matter where you are in the world, these are civil rights. These are civil liberties. We need to protect them. And how do we protect them? Bias audits, consent, disclosure. And then the bigger, trickier issue, I think, on a global scale – which, as you also know, the EU is taking a lead on – is the vendor liability piece.
And, you know, that would be something that Congress would have to –
Mr. Allen: Because the EU AI Act does go in the direction of vendor liability for certain types of – yeah.
Commissioner Sonderling: Correct. Yeah, and it gets complicated, but there is at least more skin in the game there than there is now under our laws. So I think those are –
Mr. Allen: So you’re going to enforce the laws you have, which place liability only on the employer – the user of the AI. But the EU has gone a little bit more in the direction of vendor liability, and that’s a question really for Congress here in the United States.
Commissioner Sonderling: That’s absolutely a question for Congress – to say, OK, well, people other than the person you work for can make an employment decision.
Mr. Allen: Yeah.
Commissioner Sonderling: And, you know, that’s not an issue we deal with – saying, you know, a third party is essentially liable. There are some very loose, untested legal theories that are maybe being developed in the courts.
But we can’t even get there from a law enforcement perspective. You know, part of why we’re not having some of these cases yet is because the employees don’t even know they’re being subjected to an algorithm.
Mr. Allen: Yeah.
Commissioner Sonderling: So there’s all this conversation about, you know, what the employees’ rights are – which other countries are dealing with by saying: if you’re going to be assessed by an algorithm for employment, we’re going to require that the employer tell you the name of the vendor, tell you exactly what these tools will be doing to assess your employment, and here are all your rights. And we’re seeing some city and state proposals go that way.
So that’s going to also change the dynamics of, you know, how we proceed on these global standards.
Mr. Allen: Really fascinating. And your own journey in this regard – you know, thinking that you were going to be working in employment law, and then discovering that AI is now a key part of what it means to be advancing employment law, and then also discovering you’re almost a diplomat, having to engage with other agencies around the world and understand how they’re thinking about these same issues.
It’s really amazing how AI is a global issue, requiring global collaboration in so many areas, in a way that might have been a little bit unusual in other decades.
Commissioner Sonderling: And I think employment is easy to understand because everyone’s applied for a job. Everyone’s had a resume. Everyone’s had a boss. Everyone’s gotten, you know, a pay raise or a demotion, or been – you know, people have been laid off. And that’s what these tools are doing.
So I think when you talk about AI governance specifically within the HR space, it’s much easier to understand than using it for, you know, potentially nuclear weapons or some abstract review of financial documents. Everybody understands this.
Mr. Allen: It’s very tangible.
Commissioner Sonderling: That’s why we need to start there. And, you know, if we can create the framework within HR, I believe that infrastructure we’re talking about can really be replicated across other organizations – because if you’re dealing with people’s civil rights, you could use that as a framework for doing it with reviewing documents, right?
So that’s really where I think it’s important for organizations to start with HR AI governance – because you have to, because these laws apply, you know, to all the decisions you’re using AI to make, just like they apply to every other employment decision you’ve ever made since your company started.
Mr. Allen: Well, Commissioner Sonderling, we really appreciate you spending time with us here today, and also your leadership on this incredibly important issue.
Thanks for sharing your insights and thanks very much for what the EEOC is doing on this important topic.
Commissioner Sonderling: Thanks for having me.
Mr. Allen: This concludes our event today on AI and human resources, governance, and regulation. For a transcript of the event, and to learn about all the types of work we’re doing here at the Wadhwani Center for AI and Advanced Technologies, please visit our website at CSIS.org.
(END.)