Many of us have heard of artificial intelligence (AI) programs such as OpenAI’s ChatGPT and Google’s Bard. Maybe you have even played around with this technology. But what is AI? Back in 1994, John McCarthy defined AI in a paper for Stanford University as follows: “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”1
Increasingly, employers are using AI in the workplace: to locate, recruit, evaluate, and communicate with job applicants; to assist employees with benefits and benefits enrollment; to conduct training; to write job descriptions; to avert spam attacks; and to translate documents and forms into foreign languages. PEOs should therefore be aware of what AI tools can do to assist their clients, but they should also recognize that this is an emerging technology still subject to flaws.
However, using AI in the workplace carries pitfalls and risks. A recent survey by the American Psychological Association found that 38% of U.S. workers are concerned that AI will replace them and make their jobs obsolete.2 Concerned workers may also bring discrimination and other claims against an employer that uses AI.
Specifically, AI raises questions about whether programs created by humans inherit their creators’ flaws and biases. The use of AI in the workplace is therefore ripe for employee claims under such laws as Title VII of the Civil Rights Act (Title VII), the Age Discrimination in Employment Act, the Americans with Disabilities Act (ADA), and their state law counterparts.
In 2021, the Equal Employment Opportunity Commission (EEOC) formed an initiative to address AI in employment decisions and, as part of the initiative, pledged to take a number of actions.
Then, on May 18, 2023, the EEOC issued technical guidance on the use of AI to assess job applicants and employees under Title VII. In short, AI tools can violate Title VII under a disparate impact analysis, which asks whether persons in protected classes (e.g., race, sex, or national origin) are selected at disproportionately lower rates than those outside the protected classes.
Further, EEOC Chair Charlotte Burrows is on record as saying that more than 80% of employers use AI in some form in their work and employment decision-making. Given that volume, the EEOC is likely to focus on AI-related discrimination in employment.
Note that the EEOC evaluates disparate impact discrimination using the “four-fifths rule” set out in 29 C.F.R. § 1607.4(D): “a selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) of the rate for the group with the highest rate will generally be regarded by the Federal enforcement agencies as evidence of adverse impact, while a greater than four-fifths rate will generally not be regarded by Federal enforcement agencies as evidence of adverse impact.” The EEOC guidance illustrates the rule with the following example: if an algorithm used for a personality test selects Black applicants at a rate of 30% and white applicants at a rate of 60%, the ratio of the two rates is 30/60 = 50%. Because 50% is lower than four-fifths (80%) of the rate at which white applicants were selected, the result suggests disparate impact discrimination.
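The arithmetic behind the four-fifths rule can be sketched in a few lines of Python. This is an illustrative calculation only, not an EEOC tool; the function name and input data are hypothetical, using the selection rates from the EEOC’s example above.

```python
def four_fifths_check(selection_rates):
    """Compare each group's selection rate to 4/5 of the highest group's rate.

    selection_rates: dict mapping group name -> selection rate (0.0 to 1.0).
    Returns, per group, its impact ratio (rate / highest rate) and whether
    that ratio falls below the four-fifths (80%) threshold.
    """
    highest = max(selection_rates.values())
    return {
        group: {
            "ratio": rate / highest,
            "possible_adverse_impact": rate / highest < 0.8,
        }
        for group, rate in selection_rates.items()
    }

# The EEOC guidance example: Black applicants selected at 30%, white at 60%.
result = four_fifths_check({"Black": 0.30, "white": 0.60})
print(result["Black"])   # ratio 0.5 -> below 0.8, flagged as possible adverse impact
print(result["white"])   # ratio 1.0 -> not flagged
```

Note that a ratio below four-fifths is only “generally” regarded as evidence of adverse impact under the regulation; it is a screening heuristic, not a conclusive legal finding.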
One example of possible AI bias comes from the EEOC’s lawsuit against a company that used AI to screen job candidates (EEOC v. iTutorGroup, Inc., et al., Civil Action No. 1:22-cv-02565, U.S. District Court for the Eastern District of New York). The company paid $365,000 to settle the suit, in which the EEOC alleged age discrimination: the screening software disqualified more than 200 applicants, women over the age of 55 and men over 60.3 One applicant who was rejected resubmitted her application with a more recent birth date but otherwise identical information. When she presented as being younger, she was offered an interview.
Other agencies have addressed AI in the workplace as well. The Department of Justice, for example, has posted its own guidance on AI-related disability discrimination and how the use of AI could violate the ADA.
On the state level, Illinois led the way in 2019 with one of the first AI workplace laws, the Artificial Intelligence Video Interview Act, which regulates employers that use AI to analyze video interviews of applicants for Illinois-based positions. Covered employers must make certain disclosures and obtain applicants’ consent before using AI-enabled video interviews. And if an employer relies solely on AI to make certain interview decisions, it must collect applicant demographic data, including race and ethnicity, and report that data to the state annually so the state can evaluate whether the use of AI produced racial bias.
Maryland followed in 2020 with a law that restricts employers from using facial recognition services during preemployment interviews unless the applicant consents.
Takeaways:
- Employers increasingly use AI to recruit, screen, and manage workers, and that use can give rise to discrimination claims under Title VII, the Age Discrimination in Employment Act, the ADA, and state law counterparts.
- The EEOC applies the four-fifths rule to assess whether an AI selection tool has a disparate impact on a protected group.
- States such as Illinois and Maryland already regulate AI-driven hiring tools, so PEOs should help clients track applicable disclosure, consent, and reporting requirements.
This article is designed to give general and timely information about the subjects covered. It is not intended as legal advice or assistance with individual problems. Readers should consult competent counsel of their own choosing about how the matters relate to their own affairs.
REFERENCES