WORKPLACE CHALLENGES IN USING ARTIFICIAL INTELLIGENCE

BY Gordon M. Berger, Esq.

Partner
FisherBroyles, LLP

November 2023

Many of us have heard of artificial intelligence (AI) programs such as OpenAI’s ChatGPT and Google’s Bard. Maybe you have even played around with this technology. What is AI? John McCarthy, the computer scientist who coined the term, defined AI in a paper for Stanford University as follows: “It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”1

Because employers are increasingly using AI in the workplace, PEOs should be aware of what AI tools can do to assist their clients, but should also recognize that this is an emerging technology still subject to flaws. Employers use AI to locate, recruit, evaluate, and communicate with job applicants. They also use AI to assist employees with benefits and benefits enrollment, to provide training, to write job descriptions, to avert spam attacks, and to translate documents and forms into foreign languages.

However, there are pitfalls and risks to using AI in the workplace. A recent survey by the American Psychological Association found that 38% of U.S. workers worry that AI might make some or all of their job duties obsolete.2 Workers also may bring discrimination and other claims against employers that use AI.

Specifically, AI raises questions about whether programs created by humans inherit the flaws and biases of their creators. As a result, the use of AI in the workplace is ripe for employee claims under laws such as Title VII of the Civil Rights Act (Title VII), the Age Discrimination in Employment Act, the Americans with Disabilities Act (ADA), and their state law counterparts.

In 2021, the Equal Employment Opportunity Commission (EEOC) formed an initiative to address AI. As part of the initiative, the EEOC pledged to:

  • Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.
  • Identify promising practices.
  • Hold listening sessions with key stakeholders about algorithmic tools and their employment ramifications.
  • Gather information about the adoption, design, and impact of hiring and other employment-related technologies.

Then, on May 18, 2023, the EEOC issued technical guidance on the use of AI to assess job applicants and employees under Title VII. In short, AI tools can violate Title VII under a disparate impact analysis, which looks at whether persons in a protected class (e.g., a particular race, sex, or national origin) are selected at disproportionately lower rates than persons outside of that class.

Further, EEOC Chair Charlotte Burrows is on record as saying that more than 80% of employers are using AI in some form in their work and employment decision-making. Given that volume, the EEOC can be expected to focus on AI-related discrimination in employment.

Note that the EEOC looks at disparate impact discrimination by using the “four-fifths rule” set out in 29 C.F.R. § 1607.4(D). Under the four-fifths rule, “a selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) of the rate for the group with the highest rate will generally be regarded by the Federal enforcement agencies as evidence of adverse impact, while a greater than four-fifths rate will generally not be regarded by Federal enforcement agencies as evidence of adverse impact.” The EEOC guidance illustrates the rule with the following example: if an algorithm used for a personality test selects Black applicants at a rate of 30% and white applicants at a rate of 60%, the ratio of the two selection rates is 30/60, or 50%. Because that ratio is below the four-fifths (80%) threshold, the result suggests disparate impact discrimination against Black applicants.
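
To make the four-fifths arithmetic concrete, the short Python sketch below reproduces the EEOC’s personality-test example. The function and variable names are illustrative only; they are not drawn from any EEOC tool or regulation.

    # Illustrative sketch of the four-fifths rule arithmetic described above.
    # The selection rates mirror the EEOC's personality-test example; the
    # function and variable names are hypothetical, not from any EEOC tool.

    def adverse_impact_ratio(group_rate: float, highest_group_rate: float) -> float:
        """Ratio of a group's selection rate to the highest group's selection rate."""
        return group_rate / highest_group_rate

    # EEOC example: the algorithm selects Black applicants at 30% and
    # white applicants at 60%.
    black_selection_rate = 0.30
    white_selection_rate = 0.60

    ratio = adverse_impact_ratio(black_selection_rate, white_selection_rate)
    print(f"Selection-rate ratio: {ratio:.0%}")  # prints "Selection-rate ratio: 50%"

    # Under 29 C.F.R. § 1607.4(D), a ratio below four-fifths (80%) is generally
    # regarded as evidence of adverse impact.
    if ratio < 0.80:
        print("Below the four-fifths threshold: evidence of possible adverse impact.")
    else:
        print("At or above the four-fifths threshold.")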

One example of possible AI bias comes from an EEOC lawsuit against a company that used AI to screen job candidates (EEOC v. iTutorGroup, Inc., et al., Civil Action No. 1:22-cv-02565, U.S. District Court for the Eastern District of New York). The company paid $365,000 to settle the lawsuit, in which the EEOC alleged age discrimination because the company’s application software automatically rejected more than 200 qualified applicants: women aged 55 or older and men aged 60 or older.3 One rejected applicant resubmitted a job application with a more recent birth date; the rest of the information was identical to the original (rejected) application. She was offered an interview when she presented as being younger.

Other agencies have also addressed AI in the workplace. The Department of Justice, for example, has posted its own guidance on AI-related disability discrimination and how the use of AI could violate the ADA.

On the state level, Illinois led the way in 2019 with one of the first AI workplace laws, the Artificial Intelligence Video Interview Act, which regulates employers that use AI to analyze video interviews of applicants for positions based in Illinois. Covered employers must make certain disclosures and obtain consent from applicants before using AI-enabled video interviews. In addition, employers that rely solely on AI to make certain interview decisions must collect applicant demographic data, including race and ethnicity, and report it annually to the state so it can evaluate whether the use of AI resulted in racial bias.

Maryland followed in 2020 with a law that prohibits employers from using facial recognition services during preemployment interviews unless the applicant consents.

Takeaways:

  • AI should not be relied on exclusively for employment decisions. If AI is making hiring or termination decisions, management or HR should still review those decisions and ensure they are not made for an unlawful purpose (i.e., not based on a protected classification such as race, religion, age, gender, or disability).
  • AI is known to make up, or “hallucinate,” information. You may have heard about a law firm that used AI for legal research; the AI provided citations to case law that did not exist (i.e., fictitious cases), and the court sanctioned the firm.
  • Employers are not excused from complying with the law if AI gets it wrong. Per the EEOC, employers may be held responsible under Title VII for the use of “algorithmic decision-making tools even if the tools are designed or administered by another entity, such as a software vendor.”4
  • Check state and local law for AI-specific requirements. For instance, New York City prohibits employers from using automated tools to make certain employment decisions unless the tool has undergone a bias audit and notice has been given to employees or candidates who live in the City.5 And California Gov. Gavin Newsom recently issued an executive order on AI that, among other things, directs a review of the potential threats to and vulnerabilities of California’s critical energy infrastructure posed by generative AI (GenAI).

 

This article is designed to give general and timely information about the subjects covered. It is not intended as legal advice or assistance with individual problems. Readers should consult competent counsel of their own choosing about how the matters relate to their own affairs.

REFERENCES

  1. www-formal.stanford.edu/jmc/whatisai.pdf
  2. https://www.apa.org/pubs/reports/work-in-america/2023-work-america-ai-monitoring
  3. https://www.eeoc.gov/newsroom/eeoc-sues-itutorgroup-age-discrimination
  4. U.S. Equal Emp. Opportunity Comm’n, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, EEOC, https://www.eeoc.gov/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence-used
  5. N.Y.C. Local Law No. 144 (2021); N.Y.C. Admin. Code § 20-871 (2023)
