Squire Patton Boggs Intern Ruzanna Mirzoyan discusses the EEOC’s focus on artificial intelligence tools in recruitment and hiring decisions.

Job applicants might be surprised to learn that their resume may need to impress an artificial intelligence (“AI”) algorithm before they can score an interview. A significant (and growing) number of employers currently use AI during the hiring and recruiting process. According to a February 2022 survey from the Society for Human Resource Management, about one in four employers use AI or an automated algorithm to help with hiring and recruiting decisions, with larger employers more likely to use AI tools in their hiring process.

Employers rely on various talent acquisition tools, ranging from resume sorting to skills and behavioral assessments. AI screening tools use techniques such as machine learning and natural language processing to efficiently extract information from resumes and other application materials. AI can also be used to analyze video interviews, assessing a candidate’s voice and facial expressions to evaluate personality traits. AI allows employers to expedite the hiring process and decrease the costs associated with talent acquisition.
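To make the mechanism concrete, below is a minimal, hypothetical sketch of the kind of keyword-based scoring a resume screening tool might perform. Real commercial tools are far more sophisticated; every skill, weight, and threshold here is an invented assumption, not any vendor’s actual method.

```python
# A minimal, hypothetical sketch of keyword-based resume screening.
# The skills, weights, and threshold are illustrative assumptions,
# not any vendor's actual algorithm.
import re

REQUIRED_SKILLS = {"python": 2.0, "sql": 1.5, "project management": 1.0}
INTERVIEW_THRESHOLD = 2.5  # assumed cutoff for advancing a candidate


def score_resume(text: str) -> float:
    """Return a crude relevance score from weighted keyword matches."""
    text = text.lower()
    return sum(
        weight
        for skill, weight in REQUIRED_SKILLS.items()
        if re.search(r"\b" + re.escape(skill) + r"\b", text)
    )


def screen(resumes: dict[str, str]) -> list[str]:
    """Return the applicants whose resumes meet the assumed threshold."""
    return [
        name
        for name, text in resumes.items()
        if score_resume(text) >= INTERVIEW_THRESHOLD
    ]
```

Even a filter this simple illustrates the risk regulators are focused on: an applicant who describes the same skills in different words, or whose resume reflects a disability-related gap, can be screened out before any human reviewer sees it.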

Although AI is a cost-effective way to screen a large volume of applications, the Equal Employment Opportunity Commission (“EEOC”) has expressed concern that AI algorithms can embed discriminatory hiring practices. That concern surfaced in 2021, when the EEOC launched an agency-wide initiative to ensure that AI software complies with federal civil rights laws. The initiative examines how technology is changing the way employment decisions are made, and guides applicants, employees, employers, and AI vendors toward the fair use of AI technologies.

In May 2022, in conjunction with the Department of Justice, the EEOC issued guidance placing the burden on employers to do their due diligence in vetting AI tools, monitoring for AI disability bias, and preparing reasonable accommodations for applicants unable to use AI-interfaced recruiting platforms. (See our previous post.) The guidance explains that an employer can violate the Americans with Disabilities Act (“ADA”) if an AI algorithm the employer uses screens out an applicant with a covered disability. For example, if a chatbot rejects applicants with employment gaps – gaps that may be due to a covered disability – it could result in an ADA violation. Other practices, such as personality tests and camera sensors, can also introduce bias against individuals with disabilities. Consequently, the guidance encourages employers to perform an independent audit of their AI tools.

Recently, on January 10, 2023, the EEOC published a draft of its Strategic Enforcement Plan (“SEP”) for Fiscal Years 2023-2027, detailing the agency’s enforcement priorities for the coming years. The SEP is directed toward eliminating barriers in talent acquisition for religious groups, LGBTQI+ applicants, older workers, and those with disabilities. It also aims to protect vulnerable and underserved workers who may be unaware of, or reluctant to exercise, their rights. Other objectives include equal pay and preserving access to the legal system. One of the SEP’s main goals, however, is to monitor the use of AI, and on January 31, 2023, the EEOC held a public hearing discussing how AI can hinder or support diversity, equity, inclusion, and accessibility in the workplace.

The Main Concerns with AI Use

One of the SEP’s main concerns is eliminating artificial and unintentional barriers created by AI algorithms during hiring and recruiting. The EEOC notes three ways that discrimination can occur in employment decisions.

  1. The first is the use of AI to target job postings and advertisements in ways that exclude or negatively affect protected groups. For example, the EEOC recently filed suit against an English language tutoring service for age discrimination, alleging that the employer’s AI algorithm automatically rejected female applicants aged 55 or older and male applicants aged 60 or older.
  2. Second, the use of AI can hinder and restrict the application process, especially with online systems, which can be hard to access for those with intellectual or developmental disabilities or limited English proficiency. Thus, one of the EEOC’s goals is providing technical assistance. The EEOC further notes three instances in which liability under the ADA can arise from AI use:
    • when an employer does not provide a reasonable accommodation to fairly and accurately rate an individual;
    • when the AI tool screens out an individual with a disability, overlooking the applicant’s ability to perform the essential functions of the job with a reasonable accommodation; and
    • when the AI tool violates the ADA’s limits on questions regarding disability and medical examinations.
  3. Lastly, the use of AI may disproportionately impact prospective and current employees based on their protected status. Courts have long held under Title VII, the Age Discrimination in Employment Act, and the ADA that screening procedures such as pre-employment tests, interviews, and promotion tests can have a disparate impact on applicants based on their protected status. (A simple illustration of how an employer might test for such disparate impact appears after this list.)
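For employers auditing an AI tool, one widely used starting point is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that falls below 80% of the rate for the highest-selected group is generally regarded as evidence of adverse impact. The sketch below applies that rule; the group labels and pass/fail counts are invented for illustration.

```python
# Hypothetical adverse-impact check using the "four-fifths rule" from the
# EEOC Uniform Guidelines on Employee Selection Procedures. A group's
# selection rate below 80% of the highest group's rate is generally
# regarded as evidence of adverse impact. All counts are invented.

outcomes = {
    "group_a": {"selected": 48, "applicants": 100},
    "group_b": {"selected": 30, "applicants": 100},
}

# Selection rate for each group, and the highest rate as the benchmark.
rates = {g: o["selected"] / o["applicants"] for g, o in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

A check along these lines is a screening heuristic, not a legal safe harbor: statistical significance, sample size, and the job-relatedness of the selection procedure all matter in a full disparate impact analysis.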

Logistics and Implications of the SEP

The EEOC suggests that it will enforce anti-discrimination laws equally, regardless of whether a claim arises from an algorithm or a human. Thus, if an algorithm exhibits discriminatory patterns, the employer will nonetheless be held responsible under the ADA, even when a third party codes and runs the AI technology. Further, the SEP is broad in scope and discusses a variety of potentially problematic tools, including “automatic resume screening software, hiring software, chatbot software for hiring workflow, video interviewing, employee monitoring software, and worker management software.” The SEP additionally suggests that employers could be liable if their AI vendor fails to provide accommodations while administering the AI tool.

The EEOC’s enforcement plan is built on communication and collaboration between its headquarters and enforcement units. To encourage compliance and make information clear and accessible, the EEOC will also implement educational and outreach activities. One crucial enforcement obstacle for the EEOC, however, is identifying past instances of AI discrimination, mainly because no federal provisions obligate employers to disclose their use of AI technology. Congress may eventually pass laws requiring employers to disclose their AI use, but such legislation could take years.

In the interim, some states and cities have enacted their own laws governing AI use in employment. In 2019, the Illinois legislature passed the Artificial Intelligence Video Interview Act, which requires employers to notify applicants for Illinois-based positions of the employer’s intent to electronically analyze video interviews, explain how the AI technology works, and obtain the applicant’s consent to the procedure. Additionally, the City of New York will begin enforcing its AI law – Local Law No. 144 – on April 15, 2023. The law limits the use of broad-spectrum AI tools to review, rank, select, or reject applicants for employment or advancement. Under Local Law 144, employers are required to do the following:

  1. perform independent audits of the AI tool;
  2. notify candidates before subjecting them to the tool;
  3. inform candidates of the job qualifications and characteristics used by the tool;
  4. disclose the type of data and retention policy used by the tool; and
  5. allow candidates to opt out and request an alternative selection process or an accommodation.

Local Law 144 imposes civil penalties of $500 to $1,500 per violation of any of the above requirements and provides applicants and employees with a private right of action.

Takeaways for Employers

The SEP is currently open for public comment until February 9, 2023, and the final plan is subject to a vote by the EEOC’s Commissioners. Although the SEP is not binding, it is a clear indication of where the EEOC stands on these issues, so employers should take proactive measures to avoid inadvertent violations. Because the EEOC will not differentiate between the sources of discrimination – human or AI – employers cannot shift the blame to an AI algorithm or to a vendor programming and administering the AI tool. An employer could still be held liable under the ADA despite unintentional and unknowing violations.

Overall, guidance on AI use in relation to protected groups remains limited. To help fill that gap, the Institute for Workplace Equality published a report with suggested AI practices for employers. Even absent comprehensive guidance, employers should take a proactive approach to their AI use. One way employers can prevent problems from arising is through open communication with applicants: employers should consider being transparent about their AI use, asking applicants for consent, and continuously auditing their AI technologies for any patterns of discrimination or other artificial barriers. Employers should also stay up to date on new EEOC initiatives and pay close attention to new guidance and enforcement plans.

Accordingly, employers hoping to use or continue relying on AI for employment decisions should understand and manage the risks, both to avoid foreseeable enforcement actions and to prepare for potential future legislation requiring disclosure of AI use. The use of AI in employment is new and developing, so employers with questions about using AI technology for talent acquisition and other employment decisions should consult their employment counsel.