New EEOC guidance advises employers to ensure that any hiring tools based on algorithms or artificial intelligence (AI) do not negatively impact applicants with disabilities. This obligation includes offering reasonable accommodations to applicants in hiring practices that incorporate AI or algorithmic decision-making.

Many employers use AI-based hiring tools such as “chatbots,” scanners that evaluate resumes based on keywords, video interviewing software that evaluates applicant performance, or testing and monitoring software that measures desired characteristics or skills. The guidance warns of three common applications of these tools that could violate the Americans with Disabilities Act:

  • Failing to provide a reasonable accommodation necessary for an applicant to be evaluated fairly by an algorithm or AI-based tool.
  • Using a decision-making tool that “screens out” an individual with a disability by preventing the applicant from meeting selection criteria due to a disability.
  • Using a decision-making tool that incorporates disability-related inquiries or medical examinations.

Employers are responsible for vetting potential bias in AI-based hiring tools—even if the software is provided by a vendor. The EEOC’s guidance provides questions employers should ask vendors in assessing whether their software is compliant with the ADA.

The guidance provides other practical steps for reducing the chances that algorithmic decision-making will screen out an individual because of a disability, including:

  • Informing applicants that reasonable accommodations are available.
  • Providing alternative testing or evaluation if an applicant has previously scored poorly due to a disability.
  • Providing applicants information about algorithmic hiring tools that are used, including the traits or characteristics measured and any disabilities that may negatively impact an applicant’s result.
  • Basing traits and characteristics that are evaluated by AI on necessary job qualifications.
  • Selecting algorithmic evaluation tools designed with accessibility for individuals with disabilities in mind.

The guidance is part of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative, a new agency-wide program that is holding listening sessions, identifying best practices, and issuing guidance to assist employers. The EEOC also recently took up a case involving discriminatory hiring software programmed to automatically reject applicants over a certain age. Although that case involved the intentional use of hiring technology to discriminate, it signals the EEOC’s increased scrutiny of these tools and their effects on applicants.

The Federal Trade Commission, which monitors companies for unfair or deceptive business practices, has also recently aimed a spotlight on AI. In April 2021, the agency released informal guidance advising companies to be aware of potential bias when using algorithmic decision-making software. In December 2021, the FTC announced its desire to pass rules ensuring that algorithmic decision-making does not result in unlawful discrimination. In recent years, the FTC has also handled several complaints regarding the unfair use of AI and algorithmic tools, including one related specifically to hiring.

State and local governments have been more active in addressing the potential for bias when using AI. The 2020 Illinois Artificial Intelligence Video Interview Act already reflects some of the best practices contained in the EEOC’s guidance, including requiring employers to notify applicants if AI is used to analyze video-recorded interviews and to obtain their consent in advance. Employers must also describe how the AI technology works and what characteristics it will measure. A recent amendment now requires employers who use AI-based video interview analysis to report demographic information to the state, where it will be evaluated for potential bias. Maryland passed a similar law in 2020 that prohibits the use of facial recognition technology during the hiring process without the applicant’s consent. Effective January 1, 2023, New York City employers will be prohibited from using automated decision-making tools to screen applicants and evaluate employees unless the tool has undergone a “bias audit.” The use of facial recognition technology and other tools involving an applicant’s biometric characteristics is also regulated by state laws in Illinois, Texas, and Washington.

While the EEOC’s new guidance is a big step toward helping employers evaluate their hiring practices for potential disability bias, it leaves many issues unaddressed. For example, the guidance recognizes that algorithmic decision-making tools may also negatively affect applicants due to other protected characteristics such as race or sex, but the guidance is limited to disability-related considerations alone. And although the EEOC’s action highlights the potential for disability discrimination related to hiring tools, there are a variety of other employment and privacy laws potentially affecting the use of algorithmic decision-making in the workplace. Additionally, employers should be prepared for more state or local rules regulating the use of AI, facial recognition, and biometrics in the coming years.

For more practical guidance on implementing AI-based technologies in the workplace, see our HR Checklist for Using Artificial Intelligence, Facial Recognition, and Biometrics or contact our Employment & Labor Practice Group or Privacy & Data Security Practice Group.