06.07.23
On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) issued technical assistance guidance addressing the potential adverse impact that can result when employers use artificial intelligence (AI) in employment selection procedures. In a question-and-answer format, the EEOC offers insight for employers and software vendors on how Title VII applies to AI technologies used in selection procedures such as hiring, promoting and firing employees. The EEOC makes clear that Title VII and existing EEOC rules and policies apply to the use of these technologies. Indeed, even facially neutral software can violate Title VII if it disparately impacts applicants and/or employees on a basis prohibited by Title VII.
The guidance explains that Title VII applies to AI tools that seem neutral on their face but have the effect of disproportionately excluding people based on characteristics protected under Title VII. More specifically, according to the guidance, if the use of AI causes a substantially lower selection rate for individuals in a protected group compared to other groups, the use of that software will violate Title VII unless the employer can show that the adverse impact is job-related and consistent with business necessity. To determine whether there is a disparate impact, the selection rate of one group must be compared to the selection rate of another group. Selection rate refers to the proportion of applicants or candidates who are hired, promoted or otherwise selected by the automated decision-making software.
The four-fifths rule is a general rule of thumb for determining whether one group’s selection rate is substantially different from another’s. If the ratio of the two groups’ selection rates is less than four-fifths, or 80%, the rates are considered substantially different. That said, the EEOC explains that “the four-fifths rule is merely a rule of thumb,” and it may not be the appropriate test for determining whether a decision-making tool causes a disparate impact. The guidance specifically notes that smaller differences in selection rates can still reveal an adverse impact when AI tools are used to make selections on a large scale.
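To make the arithmetic concrete, the short sketch below computes selection rates and applies the four-fifths rule of thumb. The applicant and selection counts are purely hypothetical illustrations, not figures from the EEOC guidance:

# Illustrative four-fifths rule check; all counts are hypothetical.

def selection_rate(selected, applicants):
    """Selection rate = proportion of applicants who were selected."""
    return selected / applicants

# Hypothetical applicant pools and selections for two groups.
rate_a = selection_rate(selected=48, applicants=80)   # 60%
rate_b = selection_rate(selected=12, applicants=40)   # 30%

# Compare the lower selection rate to the higher one.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)     # 0.50

# Under the rule of thumb, a ratio below 4/5 (80%) suggests the
# selection rates are substantially different.
if ratio < 4 / 5:
    print(f"Ratio {ratio:.0%} is below 80%: possible adverse impact.")
else:
    print(f"Ratio {ratio:.0%} satisfies the four-fifths rule of thumb.")

Here the ratio is 30% divided by 60%, or 50%, well below the 80% threshold, so the rule of thumb would flag a possible adverse impact. As the guidance cautions, though, passing this check is only a screening heuristic, not proof of compliance.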
The EEOC suggests that employers ask vendors whether they relied on the four-fifths rule or on a statistical significance test when assessing whether their software creates an adverse impact based on a characteristic protected under Title VII. Employers should be aware that even when an AI tool satisfies the four-fifths rule, the difference in selection rates can still amount to a disparate impact.
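As one illustration of the kind of statistical significance test a vendor might use, the sketch below applies Fisher’s exact test to the same hypothetical counts. The numbers and the choice of test are assumptions for illustration only; the EEOC does not prescribe a particular test:

# Hypothetical significance test on selection outcomes;
# the counts are illustrative, not from the EEOC guidance.
from scipy.stats import fisher_exact

# 2x2 table: rows are groups, columns are [selected, not selected].
table = [
    [48, 32],   # group A: 48 of 80 applicants selected
    [12, 28],   # group B: 12 of 40 applicants selected
]

# Fisher's exact test asks whether the observed difference in
# selection rates is larger than chance alone would plausibly produce.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.4f}")

A small p-value (for example, below 0.05) indicates a statistically significant difference in selection rates. Because a significance test and the four-fifths rule can reach different conclusions on the same data, it matters which one a vendor used, which is why the EEOC suggests employers ask.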
The EEOC warns that employers can be held responsible for Title VII violations arising from the use of AI tools, even if those tools are designed and administered by a third party. If a software vendor is acting on the employer’s behalf, the employer can be held responsible for a Title VII violation. The EEOC recommends that, when employers are deciding whether to rely on a software vendor to develop or administer an AI tool for selection procedures, those employers should, at a “minimum,” ask what steps the vendor has taken to determine whether the software produces a substantially lower selection rate for individuals with a protected characteristic. Even then, the employer may still be liable if the vendor incorrectly represents that its software will not result in a disparate impact.
Employers therefore need to conduct their own due diligence to ensure they are not in violation of Title VII. When developing or evaluating AI tools, an employer’s failure to adopt a less discriminatory algorithm that was considered during the development process can itself create liability.
Further, the EEOC guidance discusses best practices for avoiding a disparate impact when using AI. The EEOC strongly encourages employers to conduct ongoing self-audits to ensure their employment practices do not violate Title VII and, when warranted, to make necessary changes to their AI technology based on those assessments. In short, the EEOC encourages employers to be proactive and to take steps to reduce any potential disparate impact.
Finally, the EEOC provides some examples of algorithmic decision-making programs that can implicate Title VII. Examples include resume scanners that prioritize certain keywords, employee monitoring software that rates employees based on certain factors, virtual assistants and chatbots that ask job candidates about their qualifications and reject applicants who do not meet predetermined requirements, video interviewing software that evaluates candidates based on their speech and facial expressions, and other software that scores applicants or employees on specific characteristics. When using AI programs similar to these examples, employers should continually assess whether the programs are causing an adverse impact.
Overall, the EEOC’s guidance encourages employers to be proactive when using AI tools to ensure there is no adverse impact on individuals based on a characteristic protected by Title VII, lest the employer be held liable.
Co-author Lee Moylan is chair of the labor & employment practice at Klehr Harrison. Co-author Alyssa Bennett is a co-op intern with the firm from Drexel University’s Thomas R. Kline School of Law.