Artificial intelligence (AI) has been heralded as a revolutionary technology for employers. Proponents say it can help organizations hire more efficiently, communicate better, and save time and money on arduous and error-prone tasks.
Indeed, all these benefits are in the offing under the right circumstances. However, according to two federal agencies in recently released guidance, there’s a major risk to using AI when hiring and managing employees: violating the Americans with Disabilities Act (ADA).
Various technologies
Generally, AI refers to using computers to perform complex tasks typically thought to require human intelligence. Examples include image perception, voice recognition, decision-making and problem-solving.
AI can take several forms. One is machine learning, which applies statistical techniques to improve machines’ performance of specific tasks over time with little or no programming or human intervention. Another is natural language processing, which uses algorithms to analyze unstructured human language in documents, emails, texts, instant messages and conversation. And a third is robotic process automation, which automates time-consuming repetitive manual tasks that don’t require decision-making.
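To make the machine-learning idea above concrete, here is a deliberately simplified, hypothetical sketch. Rather than a hand-coded rule, the program derives a decision rule from labeled examples (here, a score cutoff learned from toy historical hiring data); the data, function name, and cutoff logic are all illustrative assumptions, not part of any real hiring product.

```python
# Minimal sketch of "machine learning": instead of a programmer writing
# the rule, the program infers one from labeled examples. Here it learns
# an assessment-score cutoff from (hypothetical) past hiring decisions.

def learn_cutoff(examples):
    """Pick the score cutoff that best separates past hires from non-hires."""
    best_cutoff, best_correct = 0, -1
    for cutoff in sorted({score for score, _ in examples}):
        # Count how many past decisions this cutoff would reproduce.
        correct = sum((score >= cutoff) == hired for score, hired in examples)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

# Toy historical data: (assessment score, was the person hired?)
history = [(40, False), (55, False), (60, True), (75, True), (80, True)]
print(learn_cutoff(history))  # learned threshold: 60
```

Note the double edge this article turns on: the learned rule simply reproduces whatever patterns exist in the historical data, so any past bias in those decisions is absorbed into the rule with no human ever writing it down.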
Agencies’ message
The two federal agencies warning employers about AI’s dangers in an HR context are the U.S. Equal Employment Opportunity Commission (EEOC) and the U.S. Department of Justice (DOJ).
Each issued separate guidance addressing the same fundamental problem: using AI — most notably, machine learning and algorithm-based tools such as natural language processing — can inadvertently lead to discrimination against people with disabilities. The specific employment areas mentioned include:
- Hiring new employees,
- Monitoring job performance,
- Determining pay, and
- Establishing terms and conditions of employment.
The EEOC’s guidance, titled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” points to three of the most typical ways that use of AI tools could lead to an ADA violation:
- An employer might fail to provide the reasonable accommodations necessary for an applicant or employee to receive a fair and accurate rating from an AI-based algorithm.
- An AI tool could reject applicants with disabilities even if they’re capable of performing a given job with one or more reasonable accommodations.
- An employer might unknowingly implement an AI solution in a manner that violates the ADA’s restrictions on medical examinations and inquiries related to disabilities.
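The second risk above can be illustrated with a hypothetical, deliberately simplified screening rule. Assume an automated assessment that scores applicants partly on raw typing speed; the function, threshold, and scenario below are invented for illustration only.

```python
# Hypothetical automated screening rule of the kind the guidance warns
# about: the tool gates applicants on raw typing speed, a proxy that can
# screen out a qualified candidate with a motor disability who could meet
# the job's real requirements with a reasonable accommodation (e.g.,
# speech-to-text software).

def screen_applicant(words_per_minute, passed_skills_test, cutoff_wpm=50):
    """Return True if the automated tool advances the applicant."""
    return passed_skills_test and words_per_minute >= cutoff_wpm

# Applicant A: types quickly and passes the skills test -- advances.
print(screen_applicant(70, True))   # True

# Applicant B: passes the same skills test and could reach full speed
# using dictation software, but the tool measures only unassisted typing
# and rejects them -- the ADA risk the EEOC describes.
print(screen_applicant(30, True))   # False
```

The point of the sketch is that the rule is facially neutral yet still excludes someone capable of performing the job with an accommodation, which is precisely why the agencies urge employers to audit such tools rather than rely on them blindly.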
The DOJ’s guidance, titled “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring,” mirrors the EEOC’s concerns. Upon release of the document, DOJ Assistant Attorney General Kristen Clarke said the two agencies are “sounding the alarm” on employers’ “blind reliance” on AI tools that could make it harder for people with disabilities to find jobs and thrive in an employment setting — and that this advanced technology could ultimately result in an ADA violation.
Now’s the time
If your organization has been using AI in its hiring and performance management processes, now would be a good time to review that technology for potential flaws and risks. Keep these dangers in mind when choosing and implementing AI solutions as well. To read the full text of the EEOC’s guidance, click here. And for the full text of the DOJ’s guidance, click here.
© 2022
---
The information contained in the Knowledge Center is intended solely to provide general guidance on matters of interest for the personal use of the reader, who accepts full responsibility for its use. In no event will CST or its partners, employees or agents, be liable to you or anyone else for any decision made or action taken in reliance on the information in this Knowledge Center or for any consequential, special or similar damages, even if advised of the possibility of such damages.