University of the District of Columbia Law Review
Abstract
This article argues that as AI systems increasingly influence critical life outcomes, courts must adapt existing legal frameworks to address disparate impact claims arising from AI-driven processes, particularly in employment practices. Doing so will provide redress for individuals harmed by AI-driven processes that discriminate without explicit intent and will hold organizations accountable for producing transparent and accurate AI-driven employment processes.
This article is divided into three parts. Part I defines commonly used AI terminology and provides an overview of the factual and legal history of disparate impact in employment practices, highlighting its significance and traditional application. Part II examines the current legal landscape for disparate impact claims against AI-driven processes, discussing relevant case law, the challenges plaintiffs face, and judicial approaches to proving disparate impact in AI systems. Finally, Part III presents recommendations for legal reforms to enhance AI accountability and proposes regulatory standards to help courts and litigants address employment disparate impact claims involving AI.
Recommended Citation
Jasmine Wallace, Unmasking the Algorithm: Addressing Bias and Accountability in AI-Driven Employment Practices, 28 U.D.C. L. Rev. (2025).
Available at: https://digitalcommons.law.udc.edu/udclr/vol28/iss1/18