From Hire to Fire: How AI is Reshaping Civil Liability in the Workplace

Daniel Diez – Being fired by a robot once felt like something out of a movie. Now, it’s business as usual. Artificial intelligence (“AI”) is being used for hiring, evaluation, promotion, demotion, and termination decisions across the United States. This growing reliance on AI for employment decisions has exposed businesses to new forms of civil liability. When AI systems make decisions without any human oversight, both the businesses that use them and the vendors who design them may face liability for the resulting harm. Potential claims range from discrimination and wrongful termination to negligence, product liability, breach of contract, and privacy or ethical violations. Lawyers and judges must prepare to litigate and adjudicate cases in which a robot acts as a human resources manager. Existing liability frameworks, which were built for human judgment, must evolve to address the new reality of the “AI boss.”

AI refers to computer systems that perform tasks requiring human-like intelligence by learning from data, recognizing patterns, and making decisions. It is a powerful tool businesses can use to increase efficiency and productivity, improve quality control and employee safety, and cut labor costs. About 85% of large companies have begun integrating AI systems into their workplaces, and about 88% of companies use AI for initial candidate screening. As AI use in business becomes the norm, more resumes are being scanned by algorithms, interviews are being scored by voice-analysis software, and keystrokes and mouse movements are being logged by surveillance tools. Despite its popularity, AI in hiring remains susceptible to bias and inaccuracy. Businesses that delegate hiring and firing decisions to AI systems invite a new generation of discrimination and wrongful termination claims.

AI systems used in employment decisions can undermine employees’ rights when they rely on biased input data without meaningful human oversight, potentially replicating discriminatory outcomes. Affected plaintiffs may sue under a disparate-impact theory of discrimination, claiming that a facially neutral policy disproportionately harmed a protected group, even absent discriminatory intent.

In EEOC v. iTutorGroup, Inc. et al. (2023), the Equal Employment Opportunity Commission (“EEOC”) alleged that a tutoring company’s AI hiring software automatically rejected older job applicants in violation of the Age Discrimination in Employment Act (“ADEA”), marking the agency’s first lawsuit involving discriminatory use of AI in the workplace. Essentially, the tutoring company’s robot was a “smart filter” that systematically excluded certain resumes. The company settled for $365,000 and agreed to restrict its use of automated hiring tools.

While iTutorGroup involved a direct enforcement action, private plaintiffs are also turning to the courts to challenge AI-based hiring practices. In Mobley v. Workday, Inc. (2024), a man over forty sued a human resources platform after submitting more than one hundred applications through Workday-powered systems and being rejected each time. He alleged that the AI screening tools relied on biased training data and discriminated against him and similarly situated applicants on the basis of race, age, and disability. The court allowed the claim to proceed as a nationwide collective action under a disparate-impact theory, highlighting the greater potential for liability when protected groups are affected.

While AI-based hiring can lead to discrimination claims, AI-based firing can lead to wrongful termination claims. When an employee is fired by a robot, the law must require meaningful human involvement to ensure the fairness and accuracy of AI-driven employment decisions.

The 2021 termination of an Amazon Flex driver who was “abruptly deactivated by an automated system after years of high performance” shows the risks posed by a lack of human oversight in AI management. When the worker complained, he encountered the same automated indifference that had fired him: generic, machine-generated replies. Though only one example, the episode illustrates the serious risks and unfair consequences that can flow from delegating termination decisions to AI. As employees are increasingly monitored, scored, and removed from their work platforms without meaningful oversight or recourse, the law must build on fundamental legal principles, with an emphasis on ethics, transparency, and human involvement, to ensure that automated decisions are fair.

Lawmakers are already moving in that direction. The No Robot Bosses Act of 2024, proposed federal legislation, would prohibit employers from relying exclusively on AI for employment decisions. It would also require pre-deployment and periodic testing, training and human oversight, and timely disclosures from employers about their use of AI. State and local governments are also moving quickly. New York City’s Local Law 144, for instance, prohibits employers from using AI for certain employment decisions unless the tool has undergone a bias audit within the past year. In California, the Civil Rights Council’s 2025 regulations expand employer liability for discriminatory AI use by broadening who counts as an “agent,” extending record-keeping requirements, and emphasizing the evidentiary value of employers’ proactive anti-bias testing. A legislative survey conducted by Thomson Reuters shows that many state measures focus on ensuring human oversight, including mandatory human review and prohibitions on allowing AI to serve as the sole decision-maker.

These emerging legal frameworks are setting the stage for future claims involving AI in the workplace. And while these initiatives show progress, businesses still bear the ultimate responsibility to use robots ethically and to scrutinize the data their systems rely on. Many proposed regulations do not create new obligations; they simply clarify that existing employment laws already apply when employers delegate decisions to AI. Employers are therefore expected to monitor, audit, and validate AI systems to prevent adverse employment actions that lack genuine human judgment. Regulation is not a substitute for responsibility. Businesses must implement clear oversight frameworks to ensure that AI remains a tool for decision support, not an unchecked decision-maker.

As the “AI boss” takes shape, courts will be tasked with applying laws that did not contemplate businesses managed by robots. New legal and ethical questions will arise, and civil liability frameworks must continue to evolve to provide answers. Judges and lawyers will soon have to stretch old doctrines to reach new actors: robots, their creators, and employers who rely on them.