Tech Law & Accountability

Corporate Crime in the Algorithm Age: When Tech Companies Break the Law—and Call It Innovation

Published Jan 2026 • 10 min read

Algorithms now shape nearly every area of our lives, from job and mortgage qualification to the hyper-targeted ads we see. As businesses increasingly rely on these systems to drive decisions, regulators have begun to find that many technology companies, knowingly or not, have crossed ethical and legal lines.

The Case of Implicit Bias: Amazon's discontinued algorithmic hiring tool illustrates the danger. Trained on historical hiring data that skewed heavily male, the system learned to penalize resumes associated with women, prompting public outcry and leading Amazon to shelve the program.
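The mechanism is simple enough to sketch. The toy scorer below (entirely hypothetical data and keywords, not Amazon's actual system) learns keyword weights from past hiring outcomes; because the historical outcomes are skewed, a gender-correlated keyword inherits a negative weight and drags down otherwise comparable candidates.

```python
# Toy illustration with invented data: a scorer that learns keyword
# weights from past hiring outcomes reproduces the bias in that history.
from collections import Counter

# Hypothetical historical records: (resume keywords, past outcome).
past_resumes = [
    ({"java", "golf"}, "hired"),
    ({"java", "football"}, "hired"),
    ({"python", "golf"}, "hired"),
    ({"python", "womens_chess_club"}, "rejected"),
    ({"java", "womens_chess_club"}, "rejected"),
]

def learn_weights(history):
    """Weight each keyword by (times seen in hires) - (times seen in rejections)."""
    weights = Counter()
    for keywords, outcome in history:
        delta = 1 if outcome == "hired" else -1
        for kw in keywords:
            weights[kw] += delta
    return weights

def score(resume, weights):
    """Score a new resume by summing its learned keyword weights."""
    return sum(weights[kw] for kw in resume)

weights = learn_weights(past_resumes)

# The gender-correlated keyword picks up a negative weight purely from
# skewed historical outcomes -- no one programmed the bias explicitly.
print(weights["womens_chess_club"])                      # -2
print(score({"python", "womens_chess_club"}, weights))   # 0 + (-2) = -2
```

No individual rule here mentions gender; the discrimination emerges from the training data, which is exactly why "the algorithm did it" is a weak legal defense.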

In recent years, AI developers have faced legal challenges over alleged discrimination against protected groups. Federal courts have started to recognize this trend, notably in Mobley v. Workday, which potentially opens the door to liability for both software vendors and their clients for the consequences of biased algorithmic decisions.

"Treating automated unethical behavior as corporate unethical behavior."

Data Scraping and Biometric Settlements

New methods of collecting data for machine learning have drawn significant regulatory scrutiny. Clearview AI, which built a database of 20 billion images by scraping the internet without consent, was fined £7.5 million by the U.K. Information Commissioner's Office (ICO). In the U.S., settlements are reaching historic levels: Meta reached a $1.4 billion settlement with Texas over unauthorized biometric tagging, while Clearview reached a $50 million settlement for violating Illinois' Biometric Information Privacy Act (BIPA).

Economic Harm: The New Antitrust Frontier

The Department of Justice has sued RealPage, alleging that its software coordinated rental prices among competing landlords. The case illustrates that algorithms are not just tools for individual discrimination, but can serve as mechanisms for "hub-and-spoke" price-fixing schemes that inflict widespread economic harm on consumers.
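The "hub-and-spoke" pattern can be sketched in a few lines. The example below is a deliberately simplified hypothetical (invented numbers, not RealPage's actual method): competing landlords each feed private pricing data to one shared algorithm, which returns the same above-average recommendation to everyone, so prices can converge upward without any direct contact between competitors.

```python
# Hypothetical "hub-and-spoke" sketch: the shared algorithm is the hub,
# the competing landlords are the spokes. All figures are invented.

def hub_recommend(private_rents, markup=1.05):
    """The 'hub': pools competitors' non-public rents and recommends
    that everyone charge a markup over the pooled average."""
    avg = sum(private_rents) / len(private_rents)
    return round(avg * markup, 2)

# Three notionally independent landlords (the 'spokes') submit
# private data they would never share with each other directly.
landlord_rents = [1900, 2000, 2100]

# Each spoke receives the same recommendation derived from rivals'
# confidential prices -- coordination without a phone call.
recommendation = hub_recommend(landlord_rents)
print(recommendation)  # 2100.0
```

The legal theory is that the shared algorithm substitutes for the explicit agreement a classic cartel would require: competitors who would never meet in a room all delegate pricing to the same hub.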

Algorithm Disgorgement: The FTC has introduced a new enforcement remedy requiring companies not only to stop harmful practices but to delete the algorithms and models built on unlawfully collected data.

Innovation or Public Relations?

Tech firms often frame their practices as "innovative" efficiency gains. Yet when the end result is discriminatory hiring or the suppression of market competition, those practices cross from innovation into corporate liability. Critics argue that laws written before the age of widespread algorithmic decision-making must evolve to treat algorithmic harms like traditional forms of liability, much as the law treats pollution or unsafe products.

Regulators and the industry now face a critical balancing act: holding companies accountable without stifling innovation, while crafting laws that properly account for harms to consumers and competitors far greater than originally anticipated.