On Algorithmic Wage Discrimination (2023)

Recent technological developments related to the extraction and processing of data have given rise to concerns about a reduction of privacy in the workplace. For many low-income and subordinated racial minority workforces in the United States, however, on-the-job data collection and algorithmic decisionmaking systems are having a more profound yet overlooked impact: These technologies are fundamentally altering the experience of labor and undermining economic stability and job mobility. Drawing on a multi-year, first-of-its-kind ethnographic study of organizing on-demand workers, this Article examines the historical rupture in wage calculation, coordination, and distribution arising from the logic of informational capitalism: the use of granular data to produce unpredictable, variable, and personalized hourly pay.

The Article constructs a novel framework rooted in worker on-the-job experiences to understand the ascent of digitalized variable pay practices, or the importation of price discrimination from the consumer context to the labor context—what this Article identifies as algorithmic wage discrimination. Across firms, the opaque practices that constitute algorithmic wage discrimination raise fundamental questions about the changing nature of work and its regulation. What makes payment for labor in platform work fair? How does algorithmic wage discrimination affect the experience of work? And how should the law intervene in this moment of rupture? Algorithmic wage discrimination runs afoul of both longstanding precedent on fairness in wage setting and the spirit of equal pay for equal work laws. For workers, these practices produce unsettling moral expectations about work and remuneration. The Article proposes a nonwaivable restriction on these practices.

* Professor of Law, University of California, Irvine; Postdoctoral Fellow, Stanford University; Ph.D. 2014, University of California at Berkeley; J.D. 2006, University of California at Berkeley School of Law; B.A. 2003, Stanford University. I thank Aziza Ahmed, Amna Akbar, Abbye Atkinson, Aslı Bâli, Corinne Blalock, James Brandt, Raúl Carillo, Angela Harris, Amy Kapczynski, K-Sue Park, Fernando Rojas, Karen Tani, and Noah Zatz, all of whom offered comments on an early conceptualization of this Article. I am also grateful to Yochai Benkler, Scott Cummings, Sam Harnett, Sarah Myers West, Aziz Rana, Aaron Shapiro, and Meredith Whittaker, who provided critical feedback on drafts and to the brilliant editors of the Columbia Law Review, especially Zakiya Williams Wells. This Article was written at the Center for Advanced Study in the Behavioral Sciences at Stanford University, where I was a fellow from 2022 to 2023. I dedicate it to John Crew, a wonderful mentor and dear friend whose lifelong dedication to justice and fairness shaped both my understandings of and ways of being in this world and who died while I was writing it.

INTRODUCTION
Over the past two decades, technological developments have ushered in extreme levels of workplace monitoring and surveillance across many sectors.
These automated systems record and quantify workers’ movements and activities, their personal habits and attributes, and even sensitive biometric information about their stress and health levels.
Employers then feed amassed datasets on workers’ lives into machine learning systems to make hiring determinations, to influence behavior, to increase worker productivity, to intuit potential workplace problems (including worker organizing), and, as this Article highlights, to determine worker pay.

To date, policy concerns about growing technological surveillance in the workplace have largely mirrored the apprehensions articulated by consumer advocates. Scholars and advocates have raised concerns about the growing limitations on worker privacy and autonomy, the potential for society-level discrimination to seep into machine learning systems, and a general lack of transparency on workplace rules.
For example, in October 2022, the White House Office of Science and Technology Policy released a legally nonbinding handbook identifying five principles that “should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”
These principles called for automated systems that (1) are safe and effective, (2) protect individuals from discrimination, (3) offer users control over how their data is used, (4) provide notice and explanation that an automated system is being used, and (5) allow users access to a person who can remedy any problems they encounter.
The Blueprint for an AI Bill of Rights (hereinafter Blueprint) specified that these enumerated rights extended to “[e]mployment-related systems [such as] . . . workplace algorithms that inform all aspects of the terms and conditions of employment including, but not limited to, pay or promotion, hiring or termination algorithms, virtual or augmented reality workplace training programs, and electronic workplace surveillance and management systems.”

Under each principle, the Blueprint provides “illustrative examples” of the kinds of harms that the principle is meant to address. One such example, used to specify what defines unsafe and ineffective automation in the workplace, involves an unnamed company that has installed AI-powered cameras in its delivery vans to monitor workers’ driving habits, ostensibly for “safety reasons.” The Blueprint states that the system “incorrectly penalized drivers when other cars cut them off . . . . As a result, drivers were incorrectly ineligible to receive a bonus.”
Thus, the specific harm identified is a mistaken calculation by an automated variable pay system developed by the company.
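
The mechanics of such a failure are easy to sketch. The following minimal illustration, in Python, shows how an event-based scoring system can gate bonus pay; the event labels, penalty weights, and threshold are hypothetical assumptions for exposition, not Amazon’s actual system. A camera that misreads being cut off as “tailgating” mechanically strips eligibility.

```python
# Hypothetical event-based safety scoring gating a bonus.
# All labels, weights, and thresholds are assumed for illustration only.
EVENT_PENALTIES = {
    "hard_braking": 5,
    "tailgating": 10,   # may be logged when another car cuts the driver off
    "distraction": 15,
}

BONUS_THRESHOLD = 90  # assumed minimum score for bonus eligibility


def safety_score(events: list[str]) -> int:
    """Start from a perfect score and subtract a penalty per flagged event."""
    score = 100
    for event in events:
        score -= EVENT_PENALTIES.get(event, 0)
    return max(score, 0)


def bonus_eligible(events: list[str]) -> bool:
    return safety_score(events) >= BONUS_THRESHOLD


# A driver cut off twice may be misclassified as "tailgating" twice:
flagged = ["tailgating", "tailgating"]
print(safety_score(flagged))    # 80: below threshold through no fault of the driver
print(bonus_eligible(flagged))  # False
```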

What the Blueprint does not specify, however, is that the company in question—Amazon—does not directly employ the delivery workers. Rather, the company contracts with Delivery Service Providers (DSPs), small businesses that Amazon helps to establish. In this putative nonemployment arrangement, Amazon does not provide DSP drivers with workers’ compensation, unemployment insurance, health insurance, or the protected right to organize. Nor does it guarantee individual DSPs or their workers minimum wage or overtime compensation.
Instead, DSPs receive a variable hourly rate based on fluctuations in demand and routes, along with “bonuses” based on a quantified digital evaluation of on-the-job behavior, including “service, safety, [and] client experience.”
DSPs, while completely reliant on Amazon for business, must hire a team of drivers as employees.
These Amazon-created and -controlled small businesses rely heavily on their automated “bonuses” to pay for support, repairs, and driver wages.
As one DSP owner–worker complained to an investigator, “Amazon uses these [AI surveillance] cameras allegedly to make sure they have a safer driving workforce, but they’re actually using them not to pay [us] . . . . They just take our money and expect that to motivate us to figure it out.”

Presented with this additional information, we should ask again: What exactly is the harm of this automated system? Is it, as the Blueprint states, the algorithm’s mistake, which prevented the worker from getting his bonus? Or is it the structure of Amazon’s payment system, rooted in evasion of employment law, data extraction from labor, and digitalized control?

Amazon’s automated control structure and payment mechanisms represent an emergent and undertheorized firm technique arising from the logic of informational capitalism: the use of algorithmic wage discrimination to maximize profits and to exert control over worker behavior.
“Algorithmic wage discrimination” refers to a practice in which individual workers are paid different hourly wages—calculated with ever-changing formulas using granular data on location, individual behavior, demand, supply, or other factors—for broadly similar work. As a wage-pricing technique, algorithmic wage discrimination encompasses not only digitalized payment for completed work but, critically, digitalized decisions to allocate work, which are significant determinants of hourly wages and levers of firm control. These methods of wage discrimination have been made possible through dramatic changes in cloud computing and machine learning technologies in the last decade.
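
To fix ideas, the following stylized sketch, in Python, shows how a pay formula keyed to granular, worker-specific data can yield different offers for broadly similar work. Every feature, weight, and multiplier here is a hypothetical assumption for exposition; no platform has disclosed its actual formula.

```python
# Stylized personalized per-trip pay. Features, weights, and functional
# form are illustrative assumptions, not any firm's disclosed formula.
from dataclasses import dataclass


@dataclass
class WageInputs:
    base_rate: float        # nominal per-trip rate
    local_demand: float     # e.g., ride requests per driver in the zone
    acceptance_rate: float  # fraction of offers this driver accepts
    hours_this_week: float  # how much the driver has already worked


def personalized_pay(x: WageInputs) -> float:
    """Two workers doing the same trip can be offered different pay,
    because the offer depends on data about them, not just the work."""
    pay = x.base_rate
    pay *= 1.0 + 0.5 * max(x.local_demand - 1.0, 0.0)  # surge-style demand multiplier
    if x.acceptance_rate > 0.9:
        pay *= 0.95   # a reliable accepter may be offered less for the same trip
    if x.hours_this_week < 10:
        pay *= 1.10   # a marginal driver may be offered more to stay active
    return round(pay, 2)


same_trip = dict(base_rate=12.0, local_demand=1.2)
print(personalized_pay(WageInputs(**same_trip, acceptance_rate=0.95, hours_this_week=30.0)))  # 12.54
print(personalized_pay(WageInputs(**same_trip, acceptance_rate=0.60, hours_this_week=5.0)))   # 14.52
```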

Though firms have relied upon performance-based variable pay for some time (e.g., the use of bonuses and commission systems to influence worker behavior),
my research on the on-demand ride-hail industry suggests that algorithmic wage discrimination raises a new and distinctive set of concerns. In contrast to more traditional forms of variable pay, algorithmic wage discrimination—whether practiced through Amazon’s “bonuses” and scorecards or Uber’s work allocation systems, dynamic pricing, and wage incentives—arises from (and may function akin to) the practice of “price discrimination,” in which individual consumers are charged as much as a firm determines they may be willing to pay.
As a labor management practice, algorithmic wage discrimination allows firms to personalize and differentiate wages for workers in ways unknown to them, paying them to behave in ways that the firm desires, perhaps for as little as the system determines that the workers may be willing to accept.
Given the information asymmetry between workers and firms, companies can calculate the exact wage rates necessary to incentivize desired behaviors, while workers can only guess how firms determine their wages.
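
The asymmetry the Article describes can be rendered as a simple search problem on the firm’s side. The sketch below, in Python, is a hypothetical illustration: given an assumed per-worker acceptance model learned from past behavior, the firm can solve for the lowest offer predicted to elicit the desired behavior, while the worker sees only the final number. The logistic form and its coefficients are invented for exposition.

```python
# Hypothetical "lowest effective offer" search against an assumed
# per-worker acceptance model. Model form and numbers are illustrative.
import math


def p_accept(rate: float, worker_sensitivity: float) -> float:
    """Assumed probability this worker accepts an offer at a given rate."""
    return 1.0 / (1.0 + math.exp(-worker_sensitivity * (rate - 10.0)))


def cheapest_effective_rate(worker_sensitivity: float,
                            target_p: float = 0.8,
                            step: float = 0.25) -> float:
    """Lowest offer (to the nearest step) predicted to be accepted
    with at least target_p probability."""
    rate = 0.0
    while p_accept(rate, worker_sensitivity) < target_p:
        rate += step
    return rate


# A worker the model deems eager (high sensitivity) can be offered less
# than one it deems reluctant, for exactly the same task:
print(cheapest_effective_rate(worker_sensitivity=2.0))  # 10.75
print(cheapest_effective_rate(worker_sensitivity=0.5))  # 13.0
```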

The Blueprint example underscores how algorithmic wage discrimination can be “ineffective” and rife with miscalculations that are difficult to ascertain and correct. But algorithmic wage discrimination also creates a labor market in which people who are doing the same work, with the same skill, for the same company, at the same time may receive different hourly pay.
Digitally personalized wages are often determined through obscure, complex systems that make it nearly impossible for workers to predict or understand their constantly changing, and frequently declining, compensation.

Drawing on anthropologist Karl Polanyi’s notion of embeddedness—the idea that social relations are embedded in economic systems—this Article excavates the norms around payment that constitute what one might consider a moral economy of work to help situate this contemporary rupture in wages.
Although the United States–based system of work is largely regulated through contracts and strongly defers to the managerial prerogative,
two restrictions on wages have emerged from social and labor movements: minimum-wage laws and antidiscrimination laws. Respectively, these laws set a price floor for the purchase of labor relative to time and prohibit identity-based discrimination in the terms, conditions, and privileges of employment, requiring firms to provide equal pay for equal work.
Both sets of wage laws can be understood as forming a core moral foundation for most work regulation in the United States. In turn, certain ideals of fairness have become embedded in cultural and legal expectations about work. Part I examines how recently passed laws in California and Washington State, which specifically legalize algorithmic wage discrimination for certain firms, compare with and destabilize more than a century of legal and social norms around fair pay.

Part II draws on first-of-its-kind, long-term ethnographic research to understand the everyday, grounded experience of workers earning through and experiencing algorithmic wage discrimination. Specifically, Part II analyzes the experiences of on-demand ride-hail drivers in California before and after the passage of an important industry-initiated law, Proposition 22, which legalized this form of variable pay. This Part illuminates workers’ experiences under compensation systems that make it difficult for them to predict and ascertain their hourly wages. Then, Part II examines the practice of algorithmic wage discrimination in relation to workers’ on-the-job meaning making and their moral interpretations of their wage experiences.
Though many drivers are attracted to on-demand work because they long to be free from the rigid scheduling structures of the Fordist work model,
they still largely conceptualize their labor through the lens of that model’s payment structure: the hourly wage.
Workers find that, in contrast to more standard wage dynamics, being directed by and paid through an app involves opacity, deception, and manipulation.
Those who are most economically dependent on income from on-demand work frequently describe their experience of algorithmic wage discrimination through the lens of gambling.
As a normative matter, this Article contends that workers laboring for firms (especially large, well-financed ones like Uber, Lyft, and Amazon) should not be subject to the kind of risk and uncertainty associated with gambling as a condition of their work. In addition to the salient constraints on autonomy and threats to privacy that accompany the rise of on-the-job data collection, algorithmic wage discrimination poses significant problems for worker mobility, worker security, and worker collectivity, both on the job and outside of it. Because the on-demand workforces that are remunerated through algorithmic wage discrimination are primarily made up of immigrants and racial minority workers, these harmful economic impacts are also necessarily racialized.

Finally, Part III explores how workers and worker advocates have used existing data privacy laws and cooperative frameworks to address or at least to minimize the harms of algorithmic wage discrimination. In addition to mobilizing against violations of minimum-wage, overtime, and vehicle reimbursement laws, workers in California—drawing on the knowledge and experience of their coworkers in the United Kingdom—have developed a sophisticated understanding of the laws governing data at work.
In the United Kingdom, a self-organized group of drivers, the App Drivers & Couriers Union, has not only successfully sued Uber to establish their worker status
but also used the General Data Protection Regulation (GDPR) to lay claim to a set of positive rights concerning the data and algorithms that determine their pay.
With a GDPR-like law having gone into effect in California in 2023, drivers there are positioned to do the same.
Other workers in both the United States and Europe have responded by creating “data cooperatives” to fashion some transparency around the data extracted from their labor, to attempt to understand their wages, and to assert ownership over the data they collect at work.
In addition to examining both approaches to addressing algorithmic wage discrimination, this Article argues that the constantly changing nature of machine learning technologies and the asymmetrical power dynamics of the digitalized workplace minimize the impact of these attempts at transparency and may not mitigate the objective or subjective harms of algorithmic wage discrimination. Considering the potential for this form of discrimination to spread into other sectors of work, this Article proposes instead an approach that addresses the harms directly: a narrowly structured, nonwaivable peremptory ban on the practice.

While this Article is focused on algorithmic wage discrimination as a labor management practice in “on-demand” or “gig work” sectors, where workers are commonly treated as “independent contractors” without protections, its significance is not limited to that domain. So long as the practice does not run afoul of minimum-wage or antidiscrimination laws, nothing else in the laws of work makes this form of digitalized variable pay illegal.
As Professor Zephyr Teachout argues, “Uber drivers’ experiences should be understood not as a unique feature of contract work, but as a preview of a new form of wage setting for large employers . . . .”
The core motivations of labor platform firms to adopt algorithmic wage discrimination—labor control and wage uncertainty—apply to many other forms of work. Indeed, extant evidence suggests that algorithmic wage discrimination has already seeped into the healthcare and engineering sectors, impacting how porters, nurses, and nurse practitioners are paid.
If left unaddressed, the practice will continue to be normalized in other employment sectors, including retail, restaurant, and computer science, producing new cultural norms around compensation for low-wage work.
The on-demand sector thus serves as an important and portentous site of forthcoming conflict over longstanding moral and political ideas about work and wages.
