
AI Grift Series — Part 3 — Exploitation of Labor

2024 Nov 27



Artificial intelligence is celebrated as a transformative force poised to reshape the fabric of human existence. Yet lurking beneath this polished and curated narrative is a disturbing reality: the very systems lauded for their ingenuity are underpinned by a hidden lattice of exploitation. In their pursuit of a "safer", more sophisticated AI, tech companies have outsourced the most harrowing tasks to low-paid contractors forced to confront humanity's darkest content.


For example, in Kenya, workers earning between $1.32 and $2 an hour have been tasked with labeling grotesque depictions of violence, sexual abuse, and other horrors to train AI models such as ChatGPT. Partnering with the outsourcing firm Sama, OpenAI relied on these individuals to engineer safeguards against harmful content. This labor has extracted a severe human toll: inadequate pay, scant mental health support, and a work environment as exploitative as it is dehumanizing.


This is certainly not an isolated case but emblematic of a systemic problem in the AI industry — a space where profits are underwritten by the silent suffering of unseen laborers.


The question is not just how we build AI, but at what cost — and who bears the weight of its creation?






From Moderation To Training


The exploitation of labor for content moderation can be seen as the first phase in a broader strategy that has now extended to AI training. Content moderation established a playbook: outsourcing emotionally grueling tasks to underpaid workers hidden from public view. These workers, often contracted through third-party firms, labor under tightly controlled conditions, processing graphic and harmful content for the sake of platform safety. This model not only addressed the immediate needs of moderating user-generated content but also normalized a system in which the most unpleasant yet critical tasks in tech could be quietly outsourced to vulnerable workers.


This approach has since been replicated in the AI industry, where data labeling jobs mirror the structure and demands of content moderation. Just as moderators sift through posts to uphold community standards, data labelers comb through massive datasets, tagging images, text, and videos — including incredibly disturbing material — to train AI systems. Both roles demand high productivity under suffocating surveillance while offering inadequate pay and minimal mental health support. The precedent set by content moderation has effectively paved the way for the same labor exploitation practices to proliferate in AI development, raising urgent questions about the human cost of technological progress.




Controversial Companies and Practices


The companies leading AI development have faced mounting criticism for their reliance on exploitative labor practices. Cognizant, Appen, Sama, and others have become emblematic of a new form of global outsourcing. These firms hire contractors to sift through sensitive or explicit material under the guise of building safer, smarter AI systems. While the tech giants funding these efforts — OpenAI, Meta, and Google, among others — enjoy enormous profits, the workers powering this ecosystem are trapped in a precarious existence. These firms call themselves AI companies, but it would be more accurate to describe them as "labor exploitation companies."


Scandals have further exposed these inequities. Cognizant, for instance, faced backlash after reports revealed its content moderators often processed hundreds of posts daily — up to 400 in some cases — with less than 30 seconds per item and minimal support. Sama, which helped train OpenAI's ChatGPT, came under scrutiny for paying Kenyan workers wages that barely exceeded the local minimum and offering little relief from the mental strain of their assignments.


Appen, a prominent AI data services company, has faced significant controversies that underscore the ethical challenges within the industry. In early 2024, Appen lost major clients, including Google, Amazon, Facebook, and Microsoft, which had accounted for over 60% of its revenue. This decline was attributed to concerns over the quality of its services and its treatment of contractors. Contractors have accused Appen of enforcing unreasonable deadlines, delaying payments, exhibiting alleged racism in its recruitment processes, and maintaining poor working conditions for those handling AI datasets. These issues have sparked widespread criticism and raised serious questions about the company's labor practices, and they exemplify an industry in which the relentless pursuit of innovation frequently eclipses the well-being of the human workforce that sustains it.




Displacing Jobs for Profit


The irony of AI's rise is that the very labor used to build these systems is being targeted for obsolescence. Many AI developers openly aim to replace human jobs with automated solutions, driven by a relentless pursuit of profit. From customer service agents to legal analysts, the goal is to make human workers redundant while maximizing efficiency. Yet, little consideration is given to the lives disrupted in the process or the structural inequities that automation exacerbates.


AI advocates often compare this shift to the industrial revolution or the advent of the printing press, framing it as a natural and inevitable progression. But such comparisons overlook the darker realities and social costs of these historical transformations. The printing press, for all its contributions to knowledge dissemination, also gave rise to propaganda machines capable of stoking division and fueling world wars. Similarly, the industrial revolution was marked by harsh working conditions, child labor, and exploitative practices that persisted for decades before reforms took hold. These changes were not unmitigated blessings, and the same is true of AI's rise. Without careful oversight, the pursuit of progress risks not only repeating but also amplifying the exploitation and inequities of the past, all while paving the way for new harms to emerge.


The profit motive that drives AI companies prioritizes cost-cutting and scalability over any ethical considerations. As companies accelerate their push for automation, they create an unsettling paradox — relying on exploited labor to build systems designed to erase the need for labor altogether. The hollow rhetoric of "progress" becomes difficult to reconcile with the reality of displaced workers and communities left without viable alternatives.




Towards Ethical AI


The story of AI is not just one of innovation but also one of exploitation and inequality. The workers who train these systems are erased from the narrative, with their contributions hidden behind abstractions like "machine learning" and "automation." Meanwhile, the march toward replacing human jobs with AI systems proceeds unabated, deepening economic divides.


If artificial intelligence is to truly serve humanity, it cannot be built on the backs of exploited workers. The industry must confront its ethical responsibilities — fair wages, mental health protections, and a commitment to creating — not destroying — opportunities for meaningful work. Technology can be a force for good, but only if it values the humanity of those who power it. The question we must ask is not what these systems can do but who they truly serve.