Big Tech Wants AI Without Rules. Here’s How Workers are Fighting Back.

Earlier this month, a report from Senator Bernie Sanders grabbed headlines for the provocative claim that AI-driven automation could destroy 100 million jobs across the country in the next decade. 

While the report sparked a debate about which occupations are most vulnerable to automation, the real issue is who gets to decide how AI is deployed in the workplace, what purposes it serves, and who benefits. 

Both businesses and governments have already begun to use AI as a tool for surveillance and union busting. Meanwhile, the tech companies developing these AI systems are investing hundreds of millions of dollars in lobbying to block legislation that would impose any guardrails on AI. 

But where workers have organized and built collective power, they are successfully establishing protections against harmful uses of AI in the workplace through collective bargaining, leveraging union representation to shape AI policy at the state and federal level, and turning these same technologies into tools for organizing. Where workers are meaningfully involved in decisions about AI, they are also demonstrating how these systems can be designed and deployed to ensure that technology serves people rather than replaces them.

Corporate Dominance of AI is Damaging to Worker Power and Democracy

Employers’ unchecked use of AI poses serious threats to workers and to worker power. 

AI is enabling new and powerful surveillance technologies that employers are using to disrupt union organizing efforts while invading workers’ privacy. While workplace monitoring is not new, AI can supercharge union busting by enabling employers to process vast amounts of employee data in real-time and predict organizing behavior before it becomes visible through conventional means. At Amazon facilities in Bessemer, Alabama and Saint Peters, Missouri, the company allegedly used AI-powered software and algorithms to monitor workers’ posts on social media and company message boards, and to track workers through workplace devices and cameras. Worker organizers and their supporters report feeling intimidated by the company’s surveillance practices. Companies like Delta, Starbucks and Walmart reportedly use AI-powered software to monitor employees’ communications on workplace messaging applications. In workplaces like these where monitoring is ubiquitous, surveillance technology may suppress organizing campaigns before they can even take root.

Employers are also deploying algorithmic management systems to make determinations about hiring, firing, promotions, scheduling and pay. For example, Chipotle has reportedly used a “conversational AI” system named “Ava Cado” to prescreen applicants and conduct initial interviews. Gig economy platforms like Uber and DoorDash have been accused of using dynamic pricing algorithms to pay workers different rates for the same task and increase their own profit at workers’ expense. Employers also use scheduling algorithms to assign shifts in ways that maximize efficiency for the company but create instability and exhaustion for workers, all while keeping the decision-making process hidden from those most affected. These systems erode worker power by shifting responsibility for decisions away from human managers and into opaque algorithmic black boxes that cannot be questioned, negotiated with, or held accountable. 

Further, many of the companies developing AI products are entrenching and exercising their power in ways that undermine democracy. A recent report by the International Trade Union Confederation highlights how Amazon, Meta, Palantir, and other major tech companies are engaging in pervasive “anti-democratic behaviour” by aligning themselves with far-right political forces, supporting efforts to roll back workplace democracy, consolidating their power within industries and expanding military uses of their technologies. Palantir, in particular, has seen explosive financial growth, recently surpassing $1 billion in quarterly revenue while securing major contracts with U.S. defense and intelligence agencies. At the same time, Elon Musk’s meddling in federal agencies through the so-called Department of Government Efficiency (DOGE) underscores how unelected billionaires are increasingly shaping both the direction of AI policy and the allocation of public resources.

Many of these same corporations are investing hundreds of millions of dollars in lobbying to ensure that AI development and use remain under-regulated. These investments are already paying dividends, with Silicon Valley successfully delaying the implementation of Colorado’s AI Act while defeating several key AI safety bills in California.

Taken together, these examples show how the unchecked use of AI is already reshaping power in the workplace and politics in corporations’ favor – making it all the more urgent for workers to establish guardrails where lawmakers have so far failed to act. 

Photo by Immo Wegmann on Unsplash

Through Collective Bargaining, Workers are Establishing AI Guardrails

The federal government has yet to pass comprehensive laws that protect workers from workplace AI harms. Instead, AI protections are scattered across a patchwork of state and local laws that only cover certain industries or geographies. While some harmful uses of AI may already be illegal under existing laws that protect against union busting, workplace surveillance, or discrimination, labor law enforcement is weak and under-resourced, which leaves workers with limited recourse when employers deploy AI in ways that violate their rights.

In the absence of clear, enforceable standards, collective bargaining has emerged as the most effective way to govern how AI is used in the workplace.

The 2023 strikes, and subsequent collective bargaining agreements, by Hollywood screen actors and writers showed how workers can set standards that limit employers’ ability to use AI to replace them. These agreements established important precedents by requiring consent and compensation when AI is used to replicate performers’ likenesses or when their past work is used to train AI systems. Dockworkers at East Coast and Gulf ports also won critical protections against AI-driven layoffs in their most recent collective bargaining agreement. The deal requires employers to negotiate with the union before introducing automated equipment at terminals, ensuring that new technologies cannot be unilaterally imposed in ways that threaten jobs or safety. While these agreements do not eliminate the threat of AI-driven layoffs, they show how collective bargaining can establish meaningful guardrails that prevent AI from replacing human labor without workers’ input or consent.

Workers are also negotiating over how AI systems are used to manage their work. One notable example is the collective bargaining agreement between UPS and its delivery drivers, represented by the International Brotherhood of Teamsters, which prevents the company from disciplining workers based solely on data collected through tracking technologies. In Pennsylvania, state government workers represented by SEIU Local 668 won a contract earlier this year that includes the creation of a worker board to oversee the implementation of generative AI (GenAI) tools at work. In the last two years, workers across a range of industries – including news reporters, opera performers, hospitality workers, theatrical and stage employees, and aerospace manufacturing workers – have successfully bargained with their employers to secure safeguards against uses of AI that devalue their labor. 

These examples show how workers are winning a seat at the table in governing how these technologies are implemented at work while guarding against uses of AI that harm workers’ safety, dignity, or livelihoods. 

Union Representation Empowers Workers to Shape AI Policy

Beyond negotiating individual contracts, unions are also giving workers a voice in shaping AI policy. 

At the national level, unions have endorsed and promoted worker-centered AI bills such as the “Stop Spying Bosses Act” and the “No Robot Bosses Act” that would establish guardrails on employers’ use of AI-powered management and surveillance systems. Drawing on input from workers and unions across industries, the AFL-CIO – the nation’s largest labor federation – has also launched a “Workers First Initiative on AI,” which sets out a policy agenda and guiding principles to help lawmakers craft AI regulations that center safety, transparency, and fairness. 

Unions are also giving workers a voice in AI policy at the state level. In California, a coalition of labor unions representing more than two million workers statewide sponsored three pieces of legislation that would put limits on how companies can use AI-powered monitoring systems to manage workers. While none of these bills has become law – with Governor Gavin Newsom vetoing the state’s version of the “No Robo Bosses Act” – their introduction shows how unions are pushing state legislatures to confront the risks of AI head-on, setting the stage for future efforts that could succeed as public concern about workplace surveillance and automation continues to grow. California has nonetheless enacted several worker-protective AI regulations, including rules under the Fair Employment and Housing Act (FEHA) that explicitly protect workers from unfair employment decisions made by automated decision-making systems, as well as additional whistleblower protections for employees at AI companies who raise safety or rights-related concerns.

With corporate lobbying groups pouring money into weakening or delaying AI regulations, organized labor provides workers a seat at the table, pressing lawmakers to balance innovation with dignity, fairness, and accountability in the workplace.

AI Offers New Tools to Strengthen Worker Power

Workers can bargain over AI only when they are able to successfully organize unions in the first place. Recent books like Hamilton Nolan’s The Hammer and Eric Blanc’s We Are the Union have challenged the labor movement to rethink how it approaches new member organizing in order to reverse the decades-long decline in union density. Reversing that decline will require not only new strategies for reaching workers but also new tools – and AI is beginning to emerge as one of the most promising resources for rebuilding union strength.

For years, unions, worker centers, and advocacy groups have built AI-powered tools that make information about workers’ rights more accessible. In 2016, the worker advocacy group United for Respect developed WorkIt, an AI-powered chatbot that let workers ask questions about their legal rights and workplace policies – and get answers – through a mobile app. Originally created for Walmart workers, the technology was adapted by unions and worker centers to support retail workers in Washington state, teachers in Texas, and domestic workers in Los Angeles. Last year, SEIU Local 503, which represents 72,000 public servants and care providers in Oregon, worked with The Workers Lab to implement a GenAI chatbot that workers can use to ask questions about the benefits and workplace health and safety provisions outlined in their collective bargaining agreements. Because information about workers’ rights is often buried in dense policy documents or collective bargaining agreements, tools like these show how AI can help workers quickly understand the protections and working conditions to which they are entitled – empowering them to identify violations, assert their rights, and recognize shared grievances that can form the basis for collective action and stronger organizing.
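
To make this concrete, here is a minimal sketch of how a contract Q&A chatbot of this kind might be built – not the actual WorkIt or SEIU Local 503 system, whose internals are not public. It assumes the OpenAI Python SDK with an API key in the environment; the model name, file path, and crude keyword-overlap retrieval are illustrative placeholders, and a real tool would need union staff to review its answers.

```python
# A minimal sketch (not the actual WorkIt or SEIU 503 system) of a contract
# Q&A chatbot: retrieve the most relevant contract passages with simple
# keyword overlap, then ask an LLM to answer using only those passages.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and file path are illustrative placeholders.

from pathlib import Path
from openai import OpenAI

client = OpenAI()

def load_chunks(path: str, size: int = 800) -> list[str]:
    """Split the contract text into fixed-size chunks for retrieval."""
    text = Path(path).read_text(encoding="utf-8")
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by how many question words they share (crude retrieval)."""
    words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(words & set(c.lower().split())), reverse=True)
    return ranked[:k]

def answer(question: str, contract_path: str = "collective_bargaining_agreement.txt") -> str:
    """Answer a worker's question using only excerpts from the contract."""
    context = "\n---\n".join(top_chunks(question, load_chunks(contract_path)))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer workers' questions using ONLY the contract excerpts "
                        "below. If the excerpts don't cover the question, say so and "
                        "suggest contacting a union steward.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("How many sick days am I entitled to each year?"))
```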

Unions are now beginning to experiment with using GenAI to directly support labor organizing campaigns. The emergence of free-to-use large language models like ChatGPT, Claude, and Gemini is democratizing access to what was once prohibitively expensive and complex technology. This makes these powerful tools available to unions of all sizes. 

Just as GenAI has quickly become part of the toolkit for improving messaging and get-out-the-vote efforts in political campaigns, unions are using GenAI to improve how they reach and engage current and potential members. The UK Public and Commercial Services Union (PCS), for example, has developed an AI-powered conversation simulator that allows union representatives to practice recruitment conversations with realistic virtual colleagues. This kind of tool can help new organizers build confidence and refine their approach before engaging in high-stakes conversations with workers who are considering joining the union. 

Photo from The Centre for Responsible Union AI
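
A tool like this can be approximated with surprisingly little code. The sketch below is a rough illustration – not PCS’s actual simulator – in which an LLM role-plays a hesitant coworker in a terminal session so an organizer can rehearse their pitch. It assumes the OpenAI Python SDK; the model name and persona prompt are placeholders a union could adapt to its own workplace and issues.

```python
# A minimal sketch (not the PCS simulator itself): an LLM role-plays a
# skeptical coworker so a new organizer can rehearse a recruitment
# conversation in the terminal. Assumes the OpenAI Python SDK; the model
# name and persona prompt are illustrative placeholders.

from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are role-playing a warehouse worker who is curious but hesitant "
    "about joining the union: worried about dues, retaliation, and whether "
    "anything will actually change. Respond in character, one short reply "
    "at a time, and raise realistic objections."
)

def practice_session() -> None:
    """Run an interactive practice conversation until the organizer quits."""
    messages = [{"role": "system", "content": PERSONA}]
    print("Practice conversation started. Type 'quit' to end.\n")
    while True:
        organizer = input("You (organizer): ")
        if organizer.strip().lower() == "quit":
            break
        messages.append({"role": "user", "content": organizer})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(f"Coworker: {reply}\n")

if __name__ == "__main__":
    practice_session()
```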

Beyond helping organizers practice these conversations, other potential use cases include using GenAI to:

  • Translate union campaign outreach materials into dozens of languages (see the sketch after this list)

  • Draft messages tailored to specific industries, communities, or types of workers

  • Analyze employer policies and compare them to relevant laws or collective bargaining agreements to identify potential violations

  • More efficiently process and draw insights from member survey data, meeting transcripts, interviews, or other large amounts of information

  • Gather public information about targeted employers to identify vulnerabilities and create organizing opportunities and leverage

  • Brainstorm creative organizing tactics and campaign strategies
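
As an illustration of the first item above, the sketch below batch-translates a campaign flyer with an LLM. It is only a sketch: the model name, file names, and language list are assumptions, and any machine translation of organizing materials should still be reviewed by native speakers before distribution.

```python
# A minimal sketch of batch-translating a campaign flyer with an LLM.
# Assumes the OpenAI Python SDK; the model name, file names, and language
# list are illustrative placeholders.

from pathlib import Path
from openai import OpenAI

client = OpenAI()
LANGUAGES = ["Spanish", "Tagalog", "Vietnamese", "Haitian Creole"]

def translate_flyer(path: str = "campaign_flyer_en.txt") -> None:
    """Write one translated copy of the flyer per target language."""
    source = Path(path).read_text(encoding="utf-8")
    for lang in LANGUAGES:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": f"Translate the following union campaign flyer into {lang}. "
                            "Keep the tone plain and respectful, and preserve names, "
                            "dates, and contact information exactly."},
                {"role": "user", "content": source},
            ],
        )
        out = Path(f"campaign_flyer_{lang.lower().replace(' ', '_')}.txt")
        out.write_text(response.choices[0].message.content, encoding="utf-8")
        print(f"Wrote {out}")

if __name__ == "__main__":
    translate_flyer()
```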

By lowering the barriers to effective organizing, training, and communication, AI has the potential to strengthen unions' capacity to reach workers at scale. This is not about replacing organizers with AI, but rather, helping staff and worker organizers work more efficiently and do more with less. These tools can help under-resourced organizing departments run sophisticated campaigns that once required large staff and budgets, leveling the playing field between workers and corporations.

Building the Skills to Harness AI Responsibly

However, realizing AI's potential as an organizing tool requires unions to invest in training organizers to use these technologies responsibly and effectively. Training programs can help organizers identify appropriate use cases, understand AI's risks and limitations, and develop protocols for protecting data about their members and other sensitive information while still benefiting from AI's capabilities. Some unions are already starting to offer AI training to their members. The American Federation of Teachers (AFT), for instance, has launched an AI training initiative to help educators understand how AI works and how it can be used in the classroom. Training initiatives tailored specifically for union staff and member organizers could equip the labor movement with the knowledge needed to harness AI's potential to enhance organizing.

Conclusion

AI's impact on workers and democracy is not predetermined – it depends on who controls these powerful technologies and how they are deployed in the workplace. Left unchecked, AI will continue to serve as a tool for surveillance, union busting, and the erosion of workplace protections, while tech companies use their political influence to block meaningful regulation. 

But the examples of workers bargaining for AI guardrails, unions shaping AI policy, and organizing campaigns leveraging AI tools demonstrate an alternative path forward – one where workers can establish boundaries on harmful AI uses, demand transparency and accountability, and even turn these powerful technologies into tools for building greater worker power. The challenge ahead is not simply to regulate AI, but to ensure workers have the ability to meaningfully influence how these systems are designed, deployed, and governed.