Aussie unions want a say in the AI that gets deployed in Australia
The ACTU reckons Australian workers should be consulted on every AI system rolled out not only in their workplaces (fair enough) but also across the whole country (hold on just a minute). While the principle is sound, practicality is another question.
Australian unions are pushing for a seat at the table on every artificial intelligence system deployed in Australian workplaces, arguing that the current pace of rollout is producing botched implementations, displaced workers, and de-humanising conditions that could be avoided if employees were consulted from the start.
Microsoft took part in a big AI summit in Australia this week, bringing together a load of different sectors to talk about how good the robots would be for business, productivity and the economy. The summit builds on a Memorandum of Understanding and Framework Agreement signed by the ACTU, the Australian Services Union, Professionals Australia, the Shop, Distributive and Allied Employees' Association, and Microsoft Australia in January.
For ACTU Assistant Secretary Joseph Mitchell, the event was evidence that engagement is possible, and a marker for the rest of the sector to follow.
“Today’s summit was a positive demonstration of engagement between Microsoft and unions,” Mitchell said. “Global tech leaders, managers and developers heard directly that for AI to benefit workers and have a positive impact, workers must be involved right from the start, our expertise respected and skills developed along the way.”
According to Mitchell, however, too much has already gone wrong with AI, which is why the union wants an even greater say in how the technology is used in Australia.
“There have been too many botched AI implementation projects at Australian companies already, resulting in job displacement, theft of creative and media output, through to work intensification and ultimately de-humanising conditions in some workplaces.”
The summit itself heard accounts from workers across multiple sectors of job displacement, theft of creative and journalistic work, intensification of duties, and what the ACTU describes as systems that discriminate because of engineered bias. Those concerns are not unique to Australia, but the unions argue Australian workers deserve direct input into how AI is designed and deployed locally rather than receiving systems built and tested elsewhere.
The ACTU’s position is that meaningful consultation is not optional, and that the cost of skipping it is borne by workers and ultimately by the businesses that deploy the technology poorly.
“Unions are serious about the need to do better,” Mitchell said. “We want other large corporations to join us in working together to secure positive outcomes. Employers must consult meaningfully with workers and their unions and take a collaborative approach to developing the best possible training and skills support where that’s needed.”
“Workers must be able to contribute to AI system design in their workplaces, or employers risk more ill-conceived project failures borne from enforced change that risks significant reputational damage.”
The principle is hard to argue with. The mechanics are harder. The pace at which new AI models are released, updated and integrated into business software has compressed from years to months to, in some cases, weeks. The major model developers — OpenAI, Anthropic, Google, Microsoft — are competing not on careful rollout but on capability gains, with billions of dollars of infrastructure spending committed over the next 36 months to support the next generation of frontier models.
That tension is one the ACTU’s framework with Microsoft attempts to navigate. Rather than vetting each model release, the agreement establishes ongoing channels for worker input into how AI systems are designed and deployed, alongside joint work on public policy and skills training. It is a structural commitment rather than a transactional one. Whether that structure can hold against the speed of underlying model development is the live question.
The unions’ broader concern, articulated in commentary by ACTU figures in recent weeks, is that the global model developers themselves are not weighing the wider social impacts of new releases against the commercial incentive to ship first. Anthropic’s Mythos and OpenAI’s GPT-5.4-Cyber have both been previewed to select American companies and government bodies under tight access conditions, with limited transparency for foreign jurisdictions about what those models can do or what guardrails are in place.
Australia signed a Memorandum of Understanding with Anthropic less than two weeks ago. Whether the National AI Centre, the AI Safety Institute or Home Affairs has been granted access to pre-release models for testing has not been publicly disclosed.
The union argument is that this is not a position Australia should accept by default. Australia is, by any reasonable measure, an attractive market for AI developers — a stable democracy, a highly skilled workforce, abundant high-quality data, and the land, energy and people needed to host the data centres these models depend on. The unions’ position is that this leverage is being underused.
The counter-case from industry is that excessive regulation slows adoption, and that Australia risks falling behind comparable economies if it imposes consultation requirements that the United States and Europe do not. That argument has merit. It also has a familiar ring: the same case was made against the social media ban, and it was overridden.