April 8, 2026

Who decided employers could reject you by algorithm without explanation?


The AI bouncer is already working the door

One New Zealand company used AI to screen more than 1,000 candidates in a single month. That is not a pilot. That is production. Woolworths has been running Sapia AI, an Australian hiring platform, since 2020. McDonald's uses AI screening too. The complaints now surfacing are not teething problems. They are the predictable output of systems running at scale for years without meaningful governance.

The business logic is real. AI-enhanced recruitment can cut a four-week hiring process by 75% and reduce costs by 90%. CV sifting drops from 23 hours to five minutes. For companies receiving thousands of entry-level applications, human review of every submission is fantasy. Nobody serious is arguing employers should ignore these tools.

But efficiency without governance is just liability on a timer.

Experienced workers rejected, teenagers psychoanalysed

Steve Holt, a 51-year-old warehouse worker and solo parent with decades of relevant experience, was rejected six times by Woolworths’ AI system without ever speaking to a human. “I did not pass the AI bouncer,” he said. A 16-year-old dyslexic and colourblind applicant received unsolicited personality feedback telling him he would struggle with distractions and didn’t like trying new things. His mother called it “soul-destroying.” Another teenager was told his self-belief could “alienate” people.

These are not edge cases. They reflect what happens when AI moves from screening objective criteria (does the applicant have a driver's licence?) to subjective personality assessment (how does the applicant handle stress?). Gerard Hehir, Unite Union's assistant national secretary, puts it plainly: "No one actually knows, at the heart of it, an AI system, how it actually makes a decision." He warns that the promise of eliminating bias has not been delivered: "far from actually removing the bias, they reinforce or even amplify the bias."

The governance gap is wider than you think

New Zealand has no specific legislation governing AI use in hiring decisions. The Human Rights Act and Privacy Act offer some protection, but neither was designed for algorithmic personality assessment. There is no right of explanation for rejected candidates, no mandatory disclosure that AI was used, and no audit requirement. BusinessDesk’s analysis frames AI hiring under its law and regulation coverage, a signal that legal risk is accumulating.

The EU’s AI Act classifies employment and recruitment AI as high-risk, requiring transparency, human oversight, and the right to explanation. New Zealand has no equivalent. MBIE’s 2025 AI Strategy focuses on investment confidence and productivity. The Algorithm Charter for Aotearoa commits signatories to fair and transparent algorithmic use, but it is voluntary for private employers.

Meanwhile, the cost barrier has collapsed. Three-quarters of New Zealand organisations now report AI setup costs under $5,000; previously, a third were spending over $50,000. The tools Woolworths adopted in 2020 are now within reach of any employer with a hiring problem and a credit card.

Trust is already thin

The adoption-governance gap is not just a compliance risk. It is a talent risk. Only 44% of New Zealanders believe AI benefits outweigh risks. Only 21% trust what AI produces most or almost all of the time. HCAmag’s analysis found the gap between AI adoption and governance is wider in New Zealand than in the United States.

New Zealand’s labour productivity growth has averaged just 0.5% annually over the past decade, below the OECD average of 0.9%. AI adoption done well is a genuine lever on that problem. AI adoption done carelessly, screening out experienced workers and delivering personality verdicts on teenagers with no human review, is a lever on something else entirely.

What employers should be doing now

The question is not whether to use AI screening. The competitive pressure is real and the efficiency gains are documented. The question is whether your governance is keeping pace with your adoption.

Any employer running or considering AI hiring tools should be asking five questions. Are candidates told AI is screening them? Is there a human review pathway for rejections? Has the vendor provided bias audit evidence? Is the AI limited to objective criteria or assessing personality? And can the employer explain how a hiring decision was made?

If the answer to any of those is no, you are accumulating risk you cannot see. Woolworths is the cautionary tale, but with 87% of large New Zealand organisations now running AI in operational environments, the next complaints will not come from a supermarket. They will come from everywhere.

