Steve Holt has applied for warehouse roles at Woolworths six times. He is 51, a solo parent homeschooling a neurodivergent son, and has never spoken to a human at any point in the process. Each time, the Sapia AI platform rejected him based on a short text conversation. His summary: “I did not pass the AI bouncer.”
He is not alone. A 16-year-old applicant was told his “self-belief” could “alienate” people. A dyslexic, colourblind teenager received AI-generated feedback saying he would struggle with distractions and didn’t like trying new things. His mother, Louise Hinton, called the process “just lazy, soul-destroying.”
This is not a human interest story. It is a governance failure playing out in real time across some of New Zealand’s largest employers.
The $5,000 trap
The economics of AI hiring tools have shifted dramatically. The AI Forum’s Third AI Productivity Report found that three-quarters of New Zealand organisations now set up AI tools for under $5,000; only a year earlier, nearly a third were spending more than $50,000. Ninety-one percent report efficiency improvements and 77% have reduced operational expenses.
For a retailer processing thousands of applications for entry-level roles, the appeal is obvious. Woolworths has used Sapia since 2020. McDonald’s has adopted similar tools. The problem is that the price tag covers the software, not the risk.
Setup costs have collapsed. Legal and reputational exposure has not been priced into the purchase.
Personality profiling is where the liability sits
Gerard Hehir, Unite Union’s assistant national secretary, draws a distinction employers should pay attention to. He accepts AI for objective, measurable screening – does the applicant hold the right licence, the correct visa? “If it’s used to assess hard, measurable criteria, no, not a problem.”
But personality assessment by chatbot is different. “When it’s making evaluations like what’s your emotional response to a question or whether you sounded a bit stressed or depressed or something like that, that is a major problem, I think it is dehumanising.”
Hehir also pushes back on the vendor marketing that AI removes bias. “Time and time again over recent years we have seen that of course the processes themselves often reflect the biases of those that wrote them,” he told RNZ. “Far from actually removing the bias, they reinforce or even amplify the bias.”
That is the crux. Using AI to check whether someone has a forklift licence is defensible. Using it to assess a 16-year-old’s emotional maturity from a text chat is not.
The legal exposure nobody has stress-tested
Under New Zealand’s Human Rights Act, personality profiling that disproportionately disadvantages applicants on the basis of age, disability, or ethnicity could constitute unlawful discrimination. The cases already in the media – a 51-year-old rejected six times, a dyslexic teenager penalised for traits linked to his disability – are textbook fact patterns for a Human Rights Commission complaint.
New Zealand has no equivalent to the EU AI Act, which classifies employment AI as high-risk and mandates transparency, human oversight, and documentation. That regulatory vacuum cuts both ways. There is no compliance checklist, but there is also no safe harbour when things go wrong. BusinessDesk has positioned AI hiring as a practice with established legal and reputational risks, not theoretical ones.
“The vendor said it was fair” will not hold up as a defence.
Adoption is outrunning trust
Matthew Sellers, writing for HRMonline NZ, captures the broader dysfunction. Eighty-four percent of New Zealand knowledge workers already use generative AI, but only 47% of employers encourage it. The same governance gap is playing out in hiring: tools deployed before frameworks exist.
Meanwhile, only 44% of New Zealanders believe AI’s benefits outweigh its risks, with particular scepticism among Māori and Pasifika communities – the same demographics most likely to be applying for the entry-level roles AI now screens.
The labour market makes this worse. Filled jobs fell 0.5% in the year to September, with construction down 4.5% and professional services down 2.6%. When applicants have fewer alternatives, automated rejection carries higher stakes, and the reputational cost of a rejected candidate going to the Herald is higher too.
Five questions before procurement signs the next contract
Employers considering AI screening tools should be asking:

- Is the tool assessing objective criteria or subjective personality traits?
- Has the vendor provided validation data against protected groups under the Human Rights Act?
- Is there a human review step before any rejection? (A minimal sketch of such a gate follows below.)
- Can the employer explain in plain language why a specific applicant was rejected?
- What is the plan when a rejected candidate goes public?
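What a human review step actually means in practice is worth spelling out. The sketch below is hypothetical – it does not describe Sapia’s or any vendor’s API, and the field names and threshold are illustrative – but it shows the governance control in miniature: objective criteria can trigger an automated outcome, while a subjective fit score only sets the priority of a human review queue and never rejects anyone on its own.

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    """Hypothetical shape of a screening record; field names are illustrative."""
    applicant_id: str
    meets_objective_criteria: bool  # e.g. holds the required licence or visa
    ai_fit_score: float             # vendor-supplied subjective score, 0.0 to 1.0

def route(result: ScreeningResult) -> str:
    """Decide the next step for an applicant.

    Objective, checkable requirements may be automated; the subjective
    AI score only orders the human review queue and never auto-rejects.
    """
    if not result.meets_objective_criteria:
        return "decline_with_stated_reason"  # defensible: hard criteria, explainable
    if result.ai_fit_score >= 0.7:           # illustrative threshold, not a standard
        return "human_review_priority"
    return "human_review_standard"

# Example: a low score still reaches a person; it is never a silent rejection.
print(route(ScreeningResult("A-1024", True, 0.35)))  # -> human_review_standard
```

If a tool cannot be wired up this way – if the only options are automated rejection or nothing – that is itself the answer to the human-review question.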
If the answer to any of those is “we haven’t checked,” the tool costs more than the invoice suggests, and the liability is larger than the saving. PwC’s AI Jobs Barometer found skills required for AI-exposed roles are changing 66% faster than for other positions. A personality filter trained on historical data is likely screening out exactly the adaptive candidates the role now requires.
The first Employment Relations Authority or Human Rights Commission case involving AI hiring in New Zealand is not a question of if. It is a question of which employer failed to ask these questions first.
Sources
- NZ Herald: More complaints arise about Woolworths’ use of AI personality analysis in job interviews
- RNZ: Jobseekers and advocates disturbed as companies screen applications with AI
- 1News: Jobseekers disturbed as companies screen applications with AI
- AI Forum NZ: AI in Action – Key Findings from New Zealand’s Third AI Productivity Report
- BusinessDesk: AI-assisted hiring – where businesses win and where they get caught out
- HRMonline NZ: Kiwi workers are using AI they don’t trust
- Stats NZ: Employment indicators September 2025
- PwC: 2025 AI Jobs Barometer – New Zealand Analysis