876 investigations and counting
NZQA investigated 876 reported breaches of NCEA external assessments in 2024, of which 738 were substantiated. That is more than two and a half times the 345 breaches recorded in 2019, an increase of roughly 150%. Of those investigations, 418 resulted in formal action, ranging from warnings to results being withheld entirely.
For the first time, NZQA tracked AI as a distinct breach category, recording 59 AI-linked investigations. That number looks modest. It is not. Onslow College deputy principal Michael Bangma described detected cases as “just the tip of the iceberg, with many more undetected instances in every school”.
The breakdown tells its own story: 206 investigations involved authenticity concerns, 71 involved students navigating away from the digital exam platform to access external resources, and 13 showed sudden text increases consistent with copy-paste behaviour. Mt Albert Grammar principal Patrick Drumm captured the pace of the problem: “You think you can set up a policy around it, but it’s gone by lunchtime.”
The real exposure sits in the 80% nobody monitors
Those 876 breaches cover only external assessments, the supervised exams. NZQA chief executive Grant Klinkum told the Education and Workforce Select Committee that internal assessment accounted for 80% of NCEA credits in 2024. Coursework, reports and essays completed outside supervised conditions make up precisely the category most exposed to AI-generated submissions and the hardest to police.
Klinkum was unusually candid about what this means: “If you think about the risks of AI-generated assessment in the internal context, you might hope for a different balance between internal and external, because one reason for external assessment is that it helps triangulate the results that are achieved internally.” Translated from bureaucrat: the system has a structural design flaw and NZQA knows it.
The Ministry of Education has made one concession, removing reports as a valid assessment form for externally assessed Level 1 standards from 2025. A single adjustment to a single level. The 80/20 imbalance remains untouched.
Schools are improvising because nobody is leading
Without national guidance, teachers are freelancing. Auckland English teacher Kit Willett found 15-20% of students had misused AI early in 2024. His school reverted to handwritten first drafts, and detected plagiarism dropped to about 5%.
But students adapt faster than the adults. Saint Patrick’s College Wellington head of science Doug Walker runs an AI detector over all computer-based work, only to find that students now physically retype AI-generated responses to avoid detection. Westlake Girls’ High School teacher Susana Tomaz, a UNESCO Fellow on AI in Education, is blunt about the detection arms race: “AI detectors do not work and there’s a lot of false positives.” Hamilton teacher Benny Pan identifies the governance vacuum: “We don’t have national guidance and because we don’t have that, students feel like they can do whatever they want.”
The bigger threat is invisible and perfectly legal
The breach statistics measure detected misconduct. The more corrosive problem is what Tomaz calls “cognitive offloading”: students using AI to produce work without building any underlying understanding. She cites OECD research showing students given unrestricted AI access performed 48% better during practice but 17% worse on independent exams. Performance rises while learning declines.
That is the employer’s problem in a single data point. A student can achieve a strong NCEA result with AI assistance, graduate, and enter the workforce genuinely unable to perform the tasks their qualification implies they can.
Deloitte AI lead Dr Amanda Williamson makes the connection explicit: AI will “validate a wrong answer, reinforce a bad assumption and, by default, tell you your work is excellent when it is not”. The critical skill employers need from young hires, knowing when to trust AI output and when to challenge it, is precisely the instinct that atrophies when AI does the thinking throughout school.
Qualifications that stop signalling capability
Tomaz frames the stakes in economic terms: “If we are serious about lifting productivity, meeting GDP targets, and reducing reliance on imported expertise, AI readiness cannot stop at industry.” She warns that without coordinated national action, AI risks compounding existing inequities, as well-resourced schools redesign assessments while others cannot.
Meanwhile, NZQA is deploying AI for its own processes. From May 2025, automated scoring will handle Year 10 writing assessments, with a trial across 36,000 learners’ work showing 80% agreement with human markers. Klinkum insists AI will act as a team member, not a replacement. That is a defensible position, but it sits awkwardly alongside 59 AI-linked breach investigations in the same year.
Business owners hiring school leavers should be paying close attention. The students entering the workforce over the next two years are the first cohort to have had easy access to generative AI throughout their senior schooling. If NCEA cannot credibly certify what they know, employers will have to test for it themselves. That is an assessment failure being quietly outsourced from the education system to the private sector.
Sources
- NZ Herald: AI-linked breaches contribute to NCEA exam misconduct rise
- NewstalkZB: AI-driven exam breaches surge as schools grapple with cheating
- RNZ: Schools abandon take-home assignments after AI used to cheat
- RNZ: Students using AI to cheat on assessments, teachers warn
- RNZ: AI exam-marking on the way for Year 10 writing tests
- EdTechNZ: Redefining the purpose of education in the age of AI
- NZ Herald: How generative AI is reshaping NZ education