The window just closed
The gap between a vulnerability being disclosed and being exploited in the wild has collapsed to as little as 24 hours. That is not a theoretical projection. It is what Flashpoint’s threat intelligence team documented in April, alongside a 1,500% surge in illicit AI-related discussions on criminal forums between November and December 2025.
Ian Gray, Vice President of Intelligence at Flashpoint, put the mechanism plainly: “Tasks like analyzing large codebases or identifying exploitable weaknesses, which previously required significant time and expertise, can now be done faster and at greater scale.”
For a New Zealand business patching on a monthly cycle, this is not a gap in best practice. It is an open door.
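The arithmetic behind that claim is worth making explicit. A minimal sketch, using illustrative numbers only (a 30-day patch cycle against the 24-hour exploitation lag Flashpoint documented; neither figure is a property of any specific business):

```python
# Illustrative numbers only: a 30-day patch cycle against the
# 24-hour exploitation lag reported by Flashpoint.
PATCH_CYCLE_DAYS = 30
EXPLOIT_LAG_DAYS = 1

def worst_case_exposure_days(patch_cycle_days: int, exploit_lag_days: int) -> int:
    """Days a flaw can sit exploitable before the next scheduled patch run.

    A vulnerability disclosed just after a patch run waits nearly a full
    cycle for the fix; attackers need only exploit_lag_days of that window.
    """
    return max(patch_cycle_days - exploit_lag_days, 0)

print(worst_case_exposure_days(PATCH_CYCLE_DAYS, EXPLOIT_LAG_DAYS))  # 29
```

Twenty-nine days in which the attacker has a working exploit and the business is still waiting for its next maintenance window.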
Google is watching attackers use its own tools
Google’s Threat Intelligence team confirmed in late April that AI-powered attacks have moved from experimental to operational. Attackers are using AI for scale, speed, and sophistication simultaneously. One China-based actor was caught attempting to use Google’s own Gemini model for attack planning, framing malicious queries as a capture-the-flag exercise.
Sandra Joyce, VP of Google Threat Intelligence, was direct: “For organisations that have not patched and have not done very good patch management, they’re going to have a real problem as these tools become better and better at scanning for vulnerabilities.”
Google responded by deploying its own agentic AI capabilities for dark web analysis, reporting 98% accuracy where traditional methods produced a 90% false positive rate. The defenders are tooling up. The question is whether businesses are keeping pace.
Zero-day chaining broke four assumptions at once
NCC Group CEO Mike Maddison described the capabilities of Anthropic's Claude Mythos in April as a step-change in the cyber risk landscape: “We’ve seen clear evidence that AI can identify, chain and exploit zero-day vulnerabilities across major operating systems and browsers.”
He listed four assumptions now broken. Code written decades ago is potentially exploitable by AI. Vulnerability discovery is no longer constrained by human review cycles. Responsible disclosure timelines designed around human research no longer reflect reality. And the accepted risk window to address vulnerabilities has shrunk dramatically.
The seriousness registered at the highest levels. The US Treasury Secretary and Federal Reserve Chair convened an urgent meeting with Wall Street leaders after being briefed on what Mythos could do.
New Zealand’s numbers are already bad and getting worse
The NCSC’s 2025 Cyber Threat Report recorded 5,995 incidents in 2024/25 with direct financial losses of $26.9 million. Criminally motivated incidents of national significance more than doubled to 137. The quarterly trajectory is steeper: Patrick Sharp, General Manager of Aura Information Security at Kordia, cited $12.4 million in direct financial loss in Q3 2025 alone, up 118% from the previous quarter.
The share of NZ cyber-attacks exploiting AI vulnerabilities rose from 6% in 2024 to 14% in 2025. State-sponsored actors including Salt Typhoon have been documented operating in NZ telecommunications infrastructure.
Most firms are still running a pre-AI defence model
In 2025, Kordia surveyed 295 NZ businesses with 50-plus employees and found 59% had been subjected to a cyber-attack or incident. The defensive posture was alarming: 67% had not performed a penetration test in the past 12 months, 20% did not monitor or log network activity, and 26% had no cyber security awareness programme.
HackerOne’s research found 89% of organisations lack comprehensive testing for AI systems. Those operating without it face 70% higher annual remediation costs. Security researcher Luke Stephens was blunt: “These aren’t sandboxed toys. They’re hooked into real data, real APIs, real decision-making. When something goes wrong, it doesn’t stay contained.”
Meanwhile, Tenable found 82% of organisations run workloads with critical CVEs that are already being actively exploited, and 65% have “ghost secrets”: unused credentials with administrative privileges left unrotated in cloud environments.
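The kind of check Tenable’s finding implies is not sophisticated. A minimal sketch, with an invented in-memory credential inventory standing in for a cloud provider’s IAM API, and an assumed 90-day rotation threshold:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: (name, last_rotated, is_admin, in_use).
# In practice this data would come from your cloud provider's IAM API.
CREDENTIALS = [
    ("deploy-bot-key", datetime(2024, 1, 5, tzinfo=timezone.utc), True, False),
    ("ci-runner-key", datetime(2026, 1, 20, tzinfo=timezone.utc), False, True),
]

MAX_AGE = timedelta(days=90)  # assumed rotation policy, not a standard

def ghost_secrets(creds, now):
    """Flag unused admin credentials not rotated within MAX_AGE."""
    return [
        name
        for name, last_rotated, is_admin, in_use in creds
        if is_admin and not in_use and now - last_rotated > MAX_AGE
    ]

print(ghost_secrets(CREDENTIALS, datetime(2026, 2, 1, tzinfo=timezone.utc)))
# → ['deploy-bot-key']
```

That two-thirds of surveyed organisations fail a test this simple is the point: the exposure is not exotic, it is unexamined.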
This is now a governance question, not an IT budget line
The era of treating cyber security as something the IT team handles between helpdesk tickets is over. When the exploitation window is 24 hours, patch management is a board metric. When 24% of NZ businesses rank staff misuse of AI among their top three cyber security challenges, shadow AI is a data governance crisis. When NZ’s legislative framework lags peer jurisdictions, businesses cannot rely on regulation to set a minimum standard.
The NCSC’s report is explicitly addressed to leaders making strategic decisions. Boards that still treat cyber as a cost centre rather than a risk function are betting the business on attackers being slower than their patching cycle. That bet no longer pays.
Sources
- Google flags urgency as AI reshapes cyber threats (2026-04-28)
- AI tools widen cyber attack threat, Flashpoint warns (2026-04-25)
- AI vulnerability discovery forces boards to rethink cyber risk (2026-04-22)
- Threat of AI cyber-attacks a top concern for NZ businesses (2025-03-10)
- Shadow AI misuse emerges as key cyber threat in NZ (2026-03-10)
- HackerOne warns of widening AI security and testing gap (2026-03-13)
- Tenable warns AI outpacing security, widening risk gap (2026-02-20)