The breach is happening in real time
This is not a hypothetical risk or a future regulatory challenge. It is a present-tense legal violation playing out across thousands of customer interactions every day.
Chapman Tripp partner Tim Williams demonstrated the problem live to over 450 Insurance Brokers Association of New Zealand members, showing how popular AI tools recommended specific life insurance policies and named exchange-traded funds as investment options. His conclusion was blunt: when an AI tool strays into recommending a specific product, designing an investment plan, or providing specific types of financial planning, ‘it is clearly giving investment advice, and that is in breach of New Zealand law as it is currently written.’
No AI chatbot holds a licence as a Financial Advice Provider. Not one.
Josh Daniell, co-founder of open banking platform Akahu, told the NZ Herald in April that AI finance apps are rapidly evolving beyond simple tracking into making recommendations and executing transactions. He acknowledged frankly that ‘a lot of regulation is actually being breached’ as chatbots attempt to personalise recommendations. Insurance and Financial Services Ombudsman Karen Stevens warned in April that AI tools are already hallucinating and giving bad or incorrect advice, making the complaints process more frustrating when reality does not match the expectations AI has raised.
The regulatory gap that made this worse
The timing is terrible. The earlier regulatory accommodation for digital advice, the Financial Advisers (Personalised Digital Advice) Exemption Notice 2018, expired on 31 May 2023. That notice had allowed eight named providers including Sharesies, Koura Wealth, and Kiwi Wealth to operate automated advice under specific conditions. No equivalent framework has replaced it, leaving a vacuum precisely as generative AI has proliferated.
Meanwhile, the regulatory framework around financial advice has tightened. The Code of Professional Conduct for Financial Advice Services Version 2 came into force on 1 November 2025, establishing nine standards covering suitability, client understanding, and information protection. There is no AI carve-out. The CoFI regime came into force on 31 March 2025, adding fair conduct obligations. The FMA’s enforcement record shows it will use these powers: Westpac was ordered to pay $3.25 million and AA Insurance $6.175 million in penalties for fair dealing breaches in the 2024/25 year.
It is not just fintechs that are exposed
The exposure extends well beyond robo-advisers. Any business using an AI chatbot, product comparison tool, lead-generation widget, or customer communication system that touches financial products is potentially in scope. Insurance, KiwiSaver, mortgages, investments: if your AI tool moves beyond factual information into product recommendations, you have a problem.
IBANZ CEO Katherine Wilson noted in December 2025 that ‘the advice available via AI tools can be of questionable quality’ and raised concerns about ‘potential harm inaccurate or misleading advice could cause.’ Insurance is particularly dangerous territory. Policy definitions, exclusions, and underwriting criteria are rarely straightforward. An AI tool that oversimplifies a policy recommendation creates a claims dispute waiting to happen.
For licensed Financial Advice Providers already using AI in their practice, Williams warned they need to demonstrate reasonable reliance and must depersonalise client data before feeding it to AI systems. Failure to do so could breach both the FMCA and the updated Code of Professional Conduct.
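In practice, a depersonalisation step can be as simple as a redaction pass that strips recognisable identifiers before client text is sent to an external AI service. A minimal illustrative sketch in Python follows; the patterns and placeholder labels are assumptions for illustration, not a vetted compliance control, and a real deployment would use a reviewed PII-detection tool rather than a regex list:

```python
import re

# Illustrative patterns only -- a production depersonalisation step
# would need a vetted PII-detection library and legal sign-off.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "nz_phone": re.compile(r"\b(?:\+64|0)[\s-]?\d{1,2}[\s-]?\d{3}[\s-]?\d{3,4}\b"),
    "ird_number": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),
}

def depersonalise(text: str) -> str:
    """Replace recognisable identifiers with typed placeholders
    before the text leaves the firm's systems."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(depersonalise("Client jane.doe@example.com, IRD 12-345-678, ph 021 555 1234"))
```

The placeholders keep the text useful for the AI tool (it still knows an email address or IRD number was present) while removing the identifying values themselves.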
‘The algorithm did it’ will not save you
MBIE’s Responsible AI Guidance, issued in July 2025, is voluntary but unambiguous: businesses cannot rely on ‘the algorithm did it’ as a defence and must maintain human oversight of AI-generated outputs. Buddle Findlay’s January 2026 analysis noted that 82 to 87 percent of Kiwi organisations use AI but only 34 percent of New Zealanders trust it. That trust deficit will amplify the reputational damage when a high-profile AI advice failure reaches the courts. As the firm warned, ‘It could end up costing significantly more in lost business and reputation-building afterwards than just the cost of the lawsuit.’
The FMA’s current posture is engagement-first. In a May 2025 speech to KiwiSaver providers, Executive Director Kari Jones acknowledged AI’s potential but asked pointed questions about governance: ‘How do you explain decisions when the logic isn’t always linear? How do you audit AI-generated language or imagery?’ The FMA has established a regulatory sandbox pilot and pointed to the UK’s FCA as a model. But the FMA’s 2024 FAP Industry Snapshot shows only 36 FAPs were providing licensed digital advice to 86,519 retail clients. The gap between that licensed market and the actual volume of AI-driven financial interactions is enormous.
What to do before the enforcement catches up
The practical steps are not complicated. Audit every customer-facing AI tool that touches financial products. If any of them move beyond factual information into recommendations, you are likely in breach today. If you are a licensed FAP, document your human oversight processes now. And watch the FMA’s enforcement activity closely. The regulator’s preference for industry-led governance is a window, not a permanent position. The penalties for getting this wrong already run into the millions, and that is before the reputational cost of becoming the test case.
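One way to operationalise the audit step is a guardrail that intercepts chatbot outputs drifting from factual information into recommendation language. The sketch below is deliberately crude and purely illustrative: the trigger phrases, function names, and product name are assumptions, and keyword matching is nowhere near a sufficient compliance control on its own; it only shows the shape of the human-escalation pattern the MBIE guidance points towards.

```python
# Hypothetical guardrail: flag AI-generated replies that read as a
# personalised product recommendation rather than factual information.
# Keyword matching is a crude illustration, not a compliance control.
RECOMMENDATION_TRIGGERS = [
    "you should buy", "i recommend", "best policy for you",
    "best fund for you", "switch your kiwisaver", "invest in",
]

def looks_like_advice(reply: str) -> bool:
    lowered = reply.lower()
    return any(trigger in lowered for trigger in RECOMMENDATION_TRIGGERS)

def guard(reply: str) -> str:
    """Route anything resembling a recommendation to a human adviser;
    pass purely factual answers through unchanged."""
    if looks_like_advice(reply):
        return ("I can't recommend specific products. "
                "A licensed financial adviser can help with that.")
    return reply

# "Acme Growth Fund" is an invented example product.
print(guard("You should buy the Acme Growth Fund."))
print(guard("KiwiSaver is a voluntary savings scheme."))
```

The design point is less the filter than the routing: flagged outputs go to a human rather than the customer, which is the oversight record a licensed FAP would want to be able to produce.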
Sources
- Good Returns: AI financial advice maybe breaching the law (2025-12-11)
- Covernote: Warning raised about unregulated AI financial advice (2026-03)
- NZ Herald: AI finance apps set to disrupt advice sector and automate money decisions (2026-04-20)
- RNZ: Warnings AI tools could be giving out bad or incorrect advice (2026-04-16)
- FMA Annual Report 2024/25 (2025-06-30)
- Code of Professional Conduct for Financial Advice Services 2025 (2025-11-01)
- FMA: Speech to KiwiSaver Providers on AI (2025-05-29)
- FMA: FAP Industry Snapshot 2024 (2024)