Selling houses from a war zone
On the morning of Anzac Day, Donna Hewitt of Connect Realty Queenstown posted an AI-generated image to social media showing herself in military gear alongside US President Donald Trump, military helicopters, and a war-zone backdrop. A sold sign bearing her branding completed the scene. The accompanying text claimed she had sold six properties “in the midst of the Middle East conflict”.
The post stayed up for several hours before being deleted. Online reaction was brutal, with Reddit users calling it “colossally stupid” and “so cringe”. Hewitt apologised, saying the image had been “shared among other AI-generated content from unknown sources” and describing it as a “total oversight”.
That last phrase is the real story. Not the tastelessness, not the timing. The fact that an AI-generated image depicting a real person alongside a sitting US president, set in a fictional war zone, was published without anyone reviewing it first.
87% adoption, 74% with no plan
Hewitt is not an outlier. She is the visible edge of a structural gap between AI adoption and AI governance across New Zealand business.
According to Datacom’s 2026 survey, 87% of NZ organisations now use some form of AI, with 91% reporting efficiency gains. The technology is mainstream. The guardrails are not. A Microsoft survey found 74% of NZ leaders worry their organisation lacks an AI plan and vision.
The government’s July 2025 AI Strategy acknowledged the knowledge gap, citing a finding that only 28% of businesses had a good understanding of the legal and ethical implications of AI. Its response was deliberately light-touch: no new prescriptive regulation, reliance on existing frameworks like the Privacy Act and Fair Trading Act, and voluntary principles.
That approach only works if businesses actually self-govern. A war-zone marketing post suggests the honour system has limits.
The regulator warned this sector specifically
What makes the Hewitt case particularly sharp is that real estate had already been told. In November 2024, the Real Estate Authority issued formal generative AI guidance to the sector. Then-Chief Executive and Registrar Belinda Moffat stated that “careful human oversight is vital to enabling the safe and responsible use of Gen AI in real estate agency work.”
The guidance was explicit: under the Real Estate Agents Act 2008, licensees remain accountable for the services they provide regardless of what tools they use. It called out the need to check the accuracy and completeness of AI-generated information, and to consider privacy and data protection risks. A fabricated war-zone image posted without review sits well outside that standard.
One agent’s mistake becomes every agency’s problem
Research from AUT published in Business Horizons and reported by Management Magazine in December 2025 introduces a concept that should concern every firm using AI for marketing: the spillover crisis.
Professor Dan Laufer, head of AUT’s School of Communication Studies, found that the public assumes all organisations in a category behave similarly. When Sports Illustrated used fake AI-generated author profiles, the scrutiny hit all of journalism. The same contagion logic applies here. When one Queenstown agent posts AI war-zone imagery to sell houses, every other agency using AI inherits a fraction of that reputational damage.
Laufer’s recommended defence: “a clear stance on Gen AI use and being willing to disclose whether the organisation uses tools involved in the crisis.” Quick, unequivocal statements are what distance organisations from the blast radius.
This is not the first time AI sloppiness has surfaced in New Zealand. In April 2025, ACT was embarrassed when AI-generated stock images, including a dog with human teeth, appeared in political advertising. In February 2026, at least 10 Facebook pages were found pushing AI-generated “news” to thousands of New Zealanders, including a fabricated video of a real landslide victim. That same month, New Zealand’s Privacy Commissioner co-signed a joint statement with over 50 international agencies expressing concern about AI-generated imagery depicting identifiable individuals without consent.
The pattern is consistent: the tools are cheap, fast, and available to anyone. The judgment required to use them responsibly is not keeping pace.
What a basic framework actually looks like
For a small business, preventing this kind of incident does not require a compliance department. It requires five things:
- a written policy on which AI tools can be used, and for what
- mandatory human review before any AI-generated content goes public
- clear accountability for who approves marketing output
- a disclosure standard for clients
- a crisis response template for when something goes wrong
None of those are expensive. All of them would have stopped the Queenstown post before it went live. The government has chosen not to regulate this space prescriptively, betting that businesses will fill the gap themselves. Every incident like this one tests that bet, and so far the results are not encouraging.
Sources
- Queenstown real estate agent apologises for AI war zone marketing post | NZ Herald (2026-04-25)
- Post ‘inappropriate, insensitive’ | Otago Daily Times (2026-04-25)
- AI Business Statistics New Zealand 2026 | OpenClaws (2026-04-15)
- Accountability key in Gen AI guidance issued by REA to real estate sector | REA (2024-11-12)
- AI crisis contagion new risk for NZ organisations | Management Magazine (2025-12-10)
- New Zealand’s AI Strategy: Investing with Confidence | Beehive (2025-07)
- AI-generated ‘news’ pages on social media misleading thousands of Kiwis | 1News (2026-02-09)
- Joint statement on AI Generated Imagery | Office of the Privacy Commissioner (2026-02-23)
- Use of AI imagery in ads not misleading, MP says | Otago Daily Times (2025-04-19)