April 8, 2026

Simon Power’s face is now fraud infrastructure for stock scammers


Your brand is now fraud infrastructure

Fisher Funds chief executive Simon Power, a former National MP, recently discovered his face was being used in paid social media ads promoting fake stock tips. The ads carried the logos of Sharesies and BNZ for good measure. A Fisher Funds spokesperson confirmed the ads were fake, using “doctored images of trusted New Zealand investment figures to give scams a veneer of credibility.”

Power is not alone. Sir John Key, Mike Hosking, and journalist Paula Penfold have all had their identities hijacked for the same purpose. The Financial Markets Authority’s Executive Director of Licensing and Conduct Supervision, Clare Bolingford, has warned that impersonating business leaders and commentators is now a standard feature of investment fraud, including “deepfake videos that look realistic.”

This is no longer a consumer protection story. It is a corporate exposure problem that most New Zealand businesses have not priced in.

The scam machine runs like a startup

The operations behind these scams are structured, multi-stage, and increasingly professional. Research from BrokerListings.com traces the victim journey from a paid social media ad featuring a deepfaked public figure, through a WhatsApp group, to a convincing fake brokerage platform whose dashboards display fabricated returns. When victims try to withdraw, they are told fees must be paid first. As Bolingford warned: “Even if these fees are paid, no money is received.”

Christian Harris, author of the BrokerListings.com report, frames it bluntly: “Celebrity investment scams are no longer simple email frauds – they are structured, multi-stage operations designed to mimic regulated investment platforms.”

AI has collapsed the cost of running these operations. World Economic Forum analysis citing Anthropic research found a single AI-assisted threat actor can now automate 80-90% of an attack with only sporadic human intervention. The grammar-check-and-gut-feel defences most people rely on are useless against content that is grammatically perfect and calibrated to appear plausible.

The numbers are ugly

Netsafe’s State of Scams in New Zealand 2025 report found the average New Zealander now faces 152 scam encounters annually. Almost three-quarters of adults experienced a scam in the past year, and those affected were victimised an average of 2.3 times. Roughly a quarter lost money.

For businesses, the hit rate is worse. A Mastercard report cited by the FMA found 47% of New Zealand firms targeted by deepfake scams admitted they fell for them. The impersonated individuals include employees, CEOs, board members, and even law enforcement officers.

And the evolution is not slowing. F-Secure analyst Megan Squire has identified AI influencer personas that build parasocial trust over weeks before directing audiences toward fraud. As Squire puts it: “AI influencers don’t just blur the line between advertising and identity; they can give scammers a reusable front for manufacturing trust at speed and scale.”

Platforms profit while regulators watch

The uncomfortable truth is that the advertising infrastructure powering these scams runs on major social platforms, and those platforms are making money from it. Reuters has reported that Meta internally projected about US$16 billion of its 2025 revenue could come from ads linked to scams or banned goods. The incentive to police ad quality aggressively is, to put it mildly, limited.

The FMA’s tools are largely reactive. Its March 2025 briefing to the incoming Minister identifies scams as a key policy programme being developed with MBIE, but the regulator cannot compel Meta or Google to remove scam ads at speed. It has no direct jurisdiction over the platforms hosting the fraud infrastructure. Policy development moves in months and years. Scam operations scale in days.

The enforcement gap shows in the data. Seven in ten New Zealanders have reported scams, but 33% say no action was taken and over 40% of victims could not recover any money.

The FMA had already warned in late 2023 about scammers forging formal product disclosure statements to impersonate legitimate investment products. That vector has since expanded to include deepfake video, AI personas, and fabricated news articles. Each layer adds credibility.

What this means for every firm with a brand worth stealing

The question for New Zealand business owners is not whether they will be scammed. It is whether their brand, their executives’ faces, or their company logos will be used as props in someone else’s fraud operation, and what they plan to do about it.

Firms should assume their public-facing leaders are targets. Internal deepfake awareness training is no longer optional. Payment verification processes need to account for the possibility that the person on the video call is not real. And industry bodies should be putting serious pressure on platform companies to implement meaningful ad verification, not just issuing warnings after the damage is done.

Until the platforms face real financial consequences for profiting from fraud, the economics of this problem point in one direction only: more scams, better scams, faster.
