A general election will take place later this year, and, as in many democracies, political parties in New Zealand are beginning to experiment with AI as part of their campaign strategies.
As AI becomes more widely available, concerns about misinformation have been raised repeatedly. However, its rapid adoption is also highlighting how existing rules can struggle to adapt to new technologies, raising the question of whether regulation can realistically keep pace without stifling innovation and political expression.
How does politics happen in the age of artificial intelligence?
Across social media platforms such as Facebook, there have been reports of fabricated content, including AI-generated videos that appear to show political figures in staged or misleading scenarios.
At the same time, political parties are also experimenting with AI in campaigning. Some have used AI-generated imagery and content in political messaging, including satirical or critical depictions of opponents. While such tactics have attracted attention, they also reflect a broader global trend where digital tools are increasingly part of normal campaign communication.
Views differ on how far AI should be used in elections. Some see it as a natural extension of modern campaigning and free expression, while others argue it raises fairness concerns. In practice, however, political competition has always involved persuasive messaging, and new tools simply change the format rather than the underlying reality.
Political advertising has long included sharp criticism and negative campaigning. What has changed is the cost and speed of production. AI now allows individuals and organisations to create visual and written content quickly and cheaply, lowering barriers to entry for political communication.
This democratisation of content creation also raises important questions. Third-party groups, activists, and even foreign actors could potentially use these tools during election periods. However, it is also worth noting that attempts to influence public opinion are not new, and voters today have access to more information and fact-checking tools than ever before.
AI may also influence how voters interpret political messaging. A study published in January 2026 suggests that people can be influenced by synthetic or manipulated imagery, even when they are aware of its artificial nature. This highlights the importance of media literacy and critical thinking in a digital environment where content is abundant and not always authentic.
Where NZ election law stands
New Zealand already has a framework governing election campaigning. Existing laws define election advertisements broadly and require promoter statements, spending limits, and consent rules for material referencing candidates or parties.
These rules were designed in a pre-AI era, and while they still apply, they were not built with modern generative technologies in mind. Importantly, the law focuses more on transparency and process than on restricting content itself, which reflects a long-standing commitment to free political communication.
The law does include limited provisions addressing false statements, particularly close to election day, as well as general prohibitions on undue influence. However, these provisions were not written with digital manipulation technologies in mind and have seen limited enforcement in recent decades.
Adapting without overcorrecting
There is ongoing debate about whether new rules are necessary as AI continues to evolve. Some argue for measures such as disclosure requirements for AI-generated content in political advertising, while others caution that additional regulation could create compliance burdens and risk limiting legitimate political speech.
Internationally, different approaches are emerging. The European Union and several US jurisdictions have introduced specific rules around synthetic media in elections, while countries such as Australia are focusing on misinformation related to electoral processes.
New Zealand will need to consider its own path carefully. Any response should balance transparency and fairness with the need to preserve free expression and avoid unnecessary regulatory expansion that could have unintended consequences.