The New Zealand government has introduced a new set of guidelines aimed at ensuring the responsible and transparent use of artificial intelligence (AI) within the public sector. Released on February 3, 2025, the Responsible AI Guidance for the Public Service outlines principles and best practices to help government agencies safely integrate AI while maintaining high standards of trust and accountability.
Framework for Responsible AI Use
The newly launched guidelines, developed by the Government Chief Digital Officer (GCDO), provide a structured approach to AI implementation across government operations. They emphasise five core principles:
- Inclusive and Sustainable Development – Ensuring AI benefits all communities and does not widen social or economic disparities.
- Human-Centred Values – Prioritising fairness, privacy, and ethical AI use.
- Transparency and Explainability – Making AI-driven decisions understandable and accessible to the public.
- Safety and Security – Preventing AI misuse, data breaches, and cybersecurity threats.
- Accountability – Clearly assigning responsibility for AI-related decisions and ensuring human oversight.
The framework is non-binding but serves as a key reference for agencies to ensure AI deployment aligns with legal and ethical obligations. It also reinforces compliance with existing laws, including the Privacy Act, the Bill of Rights Act, and the Human Rights Act.
AI Adoption in Government Services
AI is already in use across various government agencies, with a recent cross-agency survey revealing that 37 out of 50 public sector organisations have implemented AI solutions in some capacity. According to the GCDO, there are at least 108 distinct AI use cases, primarily aimed at improving efficiency and service delivery.
Common AI applications in government include:
- Chatbots and Virtual Assistants – Helping users navigate public services.
- Automated Data Analysis – Summarising documents, analysing sentiment, and detecting fraud.
- Digital Detection and Imaging – AI-powered image analysis and retinal scanning for medical purposes.
- Transcription Services – Automating meeting notes and legal hearing summaries.
Challenges remain, however. The biggest barriers to safe AI adoption within agencies include skills gaps, a lack of clear policies, and concerns over AI accuracy.
Managing Risks: Disinformation, Bias, and Ethical Oversight
The government has acknowledged that AI “hallucinations”—when AI generates false or misleading information—pose a significant risk. To address this, agencies are advised to use high-quality data and explicitly instruct AI models to state “I do not know” when uncertain.
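As a minimal illustration of the second mitigation, the sketch below shows how an agency might wrap a user question in a system prompt that tells a model to decline rather than guess. The function name and prompt wording are hypothetical examples for illustration only, not text drawn from the guidance itself.

```python
# Hypothetical sketch: building a chat-style prompt that instructs a model
# to reply "I do not know" instead of guessing when it is uncertain.
# The prompt wording and function name are illustrative, not taken from
# the New Zealand guidance.

def build_messages(question: str) -> list[dict]:
    """Return a chat-style message list that can be passed to an LLM API."""
    system_prompt = (
        "Answer only from the information you are given. "
        "If you are not certain of the answer, reply exactly: 'I do not know.'"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

if __name__ == "__main__":
    # Example usage: print the messages that would be sent to a chat model.
    for message in build_messages("What is the fee for a passport renewal?"):
        print(message)
```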
Bias and discrimination in AI decision-making are also key concerns. The guidance emphasises the need for diverse teams working on AI development to minimise unfair outcomes. Additionally, an expert advisory panel and a community of practice are being established to support agencies in navigating AI-related challenges.
Future of AI Regulation in New Zealand
The Public Service AI Framework, which underpins these guidelines, is part of New Zealand’s developing National AI Strategy. The government is also working with the Ministry of Business, Innovation and Employment (MBIE) to create a similar framework for private sector AI use.
New Zealand’s approach aligns with international AI governance efforts, including the OECD AI Principles, which emphasise transparency, accountability, and ethical AI use. Officials believe these measures will position the country as a global leader in responsible AI governance.
“AI systems are evolving rapidly, and government policies, guidance, and use cases will continue to adapt alongside these advancements and public expectations,” said Judith Collins, Minister for Digitising Government.