The Ethical Dilemma of AI from a NZ perspective
- wisebizcounsel
- Jun 13
- 2 min read
Updated: Jun 13

AI is no longer a futuristic buzzword—it’s rapidly becoming core to how New Zealand businesses operate. From chatbots and analytics to recruitment tools and customer profiling, artificial intelligence is driving efficiency and transformation across sectors. But as adoption accelerates, so do the ethical dilemmas.
At its core, AI promises productivity. It analyses vast data sets, automates decisions, and predicts outcomes in ways no human team could match. But the technology isn’t neutral. It reflects the data it’s trained on—and when that data carries bias, the output can too.
International examples are telling. Hiring algorithms that overlook candidates based on gender. Credit scoring tools that favour certain postcodes. Marketing systems that profile consumers along questionable lines. These aren’t glitches—they’re structural issues. And the same risks apply here in Aotearoa, especially as Kiwi firms adopt offshore solutions without much local context or oversight.
The other challenge? Transparency. Many AI systems are black boxes—even their creators can’t fully explain how decisions are made. For businesses, that’s a potential compliance and reputational risk. What happens when a customer is denied a service, or a job applicant is filtered out, and no one can explain why?
Then there’s the workforce dilemma. AI is shifting the nature of work. Repetitive tasks are being automated, and white-collar roles aren’t immune. That’s an obvious threat to job stability if we don’t manage the transition well. For those of us in the business advisory space, it is propelling us to operate higher in the value chain.
Anecdotally, in the current economic climate many SME businesses are just looking to survive, so the focus is less on driving AI technology and more on responding to what’s happening upstream in corporate NZ and internationally. But as the economy turns around and funds free up, there will be a solid focus on AI solutions, driven by fear of missing out. And decisions made in haste are seldom fit for purpose or future-proof.
For those businesses already adopting AI: are they thinking ahead, or just chasing short-term savings?
And perhaps the most pressing issue: accountability. If an AI tool makes a flawed decision, who’s responsible? The vendor? The business? The developer? In a landscape with limited regulation, the downside risk is significant and the ability to hold a party to account virtually impossible.
New Zealand businesses have long stood out for integrity, trust, and transparency. The question now is whether we can bring those values into how we build and deploy AI. Doing so isn’t just ethically right—it’s commercially smart.
We’re at a tipping point. AI offers real benefits, but it also requires real responsibility.