The hidden risks of AI: Why ethics can’t be an afterthought
When most leaders talk about AI, the conversation is about productivity, cost savings, and innovation. But there’s a blind spot that can’t be ignored: ethics.
As JP Browne, Practice Manager at our Talent Auckland office, who has worked extensively in the insurance sector, warns:
“Nobody wants to end up on the front page because an AI system made the wrong call on a claim. That’s the kind of reputational damage you can’t come back from.”
Yet in many industries, ownership of AI ethics is missing. Governments are slow to legislate, and individual organisations are left to figure it out for themselves. The result? Huge risks hiding in plain sight.
The illusion of control
AI doesn’t just introduce new capabilities; it introduces new vulnerabilities. Jack Jorgensen, General Manager of Data, AI & Innovation at our project delivery arm Avec, highlights one recent example:
“A company built an entire software stack using AI-generated code. When their system was breached, 800,000 passports were leaked. That’s not innovation, that’s negligence.”
The rush to cut costs or speed up delivery often skips over the basics: security audits, human oversight, and clear accountability. Without these safeguards, AI can create more problems than it solves.
Ethics is more than compliance
Many organisations treat AI risks as a compliance issue: tick the right boxes and you’re safe. But as JP points out, ethics goes much deeper.
“In finance and insurance, compliance is the easy part. The harder part is asking whether it’s ethical to let AI decide someone’s mortgage, surgery, or claim outcome. Nobody wants to trust their future to a black box.”
The ethical stakes are high. And unlike sweatshops or environmental practices, consumers can’t easily “see” how companies are using AI. That makes transparency essential.
Jack even suggests that organisations should disclose their AI use openly:
“Imagine a badge on a company’s website saying how much of their service is powered by AI. That level of transparency builds trust and gives consumers real choice.”
The risks you’re probably missing
So, what are the hidden risks? Our recent AI survey surfaced three that too many leaders underestimate:
- Security breaches. AI-generated code and automated systems can introduce new vulnerabilities, often unnoticed until it’s too late.
- Bias and fairness. Algorithms trained on flawed data can reinforce discrimination in processes such as hiring, lending, and claims processing.
- Reputational damage. Whether it’s unfair exam results (like the UK’s failed exam grading algorithm) or customer data leaks, public trust can vanish overnight.
As Jack notes, “The hype around AI can drown out the noise. But the reality is, these risks are already here and they’re escalating.”
Why leadership matters
The absence of clear ownership is one of the biggest barriers to managing AI risk. In many organisations, executives are excited about AI but pass the responsibility to IT. That’s not enough.
AI ethics requires leadership at the top. It means asking:
- Who is accountable for AI decision-making?
- How transparent are we willing to be with customers?
- What safeguards are we putting in place to avoid harm?
Without executive buy-in, ethics gets sidelined until a crisis forces the issue.
From risk to responsibility
Ethics isn’t about slowing down innovation. It’s about ensuring innovation doesn’t destroy trust. Businesses that lead on AI ethics will stand out not just for their technology, but for their credibility.
JP sums it up well:
“AI is in everything now, from your phone updates to the way companies deliver services. If you don’t set ethical guardrails, you’re leaving your organisation and your customers exposed.”
AI ethics isn’t optional. The risks are real, the costs are high, and the responsibility is yours. Organisations that embrace transparency and accountability now will be the ones consumers trust tomorrow.
Learn more about professionals’ other concerns around AI in the workplace in our latest report.
If you’re looking to start a new AI or data project, get in touch with Jack’s team to ensure it’s built on a secure and ethical foundation.