The real fear behind AI at work isn’t job loss – it’s trust

Posted October 9, 2025

AI isn’t just changing how we work; it’s changing how people feel about work.

Our latest AI survey revealed that while one in four professionals worry about job displacement, most concerns around AI go far deeper than that:

  • 60% are worried about ethical or compliance risks
  • 58% fear loss of human oversight
  • 57% are concerned about inaccuracy or hallucinations
  • 31% say integration is a challenge

What this tells us is that people aren’t just worried about being replaced by AI; many are concerned that the people running it don’t fully understand the risks.

Why workers are nervous

“You can’t bury your head in the sand. AI is affecting workflows and job design, and people are understandably unsure where they fit,” says JP Browne, Practice Manager at Talent Auckland.

Everywhere you look, there are bold statements about how AI will transform everything, but in the real world, most employees are being left in the dark. Are they allowed to use ChatGPT? Are their roles changing? Will AI make their jobs harder, not easier?

This lack of communication creates fear and can drive resistance among teams, potentially stalling AI adoption.

It’s bigger than just job loss

Jack Jorgensen, General Manager of Data, AI & Innovation at our IT project delivery arm, Avec, reassures, “We’re not seeing mass displacement. We’re seeing evolution. The risk is overstated but the change is real.”

It’s true that repetitive, manual, and rules-based work will go, but for most knowledge workers, the shift is about augmentation rather than replacement.

Still, that doesn’t mean people feel safe. As JP puts it: “The fear I’m seeing isn’t ‘I’ll lose my job’, it’s ‘I don’t understand this tech, and I don’t trust how it’s being used.’”

Ethics, oversight and deep uncertainty

One of the biggest risks leaders underestimate? The hidden ethics of AI.

  • Is your model biased?
  • Was your training data ethically sourced?
  • Can a customer tell when they’re dealing with a bot?
  • What happens when a mistake causes harm?

JP shares, “The ethics piece is huge. Especially in sectors like insurance.” And Jack echoes, “No one wants to end up on the front page because a bot denied someone’s surgery.”

Governments have been slow to regulate, which means ethical responsibility falls on individual organisations, and most aren’t ready.

The more we automate, the more human oversight will matter. Organisations will need people with critical thinking skills, not just the ability to engineer prompts.

“There was a company that deployed an AI-generated software stack. It looked great until it leaked 700,000 passports. That’s not innovation, that’s negligence,” shares Jack. Trust, transparency, and responsibility are necessary considerations for your AI strategy.

What leaders can do now

  • Involve your people early in decisions around tooling, automation, and processes
  • Invest in ethics and risk literacy, not just tech skills
  • Ensure humans are in the loop, especially where decisions affect people’s lives

According to JP, “You don’t have to be a guru. But you can’t bury your head in the sand. AI is different from anything we’ve experienced before.”

If your team doesn’t trust how AI is being used, they’ll resist it, avoid it, or, worse, use it without telling you. Successful AI implementation requires building buy-in, not fear.

Find out what else our AI survey revealed by accessing the full report.

If you’re looking to build internal AI capability or make your first AI hire, get in touch with our team.

Or if your business is ready to kick off a data, AI or innovation project, drop a message to Jack’s team at Avec.