AI is a security risk and that’s why smart businesses are cautious

Posted October 15, 2025

It’s less about fearmongering and more about smart risk management.

In our recent AI survey, 46.2% of respondents named “security and compliance concerns” as the biggest barrier preventing wider AI use, and our experts say they’re absolutely right to be cautious.

“If I could rate that stat above 100%, I would. Security and compliance should be front of mind. Full stop,” says Jack Jorgensen, General Manager of Data, AI & Innovation at Avec, our project delivery arm.

AI is unlike any other tech shift we’ve seen. It’s fast-moving, largely unregulated, and capable of generating unexpected and sometimes dangerous outputs. And when sensitive company data is involved, that’s not a risk you can afford to take lightly.

The tools are already inside the business

If you haven’t formally adopted AI, your people probably already have.

  • 38.3% of respondents said their organisations currently have restrictions or policies limiting AI use.
  • But 28.9% said AI tools like ChatGPT are being used with minimal control or governance.
  • And 8.9% said there are no policies at all.

For data-heavy, regulated environments like financial services, insurance, or government, that’s a recipe for disaster.

“The usage of AI is prolific in every single organisation. It kind of just happened, and now execs are scrambling to catch up,” says our recruitment expert JP Browne, Practice Manager at our Talent office in Auckland.

Real-world fails: AI gone rogue

We’re already seeing examples of AI being used recklessly:

  • A major NZ business uploaded their full CRM into ChatGPT to “get customer insights”
  • A software platform built entirely on AI-generated code suffered a data breach that exposed 700,000 passport records
  • Deepfakes and synthetic media are being weaponised, and legal systems haven’t caught up

“It’s such a fast-moving beast. You can make a critical mistake without even knowing you’ve made it,” says JP. Caution around AI isn’t about shutting adoption down; it’s about finding the safest way to say yes.

Why the AI risk is so unique

AI security isn’t just an infrastructure problem. The risks cut across several fronts:

  • Data exposure: What are your staff putting into AI tools?
  • Model misuse: Can someone prompt the system into granting access or producing misinformation?
  • Compliance blind spots: Are you meeting your industry’s regulatory requirements?
  • Auditability: Can you trace how the system reached a decision?

According to Jack, “We currently don’t know what the future holds in security breaches and attack vectors. The more people thinking about this, the better.”

What smart organisations are doing

Leading teams and businesses are:

  • Establishing clear AI policies and risk frameworks
  • Educating employees on what AI can and can’t do (and what to never input)
  • Limiting exposure by controlling which tools are sanctioned
  • Bringing data back on-premises in high-risk industries to reduce external risk
  • Running training quarterly or twice a year to keep pace with the rapidly evolving technology

“Security posture, policy, and training. That’s your baseline. If you don’t have those three, don’t go near production-level AI,” says Jack.

Security is not the brake, it’s the steering wheel

Too many organisations treat security as something that slows innovation, when in reality it’s the only thing that makes safe, scalable innovation possible.

“When you’re managing billions in funds, or customer identities, AI can’t be a black box. It needs to be understood, controlled and governed,” says JP.

So, if you’re exploring AI without a security posture, you’re not innovating. You’re gambling.

If you’re looking to build internal AI capability or make your first AI hire, get in touch with our recruitment team. Ready to launch an AI or data project? Partner with Jack’s team at Avec.