Critical thinking is the new superpower in the age of AI

Posted February 6, 2026

AI is changing the way we work faster than most organisations can adapt. Tools like ChatGPT and Microsoft Copilot are already baked into everyday workflows. But here’s the catch: the more we rely on AI, the more critical thinking becomes the skill that separates the leaders from the laggards.

As Jack Jorgensen, General Manager of Data, AI & Innovation at our IT delivery arm Avec, explains:

“It’s easy to get caught up in the speed AI gives you. But if you don’t understand the problem you’re solving or what happens when the system breaks, you’re setting yourself up for failure.”

AI is powerful, but it’s not infallible. And in a world where outputs can look convincing but still be completely wrong, critical thinking is the safeguard every professional needs.

The productivity trap

Many organisations rush into AI adoption with a focus on speed. How quickly can we roll this out? How much time can we save? That’s short-term thinking.

“Velocity is more important than speed,” Jack notes. “It’s not about being first to market with a shiny AI tool. It’s about building solid foundations so you can scale safely and sustainably.”

Critical thinking shifts the focus from how fast to how valuable. It asks: is this solving the right problem? Is it secure? Is it ethical? Does it actually add value for the business or customer?

Spotting the flaws

For those of us who use AI daily, spotting the flaws can feel obvious: six fingers on a generated image, or a chatbot confidently inventing a source. But as JP Browne, Practice Manager at Talent Auckland, points out, it’s not always that simple:

“It’s getting harder and harder to tell what’s AI-generated. And that’s a huge risk if people stop questioning what they see.”

Critical thinkers don’t just accept outputs at face value. They test, validate, and challenge. That mindset is what prevents AI from becoming a liability instead of an asset.

Why this matters for every role

Critical thinking isn’t just for data scientists or IT leaders. It’s for recruiters screening AI-written CVs, finance teams reviewing AI-generated forecasts, and executives reading AI-drafted reports.

JP has already seen how over-reliance on automation can backfire in recruitment:

“Candidates are using AI to craft brilliant cover letters, but the CV doesn’t match the job. If you don’t apply a human lens, you’ll make bad hiring decisions.”

In other words, AI can help filter and accelerate, but without human judgment the wrong calls get made.

Building critical thinking in the AI era

So how do you make critical thinking a core skill in your team? Here are three steps:

  1. Teach healthy scepticism. Encourage employees to question AI outputs, not just accept them.
  2. Build human-in-the-loop processes. Always pair AI automation with human oversight where decisions impact people, money, or reputation (a simple sketch of this pattern follows this list).
  3. Normalise checking sources. Whether it’s data, content, or code, make verification a cultural habit.
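
For teams that wire AI into their own systems, step two can be made concrete in code. The short Python sketch below shows one way a human-in-the-loop gate might work; it is illustrative only, and every name in it (AIDraft, needs_human_review, the topic list and the confidence threshold) is a hypothetical stand-in for whatever your own workflow uses.

```python
# Minimal illustrative sketch of a human-in-the-loop gate.
# All names and thresholds here are hypothetical, not from any specific product.

from dataclasses import dataclass

# Decisions that affect people, money, or reputation always get a human check.
HIGH_IMPACT_TOPICS = {"hiring", "finance", "legal"}


@dataclass
class AIDraft:
    topic: str
    content: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0


def needs_human_review(draft: AIDraft, confidence_threshold: float = 0.9) -> bool:
    """Route a draft to a person when the stakes are high or the model is unsure."""
    return draft.topic in HIGH_IMPACT_TOPICS or draft.confidence < confidence_threshold


def publish(draft: AIDraft) -> None:
    if needs_human_review(draft):
        print(f"[REVIEW QUEUE] '{draft.topic}' draft held for human sign-off.")
    else:
        print(f"[AUTO-APPROVED] '{draft.topic}' draft released.")


# Example: an AI-generated hiring shortlist always goes to a recruiter first,
# no matter how confident the model claims to be.
publish(AIDraft(topic="hiring", content="Shortlist: ...", confidence=0.95))
```

The design choice is the point of step two: anything that touches people, money, or reputation is never auto-approved, however polished the AI output looks.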

As Jack says:

“AI should be an enabler, not the thing doing all the work. Human judgment is what makes AI outputs valuable.”

The edge that can never be automated

The irony is that in a world obsessed with automation, the most valuable skills are the ones that can’t be automated. Curiosity. Scepticism. Judgment. Context.

Critical thinking isn’t just another “soft skill.” It’s the hardest edge businesses have in protecting themselves against AI risks and in making sure AI delivers genuine competitive advantage.

In the age of AI, the real superpower isn’t knowing how to prompt a chatbot. It’s knowing how to think critically about what it gives you. The leaders who sharpen that skill and build it across their teams will be the ones who thrive.

Find out how other organisations are navigating the changing AI landscape in our most recent AI report.