The rise of Agentic AI: What it means for your team

Posted October 2, 2025

“There is no ethical use of AI.”

That was one of the more sobering comments we received in our recent AI survey of 864 business leaders and tech professionals across Australia and New Zealand.

And while not everyone shares that view, it reflects a growing tension in workplaces as AI evolves from a smart tool to something more autonomous.

We’re now entering the age of Agentic AI: systems that can make decisions, take actions, and respond to outcomes with minimal human prompting.

And with that shift, the stakes are changing.

It’s not just about use anymore; it’s about trust

Unlike traditional AI tools that assist with tasks like drafting content or analysing data, agentic systems act on behalf of humans, proactively initiating tasks, making decisions, and learning from feedback loops.
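
To make that distinction concrete, here’s a minimal sketch of the loop at the heart of an agentic system. It’s illustrative only: call_model and execute_tool are hypothetical stand-ins for an LLM call and a tool runner, not any particular product’s API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str    # e.g. "search", "draft_email", or "finish"
    detail: str  # tool arguments, or the final answer for "finish"

def run_agent(goal, call_model, execute_tool, max_steps=10):
    """Decide, act, observe, repeat: the basic agentic loop."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model chooses the next action from everything seen so far.
        action = call_model(history)
        if action.name == "finish":
            return action.detail
        # The agent acts without waiting for a fresh human prompt...
        outcome = execute_tool(action)
        # ...and feeds the outcome back in, closing the feedback loop.
        history.append(f"{action.name} -> {outcome}")
    return "Stopped: step limit reached"
```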

But are organisations ready for that level of autonomy?

When we asked survey participants about their current engagement with agentic AI:

  • Only 9.3% said they’re actively using it
  • 27.9% are “exploring use cases”
  • The largest group (47.3%) are aware of the concept but not yet engaging with it
  • Nearly 9% admitted they weren’t familiar with the idea at all

The hesitancy makes sense. This isn’t just about capability; it’s about risk.

The top concern? Ethics

Of all the barriers we asked about, the most pressing were:

  • 60.1% cited “ethical or compliance risks”
  • 57.6% flagged “loss of human oversight or control”
  • 57.1% were concerned about “accuracy or hallucinations in autonomous actions”

“We are heavily regulated and hold large amounts of data,” one respondent noted. “We must be very careful with how any AI is implemented and ensure full compliance and transparency.”

Another put it more bluntly:

“Unethical use can cause confusion and poor decision making.”

These aren’t abstract fears; they reflect real-world scenarios that could affect brand trust, legal obligations, and people’s livelihoods.

Human-in-the-loop: From a nice-to-have to a non-negotiable

The further we move into agentic AI territory, the more critical governance becomes. The systems we build must be designed with ethical frameworks and clear escalation points, especially in sectors where harm, bias, or data misuse are real risks.
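
What might a clear escalation point look like in practice? One common pattern is an approval gate: the agent proposes an action, but anything above a risk threshold is routed to a person before it runs. The sketch below reuses the hypothetical pieces from the earlier example; the risk list and request_human_approval callback are illustrative assumptions, not a specific framework’s API.

```python
# Hypothetical tiering: which action names need human sign-off.
HIGH_RISK = {"send_email", "transfer_funds", "delete_record"}

def make_guarded_executor(execute_tool, request_human_approval):
    """Wrap a tool runner so high-risk actions need human sign-off."""
    def guarded_execute(action):
        if action.name in HIGH_RISK:
            # Pause the loop and surface the proposed action to a person.
            if not request_human_approval(action):
                return "Blocked: reviewer declined this action"
        # Low-risk actions proceed autonomously, and remain auditable.
        return execute_tool(action)
    return guarded_execute
```

Because the wrapped executor has the same shape as execute_tool, it can be dropped straight into the run_agent loop above: the human stays in the loop exactly where the risk sits, without giving up autonomy on routine steps.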

At the same time, we can’t let fear stop experimentation. The potential for agentic systems to automate workflows, reduce human error, and handle complexity at scale is enormous.

It just needs to be done with clarity and caution, not hype.

The disconnect between interest and understanding

Even though nearly 40% of survey participants said they’re exploring or using agentic AI, we know from broader survey results that:

  • Only 4.9% of professionals feel their organisation is responding “extremely well” to AI change
  • Just 30.2% say their organisation has “dedicated teams working on AI initiatives”
  • A significant 41% say their organisation has “no AI strategy at all”

This gap between interest and readiness is where poor decisions, and poor outcomes, happen.

Without leadership clarity, robust frameworks, and upskilling, agentic AI becomes a risk multiplier rather than a value driver.

So, what now?

If your team is starting to explore or implement autonomous AI tools, the question isn’t just what they can do. It’s:

  • Who is accountable for their decisions?
  • Where does human oversight begin and end?
  • Are your people trained and supported to work alongside these systems?
  • And most importantly, is your business ready for the cultural shift they bring?

Because working with AI, not just using it, demands new thinking about roles, responsibility, and risk.

Want to understand how others are navigating this shift? Explore the full report for free.