AI is quietly reshaping workforce planning – here’s what’s changing first

Posted October 20, 2025

Although it isn’t yet a tidal wave, the tide is turning on workforce planning.

In our latest AI survey of 864 business leaders and tech professionals, we asked whether AI would impact their workforce planning over the next 12-18 months:

  • 12.1% said they’re already using AI to evolve roles or reduce manual work
  • 23.3% said they’re exploring how AI may shift the skills they hire for
  • 32.5% said it’s on their radar, but not yet a focus
  • 22.9% said AI isn’t impacting their workforce planning at all

With more than a third of organisations actively exploring or implementing changes due to AI, it’s worth examining what those changes mean for both employers and candidates.

The roles rising first

According to JP Browne, Talent Practice Manager and recruitment expert in the Auckland market, “We’re not seeing a rush to hire ‘AI engineers’, but we are seeing demand for systems engineers, data engineers, and data analysts, because poor data breaks AI.”

Before deploying AI at scale within your business, you need solid data foundations. This means hiring for:

  • Data quality and governance
  • System integration
  • Infrastructure build-out
  • Security and compliance enablement

In other words, it’s those behind-the-scenes roles that make AI functional, reliable, and safe.

Recruitment reality check

AI is also changing recruitment itself, both in how candidates present themselves in the application process and how employers assess talent.

“Some candidates are using ChatGPT to craft flawless cover letters, but the actual CV doesn’t match the role. So it’s forcing recruiters to dig deeper and reintroduce more human screening,” observed JP.

We’re seeing:

  • More AI-assisted job applications
  • A shift away from simple keyword-matching tools
  • Greater emphasis on human-in-the-loop hiring decisions

The personal productivity play

Some employees are already using AI to improve their own output without waiting for top-down organisational change.

From automating reporting to building project estimates, self-taught AI adoption is becoming a competitive edge for individuals. However, it creates uneven capability across teams, and potential risk if it’s unsanctioned.

“You can’t wait on your organisation to set the rules. You need to learn how to use AI yourself, but use it responsibly,” says JP.

Why leaders need to act now

If your workforce planning hasn’t factored in AI, you risk falling behind in:

  • Skill readiness in both hiring and internal development
  • Employee engagement as workers expect modern tools their industry peers are using
  • Efficiency gains where your competitors will find them first
  • Risk management as unsanctioned AI use is already happening

JP emphasises, “AI is here, in your phones, in your search engines, in your workflows. Ignoring it doesn’t make it go away, it just leaves you unprepared.”

The shift right now feels subtle, until it’s sudden. The organisations making small, strategic moves today will be the ones ready for the bigger shifts tomorrow.

If you need help building AI-ready hiring strategies, get in touch with our team.

AI is a security risk and that’s why smart businesses are cautious

Posted October 15, 2025

It’s less about fearmongering and more about smart risk management.

In our recent AI survey, 46.2% of respondents named “security and compliance concerns” as the biggest barrier preventing wider AI use, and our experts say they’re absolutely right to be cautious.

“If I could rate that stat above 100%, I would. Security and compliance should be front of mind. Full stop,” says Jack Jorgensen, General Manager of Data, AI & Innovation at Avec, our project delivery arm.

AI is unlike any other tech shift we’ve seen. It’s fast-moving, largely unregulated, and capable of generating unexpected and sometimes dangerous outputs. And when sensitive company data is involved, that’s not a risk you can afford to take lightly.

The tools are already inside the business

If you haven’t formally adopted AI, your people probably already have.

  • 38.3% of respondents said their organisations currently have restrictions or policies limiting AI use.
  • But 28.9% said AI tools like ChatGPT are being used with minimal control or governance.
  • And 8.9% said there are no policies at all.

For data-heavy, regulated environments like financial services, insurance, or government, that’s a recipe for disaster.

“The usage of AI is prolific in every single organisation. It kind of just happened and now execs are scrambling to catch up,” says our recruitment expert JP Browne, Practice Manager from our Talent office in Auckland.

Real-world fails: AI gone rogue

We’re already seeing examples of AI being used recklessly:

  • A major NZ business uploaded their full CRM into ChatGPT to “get customer insights”
  • A software platform built entirely with AI-generated code suffered a data breach leaking 700,000 passports
  • Deepfakes and synthetic media are being weaponised, and legal systems haven’t caught up

“It’s such a fast-moving beast. You can make a critical mistake without even knowing you’ve made it,” says JP. The caution around AI isn’t about shutting adoption down but about saying yes in the safest way.

Why the AI risk is so unique

AI security isn’t just about infrastructure. It also comes down to:

  • Data exposure: What are your staff putting into AI tools?
  • Model misuse: Can someone prompt the system into granting access or producing misinformation?
  • Compliance blind spots: Are you meeting industry requirements?
  • Auditability: Can you trace how a decision was made by the system?

According to Jack, “We currently don’t know what the future holds in security breaches and attack vectors. The more people thinking about this, the better.”

What smart organisations are doing

Leading teams and businesses are:

  • Establishing clear AI policies and risk frameworks
  • Educating employees on what AI can and can’t do (and what to never input)
  • Limiting exposure by controlling which tools are sanctioned
  • Bringing data back on-premises in high-risk industries to reduce external risk
  • Running regular training quarterly or biannually to keep up with the rapidly developing technology

“Security posture, policy, and training. That’s your baseline. If you don’t have those three, don’t go near production-level AI,” says Jack.

Security is not the brake, it’s the steering wheel

Too many organisations treat security as something that slows innovation, when in reality it’s the only thing that makes safe and scalable innovation possible.

“When you’re managing billions in funds, or customer identities, AI can’t be a black box. It needs to be understood, controlled and governed,” says JP.

So, if you’re exploring AI without a security posture, you’re not innovating. You’re gambling.

If you’re looking to build internal AI capability or make your first AI hire, get in touch with our recruitment team. Or ready to launch an AI or data project? Partner with Jack’s team at Avec.

The real fear behind AI at work isn’t job loss – it’s trust

Posted October 9, 2025

AI isn’t just changing how we work, but also how people feel about work.

Our latest AI survey revealed that while one in four professionals worry about job displacement, most concerns around AI go far deeper than that:

  • 60% are worried about ethical or compliance risks
  • 58% fear loss of human oversight
  • 57% are concerned about inaccuracy or hallucinations
  • 31% say integration is a challenge

What this tells us is that people aren’t just worried about being replaced by AI; many are concerned that the people running it don’t fully understand the risks.

Why workers are nervous

“You can’t bury your head in the sand. AI is affecting workflows and job design, and people are understandably unsure where they fit,” says JP Browne, Practice Manager at Talent Auckland.

Everywhere you look, there are bold statements about how AI will transform everything, but in the real world most employees are being left in the dark. Are they allowed to use ChatGPT? Are their roles changing? Will AI make their jobs harder, not easier?

The lack of communication is creating fear and can drive resistance among teams, potentially stalling AI adoption.

It’s bigger than just job loss

Jack Jorgensen, General Manager of Data, AI & Innovation at our IT project delivery arm, Avec, reassures, “We’re not seeing mass displacement. We’re seeing evolution. The risk is overstated but the change is real.”

It’s true that repetitive, manual, and rules-based work will go, but for most knowledge workers, the shift is about augmentation rather than replacement.

Still, that doesn’t mean people feel safe. JP shares that among workers, “The fear I’m seeing isn’t ‘I’ll lose my job’, it’s ‘I don’t understand this tech, and I don’t trust how it’s being used.’”

Ethics, oversight and deep uncertainty

One of the biggest risks leaders underestimate? The hidden ethics of AI.

  • Is your model biased?
  • Was your training data ethically sourced?
  • Can a customer tell when they’re dealing with a bot?
  • What happens when a mistake causes harm?

JP shares, “The ethics piece is huge. Especially in sectors like insurance.” And Jack echoes, “No one wants to end up on the front page because a bot denied someone’s surgery.”

Governments are slow to regulate, which means ethical responsibility falls on individual organisations, and most aren’t ready.

The more we automate, the more human oversight will matter. Organisations will need people with critical thinking skills, not just the ability to prompt-engineer.

“There was a company that deployed an AI-generated software stack. It looked great until it leaked 700,000 passports. That’s not innovation, that’s negligence,” shares Jack. Trust, transparency, and responsibility are necessary considerations for your AI strategy.

What leaders can do now

  • Involve your people early in decisions around tooling, automation, and processes
  • Invest in ethics and risk literacy, not just tech skills
  • Ensure humans are in the loop especially where decisions impact people’s lives

According to JP, “You don’t have to be a guru. But you can’t bury your head in the sand. AI is different from anything we’ve experienced before.”

If your team doesn’t trust how AI is being used, they’ll resist it, avoid it, or worse, use it without telling you. For successful AI implementation, you need to build buy-in, not fear.

Find out what else our AI survey revealed by accessing the full report.

If you’re looking to build internal AI capability or make your first AI hire, get in touch with our team.

Or if your business is ready to kick off a data, AI or innovation project, drop a message to Jack’s team at Avec.

Ash London on risk, reinvention, and the quiet joy of Lego at her feet

Posted October 7, 2025

Ash London has spent much of her adult life with a microphone in front of her. She’s been a TV host, a radio presenter, and more recently, an author. But when you ask her who she really is, she doesn’t reach for her extensive resume. She describes herself on the couch, reading a book while her son builds Lego at her feet.

It’s an answer that might surprise anyone who has only known Ash as the professional she is—a vibrant and confident broadcaster who made her name in the music industry. But, to her, that distinction is important. The spotlight is her work, while the quiet is who she really is. We sat down with Ash on our latest podcast episode to chat about the person behind the many job titles.

Owning the title

When Ash published her debut novel ‘Love on the Air’, she found herself hesitant to claim the word “author”. “I’m just a person who wrote a book,” she says, laughing. “But there’s real power in accepting that title.”

This tension between humility and ownership is something many women would recognise, and Ash is quick to point out that if one of her girlfriends admitted to feeling unworthy of their achievements, she’d rush to remind them of their brilliance. Yet, like many do, she struggles to extend the same grace inward.

For her, writing became a way of both proving herself wrong and expanding her identity. After more than a decade of radio, it was an entirely different rhythm. Radio is instant gratification: hours of live content each day, with feedback arriving in real time. Writing a book, by contrast, demanded patience and a willingness to sit with uncertainty.

“I didn’t think I had the discipline,” she admits. “But I wanted to prove to myself that I could.”

Perspective in the pause

Ash admits the novel might not have happened without the pause, both forced and chosen, of her maternity leave, which coincided with the Covid lockdowns and gave her the first real time and space to reflect on the previous decade.

“I had put my whole identity into being a broadcaster,” she says. “Then suddenly, I was at home, and it was quiet. No interviews, no deadlines. I started realising, wow, that was actually a really cool thing I did.”

It’s that reflection on both her career and her identity outside of work that gave Ash the push she needed to write. And when her son proudly pointed out his name in the book’s acknowledgements, she felt the depth of what she had created. “It’s a legacy,” she says. “Something he’ll always know I did.”

The introvert behind the mic

Ash is the first to admit she’s not naturally an extrovert. “People assume I am because of my work, but I recharge at home. The truest version of myself is just reading on the couch while Buddy plays.”

Behind the stage presence, Ash is someone who finds peace in stillness, who carefully guards her energy, and who has learned to protect her sense of calm, especially while juggling the demands of breakfast radio, motherhood, and writing.

Doing the inner work

What Ash is truly passionate about is the less visible work: therapy, spirituality, and self-reflection, and she speaks openly about the value of inner growth. During Covid, she began writing and voicing meditations for her radio audience, and the feedback was overwhelming; at times she was receiving hundreds of messages a day.

“People are yearning for that deeper connection,” she says. “And when I do that kind of work, it’s what I get the most feedback from.”

Lessons from risk

If Ash has a pattern, it’s refusing to let fear of regret dictate her choices. She’s changed careers, moved countries, sold her house, and taken the leap into becoming an author. “I don’t want to look back and wonder,” she says.

“We romanticise the idea of what would have happened if we’d chosen differently. But the truth is, you don’t know. You just make the best decision you can with the information you have, and you keep going.”

For Ash, gratitude is the thread that runs through it all; the risks, the lessons, and the ability to reinvent herself when the moment calls for it.

Not just an author, TV host, or radio personality…

If you ask Ash what she wants to be known for, the answer isn’t fame, ratings, or bestsellers. It’s two-fold: being a great mum and helping others better understand themselves. “So many of us never really go on that journey,” she says. “If I can be part of helping someone figure out why they are the way they are, that would mean everything.”

For someone who has built a career out of connection, whether it’s through TV, radio, or her novel, each chapter of her story points to the same belief in the power of curiosity, courage, and gratitude. Ash London’s real legacy is the way she’s chosen to live: leaning into risk, redefining success on her own terms, and reminding us that who we are matters more than what we do.

Want to hear more of Ash’s story? Watch the full podcast episode on our YouTube channel.

The rise of Agentic AI: What it means for your team

Posted October 2, 2025

“There is no ethical use of AI.”

That was one of the more sobering comments we received in our recent AI survey of 864 business leaders and tech professionals across Australia and New Zealand.

And while not everyone shares that view, it reflects a growing tension in workplaces as AI evolves from a smart tool to something more autonomous.

We’re now entering the age of Agentic AI: systems that can make decisions, take actions, and respond to outcomes with minimal human prompting.

And with that shift, the stakes are changing.

It’s not just about use anymore, it’s about trust

Unlike traditional AI tools that assist with tasks like drafting content or analysing data, agentic systems act on behalf of humans, proactively initiating tasks, making decisions, and learning from feedback loops.

But are organisations ready for that level of autonomy?

When we asked survey participants about their current engagement with agentic AI:

  • Only 9.3% said they’re actively using it
  • 27.9% are “exploring use cases”
  • Nearly half (47.3%) are aware of the concept but not yet engaging with it
  • Nearly 9% admitted they weren’t familiar with the idea at all

The hesitancy makes sense. Because this isn’t just about capability, it’s about risk.

The top concern? Ethics

Of all the barriers we asked about, the most pressing were:

  • 60.1% cited “ethical or compliance risks”
  • 57.6% flagged “loss of human oversight or control”
  • 57.1% were concerned about “accuracy or hallucinations in autonomous actions”

“We are heavily regulated and hold large amounts of data,” one respondent noted. “We must be very careful with how any AI is implemented and ensure full compliance and transparency.”

Another put it more bluntly:

“Unethical use can cause confusion and poor decision making.”

These aren’t abstract fears, they reflect real-world scenarios that could impact brand trust, legal obligations, and people’s livelihoods.

Human-in-the-loop: From a nice-to-have to a non-negotiable

The further we move into agentic AI territory, the more critical governance becomes. The systems we build must be designed with ethical frameworks and clear escalation points, especially in sectors where harm, bias, or data misuse are real risks.

At the same time, we can’t let fear stop experimentation. Because the potential for agentic systems — whether it’s to automate workflows, reduce human error, or handle complexity at scale — is enormous.

It just needs to be done with clarity and caution, not hype.

The disconnect between interest and understanding

Even though nearly 40% of survey participants said they’re exploring or using agentic AI, we know from broader survey results that:

  • Only 4.9% of professionals feel their organisation is responding “extremely well” to AI change
  • Just 30.2% say their organisation has “dedicated teams working on AI initiatives”
  • A significant 41% say their organisation has “no AI strategy at all”

This gap between interest and understanding is where poor decisions and poor outcomes happen.

Without leadership clarity, robust frameworks, and upskilling, agentic AI becomes a risk multiplier rather than a value driver.

So, what now?

If your team is starting to explore or implement autonomous AI tools, the question isn’t just what they can do, it’s:

  • Who is accountable for their decisions?
  • Where does human oversight begin and end?
  • Are your people trained and supported to work alongside these systems?
  • And most importantly, is your business ready for the cultural shift they bring?

Because working with AI, and not just using it, demands new thinking about roles, responsibility, and risk.

Want to understand how others are navigating this shift? Explore the full report for free.

Executive FOMO is driving AI but no-one’s owning strategy

Posted September 22, 2025

There’s one thing most exec teams agree on right now: we need to do something with AI. However, what they can’t seem to agree on is who’s responsible.

Our recent AI survey with 864 business leaders and technology professionals revealed that the top blockers to AI adoption aren’t technical, but strategic:

  • 41% cited lack of a clear AI strategy
  • 41% pointed to unclear goals
  • 37% said limited budget
  • 34% said there’s unclear ownership

So, while the boardroom is buzzing about transformation, most organisations are stuck in a strange limbo: pressure to innovate without a plan to execute.

Why AI adoption is stalling

JP Browne, Practice Manager at Talent Auckland, shares: “For the first time ever, I’ve got IT leaders saying: ‘Yes, we want to use this but we literally can’t implement what you need until we fix security and infrastructure.’”

For many, AI is being driven top-down with enthusiasm but falling straight into the laps of overwhelmed IT teams who weren’t prepared to own it.

Jack Jorgensen, General Manager of Data, AI & Innovation at our consultancy arm Avec, explains that a fundamental shift is underway. “IT departments aren’t driving AI; they’re just putting up guardrails. But because execs don’t know who should own it, they’re lumping it in tech’s lap.”

Traditionally, IT has been a business enabler, not the strategic driver, and now they’re caught between enthusiasm and risk mitigation.

Strategy vs shiny objects

Executives want to make a move before they miss the moment but urgency without clarity creates chaos, and budget doesn’t get approved when goals are vague and ROI is fuzzy. As JP reiterates, “The money’s there. The buy-in is there. But no one has defined the problem they’re solving.”

The result? Strategy sessions that go nowhere, capability gaps that stay unfilled, and a flood of shadow AI usage from employees trying to work it out themselves.

Why no-one wants to own it

AI touches everything, from operations to data, people, risk, compliance, and customer experience; the list goes on. That’s exactly why no one wants to fully own it.

Jack explains, “It’s hard at an executive level to understand how AI applies across every function. And very few people have the visibility to own it end-to-end.”

Without a central owner or clear cross-functional strategy, the AI agenda gets stuck between departments, paused at the sign of risk, or shoved into a tech proof-of-concept that never scales.

So, what does good look like in 2025?

Leading organisations are beginning to:

  • Appoint AI leads or cross-functional innovation squads
  • Build a lightweight AI framework (use cases, data posture, ethics)
  • Define clear roles for IT, data, security and HR
  • Pilot use cases with measurable outcomes, not hype
  • Invest in education to align teams on what AI is (and isn’t)

“Having no AI strategy is short-sighted. Even if all you’re doing is prepping your security posture, that’s still a strategy,” reassures Jack.

Start small, but smart. You don’t need a 40-page strategy doc to get started, but you do need ownership, clarity and intent. Otherwise, you could end up with a dozen AI tools, no business outcomes, and a team wondering why it still takes three weeks to get a report.

If you’re looking to build internal AI capability or make your first AI hire, get in touch with our team.

Or if your business is ready to kick off a data, AI or innovation project, drop a message to Jack’s team at Avec.

Should you go to another interview after accepting an offer?

Posted September 17, 2025

If you’ve already accepted, should you go to another final interview?

It’s a situation many candidates find themselves in: you’ve verbally or even formally accepted a job offer, but another final interview opportunity lands in your lap. Do you honour the commitment you’ve already made, or explore what could be a better fit? It’s a tricky balance between integrity, opportunity, and self-interest. To unpack the dilemma, we asked two of our recruitment experts, Chris Hossell, Senior Consultant in Wellington, and Shweta Chopra, Practice Lead in Auckland, New Zealand, to each pick a side, present their perspective, and see how the debate unfolds.

The case against: Your word is your bond

Chris Hossell presented the case that once an offer has been accepted, you should not continue interviewing elsewhere.

He argues that integrity and professionalism are critical in these moments: “Once you sign a contract, you’ve given your word. To pull out after that leaves a sour taste for the employer and damages your reputation. Hiring managers don’t forget situations like this, and if you come across them again in the future, that decision will follow you. Even before you’ve signed, being transparent is key. If you do have another interview lined up, be open about it, it’s better to ask for an extension to make an informed decision than to backtrack after committing.”

Chris also points out the ripple effect beyond just one job. “The market is smaller than people think. Word gets around, especially in tight-knit industries. Backing out of an offer after acceptance doesn’t just affect your relationship with one employer, it could also affect how future employers or recruiters view your reliability. A short-term gain might not be worth the long-term damage.”

The case for: The need to protect yourself

Shweta Chopra presented the alternative view. She argues that candidates should keep their options open until the moment a contract is signed.

“Verbal offers don’t carry the same weight as signed agreements, and candidates need to protect themselves. Things can still fall through, and a verbal acceptance is a grey area. Until you’ve put pen to paper, you should have the choice to attend other interviews, especially if it could lead to a role that’s a stronger fit. What matters most is honesty: communicate with your recruiter and the employer so there are no surprises. Once you’ve signed, though, I agree that you need to stop. That’s the point at which commitment really kicks in.”

She also notes that candidates should put themselves first, because companies always will. “At the end of the day, organisations will do what’s best for them. They can withdraw offers at the last minute or restructure a role after you’ve joined. So, I don’t think candidates should feel guilty about exploring every option until they’ve locked something in formally. It’s about being pragmatic while still acting respectfully.”

Both Chris and Shweta agree that the challenge often lies in timing and communication. Sometimes delays in hiring processes create confusion, leaving candidates stuck between opportunities. In those cases, Shweta points out that the responsibility also falls on organisations to run efficient and transparent processes, so candidates aren’t forced into awkward last-minute choices.

The verdict: A mixture of both

Ultimately, this debate doesn’t land neatly on one side. As Chris stresses, integrity and honouring commitments are crucial to building trust. As Shweta highlights, candidates also need to protect their interests until they have certainty. The takeaway? Transparency is everything. If you’re in the grey area of a verbal offer, be upfront about other interviews. If you’ve signed, honour that commitment. Navigating these situations with honesty and integrity not only protects your career but ensures you build strong, lasting professional relationships.

Higher Education: Winning tech talent with employer branding

Posted September 15, 2025

Specialist tech professionals are spoiled for choice and higher education isn’t always the first place they look. If you’re a TA leader in tertiary education, you’ll know the challenge: you’re competing with fast-moving startups and big-name corporates for the same engineers, data specialists, and product talent.

The good news? You’ve got a unique story to tell. But it needs more than a list of benefits or a flexible work policy. To stand out, your employer brand has to cut through with clarity, authenticity, and a message that speaks to what tech talent actually cares about.

Here’s a practical playbook for higher education leaders looking to sharpen their employer brand and secure the skills they need.

Why brand matters in higher education hiring

Employer brand isn’t just your logo or a tagline. It’s the lived experience of working with you: your values, your culture, and your reputation in the market. For tech professionals, it’s the difference between scrolling past your job ad and actually hitting ‘apply’.

When your brand is strong, you’ll see:

  • More applications from the right candidates
  • Higher retention and engagement across your teams
  • Lower recruitment costs over time
  • A reputation as an employer of choice in a competitive market

And when it misses the mark? You’re left over-relying on contractors, spending big on agencies, and watching the best talent head elsewhere.

3 ways to build a stronger employer brand in higher education

1. Lead with your DNA

Tech professionals want to know the work they do will matter. For universities, this is a big advantage because you aren’t just another corporate. You’re driving research, supporting the next generation of students, and tackling social and environmental challenges at scale.

Bring this DNA to life by:

  • Making it visible online: Share stories of innovation on your channels, whether it’s a new research partnership, sustainability milestone, or digital transformation project.
  • Owning thought leadership: Get your IT and digital leaders speaking at conferences, writing in industry outlets, or posting on LinkedIn. This positions your university as a serious tech player.
  • Highlighting values and wellbeing: DEI programs, wellness initiatives, and flexible work models are all key decision factors for tech talent, so don’t bury them in policy docs; put them front and centre.

Example: The University of Sydney has consistently showcased its sustainability initiatives and digital research projects in market-facing comms, positioning itself as more than “just a campus job.”

2. Rethink the candidate experience

Your hiring process is your brand in action. If it’s slow, clunky, or impersonal, candidates will assume that’s how your culture feels too.

Best practice means:

  • Clear, human job ads (ditch the jargon and “must have 10+ years in…” wish lists).
  • Fast, transparent communication: Candidates want updates, even if it’s a no.
  • Personal touches: Show candidates you’ve read their CV, tailor interview questions, and connect them with real future teammates.

And don’t forget onboarding. Universities across ANZ are experimenting with AI tools to automate admin-heavy onboarding steps, freeing up People teams to focus on building meaningful human connections from day one.

3. Leverage tech to scale your brand

Higher education can sometimes be seen as “traditional”, but the smart use of tech can flip that perception.

  • Video interviews: Break down geographical barriers and open your doors to talent who may not yet be local.
  • Data-driven insights: Use hiring analytics to understand what candidates want (e.g. sustainability is a top three decision driver for tech hires in ANZ right now).
  • A careers page that works: Include testimonials, videos, and day-in-the-life content from your tech teams. Make it intuitive to navigate and reflective of your real culture.

Example: UNSW’s careers site highlights innovation projects and staff testimonials in a simple, visual format, which is far easier to digest than a wall of text.

The takeaway

The race for tech talent is only getting tighter. With a clear, authentic employer brand, universities can punch above their weight against the likes of banks, consultancies, and startups, and land the people they need to keep their institutions moving forward.

At Talent, we help higher education institutions across ANZ attract and secure the right tech talent, building brands that resonate, streamlining hiring processes, and reducing costs along the way.

If you’re ready to strengthen your employer brand and bring top tech talent onto campus, let’s talk.

AI adoption challenges: The hidden roadblocks no one talks about

Posted September 12, 2025

In a previous blog, we explored the leadership gap in AI adoption; the missing strategies, ownership, and clarity slowing progress before it begins. But strategy isn’t the only hurdle.

For many organisations, the real blockers are messy, overlapping, and deeply human: fear, confusion, misalignment, and a general sense of “we’re not ready yet.”

In our latest survey of 864 professionals across Australia and New Zealand, we asked what’s standing in the way of progress. The answers paint a clear picture: while the potential of AI is huge, the practical challenges are still very real.

Strategy gaps continue to stall progress

When asked about the biggest obstacles their organisation faces in keeping up with AI:

  • 41.0% said “no strategy”
  • 40.6% said “unclear goals”
  • 34.4% said “lack of clear ownership”

This is a leadership problem, not a tech one.

You can’t build with AI until you know what you’re building for. And right now, many organisations are still waiting for that direction to come from the top.

“Waiting on strategic policy and approval before AI can be implemented and risks mitigated,” one respondent shares with us.

It’s a sentiment echoed across industries: people want to move, they’re just waiting for direction.

Fear and fatigue are real, and so is trust

AI isn’t just a technical shift, it’s an emotional one. For some employees, the potential of AI feels like a threat rather than an opportunity. And that shapes how it’s received, even in pilot stages.

“I’ve found a resistance from the team due to a concern around job security,” said one participant.

When people don’t understand how AI fits into their role, or worry it could replace them, enthusiasm quickly turns into quiet pushback. The data backs this up:

  • 46.2% cite “security or compliance concerns” as the biggest barrier
  • 10.3% point to “lack of trust”
  • 15.3% say there’s “no training”, which only worsens that anxiety

Burnout, not optimism

A surprising theme emerged in some open-ended responses: fatigue. For many, tech-enabled “productivity” hasn’t always delivered better outcomes, just more pressure.

“Productivity improvements have never helped in the past,” one respondent wrote. “They’ve just led to higher expectations and burnout. AI is not a way forward as a society if we don’t fundamentally rethink our systems.”

It’s a powerful reminder that even the best tools won’t succeed if they’re layered on top of broken processes or disconnected cultures.

The blockers aren’t always what you’d expect

Some of the most-cited reasons for slow adoption weren’t deeply technical, they were practical and immediate:

  • Limited budget – 36.6%
  • Lack of relevance to my work – 16.8%
  • Lack of access to tools – 11.5%

This matters because it tells us AI isn’t failing because it’s complicated; it’s failing because it hasn’t been meaningfully integrated. If employees don’t see how AI helps them, or they can’t get to the tools at all, progress stops before it starts.

So, what now?

The first step isn’t buying tools, it’s creating space for clarity, communication, and small wins. This might mean:

  • Bringing departments into strategy-setting conversations
  • Addressing fears head-on through honest leadership
  • Investing in real training that shows how AI can make work better, not just faster

Because when blockers are this human, the solutions need to be too.

Access our free report here to explore what’s really slowing AI down and how to start moving forward with clarity and confidence.

What Aussies and Kiwis would (and wouldn’t) take a pay cut for, and why the four-day workweek is still on the table

Posted August 31, 2025

What would you take a pay cut for?

It’s the kind of question that sparks a quick gut response but, when you sit with it, the answer gets complicated. For workers across Australia and New Zealand, the trade-offs between salary, wellbeing, flexibility, and values have never been sharper.

According to LinkedIn’s latest Workforce Confidence Index, nearly one in three Australians (32%) say they’d be willing to compromise on their salary if it meant more flexibility. Almost as many (31%) would do it for stronger values alignment, while 29% would accept less money for a more reasonable workload.

The story slightly shifts when we look at our own survey of 760 professionals. Flexibility ranked lower, with just 23% saying they’d take a pay cut for it. A bigger share of 35% said they’d do it for work-life balance and to avoid burnout. Only 4% cared enough about values alignment to trade salary, while a blunt 38% made it clear: “Pay cut? Hard pass.”

The reality check

The comments from our survey tell a bigger story about trust, trade-offs, and the limits of compromise. A few standouts:

  • “Do the top brass ever get asked this question?”
  • “I have taken a pay cut when I could get a better work-life balance… but most people need the money as cost of living is not reducing.”
  • “I’ll take values alignment, work life balance and flexibility — and expect a pay rise because my productivity will be higher.”
  • “Why would anyone take a cut in pay… when you know the CEO and most of the C-suite are still making bank off your personal efforts? Wake up people and demand what you are worth.”

The sentiment is clear: while people want balance and flexibility, they’re not naïve about the financial pressures they’re under, nor the inequities they see in executive pay. Salary remains a non-negotiable foundation and in today’s economy, many feel they shouldn’t have to choose between being paid fairly and working in a way that sustains them.

Flexibility is still king, but context matters

Since COVID, flexibility has stayed firmly in the “candidate need” category. But our teams see nuance emerging across regions:

  • In Canberra, Managing Director Rob Ning notes: “While flexibility is still a number one priority, we’re seeing more people being open to full-time office work if all other conditions are right.”
  • In Auckland, New Zealand Country Manager Kara Smith adds: “Remote work and work flexibility is still a strong preference. But employers are increasingly requesting 3-4 days in-office. The hybrid tension is back.”

In other words, employees are still prioritising flexibility but it isn’t an automatic “work from home or bust” equation anymore. The conversation is shifting toward how flexibility is structured, and whether it actually helps people live and work better.

Enter: the four-day workweek

This is where the four-day workweek lands. Despite some companies retreating from their experiments, a new Resolve Political Monitor poll shows two-thirds of Australians (66%) support the idea of moving to four days. An almost equal share (64%) back the idea of enshrining flexible work rights in law.

This tracks closely with the priorities we heard in our own survey. Workers are open to new models if it means more balance and less burnout, but they don’t want to see their pay packets shrink to make it happen.

Employers toying with a four-day week as a cost-saving exercise (by trimming pay alongside hours) risk missing the point entirely. As one respondent put it: “If I am asked to take a pay cut, what’s your trade-off?” Workers are watching for genuine investment in wellbeing, not sleight-of-hand productivity hacks.

What employers should take away

The lesson here isn’t that employees are unwilling to bend; many already have, taking lower-paid NGO roles for values alignment or trading salary for more sustainable workloads. The lesson is that pay cuts are not the lever to pull if you want to win trust, loyalty, and discretionary effort.

Instead, forward-looking employers should be asking:

  • How can we offer flexibility that truly supports work-life balance, not just “two days at home”?
  • What structural changes, like a four-day workweek, could reduce burnout without reducing pay?
  • Are we listening to employee sentiment and closing the perception gap between what workers want and what leaders assume they want?
  • How do we address the equity issue when employees see executives rewarded while they’re asked to sacrifice?

In short: workers aren’t against change. They’re against compromise that feels one-sided.

The bottom line

Australians are clear about what they’d like to see: balance, flexibility, and fair workloads. They’re also clear about what they won’t accept: sacrificing pay while living costs rise and executives continue to profit.

The four-day workweek is part of that bigger story: not just a headline trend, but a signal that employees are hungry for smarter ways of working that don’t come at the expense of their wallets.

If you want to know what else our teams on the ground are seeing in the market, get in touch with our experts.

Why your salary hasn’t increased in 2025

Posted August 28, 2025

Australian salaries in 2025

From 1 July 2025, the national minimum wage rose by 3.5%, following the Fair Work Commission’s 2024-25 Annual Wage Review. The new rate is $948.00 per week, or $24.95 per hour. While this increase provides a boost to lower-income earners, most professionals across Australia and New Zealand are seeing little to no change in their salaries compared to last year.
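As a quick sanity check on those figures, here is a minimal Python sketch (assuming the standard 38-hour full-time week that the national minimum weekly wage is based on) showing how the hourly and weekly rates line up, and roughly what the pre-increase weekly rate would have been:

    # Rough sanity check on the 1 July 2025 minimum wage figures quoted above.
    # Assumption: the weekly rate reflects the standard 38-hour full-time week.

    hourly_rate = 24.95   # new hourly minimum from 1 July 2025
    weekly_rate = 948.00  # new weekly minimum from 1 July 2025
    increase = 0.035      # 3.5% rise from the 2024-25 Annual Wage Review

    # Weekly pay implied by the hourly rate over a 38-hour week (within rounding of $948.00)
    implied_weekly = round(hourly_rate * 38, 2)

    # Approximate pre-increase weekly rate, implied by reversing the 3.5% rise
    previous_weekly = round(weekly_rate / (1 + increase), 2)

    print(implied_weekly, previous_weekly)  # 948.1, ~915.94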

Despite ongoing skills shortages in areas like tech, healthcare, and engineering (and cost of living), wage growth has remained sluggish across the broader workforce. So, what’s really happening in the market?

Economic stability leading to salary correction

The rapid wage hikes that emerged post-pandemic, particularly in high-demand industries, are now flattening. Employers who once felt pressured to offer inflated salaries to secure talent are recalibrating to more sustainable pay structures.

As Katie Kemp, Senior Consultant at Talent Wellington, explains:

“I would suggest that salaries are self-correcting. The pandemic and the perception of people scarcity has passed as the market turns to favour employers; so, rather than flattening, we are seeing salaries come to be more realistic and in line with the expectation of the role and a candidate’s experience.”

At the same time, economic stability is starting to return. The Reserve Bank of Australia (RBA) expects inflation to settle back into the target range of 2–3% by 2025–26. While positive for stability, businesses remain cautious as higher borrowing costs and tighter budgets limit room for generous pay rises.

Alan Dowdall, Practice Lead at Talent Sydney, adds:

“While niche tech skills remain in high demand which calls for competitive rates, overall, salaries remain flat. Clients are consistently asking for salary and market insights to ensure they’re getting value for money, and that seems to be the attitude for most employers who need to stick to their budgets.”

Australian wage growth slowing across the board

According to the Australian Bureau of Statistics, public sector wages rose by only 0.8% in the December quarter of 2024 — a significant slowdown compared to recent years. The Australian Industry Group also forecasts wage growth to ease from 4.1% to 3.9% by the end of 2025–26.

This slowdown is mirrored in private industry. As Edwin Foo, Principal Account Manager at Talent Perth, explains:

“I don’t foresee IT salaries decreasing despite the measured slow-down in annual wage growth. More notably rather, IT roles and positions that become less in demand during the course of 2025–26 will likely remain stagnant, whilst in-demand skill sets within areas of Cybersecurity, Data Science, Artificial Intelligence and Machine Learning for example, will continue to trend upwards due to ongoing shortages, and so depending on the IT specialisation, the wage growth experienced will be relative.”

Another factor at play is the shift from contract to permanent and fixed-term hiring. During the pandemic, contract roles offered lucrative salaries. In today’s market, many professionals are moving back into permanent roles — often with steadier but lower pay.

As Jacaleen Williams, Senior Consultant at Talent Wellington, puts it:

“There’s a mix in the market in terms of expectation and reality. Some areas saw salaries inflated/over-inflated around COVID times, and there is some flattening happening now. The decrease in contracting opportunities has seen people need to switch back to permanent or fixed-term positions.”

What professionals can do

For employees facing stagnating salaries, the lesson is clear: adaptability is key.

Edwin highlights the importance of staying ahead:

“The lesson here, is for IT candidates to remain on the front foot when it comes to upskilling themselves, and pivoting pathways if need be, in order to ensure that they are future-proofing their careers and maximising the salaries that they earn.”

If you’re looking to boost your career prospects in 2025, consider:

  • Upskilling in high-demand areas such as data analytics, cybersecurity, and cloud computing
  • Negotiating perks like bonuses, professional development budgets, or additional leave
  • Exploring roles where demand remains high and skills shortages persist

While headline salary growth may be slowing, opportunities still exist for those who are proactive, adaptable, and skilled in the right areas. The 2025 wage landscape may not deliver the dramatic increases of recent years, but with the right approach, professionals can still position themselves strongly for future growth.

For more of the latest information on the hiring market, top salaries, skills in demand, and more, head to our More than Money Salary Guide 2025.

Why 48% of companies are stuck in AI pilot mode and why it’s a good thing

Posted August 27, 2025

Almost half of the organisations we surveyed (47.6%) say their current stage of AI adoption is “experimental” or “pilot”, and our experts say this is more a sign of progress than it is of failure.

Despite the hype, AI isn’t a plug-and-play solution. The jump from running a prompt in ChatGPT to deploying AI in a secure, production-ready environment is huge. As Jack Jorgensen, Data, AI & Innovation General Manager at our project delivery arm Avec, puts it:

“There’s a big difference between punching in a search query and building something deterministic and robust enough to run in production systems.”

In other words: it’s easy to experiment with AI, but much harder to operationalise it.

The pilot phase: What’s really going on

When AI went mainstream, business leaders rushed to explore how it might improve productivity, automate tasks, and reshape work. But most quickly hit a wall. Why? Because the magic wears off when you move from ideation to implementation.

Jack assures business leaders, “Having organisations stuck in that pilot stage isn’t a bad thing. It means they’re going out and finding the limitations of the technology and where it can be applied really well.”

This experimental period isn’t just about proving AI works. It’s about learning:

  • Where it doesn’t work
  • Where your data isn’t good enough
  • Where processes aren’t ready, and
  • Where your people need upskilling

This discovery stage is critical to uncover what needs fixing before scaling, and will help businesses avoid wasting time and budget building the wrong thing.

The risks of skipping this step

In the rush to “not fall behind,” some organisations are pushing AI into production too fast. That often leads to:

  • Tool sprawl and shadow AI
  • Security breaches (like the now-infamous CRM upload into ChatGPT)
  • Oversold outcomes with underwhelming results, and
  • Burnt-out teams working with systems they don’t trust or understand

As Jack puts it, “If you’re jumping in without looking, you’re probably going to break your ankles on the way into the pool.”

The smart move is to slow down, run your pilots, and get clear on the problem you’re trying to solve.

What good looks like in the experimental phase

Here’s what leading organisations are doing right now:

  • Running small pilots with clearly scoped outcomes
  • Auditing internal AI usage to assess risk and opportunity
  • Building foundational data and security infrastructure
  • Educating teams on prompt design, ethics, and governance
  • Documenting learnings to shape future strategy

This time is your opportunity to build AI capability without breaking things.

Final thought: Pilot is not a plateau

Staying in pilot mode doesn’t mean you’re behind. It means you’re taking it seriously. Rushing to production with shaky data, no security posture, and no clear goals? That’s what real failure looks like.

As JP Browne, Practice Manager from our branch in Auckland, states: “Pilot mode isn’t the problem. It’s the companies skipping this step who are going to run into trouble.”

If your organisation is experimenting with AI right now, you’re exactly where you should be.

Want to find out what else our AI survey revealed? Access the full report.

If you’re looking to build internal AI capability or make your first AI hire, get in touch with our team. Or if your business is ready to kick off a data, AI or innovation project, drop a message to Jack’s team at Avec.