Why every business needs an AI strategy (even if AI isn’t the strategy)

Posted December 1, 2025

AI is not a strategy, but you still need one

When ChatGPT first hit the scene, it felt like magic. You typed in a question and out came paragraphs of seemingly human responses. That “wow” moment sparked a wave of experimentation across industries.

However, Jack Jorgensen, General Manager of Data, AI & Innovation at our IT delivery arm, Avec, points out:

“There’s a big difference between punching in a search query and building something deterministic and robust enough to run in production systems.”

And that difference is exactly where many businesses get stuck. According to our latest AI survey, nearly half (47.6%) of organisations are still in the experimental pilot stage. This isn’t inherently bad. Testing is critical, but it highlights a bigger issue: too many companies are running pilots without a clear strategy.

The hammer and nails problem

One of the most striking survey responses captured the mindset perfectly: “AI is a solution to some business needs. It’s not an objective or self-evident value proposition in its own right.”

Jack expands on this:

“What we’re seeing is a shift from the traditional IT delivery model, where you start with the value proposition and business case, then source the right tool. With AI, too many leaders are saying, ‘We’ve got this new hammer, now where are the nails?’”

That approach leads to wasted investment, disjointed projects, and technology that doesn’t deliver value. AI may not be the strategy, but without a strategy, you’re setting yourself up to fail.

Why “no strategy” is not an option

Some executives have argued that AI doesn’t need a dedicated strategy, comparing it to something as basic as staplers or office chairs. But as Jack explains, this is dangerously short-sighted:

“AI is a tool, yes. But it’s a tool that comes with new cybersecurity threats, compliance challenges, and ethical considerations. Ignoring it leaves your business exposed.”

From phishing attacks to vulnerabilities in AI-generated code, the risks are real. Without a roadmap, companies open themselves up to reputational damage, compliance breaches, and spiralling costs.

As JP Browne, Practice Manager from Talent Auckland, puts it bluntly:

“Burying your head in the sand is not an option. AI is here, one way or another, and every organisation will be affected by it.”

The IT department squeeze

Another dynamic uncovered in our research is the unusual role IT departments are playing in AI adoption. Traditionally, IT has been a service function, enabling strategy set elsewhere in the business. But with AI, the tables have turned.

“Executives are excited about AI and pushing hard to adopt it, but IT leaders are often the ones hitting the brakes,” JP notes. “They’re saying: yes, this is powerful, but we need to address security, infrastructure, and compliance first.”

That tension is leaving many organisations in limbo. The money is there. The executive interest is there. But without a strategic framework to prioritise use cases, align with business goals, and manage risk, progress stalls.

Building an AI strategy that works

So, what does an effective AI strategy look like? It doesn’t have to be a 50-page blueprint. In fact, Jack recommends starting simple:

  1. Define the business problem. Don’t adopt AI for the sake of it. Be clear about the challenge you’re trying to solve.
  2. Set guardrails. Establish data security, compliance, and ethical guidelines before scaling experiments.
  3. Start small, but with intent. Pilots are valuable, but only if they feed into a roadmap for production-ready solutions.
  4. Assign ownership. Decide who is accountable for AI adoption across the business. Avoid the “hot potato” problem where no one owns it.
  5. Review and adapt. A strategy isn’t fixed. As AI evolves, so should your approach.

“Having no AI strategy is worse than having the wrong one,” says Jack. “At least a flawed strategy can be corrected. No strategy leaves you wide open.”

From fear to opportunity

Much of the fear surrounding AI, from job loss to ethics and compliance, stems from uncertainty. And uncertainty thrives where there’s no plan.

With the right strategy, AI becomes less of a threat and more of a force multiplier. It can streamline workflows, surface insights, and free people up from repetitive tasks to focus on higher-value work. But those benefits only come when you align AI projects with business objectives and set the right foundations.

As JP concludes:

“AI can absolutely change the game for productivity and competitiveness. But only if you stop reacting, start planning, and make it part of your business strategy.”

AI is not the strategy. But without a strategy, AI is just hype. Organisations that take the time to define their approach, even if it starts small, will be the ones that cut through the noise, manage the risks, and realise real business value.

If you’re ready to source in-house AI capability, get in touch with our team. Or, if you’re looking to kick off a data project, reach out to Jack’s team at Avec.

AI in the private sector: Moving fast, but who’s steering?

Posted November 30, 2025

While government agencies have to navigate AI change carefully while maintaining dependability, the private sector can move like a bullet train in comparison: faster, more agile, and ready to change direction. However, when it comes to AI, speed without strategy can be as dangerous as standing still.

Our latest AI survey with 864 business leaders and tech professionals shows that 48% of organisations overall are still in the experimental or pilot stage of AI adoption. In the private sector, this can be exciting with tools being trialled, data flows unlocked, and quick wins celebrated… But without clear ownership and governance, experimentation can quickly spiral into risk.

The private sector’s AI advantage

Private organisations have more flexibility than government agencies, which means they can:

  • Pilot AI use cases without lengthy approval processes
  • Redirect budgets and talent more quickly
  • Partner with vendors or start-ups to accelerate capability

In-house AI expert Jack Jorgensen, General Manager of Data, AI & Innovation at Avec, explains, “In the private sector, leadership can decide today that AI is a priority, and tomorrow there’s a project team in place.” This agility allows them to capitalise on emerging opportunities, from automating repetitive tasks to improving customer experience.

The strategy gap

Speed is an advantage, until it isn’t. Our survey data shows that:

  • 41% of organisations cite “no strategy” as a major obstacle to AI adoption
  • 41% say “unclear goals” are holding them back
  • 34% cite “unclear ownership”

“We’ve seen this before with automation. Without a cross-business strategy, AI gets walled into a single department and it never reaches its full potential,” says Jack.

In many cases, the enthusiasm is there at the executive level, but ownership is unclear. Is AI a technology initiative? A business transformation project? A data function? Without a clear answer, adoption can stall or become fragmented.

Security and governance risks

Organisations in the private sector are split in their approach to AI security:

  • 3% have restrictions or policies limiting the use of external AI tools
  • 9% use tools like ChatGPT with minimal governance
  • 9% are exploring secure, fit-for-purpose AI solutions
  • 11% have implemented secure, in-house AI capability

JP Browne, Practice Manager from our Auckland office, observes, “You either lock it down completely or let it run free, and the private sector is doing both, often within the same organisation.”

The role of talent in AI maturity

AI success in the private sector is often tied to talent strategy. According to our recruitment experts, the roles currently in highest demand include:

  • Data engineers and analysts
  • Systems engineers to build infrastructure
  • Change managers to drive adoption across business units

But while technical capability is essential, so are critical thinking and the ability to bridge technical and commercial priorities.

What private sector leaders should do next

  • Define ownership and accountability for AI strategy
  • Prioritise secure data infrastructure before scaling
  • Pilot AI projects with clear and measurable goals
  • Invest in cross-functional teams that blend technical skill with business insight
  • Develop a company-wide AI policy that balances innovation with risk management

The private sector’s ability to move quickly is a strength, but only if it’s guided by clear strategy, governance, and talent. The leaders in AI adoption will be those who can balance the hype and excitement of rapid innovation with the discipline to scale it safely and sustainably.

If you’re looking to hire AI and data talent, get in touch with our team. Or if your business is planning a high-impact data, AI or innovation project, drop a message to Jack’s team at Avec.

Insurance and AI: Why humans still need to be in the loop

Posted November 26, 2025

The insurance industry has long been a pioneer in automation. Fraud detection, claims processing, and risk modelling all lend themselves to technology, and AI is simply the next layer. However, it brings with it new complexities, risks, and opportunities.

In our recent AI survey, 40.3% of financial services respondents (including insurance) said their organisation is still in the experimental or pilot stage of AI adoption. And while early wins are clear, there’s a universal truth in insurance: you can’t take humans out of the loop entirely.

From automation to AI: An evolution, not a leap

JP Browne, Practice Manager from Talent Auckland says, “Insurance has been using automation for years and AI just extends what’s possible, from approving low-value claims instantly to extracting insight from thousands of documents.”

Examples of early AI adoption in insurance include:

  • Automating claims approvals for low-value, low-risk cases
  • Using AI to scan and summarise large volumes of customer documents
  • Generating insights from call centre transcripts to improve service quality

These targeted use cases reduce cost, save time, and free human experts for more complex work.

Why human oversight still matters

AI may be fast, but it can’t (yet) replace human judgement in high-stakes decisions.

“If somebody’s house is on fire, you can’t let a bot decide whether to let the claim go through,” says JP.

In regulated industries like insurance, compliance, ethics, and customer trust demand human sign-off for:

  • Large or complex claims
  • Disputed cases
  • Situations with incomplete or ambiguous data
  • Potential fraud indicators

The security and compliance factor

As part of the broader financial services sector, insurance organisations share similar AI adoption challenges, particularly around security and compliance.

Our survey findings show:

  • 2% said security or compliance concerns are their biggest barrier to regular AI use
  • 3% said their organisation has restrictions or policies in place limiting the use of external AI tools
  • 9% are exploring secure, fit-for-purpose AI solutions
  • 11% have developed or implemented their own secure, in-house AI capability

Some insurers are even moving back to on-premises infrastructure to maintain tighter control of sensitive data and meet stringent regulatory requirements.

The data quality challenge

Insurance leaders know that AI is only as good as the data it’s fed. “We’re seeing a big rise in demand for data engineers and analysts, because poor-quality data kills AI performance,” observes JP.

This focus on data readiness is driving workforce changes in:

  • Systems engineering
  • Data engineering and analytics
  • Data governance and compliance roles

What insurance leaders should do next

  • Identify low-risk AI use cases that deliver measurable ROI
  • Maintain human oversight for complex or high-value claims
  • Strengthen data governance and quality
  • Build secure infrastructure for AI deployment
  • Create clear policy frameworks for AI use across teams

AI can process claims in seconds and surface insights no human could spot, but it can’t replace the trust built through human expertise. In insurance, the leaders won’t be those who hand decisions over to machines, but those who combine AI’s speed with human empathy, ethics, and accountability. The winning formula? Let AI handle the heavy lifting, while people make the calls that truly matter.

Want to find out what else our AI survey revealed? Access the full report.

If you’re looking to build internal AI capability or make your first AI hire, get in touch with our team. Or if your business is ready to kick off a data, AI or innovation project, drop a message to Jack’s team at Avec.

Is your fleet costing you more than you think?

Posted November 24, 2025

When budgets tighten and sustainability goals loom large, most councils zero in on headcount, procurement, and property costs.

But what about your fleet?

For many organisations, the fleet is the ultimate blind spot, an invisible cost centre quietly draining millions. Yet, with the right data and meaningful insights, it can become a powerful lever for savings, sustainability, and smarter decision-making.

That was the key message from our recent webinar with Fleetonomics™ experts Karen Whitehouse and Melvin Worth, who joined our Head of Government here at Talent, Steve Tompkins, to unpack how councils can transform their fleet from a hidden expense into a strategic advantage.

The hidden value sitting in your data

GPS logs, activity reports, booking systems… Most councils are swimming in vehicle data, but few are truly using it. Karen and Melvin call this the “untapped goldmine” of fleet management.

“We’ve helped councils uncover an average of 20% optimisation opportunity in their fleets, without disrupting business-as-usual,” said Karen.

The trick isn’t to collect more data, but to make sense of what you already have. When you connect your telematics, finance, and asset management systems into one source of truth, patterns emerge: underused vehicles, inefficient routing, even “ghost” cars sitting idle for months.

Busting fleet myths that cost you millions

The Fleetonomics team often sees the same misconceptions play out again and again:

  • “We need more vehicles.”
  • “If it’s depreciated, it’s free to keep.”
  • “Our Hiluxes are essential.”

Sound familiar?

In reality, many fleets are overcapitalised and under-utilised. One council discovered their vehicles were only used a handful of times a week, yet were fully assigned to individuals.

Another realised that peak summer “demand periods” didn’t actually exist once they analysed utilisation data.

“The operational voice can be loud,” Melvin noted. “Without evidence-based analysis, it’s easy for anecdotes to drive costly decisions.”

Where to start: Your ‘why’

Before you optimise anything, start by asking: why now?

  • Is it cost reduction?
  • Sustainability goals?
  • Public perception or compliance pressures?

Getting alignment on the ‘why’ across leadership is critical. Fleet optimisation is a change program, not a procurement exercise. Once that purpose is clear, you can bring your people, and your data, on the journey.

Turning data into action

Good fleet data tells a story: where vehicles go, how often, and why. When that story is clear, conversations shift from assumptions to actions.

Karen and Melvin recommend:

  1. Consolidate your data – Create one version of truth that includes GPS, finance, booking, and maintenance records.
  2. Interrogate the patterns – Identify waste (idle vehicles, over-spec’d models, duplicate assets); a simple sketch of this step follows the list.
  3. Engage your stakeholders early – Optimisation only works when fleet users are part of the solution, not the surprise.
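
For teams who want a concrete starting point, here is a minimal sketch of that second step, assuming the consolidated data already sits in a couple of pandas DataFrames. The column names (vehicle_id, trip_date, assigned_to), the two-days-a-week threshold, and the flag_underused helper are illustrative assumptions for this post, not Fleetonomics’ actual method.

```python
# A minimal, hypothetical sketch: flag vehicles whose average number of active
# days per week falls below a threshold, using consolidated GPS trip logs and
# booking records. Column names and the threshold are illustrative assumptions.
import pandas as pd

def flag_underused(trips: pd.DataFrame, bookings: pd.DataFrame,
                   min_days_per_week: float = 2.0) -> pd.DataFrame:
    """Return vehicles used on fewer than `min_days_per_week` days, on average."""
    trips = trips.copy()
    trips["trip_date"] = pd.to_datetime(trips["trip_date"])
    span_weeks = max((trips["trip_date"].max() - trips["trip_date"].min()).days / 7, 1)
    active = (trips.groupby("vehicle_id")["trip_date"]
                   .nunique()                      # distinct days each vehicle actually moved
                   .rename("active_days")
                   .reset_index())
    fleet = bookings[["vehicle_id", "assigned_to"]].drop_duplicates()
    fleet = fleet.merge(active, on="vehicle_id", how="left")   # vehicles with no trips stay in view
    fleet["days_per_week"] = fleet["active_days"].fillna(0) / span_weeks
    return fleet[fleet["days_per_week"] < min_days_per_week].sort_values("days_per_week")
```

Even a rough cut like this tends to surface the “ghost” cars and the fully assigned but rarely driven vehicles described above, which is usually enough evidence to start the stakeholder conversation.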

“When data meets dialogue, that’s when real change happens,” Karen said. “Once users understand the ‘why,’ you get faster adoption, less pushback, and better long-term results.”

Case in point: One council’s $4.5M wake-up call

A district council approached Fleetonomics after senior leaders realised they couldn’t answer basic questions like: “How many vehicles do we have?” or “Are they fit for purpose?”

After a full fleet audit and utilisation review, the results spoke for themselves:

  • 27% fleet overcapacity identified
  • 17% reduction achievable with no operational impact
  • $4.5M in long-term savings unlocked
  • 87% transition to EVs planned, plus infrastructure fully funded from savings

By challenging assumptions and unifying data, they turned confusion into confidence and built a blueprint for others to follow.

Keep the conversation moving

Fleet optimisation isn’t a one-and-done project. It’s a living process.

As technology evolves, staff change, and sustainability targets accelerate, your strategy should too. Karen and Melvin suggest revisiting your data quarterly, especially in the early stages.

Because the councils that stay agile, those that question entrenched thinking and act on evidence, are the ones turning fiscal waste into measurable progress.

You can’t manage what you can’t see

But when you make your fleet visible, you don’t just save money, you create capacity for innovation, sustainability, and smarter decision-making.

So, is your fleet costing you more than you think?

There’s only one way to find out: start with the data.

Want to discuss how we can help? Reach out today.

The Trans-Tasman talent shift: Why NZ needs a new workforce strategy

Posted November 18, 2025

New Zealand’s infrastructure and energy pipeline is booming, but the people needed to deliver it are in short supply. With record numbers of skilled Kiwis moving overseas, and Australia’s own talent shortage intensifying, organisations here face unprecedented competition for technical and project delivery expertise.

This isn’t just a numbers issue. It’s a race for knowledge, capability, and experience.

The great Kiwi outflow

Over the past year, almost 72,000 Kiwis have relocated overseas — more than half (58%) heading to Australia. For sectors like energy, utilities, and infrastructure, this isn’t a marginal shift. Every engineer, project manager, or digital specialist leaving the workforce takes with them years of institutional knowledge and practical experience.

Even more challenging, the migration overlaps with an ageing workforce. A large portion of technical talent is nearing retirement, leaving gaps that can’t be filled by headcount alone. More than just “hiring”, organisations must think strategically about knowledge transfer, capability rebuilding, and workforce renewal.

Australia’s market pressure

Australia is facing similar talent supply issues, particularly in high-voltage energy, specialised civil engineering, and digital infrastructure. Organisations there are competing fiercely for people with Transmission Extra High Voltage (EHV) experience — skills that are scarce in both countries.

And when Australian employers can’t find talent locally, New Zealand becomes a “hunting ground” for engineers and specialists. Beyond that, consultancies are increasingly tapping Singapore, Malaysia, and the Philippines to support regional projects. For New Zealand, this adds both opportunity and risk: demand for skilled talent is now regional, not local, and competition is growing fast.

Skills scarcity isn’t sector-specific

The convergence of multiple industries competing for the same skill sets is creating a national talent pressure point. Energy, utilities, telco, transport, and water infrastructure projects are all vying for engineers, digital specialists, and project managers.

It’s no longer enough to focus on sector-specific pipelines: companies are competing across industries and borders for the people who can make projects happen. This highlights the importance of strategic workforce planning, capability development, and early talent engagement.

The opportunity: plan for capability, not just headcount

While the pressures are real, they also create an opportunity to rethink workforce strategy. Organisations that proactively capture knowledge, upskill existing teams, and design career pathways will be better positioned to navigate both the local and regional talent landscape.

By viewing workforce challenges as a strategic issue, New Zealand can move from a reactive hiring approach to building sustainable capability that ensures projects are delivered efficiently, safely, and to future-proof standards.

Learn more about the talent shaping New Zealand’s delivery future and download our Infrastructure & Utilities Snapshot for insights on workforce trends, cross-sector competition, and the skills needed to meet the country’s infrastructure ambitions.

Beyond the grid: How AI is reshaping NZ’s infrastructure

Posted November 4, 2025

AI is no longer a tech-sector story, but a project and infrastructure one.

As New Zealand doubles down on energy transition and network modernisation, the same organisations managing critical utilities are also navigating AI transformations. The result is a compounding effect: energy and water demand surging, digital transformation accelerating, and the skills needed to deliver both converging faster than the market can keep up.

Energy is now the bottleneck in the AI race

Globally, AI innovation is driving unprecedented demand for electricity. Anthropic’s Build AI in America report warns that the US AI sector will require at least 50 GW of electric capacity within three years, several times New Zealand’s entire generation output. Data centre buildouts are now competing directly with renewable generation projects for power, land, and transmission access.

By contrast, China added over 400 GW of new capacity to its grid in 2024, creating a massive infrastructure advantage. It’s a reminder that the AI race isn’t just about algorithms, but about who controls the energy supply that powers them.

The irony? While global power demand soars, AI itself is getting more efficient. Google reports that energy consumption per prompt has fallen 33-fold in the past year, and water consumption per interaction has fallen to a fraction of earlier predictions. While AI is learning to use less, the systems supporting it still need more.

For New Zealand, this tension creates a unique challenge: modernising networks fast enough to keep pace with global digital demand while pursuing sustainability goals at home.

AI adoption is accelerating unevenly

According to newzealand.ai, 82–87% of New Zealand businesses now use AI tools for productivity, and 69% of consumers do the same. Adoption is led by transport, media, tech, and public services, but energy, utilities, and telco aren’t far behind.

Our own research shows that 83.3% of leaders in the energy and resources sector and 69.3% in telco believe AI will positively impact their roles within two years. However, half of these organisations are still in the experimental or pilot stage. The shift from exploration to enterprise-level integration is only just beginning.

Encouragingly, around 64% of energy and resources organisations and 75% of telcos already consider AI a strategic priority. For most, transformation is starting in areas where automation can drive immediate efficiency and cost benefits (such as customer service and operations) before expanding into asset management, project delivery, and predictive maintenance.

It’s clear that the conversation has moved beyond “if” to “how fast”.

Where opportunity meets capability

As AI becomes embedded in the backbone of how we build, maintain, and manage infrastructure, the demand for digital talent across engineering and energy is exploding. Data scientists, automation engineers, and AI project leads are now as critical to delivery as civil or electrical engineers.

And these skills don’t stay in one lane. The same professionals helping utilities modernise their networks are being hired by telcos, energy companies, and even transport authorities. And the overlap continues to grow.

For hiring leaders, this means rethinking workforce strategy. It’s not enough to source talent from your own sector anymore. The organisations staying ahead are those building cross-industry capability, creating hybrid roles, and embedding AI literacy across all levels of the workforce.

The next delivery advantage: human + machine

AI won’t replace the workforce that builds New Zealand’s future, but it will augment it, amplifying a person’s impact and freeing specialists to focus on critical decision-making while machines handle data-heavy analysis and monitoring.

The next delivery advantage will belong to organisations that combine technical capability with digital intelligence, and those who see AI not as a standalone initiative but as an enabler of better, faster, and more sustainable outcomes.

Get the insights on what’s shaping New Zealand and download our latest Infrastructure & Utilities Snapshot.

Why government AI adoption is slow and why that’s a good thing

Posted October 28, 2025

When it comes to AI adoption, government is in no hurry. And that’s exactly the point.

In our latest AI survey, 50% of respondents working in the public sector said their organisation is still in the experimental or pilot stage of AI use, compared to many private-sector industries where early adoption is already shifting workflows and job design.

At first glance it might look like governments are falling behind, but there’s good reason they move differently.

Why governments move slowly on AI

Government agencies aren’t built to “move fast and break things”. According to our in-house AI expert, Jack Jorgensen, General Manager of Data, AI & Innovation at our project delivery arm Avec, “There’s a big difference in the way governments need to operate versus private enterprise. They’re designed to be stable, reliable, and robust.”

A government body’s core responsibilities of public services, infrastructure, safety and regulation demand caution, reliability, and trust. So, when your ‘customer’ is the entire population, the stakes are high. Errors can impact millions, data breaches can threaten national security, and AI decisions must stand up to legal and public scrutiny.

The reality on the ground

In many agencies, AI is still in the exploratory stage:

  • Small, controlled pilots
  • Internal tools tested in low-risk areas
  • Strong focus on compliance and security requirements
  • Longer approval cycles for procurement and deployment

“Policy-making roles are challenging to automate and in highly regulated environments, finding relevant and safe use cases understandably takes time,” says Jack.

Security and compliance non-negotiables

Government respondents ranked “security and compliance concerns” on par with financial services, which is no surprise given the sensitivity of the data they hold.

Some agencies are also grappling with:

  • Lack of relevant applications – 20.2% said AI doesn’t apply to their current work
  • Ownership uncertainty – it’s widely unclear who should lead AI initiatives
  • Siloed operations – meaning slow cross-department collaboration

Why this pace makes sense

Jack says, “If anyone’s surprised government is slow on AI adoption, they don’t understand the role. The systems are meant to be dependable, not bleeding edge.”

While speed matters for the private sector in competitive markets, stability matters more than anything in public service. AI in government must work every time, be explainable and auditable, serve the public interest, and align with legislation and policy.

So, what can government do next?

  • Continue piloting in low-risk and high-value areas
  • Invest in AI literacy for leadership and frontline teams
  • Create clear ownership and governance frameworks
  • Learn from private-sector implementations without importing their risk appetite
  • Build secure, compliant infrastructure before scaling

The private sector can afford to experiment in ways government can’t, so caution at this stage isn’t failure. In an era where public trust is fragile, deliberate and well-governed AI adoption is the only responsible path.

Want to explore the sector-by-sector data? Access the full report.

If you need to hire talent for AI or data roles in public service, get in touch with our team. Or if you want to plan a secure AI pilot, partner with Jack’s team at Avec.

The infrastructure race: Building a workforce for what’s coming

Posted October 27, 2025

New Zealand is at the start of one of its biggest delivery decades, and the next two years will determine how well we adapt. Billions are being invested across energy, utilities, and transport, but the workforce to deliver it all is stretched thin. Project backlogs are growing, skills are overlapping across industries, and the people who built the last generation of infrastructure are starting to retire.

Not just another investment cycle, this is potentially a once-in-a-generation test of delivery capability.

A nation under construction

Across New Zealand, major infrastructure and energy projects are ramping up. From the City Rail Link to the Central Interceptor, from solar and wind developments to regional water upgrades, the country’s delivery pipeline is swelling, and so are expectations.

The scale of the challenge is clear: all of our current systems, networks, and capability were built for a different era. Transmission and distribution networks weren’t designed for today’s pace of renewable energy generation. Billions are being committed to new assets, but integrating them into existing grids remains slow, complex, and underfunded.

The result? A delivery window packed with both opportunity and risk. Organisations must not only deliver projects faster but do so while modernising the very systems they depend on.

Workforce renewal is now a critical risk and opportunity

A significant portion of New Zealand’s infrastructure and energy workforce is nearing retirement age, taking decades of institutional knowledge with them. And it isn’t just a numbers game. Workforce renewal is about capturing experience, rebuilding capability, and making these industries attractive to the next generation of specialists.

At the same time, there’s a record number of almost 72,000 Kiwis moving overseas — 58% of them to Australia — adding pressure to already scarce talent pools. Competing for skilled project managers, engineers, and digital specialists has become a national challenge.

The good news? This challenge also creates space for innovation in how we train, partner, and attract talent. Workforce renewal can be a powerful driver of transformation if leaders act early and strategically.

The skills race isn’t industry-specific anymore

When every major project needs engineers, data specialists, project managers, and digital delivery talent, competition for skills stops being an industry problem. It becomes an economy-wide one.

Energy companies are competing with telcos for automation talent. Utilities are hiring the same project delivery specialists as major transport programmes. Data centres are drawing from the same electrical and civil engineering pools as renewable developers.

The line between technical industries has blurred and companies that understand this overlap are the ones building smarter, faster, and more sustainably.

At Talent, we see this shift every day. The most successful organisations are those treating workforce strategy as a competitive advantage by planning early, building flexibility into delivery teams, and investing in partnerships that blend capability and capacity.

Rebuilding capability for the next 30 years

Adapting to what’s next will be the hardest and most rewarding thing New Zealand’s infrastructure and utilities sectors do.

Delivering tomorrow’s projects will take more than hiring replacements. It means rebuilding technical capability, embedding digital fluency, and creating career pathways that attract diverse, emerging talent — from women in engineering and technology, to meaningful Māori and Pasifika representation across technical and leadership roles.

That’s how you build resilience, not just for the next two years, but for the next two decades.

The opportunity ahead

The next two years will define how well New Zealand delivers — not just its projects, but its future workforce. The infrastructure race is on, and every leader faces the same challenge: how to scale and deliver fast without compromising quality, safety, or capability.

The organisations that’ll win are the ones that act now, by treating workforce planning as strategy, not an afterthought.

Get the full picture. Download our latest Infrastructure & Utilities Snapshot for insights on workforce trends, investment priorities, and the roles critical for project delivery.

AI is quietly reshaping workforce planning – here’s what’s changing first

Posted October 20, 2025

Although it isn’t yet a tidal wave, the tide is turning on workforce planning.

In our latest AI survey of 864 business leaders and tech professionals, we asked whether AI was impacting workforce planning over the next 12-18 months:

  • 12.1% said they’re already using AI to evolve roles or reduce manual work
  • 23.3% said they’re exploring how AI may shift the skills they hire for
  • 32.5% said it’s on their radar, but not yet a focus
  • 22.9% said AI isn’t impacting their workforce planning at all

With more than a third of organisations actively exploring or implementing changes due to AI, it’s worth exploring what these changes are for both employers and candidates.

The roles rising first

According to JP Browne, Talent Practice Manager and recruitment expert in the Auckland market, “We’re not seeing a rush to hire ‘AI engineers’, but we are seeing demand for systems engineers, data engineers, and data analysts, because poor data breaks AI.”

Before deploying AI at scale within your business, you need solid data foundations. This means hiring for:

  • Data quality and governance
  • System integration
  • Infrastructure build-out
  • Security and compliance enablement

In other words, it’s those behind-the-scenes roles that make AI functional, reliable, and safe.

Recruitment reality check

AI is also changing recruitment itself, both in how candidates present themselves in the application process and how employers assess talent.

“Some candidates are using ChatGPT to craft flawless cover letters, but the actual CV doesn’t match the role. So it’s forcing recruiters to dig deeper and reintroduce more human screening,” observed JP.

We’re seeing:

  • More AI-assisted job applications
  • A shift away from simple keyword-matching tools
  • Greater emphasis on human-in-the-loop hiring decisions

The personal productivity play

Some employees are already using AI to improve their own output without waiting for top-down organisational change.

From automating reporting to building project estimates, self-taught AI adoption is becoming a competitive edge for individuals. However, it creates uneven capability across teams, and potential risk if it’s unsanctioned.

“You can’t wait on your organisation to set the rules. You need to learn how to use AI yourself, but use it responsibly,” says JP.

Why leaders need to act now

If your workforce planning hasn’t factored in AI, you risk falling behind in:

  • Skill readiness in both hiring and internal development
  • Employee engagement as workers expect modern tools their industry peers are using
  • Efficiency gains where your competitors will find them first
  • Risk management as unsanctioned AI use is already happening

JP emphasises, “AI is here, in your phones, in your search engines, in your workflows. Ignoring it doesn’t make it go away, it just leaves you unprepared.”

The shift right now feels subtle, until it’s sudden. The organisations making small, strategic moves today will be the ones ready for the bigger shifts tomorrow.

If you need help building AI-ready hiring strategies, get in touch with our team.

AI is a security risk and that’s why smart businesses are cautious

Posted October 15, 2025

It’s less about fearmongering and more about smart risk management.

In our recent AI survey, 46.2% of respondents named “security and compliance concerns” as the biggest barrier preventing wider AI use, and our experts say they’re absolutely right to be cautious.

“If I could rate that stat above 100%, I would. Security and compliance should be front of mind. Full stop,” says Jack Jorgensen, General Manager of Data, AI & Innovation at Avec, our project delivery arm.

AI is unlike any other tech shift we’ve seen. It’s fast-moving, largely unregulated, and capable of generating unexpected and sometimes dangerous outputs. And when sensitive company data is involved, that’s not a risk you can afford to take lightly.

The tools are already inside the business

If you haven’t formally adopted AI, your people probably already have.

  • 38.3% of respondents said their organisations currently have restrictions or policies limiting AI use.
  • But 28.9% said AI tools like ChatGPT are being used with minimal control or governance.
  • And 8.9% said there are no policies at all.

For data-heavy, regulated environments like financial services, insurance, or government, that’s a recipe for disaster.

“The usage of AI is prolific in every single organisation. It kind of just happened and now execs are scrambling to catch up,” says our recruitment expert JP Browne, Practice Manager from our Talent office in Auckland.

Real-world fails: AI gone rogue

We’re already seeing examples of AI being used recklessly:

  • A major NZ business uploaded their full CRM into ChatGPT to “get customer insights”
  • A software platform built entirely with AI-generated code suffered a data breach leaking 700,000 passports
  • Deepfakes and synthetic media are being weaponised, and legal systems haven’t caught up

“It’s such a fast-moving beast. You can make a critical mistake without even knowing you’ve made it,” says JP. The caution around AI isn’t about shutting adoption down but about saying yes in the safest way.

Why the AI risk is so unique

AI security isn’t just about infrastructure. It also comes down to:

  • Data exposure: What are your staff putting into AI tools?
  • Model misuse: Can someone prompt the system to give access or misinformation?
  • Compliance blind spots: Are you meeting industry requirements?
  • Auditability: Can you trace how a decision was made by the system? (A minimal logging sketch follows below.)
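
On the auditability point, a trail doesn’t need to be elaborate to be useful. Below is a minimal sketch, assuming a Python stack: every prompt, response, and human reviewer is appended to a log file so a decision can be traced later. The call_model placeholder, the field names, and the log location are hypothetical illustrations, not a reference to any particular vendor’s API.

```python
# Hypothetical sketch of an auditable AI call: record who asked what, what the
# model returned, and whether a human signed off. Names and storage location are
# illustrative assumptions; a production system would use secure, access-controlled storage.
import json
import hashlib
from datetime import datetime, timezone
from typing import Callable, Optional

AUDIT_LOG = "ai_audit_log.jsonl"  # assumed location for an append-only trail

def audited_call(prompt: str, call_model: Callable[[str], str],
                 user: str, reviewer: Optional[str] = None) -> str:
    """Call a sanctioned AI tool and append an auditable record of the interaction."""
    response = call_model(prompt)  # placeholder for whichever tool the organisation sanctions
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        # Hash the prompt so sensitive input isn't stored verbatim in the log
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response": response,
        "human_reviewer": reviewer,  # None signals no human sign-off yet
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return response
```

Hashing the prompt rather than storing it verbatim also goes some way towards answering the data-exposure question above, at the cost of not being able to replay the exact input later.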

According to Jack, “We currently don’t know what the future holds in security breaches and attack vectors. The more people thinking about this, the better.”

What smart organisations are doing

Leading teams and businesses are:

  • Establishing clear AI policies and risk frameworks
  • Educating employees on what AI can and can’t do (and what to never input)
  • Limiting exposure by controlling which tools are sanctioned
  • Bringing data back on-premises in high-risk industries to reduce external risk
  • Running training quarterly or biannually to keep pace with the rapidly developing technology

“Security posture, policy, and training. That’s your baseline. If you don’t have those three, don’t go near production-level AI,” says Jack.

Security is not the brake, it’s the steering wheel

Too many organisations treat security as something that slows innovation, when in reality it’s the only thing that makes safe and scalable innovation possible.

“When you’re managing billions in funds, or customer identities, AI can’t be a black box. It needs to be understood, controlled and governed,” says JP.

So, if you’re exploring AI without a security posture, you’re not innovating. You’re gambling.

If you’re looking to build internal AI capability or make your first AI hire, get in touch with our recruitment team. Or ready to launch an AI or data project? Partner with Jack’s team at Avec.

The real fear behind AI at work isn’t job loss – it’s trust

Posted October 9, 2025

AI isn’t just changing how we work, but also how people feel about work.

Our latest AI survey revealed that while one in four professionals worry about job displacement, most concerns around AI go far deeper than that:

  • 60% are worried about ethical or compliance risks
  • 58% fear loss of human oversight
  • 57% are concerned about inaccuracy or hallucinations
  • 31% say integration is a challenge

What this tells us is that people aren’t just worried about being replaced by AI; many are concerned that the people running it don’t fully understand the risks.

Why workers are nervous

“You can’t bury your head in the sand. AI is affecting workflows and job design, and people are understandably unsure where they fit,” says JP Browne, Practice Manager at Talent Auckland.

Everywhere you look, there are bold statements about how AI will transform everything, but in the real world most employees are being left in the dark. Are they allowed to use ChatGPT? Are their roles changing? Will AI make their jobs harder, not easier?

The lack of communication is creating fear and can drive resistance among teams, potentially stalling AI adoption.

It’s bigger than just job loss

Jack Jorgensen, General Manager of Data, AI & Innovation at our IT project delivery arm, Avec, reassures, “We’re not seeing mass displacement. We’re seeing evolution. The risk is overstated but the change is real.”

It’s true that repetitive, manual, and rules-based work will go, but for most knowledge workers, the shift is about augmentation rather than replacement.

Still, that doesn’t mean people feel safe. JP shares that among workers, “The fear I’m seeing isn’t ‘I’ll lose my job’, it’s ‘I don’t understand this tech, and I don’t trust how it’s being used.’”

Ethics, oversight and deep uncertainty

One of the biggest risks leaders underestimate? The hidden ethics of AI.

  • Is your model biased?
  • Was your training data ethically sourced?
  • Can a customer tell when they’re dealing with a bot?
  • What happens when a mistake causes harm?

JP shares, “The ethics piece is huge. Especially in sectors like insurance.” And Jack echoes, “No one wants to end up on the front page because a bot denied someone’s surgery.”

Governments are slow to regulate, which means ethical responsibility falls on individual organisations, and most aren’t ready.

The more we automate, the more human oversight will matter, and organisations will need people with critical thinking skills, not just the ability to prompt-engineer.

“There was a company that deployed an AI-generated software stack. It looked great until it leaked 700,000 passports. That’s not innovation, that’s negligence,” shares Jack. Trust, transparency, and responsibility are necessary considerations for your AI strategy.

What leaders can do now

  • Involve your people early in decisions around tooling, automation, and processes
  • Invest in ethics and risk literacy, not just tech skills
  • Ensure humans are in the loop, especially where decisions impact people’s lives

According to JP, “You don’t have to be a guru. But you can’t bury your head in the sand. AI is different from anything we’ve experienced before.”

If your team doesn’t trust how AI is being used, they’ll resist it, avoid it, or worse, they’ll use it without telling you. For successful AI implementation you need to build buy-in, not fear.

Find out what else our AI survey revealed by accessing the full report.

If you’re looking to build internal AI capability or make your first AI hire, get in touch with our team.

Or if your business is ready to kick off a data, AI or innovation project, drop a message to Jack’s team at Avec.

Ash London on risk, reinvention, and the quiet joy of Lego at her feet

Posted October 7, 2025

Ash London has spent much of her adult life with a microphone in front of her. She’s been a TV host, a radio presenter, and more recently, an author. But when you ask her who she really is, she doesn’t reach for her extensive resume. She describes herself on the couch, reading a book while her son builds Lego at her feet.

It’s an answer that might surprise anyone who has only known Ash as the professional she is—a vibrant and confident broadcaster who made her name in the music industry. But, to her, that distinction is important. The spotlight is her work, while the quiet is who she really is. We sat down with Ash on our latest podcast episode to chat about the person behind the many job titles.

Owning the title

When Ash published her debut novel ‘Love on the Air’, she found herself hesitant to claim the word “author”. “I’m just a person who wrote a book,” she says, laughing. “But there’s real power in accepting that title.”

This tension between humility and ownership is something many women would recognise, and Ash is quick to point out that if one of her girlfriends admitted to feeling unworthy of their achievements, she’d rush to remind them of their brilliance. Yet, like many do, she struggles to extend the same grace inward.

For her, writing became a way of both proving herself wrong and expanding her identity. After more than a decade of radio, it was an entirely different rhythm. Radio is instant gratification: hours of live content each day, with feedback arriving in real time. Writing a book, by contrast, demanded patience and a willingness to sit with uncertainty.

“I didn’t think I had the discipline,” she admits. “But I wanted to prove to myself that I could.”

Perspective in the pause

Ash admits the novel might not have happened without the forced and chosen pause of her maternity leave, which coincided with the Covid lockdowns and gave her the first real time and space to reflect on the last decade.

“I had put my whole identity into being a broadcaster,” she says. “Then suddenly, I was at home, and it was quiet. No interviews, no deadlines. I started realising, wow, that was actually a really cool thing I did.”

It’s that reflection on both her career and her identity outside of work that gave Ash the push she needed to write. And when her son proudly pointed out his name in the book’s acknowledgements, she felt the depth of what she had created. “It’s a legacy,” she says. “Something he’ll always know I did.”

The introvert behind the mic

Ash is the first to admit she’s not naturally an extrovert. “People assume I am because of my work, but I recharge at home. The truest version of myself is just reading on the couch while Buddy plays.”

Behind the stage presence, Ash is someone who finds peace in stillness, who carefully guards her energy, and who has learned to protect her sense of calm, especially while juggling the demands of breakfast radio, motherhood, and writing.

Doing the inner work

What Ash is truly passionate about is the less visible work: therapy, spirituality, and self-reflection, and she speaks openly about the value of inner growth. During Covid, she began writing and voicing meditations for her radio audience, and the feedback was overwhelming: hundreds of messages a day.

“People are yearning for that deeper connection,” she says. “And when I do that kind of work, it’s what I get the most feedback from.”

Lessons from risk

If Ash has a pattern, it’s refusing to let fear of regret dictate her choices. She’s changed careers, moved countries, sold her house, and taken the leap into becoming an author. “I don’t want to look back and wonder,” she says.

“We romanticise the idea of what would have happened if we’d chosen differently. But the truth is, you don’t know. You just make the best decision you can with the information you have, and you keep going.”

For Ash, gratitude is the thread that runs through it all; the risks, the lessons, and the ability to reinvent herself when the moment calls for it.

Not just an author, TV host, or radio personality…

If you ask Ash what she wants to be known for, the answer isn’t fame, ratings, or bestsellers. It’s two-fold: being a great mum and helping others better understand themselves. “So many of us never really go on that journey,” she says. “If I can be part of helping someone figure out why they are the way they are, that would mean everything.”

For someone who has built a career out of connection, whether it’s through TV, radio, or her novel, each chapter of her story points to the same belief in the power of curiosity, courage, and gratitude. Ash London’s real legacy is the way she’s chosen to live: leaning into risk, redefining success on her own terms, and reminding us that who we are matters more than what we do.

Want to hear more of Ash’s story? Watch the full podcast episode on our YouTube channel.