Insurance and AI: Why humans still need to be in the loop
The insurance industry has long been a pioneer in automation. Fraud detection, claims processing, and risk modelling all lend themselves to automation, and AI is simply the next layer. However, it also brings new complexities, risks, and opportunities.
In our recent AI survey, 40.3% of financial services respondents (including insurance) said their organisation is still in the experimental or pilot stage of AI adoption. And while early wins are clear, there’s a universal truth in insurance: you can’t take humans out of the loop entirely.
From automation to AI: An evolution, not a leap
JP Browne, Practice Manager at Talent Auckland, says: “Insurance has been using automation for years, and AI just extends what’s possible, from approving low-value claims instantly to extracting insight from thousands of documents.”
Examples of early AI adoption in insurance include:
- Automating claims approvals for low-value, low-risk cases
- Using AI to scan and summarise large volumes of customer documents
- Generating insights from call centre transcripts to improve service quality
These targeted use cases reduce cost, save time, and free human experts for more complex work.
Why human oversight still matters
AI may be fast, but it can’t (yet) replace human judgement in high-stakes decisions.
“If somebody’s house is on fire, you can’t let a bot decide whether to let the claim go through,” says JP.
In regulated industries like insurance, compliance, ethics, and customer trust demand human sign-off for:
- Large or complex claims
- Disputed cases
- Situations with incomplete or ambiguous data
- Potential fraud indicators
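In practice, this often takes the shape of a simple triage rule: the system auto-approves only claims that clear every low-risk check, and routes everything else to a person. Here is a minimal sketch of that idea, using hypothetical thresholds and field names rather than any insurer’s actual rules:

```python
# Minimal human-in-the-loop claims triage sketch (hypothetical thresholds and fields).
from dataclasses import dataclass

AUTO_APPROVE_LIMIT = 1_000  # assumed low-value threshold, in dollars


@dataclass
class Claim:
    amount: float
    disputed: bool
    data_complete: bool
    fraud_flags: int  # count of fraud indicators raised by upstream checks


def triage(claim: Claim) -> str:
    """Auto-approve only low-value, low-risk claims; everything else goes to a person."""
    needs_human = (
        claim.amount > AUTO_APPROVE_LIMIT
        or claim.disputed
        or not claim.data_complete
        or claim.fraud_flags > 0
    )
    return "route_to_human_review" if needs_human else "auto_approve"


# Example: a small, clean claim is approved; a flagged one is escalated.
print(triage(Claim(amount=250, disputed=False, data_complete=True, fraud_flags=0)))  # auto_approve
print(triage(Claim(amount=250, disputed=False, data_complete=True, fraud_flags=2)))  # route_to_human_review
```

The point of the sketch is the shape of the decision, not the specific checks: AI handles the straightforward volume, and anything large, disputed, ambiguous, or suspicious lands with a human.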
The security and compliance factor
As part of the broader financial services sector, insurance organisations share similar AI adoption challenges, particularly around security and compliance.
Our survey findings show:
- 2% said security or compliance concerns are their biggest barrier to regular AI use
- 3% said their organisation has restrictions or policies in place limiting the use of external AI tools
- 9% are exploring secure, fit-for-purpose AI solutions
- 11% have developed or implemented their own secure, in-house AI capability
Some insurers are even moving back to on-premises infrastructure to maintain tighter control of sensitive data and meet stringent regulatory requirements.
The data quality challenge
Insurance leaders know that AI is only as good as the data it’s fed. “We’re seeing a big rise in demand for data engineers and analysts, because poor-quality data kills AI performance,” observes JP.
This focus on data readiness is driving workforce changes in:
- Systems engineering
- Data engineering and analytics
- Data governance and compliance roles
What insurance leaders should do next
- Identify low-risk AI use cases that deliver measurable ROI
- Maintain human oversight for complex or high-value claims
- Strengthen data governance and quality
- Build secure infrastructure for AI deployment
- Create clear policy frameworks for AI use across teams
AI can process claims in seconds and surface insights no human could spot, but it can’t replace the trust built through human expertise. In insurance, the leaders won’t be those who hand decisions over to machines, but those who combine AI’s speed with human empathy, ethics, and accountability. The winning formula? Let AI handle the heavy lifting, while people make the calls that truly matter.
Want to find out what else our AI survey revealed? Access the full report.
If you’re looking to build internal AI capability or make your first AI hire, get in touch with our team. Or if your business is ready to kick off a data, AI or innovation project, drop a message to Jack’s team at Avec.