
2025 DAC Beachcroft LLP. All rights reserved.

Careers
Accessibility
Emergencies
Modern Slavery Act Statement
Complaints / Our Policies
Legal & Regulatory

Table of Contents

Introduction
Foreword
A Sector Embracing Change
AXA Interview
Balancing Pace with Prudence
NHS Resolution Interview
Focused on the Right Use Cases
Zego Interview
Humans in the Loop
Tesco Interview
Structuring the Unstructured
Motor Insurers' Bureau Interview
Data Quality and Training
Governance, Ethics and ESG
Final Reflections
Aviva Interview

A Thought Leadership Report from the Strategic Advisory Team of CSG, a part of DAC Beachcroft

From Automation to Intelligence: The Impact and Potential of AI in the Claims Process

In collaboration with Insurance Day, we conducted a series of qualitative one-to-one interviews with a diverse group of compensator organisations from across the claims market, and then tested the common themes emerging from these interviews with a wider group through an online survey.

Our aims in conducting this research are to assist our clients in understanding emerging themes and the extent to which there are divergences in opinion regarding current and future use cases, and to help inform strategic thinking in the months and years ahead.

What we have produced is a review of the claims market's current and planned use of AI, looking thematically at the impact of new technologies on important stakeholders in the process, such as colleagues and, most importantly, customers.

A number of clear themes have emerged, along with a definite sense of optimism, indeed excitement, regarding future use cases and what AI can do to drive process efficiency, enhance job satisfaction and improve customer journeys.

Introduction

Everyone is Talking About Artificial Intelligence (AI)

But what does it actually mean for compensators? How is it impacting the insurance claims process, now and in the future? With the rapid rise of technologies like Generative AI, the emerging shift towards Agentic AI, and the continuing pace of change, the claims process could look dramatically different in just five years. The Strategic Advisory team within DAC Beachcroft’s claims division (CSG), which provides insights and advice on a broad range of nascent issues and innovations impacting general insurers, has set about finding the answers to these questions.

AI is a hot topic, as we all know, but there is a lot of confusion as to what it actually is. In entering any meaningful discourse about AI, it is important to establish common ground, so everyone is clear about what is meant.

Although many organisations already use rules-based Robotic Process Automation (RPA) for processes involving repetitive tasks such as data entry and extraction, this is not AI. What we mean by AI is the ability to identify and learn from patterns in data, allowing systems-driven decision-making, with or without a 'human in the loop', to solve complex problems that previously required human thinking.
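The distinction between rules-based automation and pattern-learning AI described above can be made concrete with a toy sketch. Everything here (the field names, thresholds and sample figures) is hypothetical, purely to illustrate the difference: an RPA-style step applies a fixed, hand-written rule, while a machine-learning step derives its decision boundary from historical data.

```python
# Toy contrast between rules-based RPA and pattern learning.
# All thresholds, outcomes and figures are hypothetical illustrations.

def rpa_rule(claim_value: float) -> str:
    """RPA-style: a fixed, hand-written rule that never changes."""
    return "fast-track" if claim_value < 1000 else "refer"

def learn_threshold(history: list[tuple[float, str]]) -> float:
    """ML-style (greatly simplified): derive the fast-track cut-off
    from historical outcomes instead of hard-coding it."""
    fast_tracked = [value for value, outcome in history if outcome == "fast-track"]
    return max(fast_tracked)  # the boundary comes from the data

# Historical claims: (claim value, how it was actually handled)
history = [(200.0, "fast-track"), (800.0, "fast-track"), (5000.0, "refer")]
threshold = learn_threshold(history)

def learned_rule(claim_value: float) -> str:
    """Applies the boundary learned from past claims."""
    return "fast-track" if claim_value <= threshold else "refer"
```

A real system would use a statistical model rather than a simple maximum, but the essential difference is the same: the rule is identified from patterns in data, not written by hand.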

A Number of Common Themes Emerged from the One-to-one Interviews. In particular:

"With the rapid rise of technologies like Generative AI, and now the emerging shift towards Agentic AI, and the continuing pace of change, the claims process could look dramatically different in just five years."

Pete Allchorne

Partner, DACB Strategic Advisory

All interviewees talked about the importance of making sure humans remain in the loop and of perfecting the interplay between the roles of humans and AI. A number underlined the need for humans to retain responsibility for decisions, with AI deployed to assist the decision-making process by supporting claims handlers in a number of ways, freeing up their time to focus on the more complex and human elements of their jobs. Most of those interviewed for this study anticipated a future where the claims process would not be fully automated. Nearly all described human interaction as essential, for the simple reason that an insurance claim is always a time of stress and emotion for customers. This is why insurers’ focus for investment in AI, and now Generative AI, remains firmly on back-office functions, working behind the scenes to enable them to serve customers better, rather than on the customer-facing part of their business.


Training was mentioned by all as another key to the successful integration of AI in the claims process, helping the humans in the process to understand not just what AI can do and how to use it, but crucially its limitations too, particularly regarding its interpretation of data. As a number of interviewees put it, if the data input isn’t perfect (and given the industry is currently relying on ‘legacy data’, this is most of the time), the people working with the outputs need to understand that. A number talked about significant investments their companies have made in establishing ‘Data Academies’ to educate people in the business about how to interpret the output from Generative AI and what to watch out for in reviewing that output:

  • anticipating assumptions the AI might be making when processing data and identifying patterns and trends;
  • watching out for bias that could creep into the process, because machines don’t understand the nuances that may sit behind some numerical trends in the way that humans can; and
  • being alert to the possibility of machine hallucinations.

Everyone talked about the critical role of humans in the claims process, in reviewing and challenging AI output and making sure the conclusions drawn from it make sense in the real world.


Most described the major potential benefit of Generative AI being its ability to ‘structure unstructured data’. As one interviewee said, "the moment you can do this, you are able not only to make significant improvements to existing models, but also create new models which were out of reach before". The three areas of insurance business and claims handling that could benefit most from Generative AI in this way were listed as:

  • Supporting claims handlers’ calls with customers – Generative AI has a lot to offer in terms of creating summaries of case information for claims handlers at great speed and producing transcripts of calls with customers, freeing them up to focus on the more complex, human and interesting parts of their job.
  • Processing the millions of incoming documents that insurers receive each year – extracting the most useful and relevant information much more quickly.
  • Verifying images – increasingly, customers’ claims are supported with images of damage, and in these days of ChatGPT image generation, it is easier for the unscrupulous to fabricate these. Generative AI is able to assess images submitted in a case to check where they have come from, whether they are genuine, or whether there is fraud at play.
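As a minimal sketch of what ‘structuring the unstructured’ means in practice, the snippet below pulls a few structured fields out of a free-text claim note. In a real deployment a Generative AI model would perform this extraction across millions of documents; here simple pattern matching stands in so the idea is runnable, and the sample note and field names are entirely hypothetical.

```python
import json
import re

# Hypothetical free-text claim note (invented for illustration).
NOTE = (
    "Policyholder reported a collision on 12/03/2025. "
    "Estimated repair cost GBP 2,450. Vehicle reg AB12 CDE."
)

def structure_claim_note(note: str) -> dict:
    """Turn an unstructured note into structured fields.
    A Gen AI model would do this in practice; regexes stand in here."""
    date = re.search(r"\b(\d{2}/\d{2}/\d{4})\b", note)
    cost = re.search(r"GBP\s*([\d,]+)", note)
    reg = re.search(r"\b([A-Z]{2}\d{2}\s?[A-Z]{3})\b", note)
    return {
        "incident_date": date.group(1) if date else None,
        "estimated_cost_gbp": int(cost.group(1).replace(",", "")) if cost else None,
        "vehicle_reg": reg.group(1) if reg else None,
    }

print(json.dumps(structure_claim_note(NOTE), indent=2))
```

Once notes, reports and correspondence are reduced to fields like these, they can feed the existing models the interviewees describe, and new ones besides.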




Whilst many insurers are already using ‘traditional’ predictive AI and machine learning to help assess loss more accurately and identify patterns that point to potential exaggeration or fraud, Generative AI is seen as another level again. The insurers we spoke to are all now piloting use cases to assess the value this new iteration of AI could potentially bring in the future.

This report provides plenty of food for thought. We hope you find it informative, and that it helps to advance the conversation within your own organisation. We extend our sincere thanks to all those who have contributed to this study, be that via interview, completion of the online survey, or by engaging with the outputs.

Strategic Advisory
CSG – Part of DAC Beachcroft


Foreword

This report, prepared by colleagues at DAC Beachcroft, CSG, is fascinating reading for anyone who is interested in the impact of machine learning, not just in the insurance industry, but more widely too. It is a timely contribution, as the United Kingdom faces some key decisions about the regulation of AI use in the public and private sectors. Legislation on AI is yet to emerge, and its likely content and approach remain unclear as the Government continues to consult over the next year or so.

Public attitudes to AI must be understood by policymakers. Earlier this year, a survey of more than 3,500 UK residents was conducted by the Ada Lovelace and Alan Turing Institutes. Nearly three-quarters said laws and regulations would make them more at ease with the growth of AI technologies, and nearly nine in ten said they believed that the government or regulators should have the power to halt the use of AI products deemed a risk of serious harm to the public. Over 75% said government or independent regulators should oversee AI safety.

None of these results should come as a surprise. The incredible opportunities that AI offers us will only be fully realised if there is public trust. I believe that will come if governments and organisations recognise from the outset the ethical imperatives of transparency, explicability and the ability to challenge automated decisions when designing new machine-based systems.

Against this backdrop, the authors of this new report have been doing some in-depth research into how a vital part of our service economy, the insurance industry, is adapting to the new ways of working that the use of AI is leading to.

Their findings, based upon in-depth interviews conducted with a number of insurance providers in one of Britain’s most important service sectors, highlight some familiar issues but also point the way towards new ways of working that will make the insurance sector more efficient, more productive and more accurate in assessing risk and preventing fraud. The potential benefits for customers are increasingly clear.

There was unanimous support for human interaction, however, given the often emotional and stressful circumstances in which claims are made. For the present, the focus of AI investment for insurers is firmly on back-office functions, rather than on the customer-facing part of their business. Fully automated claims processes were rarely envisaged by anyone, but the power of AI when it comes to summarising claims calls from customers, processing millions of documents and detecting fake images, videos or audio recordings was seen as a key priority.

Most fundamentally, the ability of AI to provide better structure to the wealth of data being generated by the insurance industry was recognised. The dangers of imperfect or historic data creating distortions and bias were also understood, and I was encouraged to learn that significant investments are being made in establishing ‘Data Academies’ to provide training in how to interpret output from Generative AI; what to watch out for in reviewing that output; anticipating the assumptions the AI might make when processing data and identifying patterns and trends; watching out for bias that could creep into the process; and knowing how to deal with hallucinations.

Against a backdrop of international regulatory uncertainty, this report represents a major contribution to the ongoing debate as to the role of AI in the claims process. It should be essential reading for anyone serious about managing AI’s risks whilst maximising its enormous potential to provide greater efficiencies in a safe and ethical way.

by the Right Honourable Sir Robert Buckland, KBE, KC

Former Lord Chancellor and Secretary of State for Justice, and now member of DACB's Policy Unit
Photo: Simon Dawson/No 10 Downing Street

“The incredible opportunities that AI offers us will only be fully realised if there is public trust.”

Sir Robert Buckland, KBE, KC


"It's not about being 'first' – it's about being smart with technology to be customers' 'first choice'."

Waqar Ahmed

Claims Chief Operating Officer, Aviva

Industry-Wide Adoption Already Underway

Every participant, whether interviewed or surveyed, confirmed that their organisation is already using or actively exploring the use of Generative AI. Many also report existing deployments of traditional AI (rules-based systems), with 71% saying they already use this in some areas, and 65% actively exploring Generative AI's potential.

Half are already testing or have completed pilots with Generative AI. Not a single respondent indicates their organisation has no plans to explore AI. One survey respondent describes the industry’s imperative to embrace AI in the strongest terms: “If you are not using AI, you will be replaced by a company that is using AI.”

65% are actively exploring Generative AI; 7% have plans to explore AI.

Across both one-to-one interviews and survey responses, a clear picture emerges: the insurance industry is embracing the potential of Generative AI with enthusiasm, tempered by a thoughtful and measured approach. This is not an industry rushing blindly into the future; it is one recognising both the opportunities and responsibilities that come with technological change.

The following section brings together the key themes from our research. You’ll find excerpts from the interviews within this report, and you can also click through to the full interviews for a deeper dive into each conversation. 

A Sector Embracing Change

With Eyes Wide Open



Some interviewees share their long-standing experience with AI, dating back nearly a decade in some cases. Others are newer, ‘digital first’ brands, for whom AI and Generative AI technologies are unsurprisingly front and centre. Others again highlight current pilot projects or areas of experimentation.

Aviva was one of the earliest adopters of AI in the industry, starting some 10 years ago. But its Claims Chief Operating Officer, Waqar Ahmed, cautions against investing in technology for technology’s sake, warning against the allure of "shiny new toys". He emphasises the importance of "discipline" in applying the right tool to do the right job, always having the end goal in sight, and improving the customer experience: "There’s no point in using an expensive tool (when ultimately the cost will be passed on to customers) where an inexpensive one could do the job just as well."


AXA Insurance

Interviews


Alexandra Price

Senior Programme Manager (Analytics)

"Across the insurance industry, pilots and trials are being run to see what Generative AI has to offer in improving how claims are managed, how decisions are made, and how customers experience the claims journey. At AXA, the integration of both traditional and Generative AI is a clear example of how insurers can evolve strategically while keeping the human touch central."

Building on a Strong Foundation

AXA has been using traditional machine learning (ML) models for several years, with a particular focus on claims management across motor, property, and casualty lines. These models help claims handlers triage cases and recommend 'next best actions', supporting efficient and consistent decision-making from the start of a claim through to resolution. Importantly, this AI is used to augment human expertise, not replace it. We have no automated decision-making within our journey. The handler is always able to review the output and accept or reject those decisions. 

The Generative AI Opportunity

Building on our traditional machine learning capabilities, we are now exploring how Generative AI can add further value, particularly in document analysis. For example, Generative AI can read and interpret medical reports or other claim-related documents to help detect early signs of complexity in a case, so it can be escalated to our specialist team more speedily.

Customer Experience: Seamless and Human-Centred

The company’s use of AI is both customer-facing and internal. In digital channels, like our Electronic Notification Of Loss (ENOL) platform, AI helps triage claims and suggests likely outcomes more speedily, such as whether a vehicle is repairable or whether it needs to be taken to salvage. Customers can then choose to proceed digitally or speak to a handler at any point. 

"Behind the scenes, AI models work silently, flagging potentially complex claims or summarising customer calls to save time. But the customer still interacts with a human. Ultimately, it’s the handler speaking with the customer, backed by AI-generated insights that give them more confidence in those interactions."

Listen to Full Interview

"Our handlers now say AI gives them more time to focus on the parts of the job that require their expertise and human skill-sets, with much of the 'boring' admin taken away, so they have more time to focus on handling complex conversations, providing empathetic support and making decisions. All aspects of the job that really benefit from the human touch."

People First:
Earning Buy-in Across the Business

Successfully integrating AI into claims management isn’t just about the technology. A large part of the exercise is about bringing the people in the business with you – classic change management. In the early days of the programme – we are now five years in – there was understandable hesitancy from staff about what AI might mean for their roles. However, we have been careful to take a collaborative, transparent approach. Training and support are embedded into AI rollouts, and feedback loops ensure ongoing refinement. Crucially, AI projects are developed with the business teams, not handed down to them.

Real Business Impact

The AI programme is delivering results for our business and for our customers on multiple fronts. It is enabling us to make faster decisions – customers benefit from quicker resolutions, such as knowing immediately where their vehicle will be sent for collection, i.e. whether repair or salvage. It is also greatly improving our operational efficiency, as handlers can manage cases more quickly and effectively with AI support. We also see the benefits of enhanced oversight, because AI acts as a second line of defence, spotting things a human might miss. All in all, AI is facilitating better outcomes – decisions are more consistent and accurate when handlers are supported by AI, which ultimately reduces our indemnity costs and shortens claim cycle times for customers.

AI also has an important part to play in risk management. We are exploring what Generative AI can do in terms of analysing complex data sets, e.g. of historic claims, and suggesting risk mitigation strategies to help reduce potential future losses.

It also has a role in fraud detection. AXA uses AI models to assign risk scores to claims, helping specialist teams prioritise investigations. We are also exploring document and voice analysis to detect fraud more effectively.

As for ethics, AXA has built a governance framework to ensure every AI project is developed responsibly. This includes early engagement with data protection and compliance teams, as well as built-in bias testing and monitoring.

A Measured,
Human-centric Future

 While AXA is open to exploring automated decision-making in the future, our current approach is firmly rooted in maintaining a 'human in the loop'. The AI may make a prediction, but the final call remains with the claims handler. There is real value in that human-AI collaboration. We are not building AI in isolation. We are building it with our people, for our people, because we believe that is how you get the best outcomes for customers and for the business.


Balancing Pace with Prudence

"We have to bring everyone along with us, not just in the need for change, but the appropriate pace of adoption as well."

Andrew Wilkinson

Chief Claims Officer, MIB

A recurring theme was the challenge of getting the pace of change right. As Waqar Ahmed puts it, “the pace of technological change is moving faster than the pace of comprehension, let alone adoption”.

Simon Hammond, of NHS Resolution, talks about the challenge of bringing colleagues with you on the change journey. The key, he believes, is “making sure everyone is on the same page in terms of realistic expectations. Some, of course, may be fearful of the machines taking over from the humans”, he says. “Others, however, will be at the other end of the spectrum, wanting AI immediately and perhaps not appreciating the need for a thoughtfully paced approach and reflection around the guard rails that might be needed, the regulatory issues that sit around it, nor the potential for unintended consequences. There’s a whole range that falls between these two ends of the spectrum… The key is to get everyone to buy into the appropriate pace of change, as well as the change itself.”

When we put this to the wider group, 75% of respondents said their colleagues are either positive or neutral about AI entering their workflows, but the internal appetite for speed is mixed: 50–60% report hesitancy around trusting outputs or concerns about job displacement, while more than 40% of respondents describe colleagues’ impatience, wanting transformation to happen faster than their organisations are currently planning.

Despite this, 80% said they don’t believe colleague resistance is a major issue - suggesting any hesitation may stem more from uncertainty than outright opposition; a number of survey respondents explain their organisation is not yet at the point where AI is impacting the real day-to-day life of a claims handler, so for them it is too early to tell. As awareness and use of AI grow, so too will confidence in its role.

Bringing all the people in the business along with you in sync is a significant challenge. How do you achieve this? According to Simon Hammond, “you need to ensure people are kept informed about the timeline and the ‘art of the realistically possible'. It’s about allowing people to understand you can only move at a certain pace – and that moving at that given pace is a critical aspect: bringing people on the journey with you and dispelling myths along the way.”


Waqar Ahmed agrees that success comes from involving frontline colleagues in the design of customer-facing technology. This works not just as the best way to ensure colleagues buy in to the use of new technologies, but also from the point of view of making sure customers’ needs are the focus of any tech project. He describes how positively colleagues in his claims teams have embraced new technologies and Aviva’s mission to become "a data-driven, intelligence-led function, where people are at the heart of what we're doing".

“There is often an assumption that when you deploy AI, staff perceive it as a threat”, he says. “But we have proven that when it works side by side with claims handlers to augment what they do and assist them in their roles, they are then able to amplify their own human capabilities, such as empathy, which is so crucial to the customer experience. We have seen their engagement with the company increase.” 


Further, Andrew Wilkinson of the Motor Insurers' Bureau believes AI can act as a highly effective trainer for claims handlers, helping people learn about the relevant case law and legal complexities on the job. “DACB’s excellent AI tool for credit hire is a great example of AI at its best”, he says. “Whereas previously a handler's learning was in large part by trial and error over time, this tool helps them give an offer and explains the rationale, so helping them as a virtual colleague/assistant best friend whilst training them at the same time."

He is very upbeat about the opportunities AI presents for his colleagues, “We see the potential for AI in our processes as very positive. We don’t see it as placing jobs for the humans under threat, but instead increasing opportunity for our people, making their jobs more skilled and interesting.”

Ian Kershaw, Vice President of Customer Service, Claims and Fraud at Zego, agrees, “I don't see AI necessarily replacing claims handlers. It will just change their roles. So instead of handling 200 claims each, they’ll proactively be able to handle 800 because they'll be supported by technology that enables them to work faster and do more, and potentially to an even better standard as more parts of the process become automated.”

AXA Insurance’s Alexandra Price says it’s about taking a collaborative, transparent approach. At AXA, she tells us, “training and support are embedded into AI rollouts, and feedback loops ensure ongoing refinement. And crucially, AI projects are developed with the business teams, not handed down to them.” 



"Successfully integrating AI into claims management isn’t just about the technology. A large part of the exercise is about bringing the people in the business with you. It’s important that AI projects are developed with the business teams, not handed down to them.”

Alexandra Price

Senior Programme Manager (Analytics), AXA


"There is a distinction between Natural Language Processing (NLP) technology (which has the ability to work with unstructured data and produce good, reliable and useful outputs) and true machine learning. NLP is a good starting point and brings many benefits, not least easing the burden on staff and making their jobs easier. We have found NLP extremely useful in supporting our staff in this way, and to garner insights and certainty in our environment, by investigating trends and patterns which we can then feed back into the wider healthcare community, with a view to delivering better clinical services to the public."

NHS Resolution

Interviews

When considering the appropriate level and type of technology investment for our organisation and what Generative AI has to offer, the key for us, as with all advancements in technology, is not just to look at the benefit it will deliver for NHS Resolution, but also the wider system we operate in. We are in a slightly unique space because we operate within the health system and are also juxtaposed with the justice system. We are looking for advancements that will give us greater visibility of the ‘concerned’ space, i.e., help us identify where something has gone wrong to a significant degree, which we can then investigate in greater detail and in a wider context, giving us valuable insights to share with other parts of the healthcare system. So, when it comes to deciding on the right technology, for us it is not just about considering what gains we can make for ourselves operationally (saving operating costs, improving consistency and fairness in our decision-making, with less resource), but more about whether or not it will benefit us in what we can deliver back to the health service, both in relation to policy development and also in respect of safer clinical care. But that’s maybe where we come in with a unique perspective, because we are not profit making. 

Where we are right now is in providing a more efficient system with the integration of some aspects of ML, but this is in its infancy. Where we are moving to is the integration of true AI, to see what we can produce from our data that will help the wider system learn from what we see. We are currently updating our IT architecture so we can integrate AI for the benefit of both our internal processes and to provide external insights.

For want of a better phrase, you need to reassure them that you are not building a ‘robot army’, with the end goal that all the decisions across the organisation will be dealt with by AI, rather than human interaction. Others, however, will be at the other end of the spectrum, wanting AI immediately and perhaps not appreciating the need for a thoughtfully paced approach and reflection around the guard rails that might be needed, the regulatory issues that sit around it, or the potential for unintended consequences. There’s a whole range that falls between these two ends of the spectrum.

The AI has got to interact successfully with the organisation you are working with and its people, because it can drive so many benefits if used in an appropriate way, so you need to make sure the people in the business who will be using it and benefitting from its output, understand how it works and how it is to be used. It is about educating people so they understand the benefits to their own roles.

The risk of algorithmic bias is one we talk about a lot, and it requires time for deep reflection. There is the issue of machines making assumptions on statistical evidence, even when data sets are strong, because the AI can’t understand the subtleties that lie behind the statistics. It comes back to how you see the future of AI in the decision-making processes in your organisation. These are conversations we are having continually. Would we ever get to a point where the machine is telling us everything and making decisions? The risks in this are far too huge. At NHS Resolution, we are dealing with a very, very sensitive area of claims management – fatalities of people of all ages, some of the most sensitive health issues that occur in the population and some of the most severe injuries people can have, such as cerebral palsy and birth injuries, where people have life-long impacts. So, for us, we accept there will always be a requirement for an element of human decision-making in all that we do.

Whether integrating NLP or machine learning (I see the two as quite distinct), you need to have not just the right technology platform to support it, but the right data platform as well. From our discussions with other indemnifiers, we know that everyone is facing the same challenges around this, particularly around the issue of ‘legacy data’. Tech suppliers talk emphatically about the ‘Holy Grail’ of a complete and perfect data set, this being the only true way to be sure of reliable and consistent output, but most readily acknowledge the restrictions in data. More on this later…

"Another challenge is, of course, the people element, how staff and colleagues are responding to this drive to bring in AI. There is a lot of excitement about it in our organisation and what it can do, which is positive, but a challenge is to make sure everyone is on the same page in terms of realistic expectations. Some, of course, may be fearful of the machines taking over from the humans."

Where we see the benefits of bringing true Gen AI into our systems is in using it to learn from our past experience in order to inform our staff more accurately about potential outcomes that may occur, and help with risk management in terms of flagging where the risks to the health service may lie, so we can then work with other health partners in avoiding those risks and seeing less harm in the overall system. There are probably multiple other uses of Gen AI for the future that we see that will bring a variety of benefits. For example, assisting with our pricing models and our actuarial forecasting in relation to our long-term liabilities.

We have already launched the first iteration of our new case management system in parts of our business, and the area covering claims, the largest part of our business, is due to go live with a new case management system in the next two to three months. So this is very much the here and now for us!  This has been a couple of years in development, as you can imagine.  What it will provide us with is the platform for a true AI environment that we can actually start to operate with.

2025 DAC Beachcroft LLP. All rights reserved.

Careers
Accessibility
Emergencies
Modern Slavery Act Statement
Complaints / Our Policies
Legal & Regulatory

Simon Hammond

Director of Claims Management

Like everyone else in this space, we face many challenges and, as referenced above, our biggest concerns are around the availability of reliable data, particularly the quality of data from legacy systems. We live in the real world, so the Holy Grail of a complete and perfect data set can only ever be an aspiration. Indeed, this is why we have seen the rise of ‘data scientists’. We use data scientists, and there is a lot they can do, but there are natural limitations because, at the end of the day, they are handling historic data sets, and the data does not hold the level of consistency or granularity that allows correlations to be drawn. This has the potential to become a major issue when you start to apply machine learning.

You need to have the foundation for the right data platform in place, and also the right data sets, for AI to deliver its promised benefits and produce appropriate results that are accurate and can be relied upon. Conversely, if the data is flawed, and this is what I hear quite regularly from the supplier community, then the potential is that people could build AI systems that produce results from pseudo data or from a small sample that is not necessarily representative, which then becomes a challenge when applied to a wider data set. You may then have gaps and therefore cannot produce the same or similar results to prove the output is repeatable and reliable.

"The key is to get everyone to buy into the appropriate pace of change, as well as the change itself. To do this, you need to ensure people are kept informed about the timeline and the ‘art of the realistically possible’. It’s about allowing people to understand you can only move at a certain pace and that moving at that given pace is a critical aspect, bringing people on the journey with you and dispelling myths along the way.

"Of course, these characteristics are common to any change initiative. They are exacerbated when external pressures are at play, for example, the Government wanting all its agencies to invest heavily in AI at the moment. Then people see technology transformation as something that has to be done, rather than a nice-to-have investment opportunity for the business."

In essence, this is about the risk of unintended consequences. It also applies to our work, providing insights for our external partners in the wider healthcare system. In looking to derive benefits for our members by identifying the sort of harm that has occurred and quantifying the risk, we need to be very careful in understanding these unintended consequences that could tarnish the information we are sharing. For us in this space, our ambition for AI is about looking at how it can support our staff, both in making decisions and in helping the health system learn, as opposed to AI doing this in its entirety on its own.

"There is a lot of discussion at the moment within the wider AI space about how it is going to be regulated. I think the risk is that the legislative framework will always trail invention and innovations in the tech space. We have seen this historically. I believe the key is for an organisation to understand and set its own risk boundaries, to remain within these regulatory frameworks, and adapt as the law evolves. There are likely challenges coming down the line, and across industries, in relation to how the Data Protection Act interacts with the potential of AI and the ingestion of different data models. I’m talking here about the wider environment, not just at the organisational level but maybe even broader."

Another key element is having a clear data strategy, not just for operational efficacy but for regulatory compliance. You need to be ethical in how you go about collecting, storing and using data, and how you intend to utilise the outputs for any models you are building, whether for financial provisioning, decisions in relation to claims management, or for delivering those insights to the wider health community. Whatever those models are designed to do, you need to have a strategy in place to ensure the models themselves and the way you are using the data are appropriately assessed on a regular basis, to ensure that bias isn’t creeping in and that they are producing reliable results consistently. 

In addition, I think there will no doubt be frameworks brought in to regulate how organisations can use AI in certain decision-making situations. It has to be down to the individual organisation to ensure it sets its own risk appetite accordingly against those regulatory frameworks. This is a recurring conversation internally at NHS Resolution: how our risk appetite fits with the wider technological advancements that might be coming into our environment now and in the future. It’s a double-edged sword: on the one hand, we want to stay well within the regulatory confines and ensure that the way we use technology is both ethical and on the right side of regulation; on the other hand, we are balancing our risk appetite statement with really wanting to use the new technologies positively and derive the benefits from them. This is a difficult line for any organisation to set, particularly when technology (especially in the Gen AI space) is moving at such a dramatic pace.

All this needs to be reviewed continually. How regularly should organisations conduct these reviews? It is probably too simplistic to put a timeline against it, but if you did only an annual review you would soon find yourself out of date. You have to set the cadence against your ambitions and your investment strategy for your tech future, and also against your operational processes, because every time you introduce new aspects of tech your operational processes are going to change, and therefore your risks may change in either direction. They may improve because you are safer – for example, some advances in tech may make fraud detection and prevention easier, so your risks actually decrease. On the opposite side of that same coin, ingesting more tech-based decision-making may present greater risks, such as biases creeping in.

For example, if the pre-event detection mechanisms you choose to adopt end up identifying the wrong categories of individuals for fraud investigations, this could lead to reputational damage and added operational cost and, most importantly, delay the claims process for genuine claimants. So, the nature of risk is going to change depending on your ingestion of tech within your organisation, possibly improving one risk while at the very same time heightening another.

You can see exactly why decisions around the ingestion of Generative AI cannot be rushed!


Focused on the Right Use Cases

The real opportunity for AI here lies not in replacing this human connection, but in enhancing it. By removing repetitive administrative tasks, summarising case files, transcribing calls, and processing vast quantities of incoming documents, AI gives claims handlers more space to focus on what really matters: the customer and their journey.  


As one survey respondent says, “If we get it right and create the capacity, it should make colleagues’ roles richer, more meaningful by removing the more repetitive/mundane tasks, allowing them to focus on value-added activities.”  Another points out that, “Historically, automation has been about removing non-value-add work. AI has the possibility to assist with actual value-add work to really support claims handlers in their jobs and finally offer a benefit to our customers, who are what really matters.”

While cost savings are certainly a driver (cited by 80% of respondents), the clearest shared motivation across the industry is to free up colleagues to focus on more meaningful, human aspects of the job, with 100% of participants identifying this as a key benefit. Interviewees talk frequently about this, across all sectors of the insurance industry: that a customer making a claim will be experiencing high personal stress, and in these moments the claims handler’s role becomes far more than transactional - it becomes deeply human. Empathy and clear communication are critical.

“I believe there’s a trap with technology: a temptation to rush to market with a shiny new toy. We must remember to put clients front and centre. We need to apply discipline in applying the right tool to do the right job, always having the end goal in sight: improving the customer experience.”

Andrew Wilkinson agrees, “AI can do a lot to assist claims handlers: collating information, for example about the claimant’s medical records and the events surrounding the damage; presenting summaries to bring handlers up to speed more quickly; and then putting documents together for experts or partners to prepare them for negotiating settlements, perhaps even using estimating assistance tools.”

Waqar Ahmed illustrates the point very elegantly with a picture, “A customer phones their insurer to discuss their claim. The reality is that the claims handler will not be familiar with the particular details of their claim when they call in, so the first thing they’re likely to say is, ‘Can I put you on hold? I’ll take you through your security credentials and then let me just get the update to find out what’s going on.’ That’s not a brilliant experience for a customer, as we all know from our own experience of dealing with mobile providers, utility companies, banks and the like. It’s a fact of modern life. But we saw that as a problem to solve in the customer’s claims journey, and we thought, what if technology and data science can help us solve this?”

"So, whereas previously our handlers would be going through the notes on the claims system (which typically were not laid out for the handler in a user-friendly way), what we did instead was to pinpoint the key components of a claim that people want to know when they call to discuss it, and then make sure the system displays these for the handler in a far more user-friendly form. We designed this in conjunction with our frontline operations teams, who are closest to the customers and understand their needs, and were able to design a product that had real value for our handlers in terms of how easily they could understand what was happening in a claim – and real value for the customer in terms of getting answers to their questions much more quickly and not being placed on hold for so long. A win-win!”

Waqar Ahmed

Chief Claims Operating Officer, Aviva

 

"There is often an assumption that when you deploy AI, staff perceive it as a threat. But we have proven that when it works side by side with claims handlers to augment what they do and assist them in their roles, they are then able to amplify their own human capabilities, such as empathy, which is so crucial to the customer experience. I believe the use of technology to support claims handlers and augment their capabilities will soon become hygiene factors when recruiting and also retaining staff.”

Waqar Ahmed

However, Ian Kershaw describes how Zego’s experience shows generative-AI-powered apps and chatbots can take automation of the customer interface to another level and achieve better customer satisfaction ratings than human interactions. "Customers still always have the option to switch to a human agent, but as we’ve developed and improved the live chat experience, we are seeing a huge reduction in the number of calls coming in to our human agents, because the chatbot is able to answer their questions completely. We see this as the proof in the pudding. Our aim is, by the end of the year, for 85-90% of our live chat interactions to be fully automated, by customer choice."

Another significant advantage the industry expects from Generative AI is its potential to improve the accuracy of loss assessments. As Simon Hammond points out, “There are likely many other future applications of Generative AI that could deliver a wide range of benefits - for example, enhancing our pricing models and actuarial forecasting related to long-term liabilities.” Indeed, 57% of respondents in our online survey highlighted this as a key motivation for exploring Generative AI. Additionally, 64% of respondents highlight the ability to improve consistency in decision-making as one of the main value-adds. Andrew Wilkinson says he sees AI’s ability to assist claims handlers in the way described above as one of the keys to improving consistency in the MIB’s decisions and assessing loss more accurately. 

However, there remains industry-wide caution regarding the degree to which machines should independently make decisions, rather than serve as supportive tools in the decision-making process.



Zego Interview

A new feature we have brought in through AI is a multi-language tool, as so many of our customers are not native English speakers. This will pick up straight away if a customer is using English as a second language, and then ask which language they would prefer to speak to us in. That makes an amazing difference for the customer. 

When a customer wants to contact us about a claim, because they do this through the app, we already know who they are, what their policy number is, what their excess is, and all the other terms of their policy, also what car they drive and where they live. So they don’t need to feed any of this detail into the system when they start a claim. All they have to do is give us the details of the incident, and our system will match everything up as we kick off the claim.

"The model has proved so popular and successful that since the beginning of this year we have started to expand out of our commercial driver niche to provide personal car insurance too. Our focus on telematics, and pricing centred around it, means we can be particularly attractive to young drivers, as well as to those with convictions who are still good drivers."

Another area where Gen AI is assisting us hugely is fraud detection. These days, it is very easy for fraudsters to ask ChatGPT or one of the other models to fabricate invoices, engineers' reports, or even fake photos of their vehicles in a crash situation. But luckily, we've got technology that can tell us very quickly whether this information has been generated by ChatGPT. Fraudsters are evolving very fast, so it's important that we evolve just as fast and stay ahead!


It’s just phenomenal how fast the world is shifting right now, particularly around Gen AI. As our CEO and founder puts it, unless you are really pushing forward quite aggressively with AI, you are going to get left behind. But, as we do this, we are continually thinking from the point of view of our customers and what the value-adds will be to them, just as much as what the impact is on Zego.



"As a very digitally-driven brand in the marketplace, as you can imagine, AI and Gen AI are core to how we are developing our business. The biggest area we have been focusing on in terms of integrating AI is the customer service side, with the primary method our customers use to communicate with us being app-based."

Our main app is called ‘Sense’ and is driven by telematics, so we can offer better prices to our customers based on their personal driving performance. We see this as putting the power back into our customers' hands: if they drive well, they get a better price. We also reward customers for better driving, for example, offering vouchers or potentially discounts on renewals, etc.


Obviously, telematics creates a huge amount of data. We review this to ascertain how our customers drive, where they drive, the time of day or night they are driving, and so forth. This gives us very valuable information about the sort of risk factors that can lead to accidents, which gives us a significant advantage in the market.

We have put a lot of time and other resources into developing our live chat customer interface. We all know how poor the customer experience of using live chat systems can be. Indeed, I’m sure everyone reading this report will have had bad experiences themselves. So, perhaps it is not surprising that some companies are actually stepping back from some of these bot flows. But we’ve taken a different approach: taking the time to understand and address live chat’s shortcomings, investing in it and improving it, and we continue to work on improvements every single day.

We were in a very different place maybe four or five months ago. We had a very rudimentary live chat flow, like a lot of companies still have today, where customers are just given a long list of options to choose from to categorise their question. And, if their query doesn’t fit neatly into one of these categories, it can be immensely frustrating for them. They can get tangled up in what I call the ‘death spiral’ of just not being able to get the bot to understand their problem. They then just want to speak to a live human who can give them the answer.

So, what we wanted to do was create a human-like experience on the live chat, or an even better-than-human experience, and Gen AI has enabled us to do this. Rather than being offered a tick-box list to categorise their question, customers can now simply have a free-flowing conversation with the bot. It’s incredible how much this has improved our customer satisfaction levels. In fact, when they're dealing with the bot now, customer satisfaction is higher than when they deal with human agents!

We are really pleased with where we have got to, even in terms of tone of voice and empathy. To give an example, a customer the other day asked on our live chat why his policy had been cancelled, and he mentioned he’d been out of the country and that his mother had died. The chatbot response was incredibly empathetic: ‘I'm really, really sorry to hear that. I completely understand the sort of turmoil you must be in...’

In terms of where we place humans in the loop, of course, we monitor and quality-assess chatbot interactions. They are all stored and entirely auditable. We do this in exactly the same way any insurer does with its claims handlers.

Also, we don’t yet leave any decision-making to the machines. Our bot stops short of making decisions. At the moment, from a claims perspective, its role is limited to data capture. So it's taking information from the customer, that information is fed into our system automatically, but then it's a human making the decisions: is the customer at fault? Is the customer not at fault? Where does the customer need to be? What supply function does that customer need? Where does it need to be repaired? At the moment, that's all with a human.

I don't see AI necessarily replacing claims handlers. It will just change their roles. So, instead of handling 200 claims each, they’ll be able to proactively handle 800, because they'll be supported by technology that enables them to work faster and do more, and potentially to an even better standard as more parts of the process become automated.

"Do we ever envisage a future where the decisions could be fully automated? Absolutely. But, for end-to-end claims, I think we're some way off that at the moment."


Ian Kershaw

VP of Customer Service, Claims and Fraud

"Because of this investment, we are now finding that the chatbot can answer 55-60% of customer queries without any human interaction. Customers still always have the option to switch to a human agent, but as we’ve developed and improved the live chat experience, we are seeing a huge reduction in the number of calls coming in to our human agents, because the chatbot is able to answer their questions completely. We see this as the proof in the pudding. Our aim is, by the end of the year, for 85-90% of our live chat interactions to be fully automated, by customer choice."

Humans in the Loop: Non-negotiable (for now)

Simon Hammond emphasises the need for humans always to remain in the loop at NHS Resolution because of the particularly sensitive nature of the claims and information dealt with: “Would we ever get to a point where the machine is telling us everything and making decisions? The risks in this are far too huge. At NHS Resolution, we are dealing with a very, very sensitive area of claims management. So, for us, we accept there will always be a requirement for an element of human decision making in all that we do.” 

Alexandra Price adds: “The traditional machine learning models we have been using for several years help claims handlers triage cases and recommend 'next best actions', but importantly, this AI is used to augment human expertise—not replace it. We have no automated decision-making within our journey. The handler is always able to review the output and accept or reject those decisions."

Andrew Wilkinson talks about a moral duty for humans not to abdicate decision-making, but to keep responsibility for it: “We see the role of AI in assisting decision-making, but not making decisions by itself. There will always be the need for humans to take responsibility for decisions."

Most believe that the near-term focus for AI should remain on back-office support; however, some are using it for customer-facing tasks in a controlled, hybrid way, maximising the benefits of AI-human collaboration. Alexandra Price describes AXA’s use of AI for both customer-facing and internal tasks, “In digital channels, like our Electronic Notification Of Loss platform, AI helps triage claims and suggest likely outcomes more speedily, such as whether a vehicle is repairable or whether it needs to be taken to salvage. Customers can then choose to proceed digitally or speak to a handler at any point…”  She adds: “Behind the scenes, AI models work silently, flagging potentially complex claims or summarising customer calls to save time. But the customer still interacts with a human. Ultimately, it’s the handler speaking with the customer, backed by AI-generated insights that give them more confidence in those interactions."

As Ian Kershaw explains, even digital-first brand, Zego, currently stops short of using bots to make decisions, "We don’t yet leave any decision-making to the machines. Our bot stops short of making decisions. At the moment, from a claims perspective, its role is limited to data capture. So, it's taking information from the customer, that information is fed into our system automatically, but then it's a human making the decisions: is the customer at fault? Is the customer not at fault? Where does the customer need to be? What supply function does that customer need? Where does it need to be repaired? At the moment, that's all with a human.”

Julie Plumb of Tesco’s Insurable Risks team believes particularly strongly that human oversight must remain centre stage in decision-making. “Experienced claims handlers, those with 20 or 30 years of experience, often have an instinct for when something doesn’t feel right, especially in cases of potential fraud or exaggeration. That kind of gut feeling is difficult, if not impossible, for machines to replicate. It’s a critical component of the process that shouldn’t be underestimated.”

However, some think this could shift in the future. Waqar Ahmed explains an alternative view: “Keeping humans in the loop at the moment is imperative, given the fledgling nature of the technology. But, I can see a time in the not-so-distant future where ‘human in the loop’ evolves to ‘human oversight'. I think this is inevitable, because if we go back to our primary focus for technology development and investment, our customers, we have the potential to build products that are ‘always on’ for them. So, irrespective of when they have a claim, they have the ability to interact with us at any time of the night or day, so we can help them in their moment of need." He continues: “We must ensure that when a customer interacts with AI systems, they get the same great customer outcome that they would if they were dealing with a human handler. Wouldn’t that be a great thing for the industry? To transition that great human service to a digital experience that's always on." Ian Kershaw takes a similar stance: “Do we ever envisage a future where the decisions could be fully automated? Absolutely. But, for end-to-end claims, I think we’re some way off that at the moment.”

All our interviewees highlight the importance of maintaining human oversight in the claims process. Many strongly reinforce the need for humans to remain in the driving seat when it comes to decision-making, with AI playing a supportive role.

Alexandra Price of AXA Insurance explained that whilst AXA is open to exploring automated decision-making in the future, their current approach is firmly rooted in maintaining a ‘human in the loop’.

"We have to bring everyone along with us not just in the need for change, but the appropriate pace of adoption as well."

"Humans are unique and it takes a human to understand that."

Andrew Wilkinson

Chief Claims Officer, MIB



"There is real value in … human-AI collaboration. We are not building AI in isolation. We are building it with our people, for our people, because we believe that is how you get the best outcomes for customers, and for the business."

Alexandra Price

Senior Programme Manager (Analytics), AXA Insurance

Ian Kershaw

Vice President of Customer Service, Claims and Fraud, Zego

“Do we ever envisage a future where the decisions could be fully automated? Absolutely! But for end-to-end claims, I think we’re some way off that at the moment."

"I don't see AI necessarily replacing claims handlers. It will just change their roles. So, instead of handling 200 claims each, they’ll be able to proactively handle 800 because they'll be supported by technology that enables them to work faster and do more, and potentially to an even better standard as more parts of the process become automated."

Ian Kershaw

Vice President of Customer Service, Claims and Fraud, Zego


Tesco Interview


Julie Plumb

Insurable Risks Manager

"We transitioned all our motor claims handling to DAC Beachcroft around 18 months ago. They now manage claims arising from motor incidents involving all our branded, owned, and lease-hire vehicles across our various fleets. This includes lorries and vans used for general distribution to our stores, our grocery home shopping service, and our maintenance teams."

Each month, we receive management information from DACB, which I review to identify trends, such as whether incidents are more frequent at certain times of day or in specific geographic areas. Insights like these help us take proactive steps to prevent or reduce future incidents. Any technology that can support this analysis is invaluable, as it gives us the tools to understand what’s happening and apply that understanding to our risk management planning. 

We are currently working on integrating data flows from DACB to automatically update our internal systems. Our current process is as follows: when an incident occurs, the driver contacts our accident management company from the scene. They complete a report, the First Notification of Loss, which is then sent electronically to DACB. From there, DACB can open a file and begin the claims process. Given the high volume of claims they handle for us, any steps that can streamline the process and improve efficiency are, of course, welcome.

"It’s essential for us that our suppliers are aligned with our approach to technology, especially since our systems are so interconnected. Our goal is to reach a point where, as soon as a claim is initiated, all relevant data is automatically filed in our internal system without requiring someone to manually extract and transfer it, for instance, from a shared inbox to a specific folder."


"AI clearly has a role to play in this evolution, and it’s undeniably the direction things are heading. However, I strongly believe that human oversight must remain part of the process. Experienced claims handlers, those with 20 or 30 years of experience, often have an instinct for when something doesn’t feel right, especially in cases of potential fraud or exaggeration. That kind of gut feeling is difficult, if not impossible, for machines to replicate. It’s a critical component of the process that shouldn’t be underestimated.

"People in our business, especially within my insurable risk team, are highly adaptable and open to change, innovation, and new ideas that improve how we operate. As a company, we’re already well-versed in adopting new technologies, particularly in logistics, for example, automating processes in our distribution centres to get stock out of warehouses and into shops as efficiently as possible."


"As I’ve said before, one of the key issues for us is ensuring that any supplier or partner we work with is aligned with our values and approach when it comes to the use of technology."

Structuring the Unstructured

A key theme emerging from both our interviews and the survey data is interest in AI’s growing capability to ‘structure unstructured data’. From image analysis in fraud detection to interpreting text-heavy claim files, Generative AI is opening new possibilities. 65% of survey respondents said a key potential benefit of Generative AI for them was enhancing the ability to make use of large data sets, while a heightened ability to analyse data to detect patterns of potential fraud was cited by 50% as a main driver for adoption. Other respondents mentioned the potential to use AI data analysis to make improvements to pricing models (43%) and long-term liability forecasting (36%).
 
Alexandra Price explains how AXA approaches the use of AI in fraud detection, “We use AI models to assign risk scores to claims, helping specialist teams prioritise investigations. We are also exploring document and voice analysis to detect fraud more effectively.” A number of interviewees emphasised that with Generative AI’s ability to analyse images, insurers are better equipped to detect fabricated visuals - an emerging concern in the era of synthetic media.

Others pointed to the potential of these tools to deliver insights that minimise or prevent future claims altogether, enhancing not just business performance, but risk management and industry resilience. Alexandra Price sees an important role for AI here, “We are exploring what Generative AI can do in terms of analysing complex datasets, e.g. of historic claims, and suggesting risk mitigation strategies to help reduce potential future losses.” Julie Plumb talks about the management information she receives from her outsourced claims handlers (DACB’s claims entity, CSG) and explains how she reviews this, “to identify trends - such as whether incidents are more frequent at certain times of day or in specific geographic areas. Insights like these help us take proactive steps to prevent or reduce future incidents. Any technology that can support this analysis is invaluable, as it gives us the tools to understand what’s happening and apply that understanding to our risk management planning.”

Key Potential Benefits of Generative AI (as a percentage of survey respondents):

65% - Enhancing the ability to make use of large data sets.

50% - A heightened ability to analyse data to detect patterns of potential fraud.

43% - Using AI data analysis to make improvements to pricing models.

“Insights [into trends], such as whether incidents are more frequent at certain times of day or in specific geographic areas, help us take proactive steps to prevent or reduce future incidents. Any technology that can support this analysis is invaluable, as it gives us the tools to understand what’s happening and apply that understanding to our risk management planning.”

And Simon Hammond takes this point further, explaining NHS Resolution’s interest in gathering insights to feed into the wider healthcare community: “When it comes to deciding on the right technology, for us it is not just about considering what gains we can make for ourselves operationally (saving operating costs, and improving consistency and fairness in our decision-making with fewer resources), but more about whether or not it will benefit us in what we can deliver back to the health service, both in relation to policy development and also in respect of safer clinical care.”



2025 DAC Beachcroft LLP. All rights reserved.

Careers
Accessibility
Emergencies
Modern Slavery Act Statement
Complaints / Our Policies
Legal & Regulatory

Motor Insurers' Bureau

Interviews

We live in exciting times.


The people aspect of change is an essential part of our technology journey. The key to bringing our handlers and the wider business with us is to show that what we are doing and trialling will make their jobs easier and less burdensome. We have all had the experience of the promised benefits from expensive technology investment coming to nothing, so tangible results are necessary to show that the benefits are real. We have also all seen and heard the claims by technology suppliers that the shiny new system will mean headcount can be reduced - but this is, in my experience, never true! Rather, good technology will change the way people go about their jobs.

There is a degree of excitement in the business around AI – new toys, new tools, and this has to be managed too, as we want the pace of AI adoption to be appropriate for the business. There’s a spectrum of course, with reticence and cynicism at one end, from people who have too often seen new technologies fail to live up to their promise; and with progressives at the other end of the scale, impatient to delegate their drudge work to machines so they can focus on the more interesting investigation and negotiation aspects of their role. As leaders, we have to bring everyone along with us, not just in the need for change, but the appropriate pace of adoption as well.

We are a small organisation relative to others in the insurance space, which means on the one hand we can be fast adopters, but budgets can be an issue. We are, of course, funded by levies from all motor insurers and, ultimately, from their customers’ premiums, so we have to be mindful of this when considering expensive outlay on bespoke technologies. However, an option for us is to piggyback on systems developed and made available to the market by insurers, taking their AI systems’ capabilities and adapting them for our own purposes. This is something we are exploring.

"AI also has a role to play in fraud detection, particularly in identifying exaggerated or false claims by looking at trends and patterns that trigger the need for a more detailed investigation. But, given the nature of our work, with investigation at its core from the outset, we are well set up for this. We see the potential for AI in our processes as very positive. We don’t see it as placing jobs under threat, but instead as increasing opportunity for our people, making their jobs more skilled and interesting."


Andrew Wilkinson

Chief Claims Officer

The Motor Insurers' Bureau (MIB)’s founding principle is that no one injured by an uninsured driver, or in a hit-and-run incident, should be left without the support they deserve. Our long-term goal is to eradicate uninsured driving completely, and to achieve this we know we must find ways to go further and faster. So, of course, we are interested in exploring how Generative AI can help us. Our mission is not just about handling individual claims, but serving the wider community and making our roads safer.

We look at claims in the context of a value chain: before a claim arises (e.g. identifying geographical hotspots for uninsured driver incidents and hit-and-runs); when an incident and a claim occurs; how we instruct suppliers and partners (including lawyers); how we approach negotiating settlements; how we handle data and management information; how we manage workflows and time; and how we analyse data and draw conclusions. We certainly see a role for AI in the pre-claims process, working with the police and the DVLA, for example, using cameras to predict hotspots. We also see a role in the investigation of claims, digging to find insurers or the identity of drivers using ‘connected vehicle’ technology, such as getting information where appropriate from satnavs, phones and other internet connectivity to pinpoint who was in a car at a particular time and place - but this is several years down the line. We see the role of AI in assisting decision-making, but not making decisions by itself. There will always be the need for humans to take responsibility for decisions, but AI can do a lot to assist claims handlers: for example, collating information about the claimant’s medical records and the events surrounding the damage, presenting summaries to bring handlers up to speed more quickly, and then putting documents together for experts or partners to prepare them for negotiating settlements, perhaps even using estimating assistance tools. We can see a benefit here in improving consistency in our decisions.

We tell our handlers to think of AI as a virtual colleague sitting next to them, or an assistant best friend. It can also be a highly effective trainer, helping people pick up the relevant case law and legal complexities on the job. DACB’s excellent AI tool for credit hire, Nightingale, is a great example of AI at its best: whereas previously a handler's learning was in large part by trial and error over time, this tool helps them formulate an offer and explains the rationale, supporting them as a virtual colleague whilst training them at the same time.

Humans are unique, and it takes a human to understand that. For example, our handlers are very much alive to the fact that different people react differently to a traumatic experience, impacting how they present to experts and even the ways their symptoms manifest. But, if AI can remove a big part of our handlers’ admin load, they will then be freed up to spend more time on this human element of their jobs, which could bring significant benefits to their work and to claimants’ experience. Speaking personally, by way of example, I love negotiating settlements, but all the painstaking admin involved in the run-up, not so much!  

Concerns about bias in AI-driven analysis are less of an issue for us than for policy-writing insurers, because our work is based on factual events rather than drawing conclusions from statistical data and pricing according to the likelihood people will behave in a certain way. But we are concerned about the handling of personal and sensitive data and what we are inputting into machine learning, which is why any AI we pilot or use is contained in a closed system rather than being web-based, and why we are very careful to conduct any pilots in a safe environment. In any event, the regulator will be involved in how AI is used in our industry, in terms of the customer journey and ensuring correct and appropriate outcomes. It will be interesting to see how the regulatory framework develops.

This is where training becomes essential. Several organisations have responded by creating formal education programmes for colleagues around AI and data, to help them understand AI systems, identify potential flaws or biases, and learn how to ask the right questions and interpret AI outputs critically. Some have even established Data Academies; others take a less structured approach. While approaches vary, the direction is clear: effective AI adoption demands confident, well-informed human operators. The industry is taking great strides to update the way its professionals are trained, and we can expect further advances in education in this important area as time progresses.

“The Holy Grail of a complete and perfect data set can only ever be an aspiration… This has the potential to become a major issue when you start to apply machine learning.”

Data Quality & Training: Foundations for Trust


However, the gains that AI offers come with caveats. The biggest challenge cited across the board is data quality. Over 70% of survey respondents flagged legacy systems and imperfect data as a significant risk to the successful implementation of AI.  

“Our biggest concerns are around the availability of reliable data – particularly the quality of data from legacy systems.”

Julie Plumb

Insurable Risk Manager, Tesco

Interviewees repeatedly stressed the need for human oversight and education in reviewing AI output - especially when dealing with messy or incomplete datasets. As Simon Hammond puts it, “We live in the real world, so that Holy Grail of a complete and perfect data set can only ever be an aspiration... This has the potential to become a major issue when you start to apply machine learning.” And, of course, readers of this report will all be aware of the danger of machine hallucinations.


A System-wide Effort

Respondents and interviewees alike emphasised that meaningful transformation requires collaboration across the insurance ecosystem. It’s not enough for individual organisations to advance in isolation. Suppliers, partners, underwriters, and technology providers all need to be aligned, not just on the tools used, but on governance frameworks, training, and the pace of change itself.

Julie Plumb’s thoughts echoed those of all our respondents (100% agreed this kind of cross-industry alignment is crucial to success): “One of the key issues for us is ensuring that any supplier or partner we work with is aligned with our values and approach when it comes to the use of technology.”

A number of survey respondents include customers in this, one making the point: “The extent to which AI can assist is really dictated by wider society and industry uptake. Delivering solutions in isolation poses real risks around acceptance.”

“One of the key issues for us is ensuring that any supplier or partner we work with is aligned with our values and approach when it comes to the use of technology”.

Julie Plumb

Insurable Risk Manager, Tesco

Governance, Ethics & ESG


“Ethics has to be the first thought, and remain front and central, when considering any new technology initiative or investment." 

Waqar Ahmed

Claims Chief Operating Officer, Aviva

A final area of widespread concern is governance, particularly in relation to ethics and Environmental, Social and Governance (ESG) compliance. Interviewees raised concerns about how Generative AI could unintentionally reintroduce bias or undermine transparency in decision-making. Transparency, auditability, and clear lines of accountability are increasingly viewed as critical components of responsible AI integration. 

Waqar Ahmed cautions that Generative AI technologies are still very nascent: “They are still emerging and developing. We are only beginning to understand the capabilities and limitations of this technology, and so we need to make sure to put the correct guardrails in place.”  

Simon Hammond talks in detail about the importance of aligning the business’s risk appetite with the wider technological advancements coming down the line. For his organisation, he says, “It’s a double-edged sword: on the one hand… ensuring the way we use technology is both ethical and on the right side of regulation. On the other hand… balancing our risk appetite statement with really wanting to use the new technologies positively and deriving the benefits from it”. He says, "It’s a difficult line for any organisation to set when technology (especially in the Generative AI space) is moving at such a dramatic pace... All this needs to be reviewed continually”. He advises timelines for reviews, “set against what your ambitions are and what your investment strategy around your tech future looks like, and also against your operational processes”.

“I think the risk is that the legislative framework will always trail invention and innovations in the tech space. We have seen this historically. I believe the key is for an organisation to understand and set its own risk boundaries, to remain within these regulatory frameworks, and adapt as the law evolves."

He cautions, “Every time you introduce new aspects of tech, your operational processes are going to change and therefore your risks may change in either direction (less/more)… The nature of risk is going to change depending on your ingestion of tech within your organisation, and possibly improving one risk but at the very same time heightening another."

Alexandra Price describes the governance framework AXA has built to ensure every AI project is developed responsibly. This includes early engagement with data protection and compliance teams, as well as built-in bias testing and monitoring. 

Simon Hammond

Director of Claims Management, NHS Resolution

Waqar Ahmed also talks about Aviva’s belief that ethics should be the first thought, and remain front and central, when considering any new technology initiative or investment: “We involve our Ethics Committee before we do anything in this space, so we think about data protection, the possibility of bias and governance issues before we think about producing any product. Yes, we have the capability to deliver products much faster, but actually, what is more important is that you can be sure you are producing those products in a well-tested, well-understood, and well-governed environment. It may mean your speed to market is a little bit reduced, but you know you’ll be coming to market in a responsible way. It only takes one instance to lose customer trust.”

“It’s a double-edged sword: on the one hand… ensuring the way technology is used is both ethical and on the right side of regulation. On the other hand… balancing risk appetite statements with really wanting to use the new technologies positively and deriving the benefits from them. It’s a difficult line for any organisation to set.”

Simon Hammond

Director of Claims Management, NHS Resolution


Ethics has to be the first thought, and remain front and central, when considering any new technology initiative or investment. We involve our Ethics Committee before we do anything in this space, so we think about data protection, the possibility of bias and governance issues before we think about producing any product. Yes, we have the capability to deliver products much faster, but actually what is more important is that you can be sure you are producing those products in a well-tested, well-understood, and well-governed environment. It may mean your speed to market is a little bit reduced, but you know you’ll be coming to market in a responsible way. It only takes one instance to lose customer trust.

For now, keeping humans in the loop is imperative, given the fledgling nature of the technology. But I can see a time in the not-so-distant future where ‘human in the loop’ evolves to ‘human oversight’. I think this is inevitable, because if we go back to our primary focus for technology development and investment, our customers, we have the potential to build products that are ‘always on’ for them. So, irrespective of when they have a claim, they have the ability to interact with us at any time of the day or night, so we can help them in their moment of need. But we must ensure that when a customer interacts with AI systems, they get the same great customer outcome that they would if they were dealing with a human handler. Wouldn’t that be a great thing for the industry: to transition that great human service to a digital experience that's always on?

We started a significant claims transformation three years ago, focusing on evolving to a data-driven, intelligence-led function where people are at the heart of what we're doing. As we have evolved, we have developed specialist people, capabilities and skills, and we have built a number of machine learning and Generative AI tools to augment what they are doing. We now have what we call our Voice of Aviva engagement service for staff, which involves a survey that goes out twice a year to all of our people who operate in claims, to get a sense of how engaged they are and how they find working at Aviva. What’s interesting is that our engagement scores have tripled over the three-year period of our tech transformation. There is often an assumption that when you deploy AI, staff perceive it as a threat. But we have proven that when it works side by side with claims handlers to augment what they do and assist them in their roles, they are able to amplify their own human capabilities, such as empathy, which is so crucial to the customer experience.

I also believe that the use of technology to support claims handlers and augment their capabilities will soon become a hygiene factor in recruiting and retaining staff.

"The more in tune you are with the reality of customers’ experience, and what’s important to them, the more you will be able to build products of value. Once you understand this, then you realise that actually, in order to deliver a quality product, you need to involve all the relevant people, for example, the frontline staff closest to the customers, rather than just the traditional project teams."

Aviva

Interviews

Waqar Ahmed

Claims Chief Operating Officer


"The pace of technological change is moving faster than the pace of comprehension, let alone adoption, and we need to be disciplined and wise in how we approach it.

"Aviva was one of the earliest adopters of AI in the industry, starting some 10 years ago. But it’s not about being ‘first’ – it’s about being smart with technology to be customers’ ‘first choice’."

I believe there’s a trap with technology, a temptation to rush to market with a shiny new toy. We must remember to put clients front and centre. We need to apply discipline in applying the right tool to do the right job, always having the end goal in sight: improving the customer experience. There’s no point in using an expensive tool (when ultimately the cost will be passed on to customers) where an inexpensive one could do the job just as well. It could be Gen AI, it could be machine learning, it could be RPA (rules-based Robotic Process Automation) or any other form of tech. We are merely utilising those forms of technology to build the product that will improve the customer experience. It’s the product and the value it adds for the customer that’s important, not the technology itself. I use an analogy where Gen AI for me is a wrench, machine learning is a screwdriver, and RPA is a hammer. Often, people focus on the tools, but the important point is what you are using those tools to create and which customer problems you are fixing.

"We know what our customers want:

Speed, Accuracy and Transparency."

We all recognise the importance of these points from our own retail experience - what those familiar retailers that you go back to time and time again have in common, despite the quality of their product, is that the service they provide is underpinned by these three things. When someone has a claim, it’s usually stressful for them, and what they will do because they've purchased our product is turn to us for support, and it is in that moment of truth that we are measured.

Imagine this scenario: a customer phones their insurer to discuss their claim. The reality is that the claims handler will not be familiar with the particular details of their claim when they call in, so the first thing they’re likely to say is, "Can I put you on hold? I'll take you through your security credentials, and then let me just get the update to find out what's going on." That’s not a brilliant experience for a customer, as we all know from our own experience of dealing with mobile providers, utility companies, banks and the like. It’s a fact of modern life. But we saw that as a problem to solve in the customer’s claims journey, and we thought: what if technology and data science can help us solve this?

So, whereas previously our handlers would be going through the notes on the claims systems (which typically were not laid out in a user-friendly way), what we did instead was to pinpoint the key components of a claim that people want to know when they call to discuss it, and then make sure the system displays these for the handler in a far more user-friendly form. We designed this in conjunction with our frontline operations teams, who are closest to the customers and understand their needs. The result was a product with real value for our handlers, making it much easier for them to understand what was happening in a claim - and real value for the customer, in terms of getting answers to their questions much more quickly and not being placed on hold for so long. A win-win!

"Generative AI technologies are still very nascent: they are still emerging and developing. We are only beginning to understand the capabilities and limitations of this technology, and so we need to make sure to put the correct guardrails in place."


Given the continuing pace of change in the technology space, the Strategic Advisory Team intends to keep a watching brief. Stay tuned for information about our future reports, events and other resources on this topic. Want to stay informed and connect with industry peers as you navigate the implications for your business? 

Peter Allchorne

Partner

Strategic Advisory

T: +44 (0) 117 918 2275

E: pallchorne@dacbeachcroft.com

Next Steps


Get in Touch


Our Contributors

Alexandra Price

Senior Programme Manager (Analytics)

AXA Insurance

Simon Hammond

Director of Claims Management

NHS Resolution

Ian Kershaw

VP of Customer Service, Claims and Fraud

Zego

Julie Plumb

Insurable Risk Manager

Tesco

Waqar Ahmed

Claims Chief Operating Officer

Aviva

Peter Allchorne

Partner
Strategic Advisory

DAC Beachcroft

Craig Dickson

Chief Executive Officer
CSG

DAC Beachcroft

Joanna Folan

Legal Director

Strategic Advisory

T: +44 (0) 207 894 6350
E: jfolan@dacbeachcroft.com

Michael McCabe

Solicitor

Strategic Advisory

T: +44 (0) 207 894 6315
E: mmccabe@dacbeachcroft.com


Andrew Wilkinson

Chief Claims Officer

Motor Insurers' Bureau

Gabriel Biangolino

Value Creation, Head of Strategy

Admiral Group

Right Honourable Sir Robert Buckland

Former Lord Chancellor and Secretary of State for Justice, and now member of DACB's Policy Unit

Final Reflections

This report shows an industry on the move: optimistic but thoughtful, ambitious but grounded. The story of AI in the insurance claims process has only just begun, and as understanding deepens, both the technology and those who work with it will continue to evolve.

The opportunities are immense for those who get it right, as eloquently summarised by Alexandra Price in this final word:


"Our AI programme is delivering results for our business, and for our customers, on multiple fronts: it is enabling us to make faster decisions - customers benefit from quicker resolutions, such as knowing immediately where their vehicle will be sent for collection, i.e. whether for repair or salvage; it is greatly improving our operational efficiency, as handlers can manage cases more quickly and effectively with AI support; and we see the benefits of enhanced oversight, because AI acts as a second line of defence, spotting things a human might miss. All in all, AI is facilitating better outcomes - decisions are more consistent and accurate when handlers are supported by AI, which ultimately reduces our indemnity costs and shortens claim cycle times for customers."

Alexandra Price

Senior Programme Manager (Analytics), AXA