All interviewees talked about the importance of keeping humans in the loop and of getting the interplay between human and AI roles right. Several underlined the need for humans to retain responsibility for decisions, with AI deployed to assist that decision-making by supporting claims handlers in a number of ways, freeing up their time to focus on the more complex and human elements of their jobs. Most of those interviewed for this study anticipated a future in which the claims process would not be fully automated. Nearly all described human interaction as essential, for the simple reason that an insurance claim is always a time of stress and emotion for customers. This is why insurers’ focus for AI investment, and now Generative AI, is for now firmly on back-office functions, working behind the scenes to enable them to serve customers better, rather than on the customer-facing part of their business.
Training was mentioned by all as another key to the successful integration of AI in the claims process: it helps the humans in the process understand not just what AI can do and how to use it, but crucially its limitations too, particularly in how it interprets data. As a number of interviewees put it, if the data input isn’t perfect (and given the industry currently relies on ‘legacy data’, it rarely is), the people working with the outputs need to understand that. Several talked about the significant investments their companies have made in establishing ‘Data Academies’, to educate people across the business in how to interpret the output from Generative AI and what to watch out for when reviewing it: anticipating assumptions the AI might be making when processing data and identifying patterns and trends; watching out for bias that could creep into the process, because machines don’t understand the nuances behind some numerical trends in the way humans can; and being alert to the possibility of machine hallucinations. Everyone talked about the critical role of humans in the claims process, in reviewing and challenging AI output and making sure the conclusions drawn from it make sense in the real world.
Most described the major potential benefit of Generative AI as its ability to ‘structure unstructured data’. As one interviewee said, "the moment you can do this, you are able not only to make significant improvements to existing models, but also create new models which were out of reach before". Interviewees singled out three areas of insurance business and claims handling that could benefit most from Generative AI in this way.
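By way of illustration only, the sketch below shows what ‘structuring unstructured data’ can look like in practice: free-text claim notes are turned into a fixed set of fields. The `call_llm` helper is hypothetical, standing in for whichever model API an insurer actually uses, and the field names are invented for the example.

```python
# Minimal sketch of 'structuring unstructured data' with a language model.
# `call_llm` is a hypothetical stand-in for whatever model API is in use;
# everything else is plain Python.
import json
from dataclasses import dataclass

@dataclass
class StructuredClaim:
    incident_date: str | None
    vehicle_registration: str | None
    damage_description: str | None
    third_party_involved: bool | None

PROMPT = (
    "Extract the following fields from the claim notes below and reply "
    "with JSON only: incident_date, vehicle_registration, "
    "damage_description, third_party_involved.\n\nClaim notes:\n{notes}"
)

def structure_claim(notes: str, call_llm) -> StructuredClaim:
    """Turn free-text claim notes into a structured record."""
    raw = call_llm(PROMPT.format(notes=notes))
    fields = json.loads(raw)  # the model is asked to reply with JSON only
    return StructuredClaim(
        incident_date=fields.get("incident_date"),
        vehicle_registration=fields.get("vehicle_registration"),
        damage_description=fields.get("damage_description"),
        third_party_involved=fields.get("third_party_involved"),
    )
```

Once claim notes exist as records like this, they can feed the existing pricing, reserving or fraud models that previously saw only structured inputs, which is the point the interviewee makes about new models coming within reach.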
Half are already testing or have completed pilots with Generative AI. Not a single respondent indicates that their organisation has no plans to explore AI. One survey respondent describes the industry’s imperative to embrace AI in the strongest terms: “If you are not using AI, you will be replaced by a company that is using AI”.
The following section brings together the key themes from our research. You’ll find excerpts from the interviews within this report, and you can also click through to the full interviews for a deeper dive into each conversation.
Aviva was one of the earliest adopters of AI in the industry, starting some 10 years ago. But its Claims Chief Operating Officer, Waqar Ahmed, cautions against investing in technology for technology’s sake, warning against the allure of "shiny new toys". He emphasises the importance of "discipline" in applying the right tool to the right job, always having the end goal in sight and improving the customer experience: "There’s no point in using an expensive tool (when ultimately the cost will be passed on to customers), where an inexpensive one could do the job just as well."
The AI program is delivering results for our business and for our customers on multiple fronts. It is enabling us to make faster decisions - customers benefit from quicker resolutions, such as knowing immediately where their vehicle will be sent for collection, i.e. whether repair or salvage. It is also greatly improving our operational efficiency, as handlers can manage cases more quickly and effectively with AI support. We also see the benefits of enhanced oversight, because AI acts as a second line of defence, spotting things a human might miss. All in all, AI is facilitating better outcomes – decisions are more consistent and accurate when handlers are supported by AI, which ultimately reduces our indemnity costs and shortens claim cycle times for customers.
AI also has an important part to play in risk management. We are exploring what Generative AI can do in terms of analysing complex data sets (historic claims, for example) and suggesting risk mitigation strategies to help reduce potential future losses.
It also has a role in fraud detection. AXA uses AI models to assign risk scores to claims, helping specialist teams prioritise investigations. We are also exploring document and voice analysis to detect fraud more effectively.
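The report does not detail AXA’s actual models, but the risk-scoring idea can be sketched with a standard classifier. This is a minimal, illustrative example assuming scikit-learn, with invented feature names and toy data:

```python
# Illustrative sketch (not AXA's actual models): score claims for fraud
# triage with a standard classifier, so specialist teams can prioritise
# the highest-risk cases first. Features and data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical features per claim:
# [claim_amount, days_since_policy_start, prior_claims_count];
# label 1 = confirmed fraud in the historical record.
X_train = np.array([[12000, 14, 3], [800, 400, 0],
                    [15000, 7, 2], [500, 900, 1]])
y_train = np.array([1, 0, 1, 0])

model = GradientBoostingClassifier().fit(X_train, y_train)

new_claims = np.array([[11000, 10, 2], [650, 720, 0]])
risk_scores = model.predict_proba(new_claims)[:, 1]  # P(fraud) per claim

# Highest scores go to the investigation queue first.
for claim, score in sorted(zip(new_claims.tolist(), risk_scores),
                           key=lambda pair: -pair[1]):
    print(claim, f"risk={score:.2f}")
```

The design point is that the model only ranks claims for human investigators; it does not decide outcomes, which matches the ‘AI assists, humans decide’ theme running through the interviews.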
As for ethics, AXA has built a governance framework to ensure every AI project is developed responsibly. This includes early engagement with data protection and compliance teams, as well as built-in bias testing and monitoring.
A recurring theme was the challenge of getting the pace of change right. As Waqar Ahmed puts it, “the pace of technological change is moving faster than the pace of comprehension, let alone adoption”.
Simon Hammond, of NHS Resolution, talks about the challenge of bringing colleagues with you on the change journey. The key, he believes, is “making sure everyone is on the same page in terms of realistic expectations. Some, of course, may be fearful of the machines taking over from the humans. Others, however, will be at the other end of the spectrum, wanting AI immediately and perhaps not appreciating the need for a thoughtfully paced approach and reflection around the guard rails that might be needed, the regulatory issues that sit around it, or the potential for unintended consequences. There’s a whole range that falls between these two ends of the spectrum… The key is to get everyone to buy into the appropriate pace of change, as well as the change itself.”
Bringing all the people in the business along with you, in sync, is a significant challenge. How do you achieve this? According to Simon Hammond, “you need to ensure people are kept informed about the timeline and the ‘art of the realistically possible’. It’s about allowing people to understand you can only move at a certain pace – and that moving at that given pace is a critical aspect: bringing people on the journey with you and dispelling myths along the way.”
“There is often an assumption that when you deploy AI, staff perceive it as a threat”, he says. “But we have proven that when it works side by side with claims handlers to augment what they do and assist them in their roles, they are then able to amplify their own human capabilities, such as empathy, which is so crucial to the customer experience. We have seen their engagement with the company increase.”
Further, Andrew Wilkinson of the Motor Insurers' Bureau believes AI can act as a highly effective trainer for claims handlers, helping people learn about the relevant case law and legal complexities on the job. “DACB’s excellent AI tool for credit hire is a great example of AI at its best”, he says. “Whereas previously a handler's learning was in large part by trial and error over time, this tool helps them give an offer and explains the rationale, so helping them as a virtual colleague/assistant best friend whilst training them at the same time."
He is very upbeat about the opportunities AI presents for his colleagues: “We see the potential for AI in our processes as very positive. We don’t see it as placing jobs for the humans under threat, but instead increasing opportunity for our people, making their jobs more skilled and interesting.”
Ian Kershaw, Vice President of Customer Service, Claims and Fraud at Zego, agrees, “I don't see AI necessarily replacing claims handlers. It will just change their roles. So instead of handling 200 claims each, they’ll proactively be able to handle 800 because they'll be supported by technology that enables them to work faster and do more, and potentially to an even better standard as more parts of the process become automated.”
AXA Insurance’s Alexandra Price says it’s about taking a collaborative, transparent approach. At AXA, she tells us, “training and support are embedded into AI rollouts, and feedback loops ensure ongoing refinement. And crucially, AI projects are developed with the business teams, not handed down to them.”
"There is a distinction between Natural Language Processing (NLP) technology (which has the ability to work with unstructured data and produce good, reliable and useful outputs) and true machine learning. NLP is a good starting point and brings many benefits, not least easing the burden on staff and making their jobs easier. We have found NLP extremely useful in supporting our staff in this way, and to garner insights and certainty in our environment, by investigating trends and patterns which we can then feed back into the wider healthcare community, with a view to delivering better clinical services to the public."
When considering the appropriate level and type of technology investment for our organisation and what Generative AI has to offer, the key for us, as with all advancements in technology, is not just to look at the benefit it will deliver for NHS Resolution, but also the wider system we operate in. We are in an unusual space because we operate within the health system and are also juxtaposed with the justice system. We are looking for advancements that will give us greater visibility of the ‘concerned’ space, i.e. help us identify where something has gone wrong to a significant degree, which we can then investigate in greater detail and in a wider context, giving us valuable insights to share with other parts of the healthcare system. So, when it comes to deciding on the right technology, for us it is not just about considering what gains we can make for ourselves operationally (saving operating costs, improving consistency and fairness in our decision-making, with less resource), but more about whether it will benefit us in what we can deliver back to the health service, both in relation to policy development and in respect of safer clinical care. But that’s maybe where we come in with a unique perspective, because we are not profit-making.
Where we are right now is in providing a more efficient system with the integration of some aspects of ML, but this is in its infancy. Where we are moving to is the integration of true AI, to see what we can produce from our data that will help the wider system learn from what we see. We are currently updating our IT architecture so we can integrate AI both to improve our internal processes and to provide external insights.
For want of a better phrase, you need to reassure them that you are not building a ‘robot army’, with the end goal that all the decisions across the organisation will be dealt with by AI, rather than human interaction. Others, however, will be at the other end of the spectrum, wanting AI immediately and perhaps not appreciating the need for a thoughtfully paced approach and reflection around the guard rails that might be needed, the regulatory issues that sit around it, or the potential for unintended consequences. There’s a whole range that falls between these two ends of the spectrum.
The AI has got to interact successfully with the organisation you are working with and its people, because it can drive so many benefits if used in an appropriate way. So you need to make sure the people in the business who will be using it and benefitting from its output understand how it works and how it is to be used. It is about educating people so they understand the benefits to their own roles.
This risk of algorithmic bias is one we talk about a lot, and one that requires time for deep reflection. The issue is machines making assumptions based on statistical evidence, even when data sets are strong, because the AI can’t understand the subtleties that lie behind the statistics. It comes back to how you see the future of AI in the decision-making processes in your organisation. These are conversations we are having continually. Would we ever get to a point where the machine is telling us everything and making decisions? The risks in this are far too huge. At NHS Resolution, we are dealing with a very, very sensitive area of claims management – fatalities of people of all ages, some of the most sensitive health issues that occur in the population and some of the most severe injuries people can have, such as cerebral palsy and birth injuries, where people have life-long impacts. So, for us, we accept there will always be a requirement for an element of human decision-making in all that we do.
Whether integrating NLP or machine learning (I see the two as quite distinct), you need to have not just the right technology platform to support it, but the right data platform as well. From our discussions with other indemnifiers, we know that everyone is facing the same challenges around this, particularly around the issue of ‘legacy data’. Tech suppliers talk emphatically about the ‘Holy Grail’ of a complete and perfect data set, this being the only true way to be sure of reliable and consistent output, but most readily acknowledge the restrictions in data. More on this later…
"Another challenge is, of course, the people element, how staff and colleagues are responding to this drive to bring in AI. There is a lot of excitement about it in our organisation and what it can do, which is positive, but a challenge is to make sure everyone is on the same page in terms of realistic expectations. Some, of course, may be fearful of the machines taking over from the humans."
Where we see the benefits of bringing true Gen AI into our systems is in using it to learn from our past experience, in order to inform our staff more accurately about potential outcomes and to help with risk management by flagging where the risks to the health service may lie, so we can then work with other health partners to avoid those risks and see less harm in the overall system. There are probably multiple other future uses of Gen AI that will bring a variety of benefits, for example, assisting with our pricing models and our actuarial forecasting in relation to our long-term liabilities.
We have already launched the first iteration of our new case management system in parts of our business, and the area covering claims, the largest part of our business, is due to go live with a new case management system in the next two to three months. So this is very much the here and now for us! This has been a couple of years in development, as you can imagine. What it will provide us with is the platform for a true AI environment that we can actually start to operate with.
Like everyone else in this space, we face many challenges and, as referenced above, our biggest concerns are around the availability of reliable data, particularly the quality of data from legacy systems. We live in the real world, so the Holy Grail of a complete and perfect data set can only ever be an aspiration. Indeed, this is why we have seen the rise of ‘data scientists’. We use data scientists, and there is a lot they can do, but there are natural limitations because, at the end of the day, they are handling historic data sets, and the data does not hold the level of consistency or granularity that allows correlations to be drawn. This has the potential to become a major issue when you start to apply machine learning.
You need to have the foundation for the right data platform in place, and also the right data sets, for AI to deliver its promised benefits and produce appropriate results that are accurate and can be relied upon. Conversely, if the data is flawed, and this is what I hear quite regularly from the supplier community, then the potential is that people could build AI systems that produce results from pseudo data or from a small sample that is not necessarily representative, which then becomes a challenge when applied to a wider data set. You may then have gaps and therefore cannot produce the same or similar results to prove the output is repeatable and reliable.
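Simon Hammond’s repeatability point can be made concrete with a small, entirely synthetic experiment: a model fitted on a narrow, unrepresentative sample can look accurate on that sample yet fail to reproduce its results on the wider data set. A minimal sketch, assuming scikit-learn:

```python
# Sketch of the repeatability concern: a model trained on a biased sample
# looks good locally but does not generalise to the wider population.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Wider population: the outcome depends on BOTH features.
X_wide = rng.normal(size=(5000, 2))
y_wide = (X_wide[:, 0] + X_wide[:, 1] > 0).astype(int)

# Small biased sample: only cases where feature 2 is near zero, so the
# model never gets the chance to learn its effect.
mask = np.abs(X_wide[:, 1]) < 0.1
X_small, y_small = X_wide[mask][:100], y_wide[mask][:100]

model = LogisticRegression().fit(X_small, y_small)
print("on the small sample:", accuracy_score(y_small, model.predict(X_small)))
print("on the wider set:   ", accuracy_score(y_wide, model.predict(X_wide)))
```

The first score comes out near perfect while the second drops sharply, which is exactly the “cannot produce the same or similar results” gap described above.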
"The key is to get everyone to buy into the appropriate pace of change, as well as the change itself. To do this, you need to ensure people are kept informed about the timeline and the ‘art of the realistically possible’. It’s about allowing people to understand you can only move at a certain pace and that moving at that given pace is a critical aspect, bringing people on the journey with you and dispelling myths along the way.
"Of course, these characteristics are common to any change initiative. They are exacerbated when external pressures are at play, for example, the Government wanting all its agencies to invest heavily in AI at the moment. Then people see technology transformation as something that has to be done, rather than a nice-to-have investment opportunity for the business."
In essence, this is about the risk of unintended consequences. It also applies to our work, providing insights for our external partners in the wider healthcare system. In looking to derive benefits for our members by identifying the sort of harm that has occurred and quantifying the risk, we need to be very careful in understanding these unintended consequences that could tarnish the information we are sharing. For us in this space, our ambition for AI is about looking at how it can support our staff, both in making decisions and in helping the health system learn, as opposed to AI doing this in its entirety on its own.
"There is a lot of discussion at the moment within the wider AI space about how it is going to be regulated. I think the risk is that the legislative framework will always trail invention and innovations in the tech space. We have seen this historically. I believe the key is for an organisation to understand and set its own risk boundaries, to remain within these regulatory frameworks, and adapt as the law evolves. There are likely challenges coming down the line, and across industries, in relation to how the Data Protection Act interacts with the potential of AI and the ingestion of different data models. I’m talking here about the wider environment, not just at the organisational level but maybe even broader."
Another key element is having a clear data strategy, not just for operational efficacy but for regulatory compliance. You need to be ethical in how you go about collecting, storing and using data, and how you intend to utilise the outputs for any models you are building, whether for financial provisioning, decisions in relation to claims management, or for delivering those insights to the wider health community. Whatever those models are designed to do, you need to have a strategy in place to ensure the models themselves and the way you are using the data are appropriately assessed on a regular basis, to ensure that bias isn’t creeping in and that they are producing reliable results consistently.
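As a loose illustration of the kind of regular model assessment described here (not a compliance standard; group labels and the tolerance threshold are invented for the example), a demographic-parity style check might look like this:

```python
# Hedged sketch of a periodic bias check: compare outcome rates across
# groups and flag the model for review if they diverge too far.
from collections import defaultdict

def outcome_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = outcome_rates(decisions)
if parity_gap(rates) > 0.2:  # illustrative tolerance, not a standard
    print("review model: approval rates diverge across groups", rates)
```

Run on a schedule against recent decisions, a check like this gives the “appropriately assessed on a regular basis” discipline a concrete trigger for human review.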
In addition, I think there will no doubt be frameworks brought in to regulate how organisations can actually use AI in certain decision-making situations. It has to be down to the individual organisation to set its own risk appetite against those regulatory frameworks. This is a recurring conversation internally at NHS Resolution: how our risk appetite fits with the wider technological advancements that might be coming into our environment now and in the future. It’s a double-edged sword: on the one hand, we want to stay well within the regulatory confines and ensure that the way we use technology is both ethical and on the right side of regulation; on the other hand, we are balancing our risk appetite statement with really wanting to use the new technologies positively and deriving the benefits from them. This is a difficult line for any organisation to set, especially when technology (especially in the Gen AI space) is moving at such a dramatic pace.

All this needs to be reviewed continually. How regularly should organisations conduct these reviews? It is probably too simplistic to put a timeline against it, but if you did only an annual review you would soon find yourself out of date. You have to set it against your ambitions and what your investment strategy around your tech future looks like, and also against your operational processes, because every time you introduce new aspects of tech your operational processes are going to change, and therefore your risks may change in either direction.

They may improve because you are safer: advances in some tech may make fraud detection and prevention easier, so your risks actually decrease. But the opposite side of that same coin is that ingesting more tech-based decision-making may present greater risks, such as biases being present. For example, if the pre-event detector mechanisms you choose to adopt end up identifying the wrong categories of individuals for fraud investigations, this could lead to reputational damage, added operational cost and, most importantly, a delayed claims process for genuine claimants. So the nature of risk is going to change depending on your ingestion of tech within your organisation, possibly improving one risk while at the very same time heightening another.
You can see exactly why decisions around the ingestion of Generative AI cannot be rushed!
As one survey respondent says, “If we get it right and create the capacity, it should make colleagues’ roles richer, more meaningful by removing the more repetitive/mundane tasks, allowing them to focus on value-added activities.” Another points out that, “Historically, automation has been about removing non-value-add work. AI has the possibility to assist with actual value-add work to really support claims handlers in their jobs and finally offer a benefit to our customers, who are what really matters.”
Waqar Ahmed illustrates the point very elegantly with a picture: “A customer phones their insurer to discuss their claim. The reality is that the claims handler will not be familiar with the particular details of their claim when they call in, so the first thing they’re likely to say is, ‘Can I put you on hold? I'll take you through your security credentials and then let me just get the update to find out what's going on.’ That’s not a brilliant experience for a customer, as we all know from our own experience of dealing with mobile providers, utility companies, banks and the like. It’s a fact of modern life. But we saw that as a problem to solve in the customer’s claims journey, and we thought, what if technology and data science can help us solve this?"
"So, whereas previously our handlers would be going through the notes on the claims systems notes (which typically were not laid out for the handler in a user-friendly way), what we did instead was to pinpoint the key components of a claim that people want to know when they call to discuss it, and then make sure the system displays this for the handler in a far more user-friendly form. We designed this in conjunction with our frontline operations teams who are closest to the customers and understand their needs, and were able to design a product that had real value for our handlers in their ease of being able to understand what was happening in a claim – and real value for the customer in terms of getting answers to their questions much more quickly and not being placed on hold for so long. A win-win!”
Another significant advantage the industry expects from Generative AI is its potential to improve the accuracy of loss assessments. As Simon Hammond points out, “There are likely many other future applications of Generative AI that could deliver a wide range of benefits - for example, enhancing our pricing models and actuarial forecasting related to long-term liabilities.” Indeed, 57% of respondents in our online survey highlighted this as a key motivation for exploring Generative AI. Additionally, 64% of respondents highlight the ability to improve consistency in decision-making as one of the main value-adds. Andrew Wilkinson says he sees AI’s ability to assist claims handlers in the way described above as one of the keys to improving consistency in the MIB’s decisions and assessing loss more accurately.
However, there remains industry-wide caution regarding the degree to which machines should independently make decisions, rather than serve as supportive tools in the decision-making process.
A new feature we have brought in through AI is a multi-language tool, as so many of our customers are not native English speakers. This will pick up straight away if a customer is using English as a second language, and then ask which language they would prefer to speak to us in. That makes an amazing difference for the customer.
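Zego’s actual tooling is not described in the report, but the core of such a feature can be sketched with the open-source langdetect package (`pip install langdetect`); this is purely illustrative:

```python
# Minimal sketch of the multi-language idea: detect the customer's
# language and, if it is not English, offer to switch.
from langdetect import detect

def preferred_language_prompt(message: str) -> str | None:
    """Return an offer to switch languages, or None if English."""
    code = detect(message)  # ISO 639-1 code, e.g. 'es' for Spanish
    if code != "en":
        return f"Detected language '{code}'. Would you prefer to continue in it?"
    return None

print(preferred_language_prompt("Hola, tuve un accidente con mi coche"))
```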
When a customer wants to contact us about a claim, because they do this through the app, we already know who they are, what their policy number is, what their excess is, and all the other terms of their policy, also what car they drive and where they live. So they don’t need to feed any of this detail into the system when they start a claim. All they have to do is give us the details of the incident, and our system will match everything up as we kick off the claim.
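A minimal sketch of that flow, with invented field names: the policy record is resolved from the app session, so the customer supplies only the incident details.

```python
# Sketch of the 'we already know who you are' intake flow. Everything
# except the incident description comes from the existing policy record.
from dataclasses import dataclass

@dataclass
class Policy:
    policy_number: str
    excess: float
    vehicle: str
    postcode: str

@dataclass
class Claim:
    policy: Policy
    incident_details: str  # the only thing the customer has to type

# Hypothetical store keyed by the authenticated app session.
POLICIES = {"cust-42": Policy("POL-001", 250.0, "Ford Transit", "BS1 6HU")}

def start_claim(customer_id: str, incident_details: str) -> Claim:
    # The system matches everything else up as the claim kicks off.
    return Claim(policy=POLICIES[customer_id],
                 incident_details=incident_details)

claim = start_claim("cust-42", "Rear-ended at a junction, bumper damage")
print(claim.policy.policy_number, "excess:", claim.policy.excess)
```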
Another area where Gen AI is assisting us hugely is in fraud detection. These days, it is very easy for fraudsters to ask ChatGPT or one of the other models to fabricate invoices, engineers' reports, or even fake photos of their vehicles in a crash situation. But luckily, we've got some technology that can tell us very quickly whether this information has been generated by ChatGPT. Fraudsters are evolving very fast, so it's important that we evolve just as fast and stay ahead!
It’s just phenomenal how fast the world is changing right now and the way the world is shifting, particularly around Gen AI. As our CEO and founder puts it, unless you are really pushing forward quite aggressively with AI, you are going to get left behind. But, as we do this, we are continually thinking from the point of view of our customers and what the value-adds will be to them, just as much as what that impact is on Zego.
Our main app is called ‘Sense’ and is driven by telematics, so we can offer better prices to our customers based on their personal driving performance. We see this as putting the power back into our customers' hands: if they drive well, they get a better price. We also reward customers for better driving, for example, offering vouchers or potentially discounts on renewals, etc.
Obviously, telematics creates a huge amount of data. We review this to ascertain how our customers drive, where they drive, the time of day or night they are driving, and so forth. This gives us very valuable information about the sort of risk factors that can lead to accidents, which gives us a significant advantage in the market.
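Zego’s ‘Sense’ scoring is not published, so the following is illustrative arithmetic only: a toy driving score built from telematics events, mapped to a renewal discount. All weights and bands are invented for the example.

```python
# Toy telematics scoring sketch: penalise harsh braking (per 1,000 miles)
# and the share of night driving, then map the score to a discount band.
def driving_score(harsh_brakes: int, night_miles: float,
                  total_miles: float) -> float:
    """Score in [0, 100]; higher means safer driving."""
    if total_miles == 0:
        return 0.0
    brake_penalty = min(40.0, harsh_brakes / total_miles * 1000 * 4)
    night_penalty = min(20.0, night_miles / total_miles * 100 * 0.4)
    return max(0.0, 100.0 - brake_penalty - night_penalty)

def renewal_discount(score: float) -> float:
    """Map score bands to an illustrative discount."""
    return 0.15 if score >= 90 else 0.10 if score >= 75 else 0.0

s = driving_score(harsh_brakes=3, night_miles=120, total_miles=2000)
print(f"score={s:.0f}, discount={renewal_discount(s):.0%}")
```

The shape of this, raw events aggregated into a score, then into a price signal, is the “power back into our customers' hands” mechanism described above: drive well, get a better price.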
We have put a lot of time and other resources into developing our live chat customer interface. We all know how poor the customer experience of live chat systems can be. Indeed, I’m sure everyone reading this report will have had bad experiences themselves. So, perhaps it is not surprising that some companies are actually stepping back from some of these bot flows. But we’ve taken a different approach: we took the time to understand and address its shortcomings, invested in it and improved it, and we continue to work on improvements every single day.
We were in that very different place maybe four or five months ago. We had a very rudimentary live chat flow, like a lot of companies still have today, where the customers are just given a long list of options to choose from, to categorise their question. And, if their query doesn’t fit neatly into one of these categories, it can be immensely frustrating for them. They can get tangled up in what I call the ‘death spiral’ of just not being able to get the bot to understand their problem. They then just want to speak to a live human who can give them the answer. So, what we wanted to do was create a human-like experience on the live chat, or an even better-than-human experience, and Gen AI has enabled us to do this. Rather than being offered a tick-box list to categorise their question, customers can now simply have a free-flowing conversation with the bot. It’s incredible how much this has improved our customer satisfaction levels. In fact, when they're dealing with the bot now, customer satisfaction is higher than when they deal with human agents!
We are really pleased with where we have got to, even in terms of tone of voice and empathy. To give an example, a customer the other day asked a question on our live chat about why his policy had been cancelled, and he mentioned he’d been out of the country and that his mother had died. The chatbot response was incredibly empathetic: ‘I'm really, really sorry to hear that. I completely understand the sort of turmoil you must be in...’
In terms of where we place humans in the loop, of course, we monitor and quality assess chatbot interactions. They are all stored and entirely auditable. We do this in exactly the same way any insurer does with its claims handlers. Also, we don’t yet leave any decision-making to the machines. Our bot stops short of making decisions. At the moment, from a claims perspective, its role is limited to data capture. So it's taking information from the customer, that information is fed into our system automatically, but then it's a human making the decisions: is the customer at fault? Is the customer not at fault? Where does the customer need to be? What supply function does that customer need? Where does it need to be repaired? At the moment, that's all with a human. I don't see AI necessarily replacing claims handlers. It will just change their roles. So, instead of handling 200 claims each, they’ll be able to proactively handle 800 because they'll be supported by technology that enables them to work faster and do more, and potentially to an even better standard as more parts of the process become automated.
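That division of labour, the bot captures and the human decides, can be sketched as a data structure whose decision fields the bot never touches. All names here are invented for the example.

```python
# Sketch of 'bot captures, human decides': the bot's output is a record
# with empty decision fields, which only a handler may fill in.
from dataclasses import dataclass, field

@dataclass
class CapturedClaim:
    customer_id: str
    incident_description: str
    # Decision fields deliberately start empty: the bot never sets them.
    at_fault: bool | None = None
    repair_route: str | None = None
    handler_notes: list[str] = field(default_factory=list)

def bot_capture(customer_id: str, conversation_text: str) -> CapturedClaim:
    """The bot's whole job: turn the chat into a structured record."""
    return CapturedClaim(customer_id=customer_id,
                         incident_description=conversation_text)

def handler_decide(claim: CapturedClaim, at_fault: bool, repair_route: str):
    """Decisions are made by a human and recorded against the record."""
    claim.at_fault = at_fault
    claim.repair_route = repair_route
    claim.handler_notes.append(
        f"decided: at_fault={at_fault}, route={repair_route}")

claim = bot_capture("cust-7", "Hit a pothole, front wheel damaged")
handler_decide(claim, at_fault=False, repair_route="approved repairer, Leeds")
```

Keeping the decision fields out of the bot’s reach makes the human-in-the-loop boundary auditable: every stored record shows which parts came from the machine and which from the handler.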
All our interviewees highlighted the importance of maintaining human oversight in the claims process. Many strongly reinforced the need for humans to remain in the driving seat when it comes to decision-making, with AI playing a supportive role.
Alexandra Price of AXA Insurance explained that whilst AXA is open to exploring automated decision-making in the future, their current approach is firmly rooted in maintaining a ‘human in the loop’.
Each month, we receive management information from DACB, which I review to identify trends, such as whether incidents are more frequent at certain times of day or in specific geographic areas. Insights like these help us take proactive steps to prevent or reduce future incidents. Any technology that can support this analysis is invaluable, as it gives us the tools to understand what’s happening and apply that understanding to our risk management planning.
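A minimal sketch of that trend review, assuming a pandas-readable incident extract with invented column names:

```python
# Group a hypothetical incident extract by hour of day and by region to
# surface the kind of patterns described above. Data is invented.
import pandas as pd

incidents = pd.DataFrame({
    "incident_time": pd.to_datetime([
        "2025-01-03 08:15", "2025-01-05 17:40",
        "2025-01-09 17:55", "2025-01-12 08:30",
    ]),
    "region": ["North", "South", "South", "North"],
})

# Are incidents clustering at certain hours or in certain regions?
by_hour = incidents["incident_time"].dt.hour.value_counts().sort_index()
by_region = incidents["region"].value_counts()

print(by_hour)    # e.g. peaks around the morning and evening commutes
print(by_region)  # e.g. which areas need proactive risk management
```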
"AI clearly has a role to play in this evolution, and it’s undeniably the direction things are heading. However, I strongly believe that human oversight must remain part of the process. Experienced claims handlers, those with 20 or 30 years of experience, often have an instinct for when something doesn’t feel right, especially in cases of potential fraud or exaggeration. That kind of gut feeling is difficult, if not impossible, for machines to replicate. It’s a critical component of the process that shouldn’t be underestimated.
"People in our business, especially within my insurable risk team, are highly adaptable and open to change, innovation, and new ideas that improve how we operate. As a company, we’re already well-versed in adopting new technologies, particularly in logistics, for example, automating processes in our distribution centres to get stock out of warehouses and into shops as efficiently as possible."
"As I’ve said before, one of the key issues for us is ensuring that any supplier or partner we work with is aligned with our values and approach when it comes to the use of technology."
The people aspect of change is an essential part of our technology journey. The key to bringing our handlers and the wider business with us is to show that what we are doing and trialling will make their jobs easier and less burdensome. We have all had the experience of expensive technology investments whose promised benefits come to nothing, so tangible results are necessary to show that the benefits are real. We have also all seen and heard the claims by technology suppliers that the shiny new system will mean headcount can be reduced - but this is, in my experience, never true! Rather, good technology will change the way people go about their jobs.
There is a degree of excitement in the business around AI – new toys, new tools, and this has to be managed too, as we want the pace of AI adoption to be appropriate for the business. There’s a spectrum of course, with reticence and cynicism at one end, from people who have too often seen new technologies fail to live up to their promise; and with progressives at the other end of the scale, impatient to delegate their drudge work to machines so they can focus on the more interesting investigation and negotiation aspects of their role. As leaders, we have to bring everyone along with us, not just in the need for change, but the appropriate pace of adoption as well.
We are a small organisation relative to others in the insurance space, which means on the one hand we can be fast adopters, but budgets can be an issue. We are, of course, funded by levies from all motor insurers and, ultimately, from their customers’ premiums, so we have to be mindful of this when considering expensive outlay on bespoke technologies. However, an option for us is to piggyback on systems developed and made available to the market by insurers, taking their AI systems’ capabilities and adapting them for our own purposes. This is something we are exploring.
"AI also has a role to play in fraud detection, particularly in identifying exaggerated or false claims by looking at trends and patterns that trigger the need for a more detailed investigation. But, given the nature of our work with investigation at its core at the get-go, we are well set up for this. We see the potential for AI in our processes as very positive. We don’t see it as placing jobs for the humans under threat, but instead increasing opportunity for our people, making their jobs more skilled and interesting."
The Motor Insurers' Bureau (MIB)’s founding principle is that no one injured by an uninsured driver or in a hit-and-run incident should be left without the support they deserve. Our long-term goal is to eradicate uninsured driving completely, and to achieve this we know we must find ways to go further and faster. So, of course, we are interested in exploring how Generative AI can help us. Our mission is not just about handling individual claims, but serving the wider community and making our roads safer.
We look at claims in the context of a value chain: before a claim arises (identifying geographical hotspots for uninsured-driver incidents and hit-and-runs); when an incident occurs and a claim is made (instructing suppliers and partners, including lawyers); how we approach negotiating settlements; how we handle data and management information; how we manage workflows and time; and how we analyse data and draw conclusions. We certainly see a role for AI in the pre-claims process, working with the police and the DVLA, for example, using cameras to predict hotspots. We also see it in the investigation of claims, digging to find insurers or the identity of drivers using ‘connected vehicle’ technology, such as getting information, where appropriate, from satnavs, phones and other internet connectivity to pinpoint who was in a car at a particular time and place, though this is several years down the line. We see the role of AI as assisting decision-making, not making decisions by itself. There will always be the need for humans to take responsibility for decisions, but AI can do a lot to assist claims handlers: for example, collating information about the claimant’s medical records and the events surrounding the damage, presenting summaries to bring handlers up to speed more quickly, and then putting documents together for experts or partners to prepare them for negotiating settlements, perhaps even using estimating assistance tools. We can see a benefit here in improving consistency in our decisions.
We tell our handlers to think of AI as a virtual colleague sitting next to them, or an assistant best friend. It can also be a highly effective trainer, helping people pick up the relevant case law and legal complexities on the job. DACB’s excellent AI tool for credit hire, Nightingale, is a great example of AI at its best: whereas previously a handler's learning was in large part by trial and error over time, this tool helps them give an offer and explains the rationale, so helping them as a virtual colleague/assistant best friend whilst training them at the same time.
Humans are unique, and it takes a human to understand that. For example, our handlers are very much alive to the fact that different people react differently to a traumatic experience, impacting how they present to experts and even the ways their symptoms manifest. But, if AI can remove a big part of our handlers’ admin load, they will then be freed up to spend more time on this human element of their jobs, which could bring significant benefits to their work and to claimants’ experience. Speaking personally, by way of example, I love negotiating settlements, but all the painstaking admin involved in the run-up, not so much!
Concerns about bias in AI-driven analysis are less of an issue for us compared to policy-writing insurers, because our work is based on factual events rather than drawing conclusions from statistical data and pricing according to the likelihood people will behave in a certain way. But we are concerned about the handling of personal and sensitive data and what we are inputting into machine learning, which is why any AI we pilot or use is contained in a closed system and is not web-based, and why we are careful to conduct any pilots in a safe environment. In any event, the regulator will be involved in how AI is used in our industry, in terms of the customer journey and ensuring correct and appropriate outcomes. It will be interesting to see how the regulatory framework develops.
This is where training becomes essential. Several organisations have responded by creating formal education programmes for colleagues around AI and data, to help them understand AI systems, identify potential flaws or biases, and learn how to ask the right questions and interpret AI outputs critically. Some have even established Data Academies; others take a less structured approach. While approaches vary, the direction is clear: effective AI adoption demands confident, well-informed human operators, and the industry is taking great strides to update the way its professionals are trained.
“The Holy Grail of a complete and perfect data set can only ever be an aspiration… This has the potential to become a major issue when you start to apply machine learning.”
However, the gains that AI offers come with caveats. The biggest challenge cited across the board is data quality. Over 70% of survey respondents flagged legacy systems and imperfect data as a significant risk to the successful implementation of AI.
“Our biggest concerns are around the availability of reliable data – particularly the quality of data from legacy systems.”
Interviewees repeatedly stressed the need for human oversight and education in reviewing AI output - especially when dealing with messy or incomplete datasets. As Simon Hammond puts it, “We live in the real world, so the Holy Grail of a complete and perfect data set can only ever be an aspiration... This has the potential to become a major issue when you start to apply machine learning.” And, of course, readers of this report will all be aware of the danger of machine hallucinations.
“Ethics has to be the first thought, and remain front and central, when considering any new technology initiative or investment.”
A final area of widespread concern is governance, particularly in relation to ethics and Environmental, Social and Governance (ESG) compliance. Interviewees raised concerns about how Generative AI could unintentionally reintroduce bias or undermine transparency in decision-making. Transparency, auditability, and clear lines of accountability are increasingly viewed as critical components of responsible AI integration.
Waqar Ahmed cautions that Generative AI technologies are still very nascent: “They are still emerging and developing. We are only beginning to understand the capabilities and limitations of this technology, and so we need to make sure to put the correct guardrails in place.”
Simon Hammond talks in detail about the importance of aligning the business’s risk appetite with the wider technological advancements coming down the line. For his organisation, he says, “It’s a double-edged sword: on the one hand… ensuring the way we use technology is both ethical and on the right side of regulation. On the other hand… balancing our risk appetite statement with really wanting to use the new technologies positively and deriving the benefits from it”. He says, "It’s a difficult line for any organisation to set when technology (especially in the Generative AI space) is moving at such a dramatic pace... All this needs to be reviewed continually”. He advises timelines for reviews, “set against what your ambitions are and what your investment strategy around your tech future looks like, and also against your operational processes”.
Ethics has to be the first thought, and remain front and central, when considering any new technology initiative or investment. We involve our Ethics Committee before we do anything in this space, so we think about data protection, the possibility of bias and governance issues before we think about producing any product. Yes, we have the capability to deliver products much faster, but actually what is more important is that you can be sure you are producing those products in a well-tested, well-understood, and well-governed environment. It may mean your speed to market is a little bit reduced, but you know you’ll be coming to market in a responsible way. It only takes one instance to lose customer trust.
For now, keeping humans in the loop is imperative, given the fledgling nature of the technology. But I can see a time in the not-so-distant future where ‘human in the loop’ evolves to ‘human oversight’. I think this is inevitable, because if we go back to our primary focus for technology development and investment, our customers, we have the potential to build products that are ‘always on’ for them. So, irrespective of when they have a claim, they have the ability to interact with us at any time of the night or day, so we can help them in their moment of need. But we must ensure that when a customer interacts with AI systems, they get the same great customer outcome that they would if they were dealing with a human handler. Wouldn’t that be a great thing for the industry: to transition that great human service to a digital experience that's always on?
We started a significant claims transformation three years ago, focusing on evolving to a data-driven intelligence-led function, where people are at the heart of what we're doing. As we evolved, we have developed specialist people, capabilities and skills, and we have built a number of machine learning tools and Generative AI tools to augment what they are doing. Now, we have what we call our Voice of Aviva engagement service for staff, which involves a survey that goes twice a year to all of our people who operate in claims, to get a sense of how engaged they are and how they find working at Aviva. What’s interesting is that our engagement scores have tripled over the three-year period of our tech transformation. There is often an assumption that when you deploy AI, staff perceive it as a threat. But we have proven that when it works side by side with claims handlers to augment what they do and assist them in their roles, they are then able to amplify their own human capabilities, such as empathy, which is so crucial to the customer experience.
I also believe that the use of technology to support claims handlers and augment their capabilities will soon become a hygiene factor in recruiting and retaining staff.
Aviva was one of the earliest adopters of AI in the industry, starting some 10 years ago. But it’s not about being ‘first’ – it’s about being smart with technology to be customers’ ‘first choice’.
We all recognise the importance of these points from our own retail experience - what those familiar retailers you go back to time and time again have in common, quite apart from the quality of their product, is that the service they provide is underpinned by these three things. When someone has a claim, it’s usually stressful for them, and because they've purchased our product they will turn to us for support; it is in that moment of truth that we are measured.
Imagine this scenario: a customer phones their insurer to discuss their claim. The reality is that the claims handler will not be familiar with the particular details of their claim when they call in, so the first thing they’re likely to say is, "Can I put you on hold? I'll take you through your security credentials and then let me just get the update to find out what's going on." That’s not a brilliant experience for a customer, as we all know from our own experience of dealing with mobile providers, utility companies, banks and the like. It’s a fact of modern life. But we saw that as a problem to solve in the customer’s claims journey, and we thought, what if technology and data science can help us solve this?
So, whereas previously our handlers would be going through the notes on the claims systems (which typically were not laid out for the handler in a user-friendly way), what we did instead was to pinpoint the key components of a claim that people want to know when they call to discuss it, and then make sure the system displays this for the handler in a far more user-friendly form. We designed this in conjunction with our frontline operations teams who are closest to the customers and understand their needs, and were able to design a product that had real value for our handlers in their ease of being able to understand what was happening in a claim – and real value for the customer in terms of getting answers to their questions much more quickly and not being placed on hold for so long. A win-win!
Given the continuing pace of change in the technology space, the Strategic Advisory Team intends to keep a watching brief. Stay tuned for information about our future reports, events and other resources on this topic. Want to stay informed and connect with industry peers as you navigate the implications for your business?
Peter Allchorne
Partner
Strategic Advisory
T: +44 (0) 117 918 2275
E: pallchorne@dacbeachcroft.com
Alexandra Price
Senior Programme Manager (Analytics)
AXA Insurance
Simon Hammond
Director of Claims Management
NHS Resolution
Ian Kershaw
VP of Customer Service, Claims and Fraud
Zego
Julie Plumb
Insurable Risks Manager
Tesco
Waqar Ahmed
Claims Chief Operating Officer
Aviva
Peter Allchorne
Partner
Strategic Advisory
DAC Beachcroft
Craig Dickson
Chief Executive Officer
CSG
DAC Beachcroft
Joanna Folan
Legal Director
Strategic Advisory
T: +44 (0) 207 894 6350
E: jfolan@dacbeachcroft.com
Michael McCabe
Solicitor
Strategic Advisory
T: +44 (0) 207 894 6315
E: mmccabe@dacbeachcroft.com
Andrew Wilkinson
Chief Claims Officer
Motor Insurers' Bureau
Gabriel Biangolino
Value Creation, Head of Strategy
Admiral Group
Right Honourable Sir Robert Buckland
Former Lord Chancellor and Secretary of State for Justice, and now member of DACB's Policy Unit
This report shows an industry on the move: optimistic but thoughtful, ambitious but grounded. The story of AI in the insurance claims process has only just begun, and as understanding deepens, both the technology and those who work with it will continue to evolve.
The opportunities are immense for those who get it right, as eloquently summarised by Alexandra Price in this final word:
"Our AI program is delivering results for our business, and for our customers, on multiple fronts: it is enabling us to make faster decisions - customers benefit from quicker resolutions, such as knowing immediately where their vehicle will be sent for collection, i.e. whether repair or salvage; it is also greatly improving our operational efficiency, as handlers can manage cases more quickly and effectively with AI support; we also see the benefits of enhanced oversight, because AI acts as a second line of defence, spotting things a human might miss. All in all, AI is facilitating better outcomes – decisions are more consistent and accurate when handlers are supported by AI, which ultimately reduces our indemnity costs and shortens claim cycle times for customers."