
Andrew Putwain: Within the insurance sector, how do you reduce operational costs and improve efficiency without negatively impacting output levels or risk control?
Leonard Schokker: If you look at this issue through the lens of realising cost efficiencies, then it's about how to choose different use cases.
There is a lot of literature about it and a lot of consultancy cases that show what works and what doesn't. Mercer's 2024 research validates this approach - 91% of investment managers are implementing AI, but success comes from systematic domain transformation rather than scattered pilots.
For instance, McKinsey's analysis shows this systematic approach can deliver 25-40% cost base impact when executed comprehensively.
If you look at the output improvements, I'm talking about not just how we can improve output, but also how we can reduce costs and the number of mistakes. It's about quality control.
That then also links to risk control. There's an efficiency gain, which closely ties into cost control.
From my point of view, it's important that you do not choose use cases at random and run them as independent pilot experiments.
Instead, it's good to have a strategy around what you’re looking for and to tie different use cases together to make sure you’re getting the full picture with pros and cons.
For instance, we see in insurance and investment management that what works is to choose one domain and focus on that first. It could be technology; it could be investment reporting or other processes. In investment management specifically, this includes alternative investment due diligence - where AI can analyse unstructured data from management presentations and financial projections.
Then, it's also important to ask how to choose the right technology for the right process.
I've also been discussing this with some consultants, and what I've found is that artificial intelligence (AI) is strong in areas such as scenario analysis and pattern recognition. But, as we all know, it also tends to make mistakes.
AI makes things up or misinterprets them because it is quick to predict what it thinks you mean, which can be frustrating at times.
What's important is that, first of all, you choose one domain, or perhaps two.
In investment management, you could focus on something along the lines of the investment process. I've seen in research, for instance from Mercer, that the IT function and the technical function seem to be the most promising in terms of cost efficiencies.
Andrew: Yes, when AI makes things up, are people asking for safeguards to be put in place? I.e., are there more barriers that could prevent it, or is it an area that's still undergoing more testing?
Leonard: There's an increasing amount of attention from software providers and AI providers to build this into the architecture.
Claude for Financial Services, which was recently released, is a good example: it focuses strongly on audit trails, controls, reconciliations, etc., which is a strong point for quality control.
But I also see that many people, particularly in finance, talk a lot about using AI for different finance processes. Mostly, though, the question is “which prompt do I use to produce this report or analysis for the CFO?”.
If you do that in a generic AI tool, such as Microsoft Copilot or other well-known programmes, there's still a big risk of hallucination, and you, as the user, have to stay strongly focused on that. It's preferable to make those safeguards part of the architecture instead.
Andrew: With the growing influence of AI, how significant a role does technology play in helping companies reduce the cost of their operations? Where can savings be made in the long-term planning strategies?
Leonard: There's a lot of potential for cost savings from AI and technology in a broader sense.
However, it's good not to view AI as a one-size-fits-all solution. AI works well in processes that involve scenario analysis or pattern recognition. For processes based on fixed rules that require definitive outcomes, existing data management solutions typically fit better; think of the reporting process for Solvency II, or management reporting, as examples.
Those processes typically benefit more from, for instance, traditional automation than from AI.
Andrew: How do companies ensure that the skills of AI and human contributions are well-balanced to become the most efficient?
Leonard: What you see is that collaboration between AI and humans is still being explored and worked out.
It's not as though AI is going to replace us all, simply because it doesn't always know the context.
It predicts things, so you still need to supply that context and apply your own knowledge.
What it can do is improve different processes and make them faster. For instance, pattern recognition could help you analyse ESG factors or alternative investments, and AI could also generate ideas on how to invest, or new business ideas.
It can also help with the creativity element by quickly recognising patterns and synthesising them. The human side then comes in: interpreting that input, thinking about it systematically, and asking the AI critical questions, for instance, “you've found this, but are you sure? This reasoning is not completely solid”, and challenging it as you would a colleague.
This represents an evolution from 'human in the loop' models, where humans heavily train AI systems, to 'human on the loop' approaches, where investment professionals take coordinating roles. Research shows 74% of insurance executives view building employee trust as essential for capturing AI benefits - creating virtuous cycles where trust enables adoption, which drives innovation and results.
That is where it adds the most value.
Andrew: Taking into consideration the cost of operating and AI, how do we use these to build higher performance standards - or should that endeavour be kept separate?
Leonard: What I mentioned earlier is that quality control is an important aspect.
For instance, how do you design quality controls in AI to maintain those performance standards, and how do you systematically involve that in the AI processes?
Part of that is in hard controls, reconciliations, or audit trails. Part of it is also in the oversight function.
For instance, in an interesting case recently discussed in Harvard Business Review, Amazon used AI to populate its product catalogue, which initially had an 80% failure rate on the information it contained; they improved that to a 20% failure rate.
Now that's still unacceptable for an insurer or a finance function. But it shows the improvement that you can make using AI and using it systematically.
An important consideration – maybe not so much on the investment side, but more on the insurance side – is how AI models use their information. Are they discriminatory, or biased in some way?
An important step early on, when you're implementing AI, is to put the correct model validation in place, which is a completely normal procedure for insurers and investment management teams. It'll be an important feature.
The timing factor is critical here: AI leaders in insurance achieve 6.1 times the total shareholder return of laggards over five years, suggesting competitive windows close faster than traditional technology adoption cycles. Organisations still in pilot phases face a growing competitive disadvantage as systematic AI adoption becomes the industry standard.
Andrew: I’ve heard several people from insurers discussing who is allowed to use AI at their companies. Only pre-agreed people in the company can use ChatGPT, for example, and it’s banned or disabled for others, as compliance teams are worried about financial data or sensitive information ending up on external servers. Do you think that this will be more common or not?
Leonard: Companies will become more open to using general-purpose AI, so using AI within the generic Microsoft Office suite, for example, will become more common.
Looking at different functions like investment management or finance, etc., there will be more specific use cases for AI where the people working with that data will use AI in specific AI agent-type environments.
We're moving towards an environment in which a lot of specific tasks are automated: you have a fixed set of instructions for doing one task, and all those robots, as it were, will have to work together.
They'll have to be orchestrated into one big process.
Leonard will be speaking at Insurance Investor Live | Benelux 2025 on November 13, 2025, in Amsterdam. Find out more and how to register here.