Unleashed: How to Thrive as an Independent Professional

Episode 559 | Paul Gaspar: AI Project Case Study

Show Notes

In this episode of Unleashed, Paul Gaspar discusses his experience working with artificial intelligence at a major global insurance conglomerate in Japan. The company faced pressure to streamline operations and reduce costs within its auto business. Paul, who was in a role leading the data science function, suspected that the claims area in insurance was a target-rich environment for delivering value with advanced analytics and technology. He found that similar processes were being utilized on claims regardless of the size, leading to the opportunity to put analytical rigor behind the claims estimation process.

 

AI Use for Processing Insurance Claims

Paul and his team looked at information flows at various points in the process, specifically evaluating how information collected at the time of the accident could be used to provide insight on losses. Using this information, they built predictive models using AI techniques that would allow them to predict the ultimate value of these claims in dollar terms, using a subset of the initial information collected at the time of loss. By building models that could do this quickly and accurately, they were able to set thresholds that would allow for automated processing and payment of claims amounts on about a quarter of the total claims volume. This reduced the workload for the team handling claims and sped up responsiveness to customers with smaller claim amounts.
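
To make the idea concrete, here is a minimal, hypothetical sketch in Python with scikit-learn of the model-plus-threshold routing described above. The file name, feature names, and dollar cutoff are illustrative assumptions, not details of the system Paul's team built.

```python
# Illustrative sketch only: a simplified version of the idea described above,
# not the production system. Column names and the threshold are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Historical claims: features captured at first notice of loss (FNOL)
# plus the ultimate settled amount observed later.
claims = pd.read_csv("historical_claims.csv")
features = ["vehicle_age", "damage_area_count", "airbag_deployed",
            "injury_reported", "policy_coverage_limit"]
X_train, X_test, y_train, y_test = train_test_split(
    claims[features], claims["ultimate_paid_amount"], test_size=0.2, random_state=0)

# Regressor that predicts the ultimate dollar value of a claim from FNOL data.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# Route new claims: predicted low-value claims go to automated payment,
# everything else stays in the traditional handling queue.
AUTO_PROCESS_THRESHOLD = 1500  # hypothetical dollar cutoff agreed with claims leadership
predicted = model.predict(X_test)
auto_process = predicted <= AUTO_PROCESS_THRESHOLD
print(f"Share of claims eligible for straight-through processing: {auto_process.mean():.1%}")
```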

 

The Process of Assessing Information

Paul explains the process of assessing the quality, consistency, and reliability of information for a client. This involves taking stock of the types of information available and blending qualitative and quantitative data, in coordination with data scientists experienced in different modeling techniques and programming languages. Paul and his team used Python to investigate particular approaches and tested the results to identify which data elements were useful for creating meaningful insights. This is not necessarily something a data analyst with minimal data science knowledge could execute alone. Instead, a step-by-step approach involves evaluating the data, considering viable modeling techniques, and experimenting with them to confirm accuracy, speed, and acceptable processing requirements. A team of experienced data scientists can help guide the technical approach and modeling techniques used in a case like this. To achieve reliable precision across various claim types, it is crucial to segment claims by value and focus first on the lowest-value ones, where the risk of leakage, meaning overpaying for claims relative to processing costs, is minimal. Predictive analytics is part art and part science, and it is essential to be careful about how and where to use it, ensuring that risks are well understood and weighed against the benefits of the process.

To turn this segmentation into a working, scalable business process, Paul states that change management work must be done across various functional areas. This includes ensuring that information is passed into payment systems, understanding how automation affects existing process flows, and deciding how to contact customers and inform them of changes that benefit them.
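
The leakage-versus-processing-cost trade-off mentioned above can be pictured with a small back-of-the-envelope simulation. All figures below are invented for illustration and are not drawn from the engagement.

```python
# A minimal sketch of the threshold trade-off described above, under assumed numbers:
# compare the handling cost saved by auto-paying low-value claims against the
# "leakage" (overpayment) risk of skipping manual review. All figures are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
predicted_value = rng.gamma(shape=2.0, scale=800.0, size=10_000)   # model's estimate per claim
true_value = predicted_value * rng.normal(1.0, 0.15, size=10_000)  # what review would have found

MANUAL_HANDLING_COST = 120.0  # assumed cost of a full manual review per claim

for threshold in (500, 1000, 2000, 5000):
    auto = predicted_value <= threshold
    # Leakage: amount paid above the true value on auto-processed claims.
    leakage = np.clip(predicted_value[auto] - true_value[auto], 0, None).sum()
    savings = auto.sum() * MANUAL_HANDLING_COST
    print(f"threshold ${threshold:>5}: auto-rate {auto.mean():5.1%}  "
          f"handling saved ${savings:>10,.0f}  leakage ${leakage:>10,.0f}")
```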

 

Building AI Algorithms to Prevent Human Errors

In the claims process, Paul notes that errors can be a significant issue, appearing as false positives and false negatives. Rather than simply training AI algorithms to replicate human judgments, the team worked with claim-handling professionals to review the models' false positives and false negatives and to set error tolerance thresholds; this is a time-consuming part of the process. He also mentions that risk management is crucial in ensuring that systems make accurate decisions and that contingencies exist for when they make mistakes. Machine learning operations (ML ops) has emerged as a discipline that accounts for drift in model performance over time, so models must be continually monitored and adjusted as needed. To ensure that the model does not quietly absorb human errors, ongoing testing and monitoring are essential. Companies that excel in this field use software that allows for systematic monitoring of decisions. By balancing processing time against error rates, companies can set acceptable thresholds and auto-process claims at risk-acceptable levels.
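
One way to picture the calibration step described here, comparing model decisions against historical human outcomes and agreeing an error tolerance, is a simple confusion-matrix sweep over candidate cutoffs. The data and cutoffs below are synthetic and purely illustrative.

```python
# Hedged sketch of the calibration step described above: compare the model's
# auto-pay decisions against historical human outcomes and pick an error
# tolerance with claims professionals. Data and tolerance levels are invented.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
# 1 = claim that human review ultimately paid at the estimated amount, 0 = it needed adjustment
human_paid_as_estimated = rng.integers(0, 2, size=5_000)
# Model's score that the claim is safe to auto-process (higher = safer)
model_score = np.clip(human_paid_as_estimated * 0.6 + rng.normal(0.3, 0.2, 5_000), 0, 1)

for cutoff in (0.5, 0.7, 0.9):
    auto_pay = (model_score >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(human_paid_as_estimated, auto_pay).ravel()
    # fp = auto-paid claims a human would have adjusted (overpayment risk)
    # fn = claims sent to manual review that could have been auto-paid (lost efficiency)
    print(f"cutoff {cutoff}: auto-paid {auto_pay.mean():5.1%}  "
          f"false positives {fp}  false negatives {fn}")
```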

 

The Evolution of Predictive AI

Paul discusses the evolution from predictive AI to generative AI, which uses existing knowledge bases and trained models to generate the content most likely to be relevant to an end user’s query. This is the basis of the foundational models used by OpenAI, Perplexity, and others to create a new paradigm and set of use cases for generative AI. The accessibility, power, and intuitive nature of these models, along with the speed and low cost of experimentation, make them exciting to work with. Generative AI tools have become multimodal, allowing them to take text, voice, image, or video inputs and respond to queries about that content. In a mobile-first world where such information is collected almost seamlessly, this opens up an incredible range of possibilities. In the case of auto claims, for example, the estimation process could extend beyond the low-value subset of claims to claims of higher value and sophistication. The multimodal input, the ease of providing information to these tools, and the ability to access them from both practitioner and end-user perspectives are the key game changers Paul sees in the future of generative AI. He also emphasizes the importance of change management in implementing AI tools in corporations.
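
As a rough illustration of the multimodal pattern described above, the sketch below sends a claim photo and a text prompt to a general-purpose multimodal model via the OpenAI Python SDK. The model name, prompt, and image URL are assumptions for illustration; a real claims workflow would add validation, guardrails, and human review.

```python
# Purely illustrative sketch of the multimodal idea discussed above, using the
# OpenAI Python SDK (v1-style client). The model name, prompt, and image URL are
# assumptions; this is not the workflow described in the episode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed multimodal model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe the visible damage on this vehicle and list the parts "
                     "likely to need repair or replacement. Do not estimate cost."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/claims/12345/front_bumper.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```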

 

Timestamps:

01:04 Implementing AI in claims handling at an insurance company

08:34 Using predictive analytics in claims processing

13:41 AI-powered claims processing and error management

18:25 Generative AI’s transformative potential in various industries

 

Links:

LinkedIn: https://www.linkedin.com/in/paulmgaspar/


 

Transcript

SPEAKERS

Will Bachman, Paul Gaspar

 

Will Bachman  00:02

Hello, and welcome to Unleashed. I’m your host, Will Bachman. And today’s episode is another one in our set of AI case studies. And I’m happy to introduce Paul Gaspar. Paul, welcome to the show.

 

Paul Gaspar  00:19

Hi, Will, it’s great to be here. Thanks for having me.

 

Will Bachman  00:21

Yeah. So Paul, tell me about your first case study that you want to share of how you’ve, you know, helped implement or work with artificial intelligence at one of your clients.

 

Paul Gaspar  00:32

Sounds great. Well, the situation I’m going to describe here was at a major global insurance conglomerate that was faced with pressure to streamline operations and reduce costs within its auto business in Japan. I was in a role leading the data science function for the company. So I was actually employed by this company, and was fortunate to have the ability in my role to look at a wide range of functional areas for opportunities to deliver value with analytics. I had long suspected that the claims area in insurance was a target-rich environment for delivering value with advanced analytics and technology. And for those who aren’t as familiar with the insurance landscape, you know, I think everybody can relate to auto insurance. But claims is one part of the insurance value chain that is very, very closely linked to direct cash outlays for the company, right. So if you and I were to have an accident, you know, we would be paying for the damage of a car or an injury, those types of things. And so it’s obviously a very immediate and direct expense for the company. It’s commonly said within the industry that claims is where the big checks are written. And anything that you can do to improve performance in this area has a good chance of falling straight to the bottom line. So, you know, I had a strong hypothesis that there was opportunity here, and I had my team take a careful look into the inner workings of the organization that handled the auto business, and we saw that processes, while very highly structured and handled with great care, weren’t necessarily designed around efficient utilization of resources. So what you would find is that similar processes were being utilized on claims irrespective of the size of that claim. So we saw similar levels of work effort being put towards all claims, whether they were valued at $500 or $50,000. So therein lies sort of, you know, the opportunity here. We recognized that by being able to put analytical rigor behind the claims estimation process, we’d not only be able to save significant processing effort, but also shorten response time to our end customers on a large portion of claims volume. And so what we did in this case was to take a careful look at information flows at various points in the process. And specifically, we looked at information that we collected at the time of the accident, to really evaluate how this information could be used to provide insight on losses. And using this information, we built predictive models using AI techniques that would allow us to predict the ultimate value of these claims in dollar terms, using a subset of the initial information that was collected at the time of loss. And by being able to build models that could do this quickly and accurately, at the very beginning of the process, we were able to set some thresholds that would allow for automated processing and payment of claims amounts on about a quarter of the total claims volume. So as mentioned, this not only reduced the workload for the team that was handling the claims, but also sped up responsiveness to customers that had these small claim amounts. So I’ve kind of blown through this in pretty short order. I’ve provided a bit of an overview and context and the approach that we took. And I just want to stress that though it sounds relatively straightforward,
it glosses over a lot of the details and the analytical work, and specifically the change management, that was needed to deliver value. So on the data and the modeling side, we had to take a careful inventory, as I mentioned, of data; we would look at the quality and the consistency of this information. And then we did a lot of experimentation around, you know, what was viable to use or not in the modeling. We also spent significant time working to find modeling approaches that could provide the right levels of accuracy and decision-making agility that we needed for this segmentation to be meaningful. Let me just stop here. I’ve been rambling on for a bit. Do you have any questions, Will, or, you know, things that you want to probe on at this point?

 

Will Bachman  05:14

Yes, I do. I’d love it if you could unpack the one sentence on, oh, we built predictive models using AI techniques. So for a consultant who, let’s say, has not done that before, could you tell us a bit more of a how-to guide, if you were explaining this to a friend who’s going to have to go do this now for a client? Like, what’s involved in that? What are the preconditions required? What sort of technology is used? For someone who’s not a technologist, what, you know, has to happen to complete that sentence that you did?

 

Paul Gaspar  05:56

Yeah, for sure. So, look, I think the first thing is really an assessment of the types of information that exist, and to assess sort of the quality, the consistency, and the reliability of that information. You know, so it all starts with data, obviously. And beyond that, this is sort of where good coordination is needed with, you know, people who are experienced at using different types of modeling techniques and working within programming languages. So we used Python to investigate particular approaches to the type of modeling that we would be able to use. And then there’s obviously a great deal of testing of results, to be able to see which particular data elements and characteristics are useful for creating what we call a signal, i.e., getting meaningful insight and being able to help differentiate the evaluation of one claim versus another: like, were there injuries or were there not injuries, what kind of quantitative data you would have, being able to create a mix of that qualitative and quantitative data, right. So this is not something that, you know, a data analyst with minimal data science knowledge could necessarily execute. But in terms of a step-by-step approach, I think there’s sort of an evaluation of the data, there’s thinking about what kind of modeling techniques are viable, being able to experiment with those modeling techniques, and then seeing if those modeling techniques are performing not only in terms of the accuracy of the predictions that they’re making, but also the speed and the processing power that are needed to actually, you know, execute. So at a high level, those are the steps that I would take. Again, models can have varying levels of sophistication. And I was fortunate to have a team of very experienced data scientists to help me work through the technical approach and the modeling techniques that were utilized in this case.
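
[Editor’s note: to illustrate the data-assessment step Paul describes, here is a small, hypothetical profiling sketch in pandas; the file and column names are invented for illustration.]

```python
# A small sketch of the first step described above: profiling the quality and
# consistency of candidate data elements before any modeling. Names are hypothetical.
import pandas as pd

claims = pd.read_csv("fnol_extract.csv", parse_dates=["loss_date", "report_date"])

# Completeness: share of missing values per field captured at first notice of loss.
print(claims.isna().mean().sort_values(ascending=False))

# Consistency checks: values that should be impossible if the feed is reliable.
print("negative estimates:", (claims["initial_estimate"] < 0).sum())
print("reported before loss:", (claims["report_date"] < claims["loss_date"]).sum())

# Cardinality of categorical fields, to judge whether they can carry signal.
print(claims["damage_location"].value_counts(dropna=False).head(10))
```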

 

Will Bachman  08:34

So kind of at a big picture, as a first step, you need to do an inventory of all the data that exists, or the inputs to the process, like what data is captured at the beginning. You want to capture, historically, that incoming data as well as the decisions that were made by the humans who were doing the process. And then it sounds like you would test and iterate and build an algorithmic model that would try to come relatively close to the humans, or maybe make better decisions, if we think the humans were not making good decisions. If you thought humans were already making optimal decisions, then you see if you can make an algorithm to match that using all or some subset of that input data. And then that would validate whether you’re going to be able to use AI to sort of simplify the process in some of the cases. So tell me if I missed anything there, and also maybe get into, that sounds simple, but in practice, for a consultant who is going to go do something like this, what are some of the watch-outs or some of the tips or some of the lessons learned, you know, that make it tricky to do in practice?

 

Paul Gaspar  09:54

Sure. So look, I think that, you know, just to add to your comment earlier, one of the things that we looked at very carefully is, you know, we didn’t pretend to think that we could build precision across all of the various claim types. I think one of the key phases here was to segment claims in a way that was, you know, very much acceptable to everyone who was managing and accountable for the process. And so what we tried to do is find ways to segment claims by value, and to look at the ones with the lowest value and be able to reliably identify those claims, right. So the risk of mishandling that claim, even if you paid it and auto-processed it, the risk of, you know, what we call leakage, and that means, you know, overpaying for those types of claims relative to the cost of processing, was actually quite minimal. So, I think one thing to remember is that, to some degree, predictive analytics is a bit of art and science. And you need to be very careful about how and where you use it, and that the risks are well understood and weighed against the benefit that you hope to get from the process. And so I think, you know, that’s one thing that I would try and highlight. And then as to the second part of your question, regarding watch-outs. Look, there are a lot of companies nowadays who, you know, recognize and see the value of investing in analytical teams and being able to make data-driven decisions. That said, one should always remember, in my view, all of the change management work that needs to happen around the intelligence and the data that is used in the decision-making process. So in the case I just talked about, in order to turn this into a working, scalable business process, we had to work across many different functional areas. So it’s not just about saying, okay, we’ve been able to segment the claims into high and low value, those that are meant for auto-processing and those that need to go through the traditional process. You know, we had to work very, very closely with other teams to think about, well, how does this information get passed into our payment systems? How does the automation impact our existing process flow, and how can we make sure that it doesn’t disrupt the process? How do you actually contact customers and make them aware of potential process changes that are beneficial for them in many cases? Are we ready, from a communications perspective, to share this change and how it benefits them? Right. So, I would tend to say that, in my experience, what people have underestimated is the amount of change management planning and execution that is needed for an analytically driven solution to deliver business value.

 

Will Bachman  13:41

How did you think about, you don’t want to replicate human errors, if there was such a thing? So I could imagine, with the claims process with the humans, in the existing set of data, there could be two types of errors. One might be false positives, and the other false negatives. So a false positive would be if you pay a claim that you should not have, right, and a false negative would be if you deny a claim that you should have paid. So you’re trying to get your algorithm to make good decisions. Was that an issue? Or when you looked at those false negatives and false positives by the humans, maybe that was just such a small component that you didn’t worry about it? But I mean, if it was big, you wouldn’t want to train your AI to match humans if humans were, like, overpaying in some cases?

 

Paul Gaspar  14:32

No, it’s a great point. And, in fact, that’s one of the very time-consuming parts of the process, right, is to, you know, work with the people that are responsible for handling claims, to be able to show them the performance of the models, looking specifically at your false positives and negatives, and assessing what types of errors tend to show up in those respective buckets, and then to further tweak the processes in a way that, with the models and also with human judgment about thresholds for error tolerance, right, sets parameters around which this process will work and where it actually doesn’t need to get reviewed by humans. So, again, this is sort of where I talk a little bit about risk management, and taking care to do adequate testing, you know, and planning for those contingencies where your systems may be making mistakes, and to be clear about that, right. One of the things that has emerged over the years is this whole notion of, you know, machine learning ops, right. And one of the things that ML ops accounts for is the drift of model performance over time, and the need to be able to continually look at how models are performing, adjust them where needed, and really weigh and understand, you know, where the models are making accurate decisions and where they’re making mistakes.
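
[Editor’s note: the drift monitoring Paul places under the ML ops umbrella is often implemented with a stability metric such as the population stability index (PSI). The sketch below is a generic illustration with synthetic data and an assumed alert level, not the monitoring Paul’s team used.]

```python
# A minimal sketch of drift monitoring as described above: a population stability
# index (PSI) comparing recent predictions against the distribution seen at
# deployment. The data and the 0.2 alert level are illustrative assumptions.
import numpy as np

def psi(baseline, recent, bins=10):
    """Population stability index between two samples of model predictions."""
    edges = np.percentile(baseline, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    b = np.histogram(baseline, edges)[0] / len(baseline)
    r = np.histogram(recent, edges)[0] / len(recent)
    b, r = np.clip(b, 1e-6, None), np.clip(r, 1e-6, None)
    return float(np.sum((r - b) * np.log(r / b)))

rng = np.random.default_rng(2)
baseline_preds = rng.gamma(2.0, 800.0, 20_000)  # predictions at go-live
recent_preds = rng.gamma(2.3, 850.0, 5_000)     # predictions this month

score = psi(baseline_preds, recent_preds)
print(f"PSI = {score:.3f}")
if score > 0.2:  # commonly cited rule of thumb; the right alert level is a business decision
    print("Material drift: review the model and its auto-processing thresholds.")
```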

 

Will Bachman  16:21

Yes, say more about that. So you wouldn’t want your model to start feeling, like, overly sympathetic with the person. Maybe you were not using a gen AI model that was taking all that into account; it was more of a straight-up symbols and numbers kind of thing. But you would want, I guess, you have to have some testing and monitoring over time to make sure it’s continuing to perform how you expect, and rejecting some claims or not overpaying claims. Like, how do you do that monitoring?

 

Paul Gaspar  16:52

Yeah, I think, look, in the past it was sort of a highly manual exercise. I think, you know, there are toolkits and now commercially available software programs that allow, you know, for systematic monitoring of decisions, right. So there are entire ecosystems being built and created and maintained by the companies that do this well. So, you know, again, we managed it kind of early days by, you know, doing a lot of testing and, you know, very careful setting of thresholds. So what I mean to say is that, at certain points, the processing time versus the error, you know, that trade-off was something that was acceptable, right, and you could set your thresholds and, you know, what will get auto-processed at levels that were acceptable from a risk perspective.

 

Will Bachman  18:02

I think you had some thoughts about the kind of broader context and impact and implications of this.

 

Paul Gaspar  18:13

Yeah, look, I think you kind of alluded to this earlier: the example that I went through was much more of an example focused on predictive AI, right. And there is this new and, you know, if you haven’t had your head in the sand, prevalent discussion going on around generative AI, right, which is actually, I think, a bit different, where what you’re doing is you’re taking, you know, existing knowledge bases and training models to generate content that is, from a probabilistic standpoint, most likely to be connected to, related to, or relevant to a query that an end user puts in, right. So this is sort of the basis of the foundational models that are being used by OpenAI, Perplexity, and others to, you know, create what is sort of a very new and interesting paradigm and use case for predictive AI, or excuse me, for generative AI. And so, you know, what I think this does is it’s sort of a very different kind of evolutionary phase. And it’s very exciting to everyone who experiments with generative AI because of the accessibility and the power, you know, of the tools even in the earliest stages of their development, considering how intuitive it is, because you’re simply just sort of chatting with a model, the speed and the relatively low cost of experimentation with these models, and the new range of capabilities that these tools can deliver. It’s hard not to get excited about how these mechanisms can transform almost any business process that they’re going to touch in the future. So to get a bit more specific, you know, for those who’ve been following generative AI tools, you’ve seen in the last several months how they’ve become multimodal. And what I mean by that is the ability to take not just a textual, chatting type of input, but also to take voice inputs, or image inputs, or video inputs, and to respond to queries about that type of content. And if you think about these various pools of information that are now, you know, fairly seamlessly collected in this kind of mobile-first world that we have, you can think about an incredible range of possibilities that are enabled, even in the example that we started this conversation with, auto claims, right. So, in this example, we saw that one of the most challenging aspects of claims is figuring out accurate costs. And when you think about the forms of information that can be brought now to this evaluation process, whether it’s video or pictures or, you know, voice notes, you could easily see how, by incorporating this information, being able to take relevant pieces, and building the know-how that is embedded within the professional staff that you have, the estimation process could change, not just for the small, low-value subset of claims, but increasingly moving higher in not only value but also sophistication of the claims that are there. So, again, I think that this type of multimodal input, the ability to access it not only from a practitioner standpoint but also from an end-user standpoint, and the ease with which you’re able to interact with and provide information to these tools, is really the key, the game changer, that is getting everyone very excited about these particular vehicles in the future.

 

Will Bachman  22:28

Excellent. Paul, for listeners that want to follow up with you, where would you point them online?

 

Paul Gaspar  22:36

You know, you can look me up on LinkedIn; I’m very accessible there, reachable at Paul Gaspar, P-A-U-L G-A-S-P-A-R.

 

Will Bachman  22:49

Fantastic. Paul, this has been fantastic. You talked about how it’s not just about coming up with the algorithm, but, much more than we realize, there’s all this change management to get a corporation to do things in a new way and use these AI tools. And that was really valuable for me to hear. Thank you so much for joining today. It was a great discussion. Thanks.

 

Paul Gaspar  23:13

Well, it’s a pleasure to be here.

Related Episodes

Episode 569: Automating Tax Accounting for Solopreneurs (Ran Harpaz)

Episode 568: Integrating AI into a 100-year-old Media Business (Salah Zalatimo)

Episode 567: Author of Second Act, on The Secrets of Late Bloomers (Henry Oliver)

Episode 566: Third Party Risk Management and Cyber Security (Craig Callé)