Phil Bellaria - AI Short
Will Bachman, Phil Bellaria
Will Bachman 00:02
Hello, and welcome to Unleashed. I’m your host, Will Bachman, and this is one of a series of short episodes we’re doing with AI case studies. I’m happy to welcome our guest today, Phil Bellaria. Phil, welcome to the show.
Phil Bellaria 00:17
Thank you, Will. It’s great to be here.
Will Bachman 00:20
So, Phil, tell us about the case example that you’re going to share with us.
Phil Bellaria 00:26
Sure, I have a great case example of how we tried to build a ChatGPT the old-fashioned way, and how much easier life is now that these open-source large language models have been released.
Will Bachman 00:44
So first, what was the situation? Just a sanitized description of the client and the situation that you faced.
Phil Bellaria 00:50
Sure. The client was a large telecommunications company. They had an immense amount of unstructured data: feedback from customers and feedback from employees through surveys, accumulated over years and years, hundreds of thousands of examples every year. Plus, they had transcripts from millions of phone conversations and text chats with customers.
Will Bachman 01:21
Okay, and what was the problem statement? What were you trying to accomplish?
Phil Bellaria 01:25
Right, the problem statement was: how do we derive insights and understand, in a scalable way, what our customers and our employees are telling us about the state of our business, about what we’re doing well and what we’re not doing well? And how do we identify trends and topics as early as possible?
Will Bachman 01:51
Okay, and roughly what time period was this that you were doing this?
Phil Bellaria 01:56
We did this in 2018 through 2020.
Will Bachman 02:00
Okay, so this is pre-ChatGPT. So we’ve got the setup: trying to get insights from this massive amount of unstructured data. How did you proceed? Walk us through what you did.
Phil Bellaria 02:14
Yeah, great, thanks. We had a super smart data scientist, and he had, on his own, kept up to date with recent publications. One of these was Google’s public release of what they call BERT, a methodology for natural language processing that allows you to look at large bodies of text and pull words and phrases out in the context of everything that surrounds those words or phrases. Within our team, we were able to code an algorithm that was able to, first, identify topics and classifiers from this rich body of unstructured data. Second, score each topic and each phrase on sentiment: was this a positive expression from a customer or an employee, or a negative one, in the context of everything surrounding it? And third, within each of these topics, create a short summary of what the customers or employees were saying related to that topic.
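The three-step pipeline Phil describes can be sketched in miniature. This is a hypothetical illustration only: the real system used BERT embeddings, while here a simple keyword lexicon stands in, and all topic and sentiment word lists are invented for the example.

```python
# Illustrative stand-in for the pipeline: 1) assign each comment a topic,
# 2) score its sentiment, 3) aggregate per topic. Lexicons are hypothetical.
TOPIC_KEYWORDS = {
    "billing": {"bill", "charge", "price", "invoice"},
    "service": {"outage", "slow", "speed", "connection"},
}
POSITIVE = {"great", "love", "helpful", "fast"}
NEGATIVE = {"slow", "frustrated", "terrible", "overcharged"}

def classify(comment: str) -> tuple[str, int]:
    """Return (topic, sentiment), where sentiment is +1, 0, or -1."""
    words = set(comment.lower().split())
    topic = max(TOPIC_KEYWORDS, key=lambda t: len(words & TOPIC_KEYWORDS[t]))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return topic, (score > 0) - (score < 0)

def summarize(comments: list[str]) -> dict[str, dict]:
    """Group comments by topic and aggregate their sentiment."""
    out: dict[str, dict] = {}
    for c in comments:
        topic, s = classify(c)
        bucket = out.setdefault(topic, {"count": 0, "sentiment": 0})
        bucket["count"] += 1
        bucket["sentiment"] += s
    return out
```

A contextual model like BERT replaces the keyword sets with learned embeddings, but the shape of the output, topics with sentiment rolled up underneath, is the same.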
Will Bachman 03:40
And then walk me through the steps of that, and how you started using the insights.
Phil Bellaria 03:50
So as you can imagine, trying to build this and run it on all the data that I mentioned is very processing intensive. So the first step was testing and iterating the model on smaller samples of data. We started first with employee surveys. The company, at the time, had HR employees read all the verbatim comments, so it was a very time-consuming process, and also rife with subjectivity in terms of how they classified those. We fed these into the test model; because it was a smaller sample of data, we were able to process it and come up with the topics and the sentiment. Then we reviewed it with HR business partners, in this case, and with business leaders, to make sure we were accurately capturing it. The model itself was trained on all the information in Wikipedia, but we took other information, words, and language that was more specific to this company and put that into the model so that we could refine it. So it was a very iterative approach. But over a period of about six to eight months, we all felt confident that on the bimonthly surveys, what we were producing in terms of topics, sentiment, summaries, and so forth was a very accurate representation of what the employees were actually saying.
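The validation loop Phil describes, comparing the model's labels against the HR reviewers' labels and iterating until they agree, can be sketched like this. The function names, labels, and acceptance threshold are all hypothetical, not from the actual engagement.

```python
def agreement_rate(model_labels: list[str], human_labels: list[str]) -> float:
    """Fraction of comments where the model's topic matches the reviewer's."""
    matches = sum(m == h for m, h in zip(model_labels, human_labels))
    return matches / len(human_labels)

def needs_another_iteration(model_labels, human_labels, threshold=0.8) -> bool:
    """Flag another refinement pass if agreement falls below a
    (hypothetical) acceptance threshold agreed with the business."""
    return agreement_rate(model_labels, human_labels) < threshold
```

In practice the threshold and the review sample are judgment calls made with the HR partners and business leaders, which is why this step took months of iteration rather than days.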
Will Bachman 05:33
And then how did you start using that, actually drawing insights from it, taking action on it, and so forth?
Phil Bellaria 05:43
Yeah, great question. So then we went over to a business application. Like most things, it’s important to start with a question or problem you’re trying to solve. In this case, the problem we were trying to solve was: why weren’t sales agents, on the phone or through text, pitching a particular strategic product? And when they were, how were they pitching it in a way that made it more likely to sell that product versus less likely? So again, with that particular use case, we were able to take a sample of data, only the conversations where the agent or the customer mentioned that particular product, feed it through the algorithm, and determine the words, phrases, and sentiment associated with successful sales versus unsuccessful sales. Then, with that information, we went back to partners in the sales channels to train the sales agents on the right expressions, words, and context to use when pitching that particular product.
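The analysis Phil describes, finding phrases whose presence correlates with successful sales, can be sketched as a simple "lift" calculation: the sale rate when a phrase appears, divided by the overall sale rate. This is an illustrative sketch; the phrases and data are invented, and the real system derived candidate phrases from the model rather than a hand-written list.

```python
def phrase_lift(transcripts: list[tuple[str, bool]],
                phrases: list[str]) -> dict[str, float]:
    """For each candidate phrase, compute (sale rate given phrase) /
    (overall sale rate). Lift > 1 suggests the phrase is associated
    with successful sales; < 1 with unsuccessful ones."""
    overall = sum(sold for _, sold in transcripts) / len(transcripts)
    lift = {}
    for p in phrases:
        hits = [sold for text, sold in transcripts if p in text.lower()]
        if hits:
            lift[p] = (sum(hits) / len(hits)) / overall
    return lift
```

Correlation is not causation, of course, which is why the team validated the "phrases that work" with sales-channel partners before building training around them.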
Will Bachman 06:52
That is amazing. So you were actually able, across millions of conversations, to figure out the phrases that seemed to work well, or the phrases that seemed to lead to not making a sale.
Phil Bellaria 07:08
Yeah, in fact, we even called it “phrases that work.”
Will Bachman 07:10
“Phrases that work.” Well, again, keeping it sanitized, can you share any of those phrases that worked? And what were some that did not work?
Phil Bellaria 07:22
Well, it was really combinations of things. And it’s funny, they all seem intuitive. If you’re selling, for example, a streaming product, then you want to associate it with the popular shows on that particular streaming product, or you want to associate it with things like “watch on your schedule,” and so on. It’s funny: a lot of times with these things, when the results come back, you’re like, “Oh, that makes a lot of sense,” which is validating, but it also provides a lot of data and support behind what your intuition might have already told you.
Will Bachman 08:04
I suppose you could also detect the most common objections and figure out ways to overcome them: what worked, what didn’t, and so forth. Right, exactly. Wow, that’s powerful. Now, with the recent advances with ChatGPT and so forth, is that sort of effort now easier than it was when you started in 2018?
Phil Bellaria 08:35
Absolutely. Obviously, some of it depends on what kind of arrangement you have with whoever the provider of the large language model is, OpenAI or whomever, and how you’re securing and protecting your data. I think the one trade-off now is that if you haven’t built the model yourself, you’re paying for the API and you’re paying for the processing and all of that. Whereas in the case example I gave you, we had all the data on premise, with processing capacity through that means. So it can be far simpler and quicker now, but relative to the case example I gave, it could get fairly expensive.
Will Bachman 09:27
Awesome. Now, I know that you’ve done a number of AI-related projects, both as a consultant and as an executive. Do you want to share one other case example?
Phil Bellaria 09:36
Sure. I have a good, classic AI example from the era we can all remember, pre-November 2022, before ChatGPT was launched, when we cared about driving business value through AI algorithms. This is, again, a sales channel application, where we implemented what I would call a next best action engine. The next best action engine takes the information that we know about a customer, the customer’s risk, the customer’s value, the products and services they have with us, how they’ve engaged with us in the past, and recommends the next best action to take for that customer. It could be to recommend some sort of product they have today that they haven’t engaged with recently; it could be a new product to buy; it could be an offer to keep them on an existing product that they’re likely to remove; and so forth. That AI engine has a reinforcement learning component to it. It takes the feedback from those interactions, the recommendations that the agent provides and the outcome of the call, and feeds that back into the algorithm to do reinforcement learning and continue to improve the power of the recommendations.
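The feedback loop Phil describes can be sketched as a simple bandit-style engine: recommend the best-scoring action, then fold the call outcome back into the running estimate. This is a toy illustration, not the production system; the action names and the epsilon-greedy exploration choice are assumptions for the example.

```python
import random

class NextBestAction:
    """Toy next-best-action engine with a feedback loop. Tracks a running
    success estimate per action and occasionally explores alternatives."""

    def __init__(self, actions: list[str], epsilon: float = 0.1):
        self.scores = {a: 0.0 for a in actions}  # running outcome estimates
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon

    def recommend(self) -> str:
        if random.random() < self.epsilon:        # explore occasionally
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)  # else exploit best

    def feedback(self, action: str, outcome: float) -> None:
        """Fold the call outcome back in as an incremental mean update."""
        self.counts[action] += 1
        n = self.counts[action]
        self.scores[action] += (outcome - self.scores[action]) / n
```

The production engine used far richer customer features and statistical models, but the loop, recommend, observe the outcome, update, is the same shape.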
Will Bachman 11:08
Can you tell me what the process was like to develop that tool, the next best action engine?
Phil Bellaria 11:14
Sure. Like many things, especially in a larger corporate environment, the biggest challenge is integration with existing systems and getting timely enough responses to match the needs of an operational environment. So really, a lot of the hard work was on what I would call the classic IT side: how do you build the right APIs into the existing systems that sales agents are using in this context, and then how do you make sure your SLAs, in terms of the responsiveness of the algorithm, are fast enough? Those were a lot of the efforts. The algorithm itself used some sophisticated techniques, and I’d be way over my skis if I tried to talk to them, but those are classic statistical modeling techniques. The challenge was more, like I said, on integration and timeliness.
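The responsiveness SLA Phil mentions can be illustrated with a small timing wrapper around the scoring call. The budget and function shape are hypothetical; a real operational check would track percentile latencies over many calls, not a single measurement.

```python
import time

def within_sla(score_fn, budget_ms: float, *args):
    """Run the scoring function and report whether it met a
    (hypothetical) latency budget in milliseconds."""
    start = time.perf_counter()
    result = score_fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms <= budget_ms
```

In an agent-facing workflow, a budget on the order of a couple hundred milliseconds per recommendation is the kind of constraint that drives the pre-scoring approach Phil describes next.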
Will Bachman 12:23
So this would be real time. The agent would be on the phone with someone, they would be discussing a particular situation, and the AI in the background would be looking at what that customer currently has and making suggestions to the agent. Is that how it worked?
Phil Bellaria 12:44
Yeah, much of that was what I would call pre-scored, on a nightly basis at the most frequent. Then the AI algorithm would take those pre-scores and update anything with context from the call itself, provided in the interaction with the customer. That helped quite a bit with, like I said, the responsiveness and the processing time, because much of the information was already pre-scored, and for a given customer in these particular scenarios, the next best actions were already predetermined. The only thing that was updated was the context from the call itself. So for example, if I go to the IVR and I say, “I want to disconnect,” that’s obviously important context to have as part of the next best action recommendation. Or if a customer had called earlier today and then called back again, that’s good context that you obviously want to integrate in.
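The pattern Phil describes, nightly batch scores reranked in real time with call context, can be sketched as follows. Every customer ID, action name, score, and boost here is invented for illustration.

```python
# Hypothetical nightly batch scores per customer.
PRE_SCORED = {
    "cust-42": {"upsell_streaming": 0.6, "retention_offer": 0.3},
}

# Hypothetical real-time signals from the IVR or the call itself,
# each boosting particular actions.
CONTEXT_BOOSTS = {
    "said_disconnect": {"retention_offer": 0.5},
    "repeat_caller": {"retention_offer": 0.2},
}

def rerank(customer_id: str, signals: list[str]) -> str:
    """Start from the nightly pre-scores and apply live-call boosts;
    return the top action."""
    scores = dict(PRE_SCORED[customer_id])
    for s in signals:
        for action, boost in CONTEXT_BOOSTS.get(s, {}).items():
            scores[action] = scores.get(action, 0.0) + boost
    return max(scores, key=scores.get)
```

The expensive modeling happens offline; the in-call step is only a cheap lookup and adjustment, which is what keeps the response time inside the SLA.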
Will Bachman 13:52
What were some of the lessons learned going through that? I imagine when it first started, would it sometimes suggest something that was not a very smart thing?
Phil Bellaria 14:06
Yeah. Like most things, the biggest challenge is change management, change management on the agent side: convincing them to trust the machine, measuring their usage of it, and providing the right incentives for them to use the machine and to provide feedback to it through the context of the engagement. So aside from what I mentioned earlier around integrations and so forth, the biggest challenge was getting buy-in and usage from the frontline population.
Will Bachman 14:40
And how would you do that? Let’s say you have an experienced agent, so they’ve had thousands of repetitions. They get a customer, and the system is recommending, “Oh, pitch this offer,” but the agent is thinking, “That’s a dumb idea. I know this person is not going to take this offer. It doesn’t make any sense.” Would you insist that the agent offer it, or would you give people leeway to override the system? Talk to me about that change management piece.
Phil Bellaria 15:10
Yeah, that’s a great question. We never forced anybody to take the recommendation directly. There is usually a best recommendation, a second best, a third best. So there were a couple of ways that we handled that. One was through providing options, not just one. Two was the ability to do something outside of the system. And the way we handled that is, over time, as you build enough data, you can demonstrate: here’s the outcome of the kinds of calls where people go outside of the system, or where people choose the second best option versus the third best, versus the outcome when they choose the first best. And you can demonstrate that as long as, again, your incentives are aligned, and in most cases they are, since these agents are paid on the outcome of the call, this works. If you can demonstrate with the data that this will lead to better outcomes for them, and therefore better pay, then that helps a lot with adoption. And obviously, if you can’t, then you’ve got a problem with your algorithm, and you need to adjust it and learn from it anyway. So we tried to take a frontline-agent-centric approach to how we rolled it out.
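The evidence-building step Phil describes, comparing call outcomes by what the agent chose, can be sketched as a simple aggregation. The choice labels and outcome values are hypothetical stand-ins for whatever the call-outcome metric actually was.

```python
def outcome_by_choice(calls: list[tuple[str, float]]) -> dict[str, float]:
    """calls: list of (choice, outcome), where choice is e.g. 'first',
    'second', or 'override'. Returns the average outcome per choice,
    the kind of evidence used to win agent buy-in."""
    totals: dict[str, float] = {}
    counts: dict[str, int] = {}
    for choice, outcome in calls:
        totals[choice] = totals.get(choice, 0.0) + outcome
        counts[choice] = counts.get(choice, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}
```

With enough volume, a table like this is what lets you show agents, in their own pay terms, that following the first-best recommendation tends to pay off, or tells you the algorithm needs adjusting.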
Will Bachman 16:27
For someone who is not technical, a consultant who hasn’t dealt with this sort of AI or machine learning tool before, what are some lessons learned, vocabulary terms, or tips that you have on how to communicate what you’re looking for with the technical people? Tell us a little bit about that.
Phil Bellaria 16:49
Yeah, that’s a great question. First of all, as always, start with the business problem. I’m not a technical person either, so I find I have the most success when I’m able to very clearly describe to the technical team what the business problem is that we’re trying to solve, and get them involved as early in that process as possible. Then I find I spend more of my time figuring out how to translate from a business problem that we’re trying to solve into what I would call applications, technical applications, and then let the technical folks go wild with all the amazing coding and techniques and so forth that they can use to meet the needs of those applications and ultimately solve the business problem. Where I get myself in trouble is if I try to say, “Well, why wouldn’t you use this particular algorithm?” or come in with a solution mindset, as opposed to defining the problem and asking the right questions.
Will Bachman 18:11
Give us a sense of timeline, to set realistic expectations, if you’re trying to get something like this started, say, some kind of recommendation algorithm for a call center. What are the major chunks of work? You mentioned that just getting access to the APIs of the systems is sometimes harder than the algorithm itself. Give us a sense of those timelines.
Phil Bellaria 18:33
Yeah, in this context, it was a multi-year timeframe, and some of that was because of the size of the company and the complexity of the operations and the back-end systems. So that’s, I would say, on the long end of the scale. But generally, as I mentioned, pulling in the data, creating the data environment, and doing the scoring and the algorithms and so forth, that’s maybe a six-month effort. You can do that in parallel with building the APIs and so forth, and testing that is another two to three months. Then, as I mentioned, the big thing is change management, and in that case you really need to run limited pilots, with controlled A/B tests across your population. All of that, to get enough time for statistically significant results and meaningful insights, and then to integrate and so forth, takes another few months. And then when you roll it out, you typically roll it out in phases, again because of the change management, and because, first, do no harm: you don’t want to wreck the business. So that’s what ultimately accumulates to the multi-year horizon.
Will Bachman 19:55
What sort of impact did you see from these next best action recommendations? Were they significantly better than what people’s intuition would have told them? Did you see an improvement in the business?
Phil Bellaria 20:07
Yeah, cumulatively, it was on the order of hundreds of millions of dollars. And like you alluded to a little bit earlier, it’s certainly not better 100% of the time. But part of the power is that you’re getting insights into interactions that you never had insight into before; you learn from the really good agents. So the real power is bringing the new agents up to speed much faster, and then elevating the performance of the middle tier, the middle of the bell curve. Often the lower-performing agents were also the ones that wouldn’t use the tool, and so we didn’t really see as much of an impact from them as we did from the middle of the bell curve.
Will Bachman 21:03
Amazing. Phil, for listeners that want to find out more about your practice, tell us about your firm and your website, if you want to share one, and where people can find you.
Phil Bellaria 21:15
Yeah, absolutely. Thanks for asking. I’m at cdaopartners.com, and we help mid-market telecom, media, entertainment, and sports companies extract value from their data.
Will Bachman 21:32
Amazing. And what does CDAO stand for?
Phil Bellaria 21:37
Chief Data and Analytics Officer. So one of our services is fractional Chief Data and Analytics Officer support.
Will Bachman 21:45
Fantastic. We will include those links in the show notes. Phil, thank you so much for joining today.
Phil Bellaria 21:51
Thank you for inviting me, Will. It’s been a lot of fun.