

PegaWorld iNspire 2024: Optimize Your Enterprise with Pega Process AI

Are you ready to maximize business outcomes across your organization? By leveraging the power of AI and automation, you can do just that. Join this session to learn how Pega Process AI helps monitor, optimize, and automate your processes in real time. You’ll hear from Pega experts who will share best practices on Process AI to help your organization achieve operational excellence, customer satisfaction, and a competitive advantage. You’ll also see a live demo of Process AI in action and learn firsthand how you can apply it within your own organization.

Welcome to PegaWorld 24. I'm Pete Brown. I'm a director of product marketing at Pega, and I'm joined on stage by my good friend Andy, who's senior product director for Process AI. And we're going to talk about optimizing your enterprise with Pega Process AI. So, were any of you in the previous session in this room with Andy covering the ten use cases for AI? Can I see a show of hands? A couple? All right. So what we're going to do today is dive a little deeper into how Process AI delivers a return to your business. I'm going to start with a quick overview of what Process AI is, and then I'll turn it over to Andy.

And we're going to go through some use cases and look at how they can deliver value to your organization. So starting with Process AI and what it is. I'm a marketer. I like to think in analogies. So I was thinking, what is a good analogy for what AI is doing for our knowledge workers today? And I think I may have been a little aggressive in my choice, but I thought about the wheel. Um, because if you think about what the wheel did for humans as a species when we invented it, it allowed us to expand our capacity to move things. It allowed us to more effectively get from place to place, to do more work more efficiently. Some of the greatest things that we have done as a species have come out of this development, and it's really about making our muscles have more capacity to move more things, to do more things.

And AI is much the same way. For those of you who were here a little earlier, you might be familiar with our left brain, right brain discussion. If we think about our right brain, our creative side, that's the GenAI capabilities. We've gone through a lot of those today in the keynotes with Blueprint, Autopilot, Pembridge. All of these announcements that we're making around GenAI are really that creative, right-brain side. But Pega Process AI, that's your logical brain, and it's really built off of something that Pega's been doing for a long time, which is predictive and adaptive models, machine learning, and natural language processing. Do we have any Customer Decision Hub clients in the room? Anybody using Customer Decision Hub, show of hands? Don't be shy. Okay, well, we've got Process AI built on that same type of technology, where Prediction Studio is the same Prediction Studio you might be used to if you're using Customer Decision Hub.

And that brings together your analytics. This is where we're going to focus today: on this logical side of your brain, and how that can be driving additional ROI by driving capacity into your workforce and driving efficiency, thereby not only saving money but allowing you the opportunity to grow by doing more. And it starts with our Build for Change model. So we're familiar with the analyze, automate, and optimize functions of what Pega does in terms of your delivery methodologies. We talked a little bit in the last session about Pega Process Mining. It complements Process AI really well because it's a find-it, fix-it kind of model. Process Mining will dig in and find, hey, this step is slowing you down, or this step is heavily manual. Then you can go to Process AI and say, I want to automate that with a decision, and then optimize that workflow so you can move through your cases in a more effective and efficient manner, using less time for each of your resources and thereby increasing capacity.

So where do we use Process AI in our cases? And this could be any case. You can apply it in the steps and stages of cases you work on a daily basis. When you're kicking off a case, event streaming is part of Process AI. So you might have a Kafka stream, for example, bringing data in with client details, what they're calling about, et cetera. And with natural language processing and an event stream, you can automate that case creation. In your next step, you might have another decision where you're classifying the case in terms of what type of case it is and where it's going to route, and intelligently route it to the right team. And as you move on, you're constantly using things like predicting outcomes, which can include whether or not you're going to meet an SLA, or are at risk, potentially, of paying a fine for missing an SLA, and working that through your model. The models are constantly feeding back and learning about the different outcomes, the different steps and stages, what teams are optimal to work on things, and what their workloads are like, so you can guide people to make better, more effective and efficient decisions.
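The intake flow just described, where an event arrives on a stream, a model classifies it, and a case is created and routed, can be sketched in a few lines. This is a minimal illustration with made-up field names, keywords, and queue names, not Pega's actual API; the keyword lookup is only a stand-in for a trained NLP model:

```python
import json

# Hypothetical keyword-to-queue table standing in for an NLP classifier.
KEYWORD_ROUTES = {
    "refund": "billing-team",
    "fraud": "fraud-team",
    "password": "service-desk",
}

def classify(description: str) -> str:
    """Stand-in for an NLP model: route by keyword, else to triage."""
    text = description.lower()
    for keyword, team in KEYWORD_ROUTES.items():
        if keyword in text:
            return team
    return "general-triage"

def create_case(event_payload: str) -> dict:
    """Build a case record from a raw inbound event, classifying as we go."""
    event = json.loads(event_payload)
    return {
        "customer": event["customer_id"],
        "description": event["description"],
        "assigned_queue": classify(event["description"]),
    }

case = create_case('{"customer_id": "C-1", "description": "Refund for a duplicate charge"}')
print(case["assigned_queue"])  # billing-team
```

In a real deployment the payload would come off the event-stream consumer rather than a literal string, and the classifier would be a trained model rather than a lookup table.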

And we do it wrapped up in a lot of familiar technology. The decision wrapper, you might be familiar with that from Customer Decision Hub, but it allows you to test and hot swap your decisioning strategies within that case, to say, well, maybe I want to predict that there's an abandonment on this case, the customer is going to go away, or perhaps predicting the outcome is more effective. So you can swap models in and out. You can have advanced decisioning, so you can combine your deterministic and non-deterministic models together into one decision strategy. And you can integrate with other models, so if you're in a business where you have fraud models that are externally managed by your data science team.

With APIs, you can pull those in. You can triage events as they come in; I mentioned that earlier. And I'm going to skip over to decisioning ops, too, because that allows you to inject your AI into your cases with a well-governed model. That's important because you don't have to build a case from scratch and wait for your model to learn. You can use existing case data to train your model and get up and running with this much quicker, and it allows you to do it at scale, across case data, across your event streams and third-party applications, which is great. For the quickest results, we find that most clients start initially by training on case data, because it's there and there aren't any hoops to jump through in terms of integration. It's easy. So what does it look like in a workflow?

You can see we have a process here, and we have things moving through the decisioning, with the user history telling the AI about what it finds in there. In terms of traditional process improvement, you're identifying patterns, analyzing, and making changes. Whereas with AI, it's looking across all the dimensions of those cases to determine which cases should be resolved automatically, and we can move those through that auto-resolve queue to complete much quicker. It's asking which work queue would resolve it best. So this is thinking about intelligent routing to different teams, to say, hey, I've got a specialist team for this particular customer inquiry, I'm going to move it to that specialist team and have them attack it and get it done quicker. Is it going to make the SLA? And this is one of my favorites.

We have a number of clients using this, and I'll tell you a couple of stories in a minute about it. But SLAs, there are a lot of different ways to think about them. There can be penalties if you miss one. In some cases, you may have a contract where, if you exceed it, you have to meet that new, higher level of performance the next year. So if you're managing to those, being able to optimize your delivery times is really important. And then finally, abandonment. That's great for customer service, and it applies a lot of times to some back-office processes too.
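As a rough sketch, the decision types just listed, auto-resolve, best work queue, and SLA risk, might combine like this. The score names, thresholds, and queue names are illustrative assumptions, not anything from Pega's product:

```python
# Toy decision strategy: auto-resolve when the model is confident enough,
# otherwise route to the queue predicted to resolve the case best, and
# flag cases at risk of missing their SLA. All numbers are assumptions.

def decide(p_auto: float, queue_scores: dict, p_miss_sla: float) -> dict:
    decision = {"sla_at_risk": p_miss_sla > 0.5}
    if p_auto >= 0.95:
        decision["route"] = "auto-resolve"
    else:
        # Route to the queue with the highest predicted resolution score.
        decision["route"] = max(queue_scores, key=queue_scores.get)
    return decision

print(decide(0.97, {"specialist": 0.8, "general": 0.6}, 0.1))
# {'sla_at_risk': False, 'route': 'auto-resolve'}
print(decide(0.40, {"specialist": 0.8, "general": 0.6}, 0.7))
# {'sla_at_risk': True, 'route': 'specialist'}
```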

So as we're thinking about our workflows, I like to think about it in terms of business rules and AI. They're complementary, but we have built a lot of business rules over the years that are very much if-this-then-that. They're black and white; they don't evolve. With Process AI, you're adding that evolution into how your business is changing over time, so you're not having to rewrite business rules and maybe pull on a thread that does a little bit more to your application, in terms of the rules' impact, than you might have anticipated. So I'll talk through a couple of stories, and then I'll turn it over to Andy to go into some of the deeper details of how this provides value. We're working with a US government agency that has inbound reports from multiple channels that need to be triaged and routed to the right people. It was a manual process for a small team of people to weed through these reports and figure out, okay, maybe this one is just somebody having a little fun, versus this one we should look deeper into and route accordingly. Well, they've applied Process AI to the intake process from an event streaming standpoint, as well as added the decisioning in terms of where to route things. We can't quote exact numbers for the client's privacy, but it was a meaningful amount of manual work that they eliminated. On top of it, the cases, when they came in, were classified

15% better, which means 15% more of those cases got to the right queue, to the right people. And 30% of the cases that came in were auto-resolved, so they didn't even have to touch them. So that really reduced the amount of time people were spending on these cases and improved accuracy. The next was a technology firm that received thousands of documents a day related to their legal department. These could be anything from affidavits to other discovery documents coming in, and they needed to analyze them and route them to the right team. The result was that they increased their case processing capacity within their test window before making a decision to roll out. They looked at a particular region over six months, and it was a three-times increase in the capacity to triage and route these documents. So you can see the scale there for just a short period of time. And finally, we'll talk about a US healthcare payer.

This is where the SLA stories I keep going back to come in. They're doing a million predictions a day. So if you're thinking about scalability, they happen to run on Pega Cloud, and that allows decisioning to scale up massively; that's where that capacity increase can come from. They're looking at their pending claims across six states where they have very strict SLAs that must be met, otherwise there's a significant penalty. So comparing Process AI versus their existing process, which is very human-driven: by augmenting the humans with Process AI, they showed a substantial reduction in late payments to these six states and eliminated a significant amount of penalties paid. So that drove a really nice financial return.

But also, as an added benefit, it increased the capacity of their claims processing team to get work done, and it eliminated overtime. So from another standpoint of value, you've got happier employees because they're going home on time. And with that, Andy, I'm going to turn this over to you to go deeper into these use cases. All right. Thanks, Pete. So Pete took us through what Process AI is and how it works. I'm going to go through and provide a little more quantification of the benefits you can achieve with Process AI. And if there's one thing I'd like you to take away from PegaWorld, at least from this session, after the Black Pumas tonight, after we're unwinding, after a great day two tomorrow, it's this: think big when it comes to applied analytics. It's been interesting to us because, as you know, the generative AI side of the house is pretty new.

But the traditional AI that we're talking about as part of Process AI is tried and true. And as some of our clients have started to adopt it, the ROI has been astounding, so astounding that they questioned the results, and that actually slowed things down. So the benefits here are potentially enormous for your organization, and it's definitely worth a look if this is something that you're not doing today. Now, before I move on, just by a show of hands, are there any data scientists in the room? Okay. Fantastic. Welcome.

What about people who are charged with getting artificial intelligence into their business processes and into their organization? Okay, that's about what I expected, and that's really good, because hopefully this will speak to everybody here. So as we dive into the benefits of using Process AI, we're going to look at it in two different dimensions. The first dimension is time to value, or time to production, time to payback, and whatnot. And the second is ongoing return on investment. Now, for the time to value, we're just going to use some conceptual examples. When we get into the ongoing ROI, we're going to do math.

So if you fall asleep, I won't take any offense. And if I say something wrong, please help keep my math accurate, because I haven't done that for a long time. So, moving right along. For the folks outside the data science community, you're probably not aware, but there's a process for getting a brand new predictive model into production called model management. And model management traditionally involves a lot of different steps and can take quite a long time. Just to break it down a little bit: everybody's process is going to vary some, but there are typically four key steps. The first is getting all the data together. The data scientists in the room will tell us that typically that is a very long pole.

Then there's the development of the model itself; there are lots of tools out there that can make that come pretty quick, at least for the initial generations of models. But there's always going to be tuning, and time invested to make sure the model can be put into the target system you're focusing on. And then there's the whole planning and review cycle, then the actual deployment, and then ongoing monitoring. Now, things have gotten a little better over time, but for a lot of organizations and a lot of industries, getting the initial model into production, from the time it's been created all the way into your systems, can take on the order of weeks and months. And updating it over time is typically done relatively infrequently, on the order of months or quarters, again because there's a lot of effort involved. And the reason for this is there's just friction in our organizations, right? There are data silos, there are legacy systems, there are a lot of hurdles you have to overcome. Now, one important sidebar here is the review process.

Depending upon what industry you're in or what type of data you're using, that review burden could be much higher or lower. If you're starting to use personally identifiable information or sensitive data, that can make the review process take a little bit longer. One of the cool things with Process AI is that oftentimes that may not be needed. So when you go home and you're thinking about putting this to work for you, I would take a look at the data that's required and make a decision: can I get a good model that doesn't involve any sensitive information? Or do I know it might be a little harder to get this through my governance process by using the more sensitive information that's also available to you? Make sense? Okay, great. So this chart shows us what performance for a traditional predictive model looks like over time.

On the y-axis we have performance; on the x-axis we have time. And as you can see, from the time a model is ready to the time that model is deployed, the performance can decay. So this is kind of a double whammy, right? Because one, I'm not able to take advantage of the better performance of that fresh model that's available at time zero, but two, just to add insult to injury, by the time that model gets into production, typically there's a newer model out there waiting in the wings. So there's some cost involved here, and you're also taking a performance hit from an analytics perspective. Now let's contrast traditional model management with what we call adaptive model management. At Pega we have models called adaptive models. The CDH folks in the back of the room are familiar with these, and what they are is self-contained, self-learning models that operate in real time.

Now, the way a traditional model works: the math is calculated, the model is put into production, and it's generating scoring code, and that code is providing you with answers in real time. What's not happening is that the model itself is not being updated in real time. With adaptive models, the model is automatically being updated every two hours, so you have a fresh model that's making sure the performance is always going to be there. Now, from a deployment perspective, you also get a lot of benefit. The time to value is much, much shorter using adaptive model management, because it's all part of a closed-loop system that's part of Pega. All the data capture, the initial model creation, even the review steps can all be managed within Pega, literally within a matter of minutes. If you don't believe me, afterwards, when you have time, go down to the Innovation Hub and check out the Prediction Studio booth to see how it works.

Or go to the Process AI booth and we can take you through it. Now, of course, in real life it's not going to take minutes. It's going to take more time, because there will be a review process, there's going to be a governance process, your data science team will be involved, and you'll want to do some fine-tuning. So there will be more time involved. But instead of looking at months, we're looking at, at most, weeks, and typically days, to get this into production. I saw a question in the back of the room. One point: we'll take questions at the end, and when we do, if you can come up to the mic.

But for right now, I don't want to make you come up front; if you can't, I can just repeat the question. [Audience question, partially inaudible, about how adaptive models learn.] So I'll get to that. Okay. So remember the sawtooth that we looked at for the traditional predictive model. The way the adaptive model works is that initially, instead of having to wait through that long deployment period, the model just starts right away. And during the learning period of that model, it's being updated every five minutes.

So every five minutes the model itself is changing. Now, once it's achieved its desired level of performance, the default setting is that the model itself updates every two hours. It can be more frequent or less frequent, and two hours is, for most applications, probably a conservative interval for it to be updated and guaranteed fresh. But whether it's two hours, two minutes, or 20 hours, it's still much more frequent than once a month or once a quarter. The other point that's important to call out here is that typically a traditional predictive model is going to perform better than an adaptive model when it's first created. But again, because the performance of that model is declining over time, sometimes the adaptive models will do better, and you have the benefit of them being easier to create and less costly to deploy.
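The idea of a model that folds each new outcome in immediately, rather than waiting for a redeployment, can be illustrated with a toy example. This is not Pega's adaptive model implementation, just a minimal sketch of online updating with made-up queue names:

```python
from collections import defaultdict

class AdaptiveResolutionModel:
    """Toy online model: a per-queue resolution rate updated on every outcome."""

    def __init__(self):
        self.seen = defaultdict(int)
        self.resolved = defaultdict(int)

    def update(self, queue: str, was_resolved: bool) -> None:
        """Fold one observed outcome into the model immediately."""
        self.seen[queue] += 1
        self.resolved[queue] += int(was_resolved)

    def score(self, queue: str) -> float:
        """Current estimated resolution rate for a queue."""
        if self.seen[queue] == 0:
            return 0.5  # uninformed prior before any evidence arrives
        return self.resolved[queue] / self.seen[queue]

model = AdaptiveResolutionModel()
for outcome in [True, True, False, True]:
    model.update("specialist", outcome)
print(model.score("specialist"))  # 0.75, refreshed without any redeployment
```

Real adaptive models are far more sophisticated, but the contrast with the traditional cycle is the same: the estimate moves as soon as the outcome is observed.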

So the key point here is that sometimes fresh beats a traditional predictive model, even if it's not initially quite as performant. Make sense? Okay, great. I'll tell you what, I'll pause for a second. Are there any questions or anything else before we move on? Okay, so we're shifting gears a little bit. We've been talking about adaptive models, and I know some folks are familiar with them.

If you're not and you'd like to learn more, please come downstairs to the Innovation Hub so we can dive into it a little deeper. But we're going to shift gears and talk about how easy it is to deploy other types of models within Pega. With Pega, you can use a third-party model of your choice that's written in Python, written in SAS, written in R, and import it; we can tie to cloud providers like AWS and Google. But Pega also provides predictive models and natural language processing. Natural language processing, the acronym for it is NLP, and what it's doing is using machine learning to evaluate text. That text could have been translated from voice in real time, but it's interpreting that text to provide information about what it's evaluating. And we're going to put this to work a little bit later with some of our examples.

But just for right now, I want to show how easy this is for you to do. If you're familiar with Pega, this should look familiar to you, right? We have a claim process, and I'm going to step over here so I can see this with you guys. As you can see, there's an opportunity for us to apply a model to help with fraud evaluation. But here what we're doing is taking an existing variable, the accident category; this is an auto claims example. And we're going to create a natural language processing model behind it, so that we can classify the case as a bodily injury case or a property damage case and use that for routing the case, so a person doesn't have to. And this is it.

This is quick. I check the box that says use AI, I point it to the description, this is the description of the case, and I set the confidence interval, which is just, you know, the quality bar for the output of the model. And it's done. Now we show that this field is being predicted. We save that and go over to Prediction Studio. Here we have a prediction, and when we open it up, we see the target variables. So again: property damage, bodily injury, uninsured, or something else. And I haven't trained this model yet.

The natural language processing models require some training, but I have gone ahead and uploaded some examples, and you can see what this looks like: I was using my leaf blower, and I blew something onto my neighbor's car and it scratched the paint. I was in some other form of accident. And this is the type of data. This is all hypothetical, right? It's meant to be a simple example, and it may not represent real life as well as we'd like. But all I have to do now is build the model, and then, as I enter information, I can see what the output is.

And in this case, I can't read that from here, but I was at a stop sign, and it tells me the accident category was bodily injury, with a confidence score. And I put in another example and it gives the same kind of output. So why is that important? It's important because now the claim processor is not having to read through that description, right? It's being done automatically for them. They still may want to take a look at it, but generally we think it's going to make their jobs a little bit easier.
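A toy stand-in for what the model in the demo is doing, scoring an accident description against categories and reporting the winner with a confidence, might look like this. The keywords and scoring are illustrative assumptions; a real NLP model is trained on examples, not keyword-matched:

```python
# Toy text classifier: score each accident category by keyword overlap
# and return the best category with a crude confidence. The categories
# mirror the demo; the keyword lists are invented for illustration.

CATEGORY_KEYWORDS = {
    "bodily injury": {"injured", "hurt", "hospital", "whiplash"},
    "property damage": {"scratched", "paint", "dented", "windshield"},
    "uninsured": {"uninsured"},
}

def classify_claim(description):
    words = set(description.lower().replace(".", "").split())
    scores = {
        cat: len(words & keywords)
        for cat, keywords in CATEGORY_KEYWORDS.items()
    }
    total = sum(scores.values()) or 1  # avoid division by zero
    best = max(scores, key=scores.get)
    return best, scores[best] / total

label, confidence = classify_claim(
    "I was using my leaf blower and scratched the paint on my neighbor's car."
)
print(label)  # property damage
```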

Does that make sense? Okay. So now we have a new model with a variable that can be used as part of this case, and I can use that to auto-route the cases to the right teams within this claims case. All right. Is everybody still with us? I know this is maybe a little deeper than some of the sessions we have. Okay, great. So we're going to shift gears from the time to value and start talking about ROI.
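Once the category is predicted, the routing itself is ordinary conditional logic. A sketch, with hypothetical queue names:

```python
# Route a case on its predicted accident category. Queue names are
# invented; the fallback sends anything unrecognized to human triage.
ROUTES = {
    "bodily injury": "bodily-injury-specialists",
    "property damage": "property-damage-team",
    "uninsured": "uninsured-motorist-team",
}

def route(predicted_category: str) -> str:
    return ROUTES.get(predicted_category, "general-claims-triage")

print(route("bodily injury"))  # bodily-injury-specialists
print(route("something else"))  # general-claims-triage
```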

And we have a little bit of an alphabet soup here. We're not going to dive into financial accounting and talk about EBIT and all this crazy stuff, but we are going to use another form of accounting called activity-based costing. Really, at its core, all this is about is understanding what the cost drivers are for various activities, using that to determine how much money I'm spending on certain things, and doing a before-and-after analysis. If it takes me a certain amount of time, and there's a certain rate involved, and I'm using certain resources, I can do that kind of comparison. Now, something else that we've found to be helpful for our clients in terms of how to think about this is to split it into two different swim lanes. We call this the two E's: efficiency and effectiveness. The Venn diagram indicates there is some overlap between these, but just as a rule of thumb, efficiency is usually about reducing time or cost.

And the typical efficiency KPIs are some of the ones that we see here. Again, there could be overlap, and your definitions might be different, but effectiveness is typically harder to quantify. It may not be quite as black and white; typically it's looking at making bigger or different changes that aren't directly associated with reducing time and cost. That's a simple example definition for now, but hopefully it's going to be helpful, because we want to look at both of these separately. So first up, we're going to use a slightly different claims case, just another example. But we're going to start with the efficiency side of the house.

So, the time and cost reduction. We've done an analysis; if you haven't seen Pega Process Mining already, that's a great tool for doing this type of analysis, and it does it for you automatically. As part of the analysis, it shows that 12% of the cases that have been assigned require some form of rework. So to help quantify this, we've made some assumptions. And again, we know these assumptions are conservative, or mainstream, and of course your cases and your mileage will vary here. But hopefully this will be useful for us.

And I'm sorry, we should have said that math is required for this session in the advertisements for it. But our assumptions are pretty simple. The assignment step takes five minutes; if it's reworked, it takes eight minutes. And we're looking at a person who earns $45,000 a year, which, fully loaded, is about $60,000. We won't go through the math, but that works out to about $0.50 a minute. Now, what that means is that with no rework, in a perfect world where that 12% was 0%, this activity would cost $2,500 for every 1,000 cases processed. Now, with rework, we're having to process 120 cases at eight minutes, times $0.50, and our cost goes up to about $3,000. So this is material, right? And I think our brains are wired to think: it would be great if I could get rid of that rework and save that extra $500, and everybody's happy, right? Just to do a gut check, is everybody still with me here? Are we tracking? Okay, good. I know it's after lunch.
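Written out, the activity-based costing arithmetic from this example, using the speaker's assumptions (five-minute assignment step, eight-minute rework, about $0.50 a minute, 12% rework across 1,000 cases), is:

```python
# Before-AI cost per 1,000 cases, using the speaker's assumptions.
RATE_PER_MIN = 0.50   # ~$60k fully loaded salary works out to ~$0.50/min
CASES = 1_000
REWORK_RATE = 0.12

base_cost = CASES * 5 * RATE_PER_MIN                       # 5 min per case
rework_cost = int(CASES * REWORK_RATE) * 8 * RATE_PER_MIN  # 120 redone at 8 min

print(base_cost)                # 2500.0, the "perfect world" cost
print(base_cost + rework_cost)  # 2980.0, roughly the $3,000 quoted
```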

So now let's put Process AI to work. Remember that natural language processing example we showed? Instead of the person having to read the description, which in our examples is really short but typically is going to be much longer and much more detailed, the initial step, instead of taking five minutes, only takes two minutes. Same thing with the rework step: it's reduced by about three minutes. So we've reduced the time for these steps pretty meaningfully. Now, the natural language processing models, and these are good numbers, right? A well-trained natural language processing model for this type of use case, you should expect to see well north of 90% accuracy.

So we've assumed that the accuracy of the model itself is 96%. And of course it's being paired with the actual person that's using it, so the accuracy is going to be higher, because he or she is able to add their own knowledge onto it, and the overall accuracy is 98%. What does this mean? It means that now, instead of 120 claims having to be reworked for every 1,000, only 20 are. Right? So that's just 100 cases, what's the big deal? Well, the big deal is the bottom right: because we've reduced the processing time so significantly, instead of $3,000 for every 1,000 cases, we've reduced this by two-thirds.
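Worked through with the speaker's after-AI assumptions (two-minute steps, rework down to five minutes, and only 20 reworked cases at 98% combined accuracy), the arithmetic is:

```python
# After-AI cost per 1,000 cases, using the speaker's assumptions:
# steps drop from five to two minutes, rework from eight to five,
# and 98% combined accuracy leaves only 20 reworked cases.
RATE_PER_MIN = 0.50
CASES = 1_000
REWORKED = 20  # 2% of 1,000

with_ai = CASES * 2 * RATE_PER_MIN + REWORKED * 5 * RATE_PER_MIN
print(with_ai)  # 1050.0, roughly the $1,000 per 1,000 cases quoted
```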

So we're down to about $1,000 per 1,000 cases. This is a material improvement from an efficiency perspective. And again, whether it's 60% or 40% or 30%, there are some big numbers out there to be had. So I'll pause for a second and see if there are any questions before we shift gears. Yes, ma'am. So, this is really easy to inject into an existing application; I think that's really one of the beauties of Process AI. And the way we do that, and I'm sorry, I'm not going to go back to the demo, but you can check it out downstairs at the Innovation Hub: you'll recall there was that variable that was just a drop-down list or pick list, and when we clicked predict with AI, that automatically turned that variable into a predicted field. And once that's predicted, I can do anything I want with that variable.

Going forward, in this example we're using it as part of our routing logic for this case. So we're saying that if the accident category equals bodily injury, you go to the bodily injury specialist; if the accident category is uninsured, it goes to the right team for that. Internally, we use this for things like bug routing and case routing. It's a really tough problem, because we have so many different parts of the product. But for something where you're trying to get to the right team or the right place, and you've got ones or tens of choices, it's a really good-fitting application. And if I can jump in: part of it will depend on the version of Pega you're running. With each release, of course, we add more prediction types and it gets more valuable. So ideally you're going to want to be on 24.1.

But Process AI has been available since 8.6. We'd love to see you on 8.8 or '23 to really get the value out of what Process AI can offer and get the support level you need. And the way I've described it has been available since 8.8, so it's been out there for a few years. Yes, ma'am. [Audience question, partially inaudible, comparing predictive AI to the standard routing capability.] Okay, so the comment was that the existing routing feature that's available within Pega does pretty well. Yeah, it's straightforward to do this.

And so where Process AI would still help in that scenario is that it can reduce the review time that's needed to determine the right place to route it. I would like to better understand your point, so let's either see each other right after this, or I'll meet you down at the Innovation Hub. [Follow-up question, partially inaudible, about the difference between the existing feature and how Process AI enhances it.] Okay. Yeah.

So the question is: what's the difference between how I normally route things and what Process AI adds? The assumption, and I may not have stated it explicitly, is that in this case a person is doing a review and then determining the routing. As part of their screen, they see a description, they read it, and they say, "Oh, this is bodily injury," and then the case gets routed to the right team. That part works without Process AI. What Process AI does is, instead of that person having to read it, it says, "This case is a bodily injury case, and I'm 96% confident of it." The person would still probably spend some time skimming the information, but because the model is accurate, they can just go ahead and click and get it to the right place. So one benefit is that the person's time is reduced pretty substantially.

But it's using the same routing feature that's been part of Pega for a long time. The second part is that we're assuming the model plus the person does a better job of routing to the right place. Pete shared an example earlier where an organization was using core Pega routing to get things to the right place, but people were making mistakes and cases were getting hung up. [Audience question, partially inaudible, about selecting the actual accident category.] That's right. I showed that I had some training data loaded up.

So the model was taught, and once the model is taught, it just runs automatically. What I would like to do, because I have another whole section to cover, is hold off on questions for the time being. We got started a little late because lunch ran a little long, so I'll move on, and then we'll come back and I'm happy to answer any questions after this. But thank you, the questions are great. All right. So we're going to shift gears from efficiency, where we reduced cost by reducing the time it takes to evaluate each individual case. That was happening in real time, and we made the routing more accurate, which also reduced our investment further.

So we're going to shift gears and look at the effectiveness side of the house. We're making a couple of assumptions here. First, with current methods this organization is capturing 50% of the fraud that is introduced into its systems, and the fraud rate within this industry is 8%. Depending on your industry, your fraud rate could be substantially higher or a little lower; this is just meant for illustration. The other assumption we're going to make is that the average claim amount in this industry is about $500. We're going to ignore the efficiency elements, but just like with the prior example, those same types of gains would be made.

It's worth pointing out that a fraud review person might be more expensive than the person doing the routing work, so there could be some significant savings to be had there. And if I have a high degree of confidence that a claim is not fraudulent, I might be much more willing to straight-through process it, so I can save even more time and increase my processing speed. So now let's look at applying Process AI. In this case, we're going to use a real-time fraud prediction. Your organization probably already has models in place, and you can also plug in churn or other types of models for the same kind of pattern. But we're saying that the real-time fraud prediction model is going to be 85% accurate. Now, in real life, chances are you're going to have several models working: one might screen and flag for fraud.

You might have another model that's assisting a fraud investigator in doing their job, but to keep things simple, we're just saying one model. It doesn't sound like a huge increase; I think our brains are wired to think, "What's the big deal?" For every 1,000 cases, instead of catching 40 of those 80 fraudulent cases, I'm capturing 85%, which is 68, so it's only 28 more. But we forget that if the average cost of fraud is $500 per fraudulent case, this means I'm capturing $14,000 worth of fraud in every thousand cases that I was missing before. Does that make sense? Okay.
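As a sanity check on those numbers, here is the arithmetic from this example worked out in plain Python. The 8% fraud rate, $500 average cost, and the 50%-to-85% capture rates are the session's illustrative assumptions, not benchmarks:

```python
# Back-of-the-envelope version of the numbers above: 1,000 cases, an 8%
# fraud rate, $500 average cost per fraudulent case, and a capture rate
# going from 50% (manual) to 85% (with the fraud prediction model).
cases = 1000
fraud_rate = 0.08
avg_fraud_cost = 500

fraudulent = cases * fraud_rate              # 80 fraudulent cases
caught_before = fraudulent * 0.50            # 40 caught today
caught_after = fraudulent * 0.85             # 68 caught with the model
extra_caught = caught_after - caught_before  # 28 more

# $14,000 of previously missed fraud captured per 1,000 cases.
print(extra_caught * avg_fraud_cost)
```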

And this is not meant to say this is how it will work for your organization; it's just meant to provide an example that you can mirror when you go home, to see how this might work for you. We talked about some of the real-life elements, and of course it's going to be a little more complicated; this is just meant to be illustrative. So let's summarize and wrap things up. Before Process AI, manual claims assignment with rework cost $3,000 for every 1,000 cases, and we were catching 50% of the fraud cases. After Process AI, I'll just boil it down: we've saved $16 a case just by applying these two models directly to an existing business process.

We've reduced the cost per case by $2 from an assignment perspective, and we've reduced our fraud expense by $14 on average per case. So there's a pretty substantial benefit to be had here. There are lots of other elements, too. We used a synthetic data set with our Process Mining application to provide a little more context, and what we saw is that this particular data set showed a reduction of about 25% in average handling time. If you're in a business where your teams come in on shifts rather than running 24/7, there can be substantial queuing times, so just by being able to get things through more quickly, you can see a lot of other benefits as well. So, things to take away.
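Putting the two examples together, the per-case wrap-up numbers reduce to the following, again using only the session's illustrative figures:

```python
# Wrap-up arithmetic from the session's illustrative figures, per 1,000 cases:
# assignment cost went from $3,000 to $1,000, and an extra $14,000 of fraud
# was captured.
cases = 1000
assignment_saving_per_case = (3000 - 1000) / cases  # $2 per case
fraud_saving_per_case = 14000 / cases               # $14 per case
total_saving_per_case = assignment_saving_per_case + fraud_saving_per_case
print(total_saving_per_case)  # 16.0 -- the "$16 a case" quoted above
```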

This is not rocket science. The AI behind the math we're talking about here is tried and true; it's been around for years, and it's been used by organizations like T-Mobile, who we heard from earlier today, as part of our Customer Decision Hub. There are two key points I'll leave you with. First: start small. Pick a team that you know would do a good job with this.

Pick an area where you know you have a problem, get some early wins, and then use that to grow interest and demand for this throughout the rest of your organization. The second part, and this is something we're seeing from experience, is make sure you get a good "before" picture. If you're going to go do something incredible, you want a good level set to start from so you can quantify the results you're seeing that much better afterwards. So again, the key point I want to leave you with after all this is: expect big things from applying artificial intelligence to your business processes. We've seen it, it's supported by some of the surveys coming out in the industry now, and it's right there for all of us. Now, just as we start to wrap things up: Process AI, as Pete said, has been around for a while.

There's tons of information out there. We've got a video, and there are a couple of Academy missions I'll point out: one is called Process AI Essentials, and the other one that's very applicable is Decision Management Essentials. These are all available at Pega Academy. There's documentation, and there's even a sample application that you can upload just to get a better handle on how it works and what it does for you. So with that, we'll start to wrap things up. We're happy to take more questions.

Of course, all the materials will be made available to you after PegaWorld; actually, some of them might be made available tomorrow. But if anybody would like a copy of our presentation, please just stop by and see Pete or me afterwards. And with that, we're happy to take any other questions. If you're close to a mic, we'd appreciate you coming to the mic; if you're not, I'll repeat the question. So we'll start back here. He's at the mic now, so we'll let him go first. Sorry, go ahead.

Yeah, my name is Kiki. One quick question on adaptive models: how do we put guardrails around the data that's feeding in? For example, it might feed in incorrect decisions as well, right? At what point do we say, "Hey, this is the right decision and I want the model to pick it up for the future," versus stopping some of the cluttered data? Okay.

I think I got most of it, and we can talk more after. By default, the adaptive model is going to evaluate every bit of data that's available as part of that application, as part of that closed-loop automated process. It will look for correlations in the data, and it will not use data that's highly correlated. It will determine which variables are highly predictive and which ones aren't; if variables aren't predictive, they won't be used. And part of the data scientist's assessment, which is pretty easy, is that there's bias checking available, and if a variable is selected that you just don't want to use, you can click and exclude it. So over time, it continues to evaluate all the data that's available, and if something becomes predictive it gets added to the model, the weights of the predictors get changed, and so forth.

So it's pretty cool how it works, and I'd invite you to come see it in action downstairs. Thank you. Is there a question in the back? "I have a question." Okay, is there anybody waiting at a mic? Go for it. All right.

So if there are other models within the organization, how easy is it to integrate the Pega models with those other models? So typically there are two different things you can do. One: if you have other models in the organization, depending upon the type of model, we can point to them and pull their scores, the output of those models, in real time into the cases, just like we showed with the example, treating that as a dynamic variable that improves our decisioning. And for a lot of model types, we can actually import them into the Pega runtime and have them be executed and managed entirely by our ML Operations workbench. Thank you. Sure. All right, we've got time for one more question, and then Andy and I can talk. We can answer any question you have, but we'll take one more question and then we'll wrap it up.

Okay, thanks. So you showed on your slide that during rework from a fourth stage, you can go back to any earlier stage. What kind of coding do we need for the conditional association to a specific task, so that we're not starting a stage over rather than picking up in the middle of it, given the data and the kind of rework required? What kind of coding, if any, is required? So the question is: what kind of coding is required to effect this change?

And the assumption is that routing logic has already been baked into the case; my note to self is to make sure that next time we provide more detailed assumptions on our slides. We've assumed that routing logic is already in place, but the decision is being made entirely by a person. So it takes that person more time, and the people doing that job aren't as accurate as you'd like. You may not even know that; it may be hard to see without using something like Process Mining, but that's also part of our assumption. In our use case, we assumed no change to the routing logic other than using a machine learning model and a natural language processing model to provide a recommendation to the user, so that they can spend less time reading the description.

So we're saving time, and that model, and we see this in practice, those models can be very, very accurate, does the job better, especially when combined with the person. So there's no change in the routing logic; it's just a change in the information being provided to the person doing that work manually. Makes sense. "And what kind of SLA impact? Because the second time through, you probably have to have a different SLA."

I guess that would depend on the use case. What I would expect is that with this in place, you can meet a higher SLA; but typically this would just help you meet the SLAs you already have configured. Okay. Yeah, sure. Thank you. All right. Thanks. Thank you all.

And we can take some more questions offline, too. Thank you.

