PegaWorld | 49:21

PegaWorld iNspire 2024: How To: Measuring (and Improving) The Value of Your Next Best Action Program

An AI-driven customer engagement program, powered by Pega Customer Decision Hub™, can generate a ton of value. But how can you measure performance and enable continuous improvement? How do you know which actions and treatments are performing well? What insights can you use to optimize your program? Join us as we discuss client use cases and new features of Customer Decision Hub that support a crucial, yet often neglected aspect of business operations: review and optimization.

I've got a pretty packed, tight set of topics here, so we're just going to jump right into them. Okay. Start with some introductions. I'm Phillip Mann, a director of Business Excellence at Pega. I've been at Pega for about seven years, and the role of the Business Excellence team is to help define and drive best practices throughout Pega internally, with our clients and with our partners. I spend a lot of time working with a lot of our clients around the globe, but that's really enough about me. I'm really excited to have Paul on the stage here with me. So, Paul, why don't you tell us about yourself? Okay.

Thanks, Phillip. So I'm Paul Kelly from Bank of Ireland. As a qualified accountant, I worked across the retail, technology and insurance sectors before moving into financial services, where I've been for the last 20 to 25 years. I've worked in the two main banks in Ireland: I started out in Allied Irish Banks, worked there for ten years, and I'm now 12 years with Bank of Ireland, where I worked across various different finance roles before moving into analytics, looking at financial analytics, business analytics and now, more recently, customer analytics. For the last three years I've been a product owner for what we refer to as the customer engagement engine within the bank. It's Pega CDH. Cool. Okay, so if anyone was at the previous session with my boss, Joe, I'll try not to repeat too much of what he said, but you might recognize the graphic. Founded in 1783, Bank of Ireland offers a full portfolio of banking products and services across Ireland and the UK, to retail, SME and corporate customers on an international basis.

We serve around 3 million customers, and the growth of digital engagement from customers has been aligned to the introduction of the mobile app. That, coupled with a siloed approach to our communications with customers across various different channels, within the app, across email, across paid media and even through contact centers, had led to an erosion of the relationship the bank has with its customers. So, launching in 2022, the customer engagement engine has sought to rectify that. Introduced initially on outbound channels, we expanded to three outbound channels in 2024 with email and SMS, and on the roadmap we have additional channels as well: paid media, outbound dialers, direct mail, and more recently we've started looking at the feasibility of an ATM channel, with a refresh of that full estate planned from next year onwards. On the business operations side, we introduced the business operations environment in 2023, and Alex here, sitting in front of me, leads that. We use 1:1 Operations Manager, which has increased and improved the efficiency of how we engage with the marketing squads around the briefing process. And then the likes of Impact Analyzer, Value Finder and Scenario Planner, we're using those to really understand the performance of our NBAs: to analyze how they are operating, what customers are engaging with, and to look for improvements we can make to them. So this graphic again, you might have seen this a few minutes ago.

I'll take a slightly different angle on it. It is that consistent growth that we've had with the NBA library. That's a snapshot at the end of December 2023; we now have greater than 300 NBAs in the library. Not all are always active; some will be switched off, either because they are one-off messages at a point in time, or because there can be a seasonality to some. An example of that might be around coming to the end of a tax year for businesses.

We'll issue a reminder NBA out to them as well. There's an even spread across all the different value streams within the business. Down the left-hand side you'll see that we cover wealth and everyday banking into business banking as well. And the key for us, and the key to the success we've had to date, has been that mix of growth and service messaging. We would have started with a predominantly service aspect to them; we've introduced growth, and that accelerated in the second year through 2023 and continues now. And it covers the whole portfolio of products, right across the various different products, and also the different topics of service and information NBAs that we might want to issue to customers. Cool. Okay.

Well, thanks for that. I think that's just another good example of how, at Pega, we like to use that phrase "minimum lovable product": starting with a small use case and then expanding the integrations and the channels you move into, building out your business operations and expanding that further, and then just putting more actions and treatments into what you've built. That's a pretty standard approach, and I'm glad that's working. So before we get into the topic today, we thought we'd take a few minutes to dive into Customer Decision Hub really quickly, given the broad range of people in the room. Anyone that's talked to Pega at some point will have heard of Pega Customer Decision Hub as the brain. And it really all just starts with that request coming into Customer Decision Hub for the next best actions, whether it's the website looking for three actions, or CDH itself processing all the outbound customers, looking for the next customer and what the next best action would be. So the first thing we need to do is load that data into memory. And here it's really historical data, near-time data and real-time data.

Once we've got that profile, the next thing we want to understand, for the real-time interactions, is what's actually happening at that moment. If they're on the website, what are they doing? If they're in the call center, what's the reason? But then the next bit we want to get to is, for all of the actions we could present to them, what's the actual customer's propensity to click on it? Do they actually want this action? And we also want to calculate what the value is to you, the client, of presenting that to the customer. And that's really the heart of the arbitration process: balancing what the client wants and what the customer wants. But then when we figure that out, what's the strategy? What action do we actually take? What action do we present, and then what treatment do we use in that channel to present it?
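To make that arbitration idea concrete, here is a minimal sketch in the propensity times context times value times levers shape that CDH arbitration uses for prioritization. The action names and numbers are invented; the point is that a high-value action with a tiny propensity can still lose to a lower-value action the customer actually wants.

```python
# A minimal sketch of the arbitration idea described above, in the
# propensity x context x value x levers shape CDH arbitration uses.
# Action names and numbers are made up for illustration.
candidates = [
    # (action, propensity, context weight, business value, business lever)
    ("CreditCard",  0.021, 1.0, 120.0, 1.0),
    ("Mortgage",    0.004, 1.0, 900.0, 1.0),
    ("SavingsAcct", 0.048, 1.0,  40.0, 1.5),
]

def priority(propensity, context, value, lever):
    # Balance what the customer wants (propensity) against what the
    # business wants (value), adjusted by context and any manual lever.
    return propensity * context * value * lever

ranked = sorted(candidates, key=lambda c: priority(*c[1:]), reverse=True)
for action, *factors in ranked:
    print(f"{action:12s} priority = {priority(*factors):.2f}")
```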

And again, we're looking to do all of this in a few hundred milliseconds. Yeah. And the phrase is interesting: the Pega brain is a well-used phrase within Bank of Ireland in terms of how we engage with our stakeholders. And because we have such a large share of the market, being one of the two pillar banks, customer data really is a rich source for us in terms of understanding the customer, what we should be talking to them about and how to do it. We complement that with three different activities within the team. We have a commercial insights team who work with the business to align the business strategy with what messages we might want to send out and how we want to communicate with customers. We use our predictive modeling team to look at what the propensity of different customers is to engage with some of our targeted products. And then we use measurement both before and after: what value are we looking to derive from any action or any communication that we might put out there? We'll measure that either in revenue, whether up, across or down, or even in terms of an NPS or SES score.

And then we'll come back retrospectively and see whether we hit those targets or not with the different messages. And then we feed that into the brain for the eligibility, suitability and applicability to be applied, and on out through our various channels. Great. So, to take what Phillip has spoken to and look at it in terms of a real-life example: if a customer visits our website, the website will put in a call to Pega for three actions that we might want to present to that customer. The first thing it will do is load the customer action library. That will be not just growth messages; it will be our service and retention messages as well. And then, by applying engagement policies, it will seek to understand which of those actions can be considered at that point in time for that customer.

Then we will look at constraints, and we'll apply those constraints to the actions. Is there any reason why we couldn't show a message to a customer at that point in time? That might be, for example, outbound contact limits, depending on when the customer visits. And then we'll move on to arbitration. Whether it's based on propensity, on context, on value or on business levers, each of the next best actions that qualifies to be shown to that customer will be scored and ranked, and then the top three will be returned as the next best actions to be presented to the customer on that channel, or any other channel the customer might engage with us on. A hypothetical sketch of that funnel is below.
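Putting the pieces just described together, with invented policy rules and customer fields: engagement policies filter the action library, constraints such as contact limits filter further, and the survivors go to arbitration.

```python
# A hypothetical sketch of the funnel just described. All rules and
# customer fields are invented for illustration.
ACTION_LIBRARY = ["CreditCard", "Mortgage", "Savings", "FraudAwareness", "Overdraft"]

def eligible(customer, action):
    # Hard rules, e.g. don't offer a product the customer already holds.
    return action not in customer["holds"]

def applicable(customer, action):
    # Does it make sense right now, e.g. not recently declined?
    return action not in customer["recently_declined"]

def suitable(customer, action):
    # Is it in the customer's interest, e.g. no credit offers in arrears?
    return not (action == "Overdraft" and customer["in_arrears"])

def within_contact_limits(customer, action):
    return customer["contacts_this_week"] < 3

customer = {
    "holds": {"Savings"},
    "recently_declined": {"Mortgage"},
    "in_arrears": True,
    "contacts_this_week": 1,
}

candidates = [
    a for a in ACTION_LIBRARY
    if eligible(customer, a) and applicable(customer, a)
    and suitable(customer, a) and within_contact_limits(customer, a)
]
print(candidates)  # survivors go to arbitration; the top three come back
```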

Great. Thanks for that. So the final piece, before we actually get into the real topic of the day, is just the idea of this business operations process, so you can understand where this fits in the whole cycle. Really, this isn't rocket science; it should make sense to people even without CDH. It all starts with requests from the business. We've got an idea; we capture that request. The next thing we want to do is refine it, get all the details we need so that we can move on and plan it. And here we want to size it, rank it, and either allocate it to someone or leave it in the backlog for them to pick up in an agile fashion. After that, we're just going to build it, unit test it, and mark it as complete and ready. And that could be one change, or we could have a whole range of change requests coming in, but we want to release them into production, most likely as a collection of changes.

So the first thing we'd want to do there is simulate the impact of those changes on the production environment. Once we've done that, completed it and said it's ready to go, we move it out into the production environment using our DevOps tooling; Deployment Manager would be the easiest way. But then once it's out there and all these actions are in the wild, we want to review the KPI performance and the AI performance, and identify new opportunities. And it's these three boxes that we're going to focus on for today's topic. So let's start with the first one: reviewing your KPI performance. We'll follow the same pattern throughout, which is: overall, how's the framework working?

How are my actions and channels working, and what insights can I take further from that? So we've said we've got this brain. It's thinking across all these channels, making these real-time decisions, and we've got various levers and dials that we can use to configure it. But the question might be: what's the impact of those choices I've made? And the first place you can come to think about that is Impact Analyzer, which is an out-of-the-box solution that we have for control groups. It's specifically focused on certain parts of the framework, to help you understand the health of your configuration for each area. The first test is: rather than present the top next best action from that list, as Paul mentioned, for a small control group we'll pick another action they were eligible for at random, and then we can see what the lift is, across issue and group, down to the lowest level of individual actions.
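For intuition, the lift arithmetic behind a control-group test like this is straightforward; here is a minimal sketch with made-up counts (Impact Analyzer computes all of this for you, out of the box).

```python
# Minimal sketch of the lift arithmetic behind a control-group test.
def ctr(clicks, impressions):
    return clicks / impressions if impressions else 0.0

# Hypothetical counts: the main group got the top-ranked next best action,
# the control group got a random eligible action instead.
nba     = {"impressions": 120_000, "clicks": 3_480}
control = {"impressions": 6_000,   "clicks": 96}

nba_ctr     = ctr(nba["clicks"], nba["impressions"])          # 2.9%
control_ctr = ctr(control["clicks"], control["impressions"])  # 1.6%

lift = (nba_ctr - control_ctr) / control_ctr
print(f"Lift of arbitrated action over random eligible action: {lift:.0%}")
```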

But there is another test that focuses on levers. We've said you can apply business levers in particular areas to weight certain actions up depending on the scenario. Here, for a small sliver of your population, when we make the decision we'll give them an action that was chosen without the levers, and again we can measure the lift. Maybe you're seeing more presentations but fewer responses when you've levered things up and forced people to see things they didn't want. Okay. So that's Impact Analyzer. Yeah. And we're starting to use Impact Analyzer ourselves.

We still use traditional control groups today across some of our outbound channels, but we're starting to test Impact Analyzer with the commercial insights guys, and back into the marketing squads, to see how we can leverage this capability and, in time, move away from those traditional control groups for measurement. There can still be a place for those, but this is a good view into the framework and its actual inner workings. So my first question would be: potentially I've got an idea. Let's stick with the levers. I'm seeing some poor performance from my levers; maybe I'm better off without them. So I can come into Scenario Planner. Scenario Planner is a tool where I can run a simulation.

What we've got is a sample of our customer data and all their interactions, brought in from production into the business operations environment. So I can run a simulation on those real customers, as if they were either an outbound run or an inbound channel, and I can understand the reach and the distribution they would get when that framework ran. And one of the options in Scenario Planner is that I can choose to run one simulation without levers, so I can compare the two together. So I could literally go in and say, okay, what's the impact? How do the reach and the impressions compare? Maybe I get more reach with my levers, showing more of a particular action, but the actual responses go down, because people have a low propensity to click on it. So I can come at it from the macro view, the big high-level view.
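A small sketch of that with/without-levers comparison, with made-up propensities and a deliberately heavy lever on one action, reproduces exactly the pattern described: more impressions for the levered action, fewer expected responses overall.

```python
# Hypothetical with/without-levers comparison: simulate which action wins
# arbitration for each sampled customer, then compare impressions (reach)
# and expected responses. All propensities and lever values are made up.
ACTIONS = ["CreditCard", "Mortgage", "Savings"]
LEVERS = {"CreditCard": 3.0}  # a deliberately heavy boost on one action

# Per-customer propensities, as the adaptive models might score them.
sample = [
    {"CreditCard": 0.01, "Mortgage": 0.06, "Savings": 0.03},
    {"CreditCard": 0.02, "Mortgage": 0.01, "Savings": 0.05},
    {"CreditCard": 0.04, "Mortgage": 0.02, "Savings": 0.01},
]

def simulate(levers):
    impressions = {a: 0 for a in ACTIONS}
    expected_clicks = 0.0
    for cust in sample:
        winner = max(ACTIONS, key=lambda a: cust[a] * levers.get(a, 1.0))
        impressions[winner] += 1
        expected_clicks += cust[winner]  # propensity of what was shown
    return impressions, round(expected_clicks, 3)

print("with levers:   ", simulate(LEVERS))
print("without levers:", simulate({}))
# More CreditCard impressions with the lever, but lower expected clicks.
```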

But then the next option could be to go to Customer Profile Viewer and look at it from the micro view. It does what it says on the box: you can view the customer profile, and you can also simulate next-best-action decisions for an individual using Customer Profile Viewer. Again, we said we brought that sample of real customers in with all their rich interaction data, but you can also use personas, which are like test customers that you create to represent certain demographics that are important to you. So here I could run a simulation of an interaction on the website using one of these personas, and I can look and see how all the actions are filtered out and how they're ranked. And again, I can see what the impact of those levers is. Yeah. And we're using personas ourselves from a test perspective, around understanding what criteria we might use for different messages to attract different audiences.

And just earlier this year, the personas that we've been using within the bank were refreshed: one, to reflect the changes in the profile of our customer base, but also to reflect the increased capability and targeting that we now have available to us within the customer engagement engine. So we can be more specific around those personas. Great. Okay. So the next question: we've looked at Impact Analyzer and seen how my framework's working. My next question is, how are my next best actions and those channels themselves performing? And again, we try to work with a relatively good naming convention: if you want to know the performance of your actions, the first place to go is Action Performance.

And here, we said we're bringing that sample of your customer data into that business operations environment. But we also bring in 100% of all of the metrics, like your impressions, clicks and decisions, and all of those adaptive model learnings. It's 100% of those. So here in the business operations environment, I can actually use this dashboard to see what would be standard business reporting, directly in the tool. What we've added in version 24 is the ability to export that to Excel as well. It's not just a static view; it's a fully working, dynamic spreadsheet, and you can choose all the filters and see all the trending. The idea is that if you've got someone in the line of business, a product owner that either can't log into CDH or doesn't want to, you can still export this and share it with them. And then again, think about how we present this data and this information to people in the right way, at the right time, in context. So if CDH is where you're managing your entire framework, Operations Manager is where you'd be building your change requests, but also building those actions and treatments.

So here, where I've got my action catalog in Operations Manager, it's the same data source, but we're really just focusing on individual actions and their treatments, and I can see those same metrics there. And then the final piece: Kerim obviously stole some of the thunder this morning, but there is CDH Assistant, GenAI large language models pointing again at that same data source. It's still an early release. You can query the configuration of CDH: how many channels do I have active, how many actions and treatments? But you can also say, what's the click rate for my credit cards on web? And it's just going to use GenAI to query that same data source and return the answer to you in a conversational manner. So I could know what I want and go to Action Performance.

I could be working with actions and see them there, or, in an unstructured, conversational way, I can interact with CDH Assistant. Yeah. So from our perspective, the Action Performance capability within 1:1 Operations Manager will become really key. We currently use summary information extracted from interaction history to analyze and monitor the performance of our next best actions. We pull that out, we report it manually, and we share it with squads. But the intention is, again using our commercial insights team, to reach out into the squads and give them the capability to review it themselves, but also to dive deeper and understand it from a demographic, customer-profile aspect.
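As a rough idea of the kind of manual summary Paul describes, the sketch below aggregates raw interaction history into per-action metrics and exports them to a spreadsheet. The file name, column names and outcome labels are hypothetical; a real interaction history schema will differ.

```python
# Rough sketch: aggregate interaction history into per-action metrics.
# File name, column names and outcome labels are hypothetical.
import pandas as pd

ih = pd.read_parquet("interaction_history.parquet")  # one row per outcome

summary = (
    ih.groupby(["Issue", "Group", "Action", "Channel"])
      .agg(impressions=("Outcome", lambda s: (s == "Impression").sum()),
           clicks=("Outcome", lambda s: (s == "Clicked").sum()))
      .assign(ctr=lambda d: d.clicks / d.impressions)
      .sort_values("ctr", ascending=False)
)

# Shareable with squads who can't (or don't want to) log into CDH.
summary.to_excel("action_performance.xlsx")  # needs openpyxl installed
```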

Cool. So, in that context, this is how we report and how we look at the performance across our NBAs. We focus in on our top-performing service and growth NBAs. Obviously we've got the full library, and we can look at all of those if we want to. There are service messages in there that will always be relevant to customers, so they're worth having, but we'd recognize that engagement with those can be very much point-in-time for the customer. So we focus in on our top-performing ones, and then, taking that interaction history data, we reach out to other parts of the bank to complement that data.

So: what impact are those NBAs having on the business? From a service side, we look at engagement across our websites, with the likes of the different informational articles that might be up there, whether they're on fraud or financial well-being. And then we look at whether we're able to impact the operations side of the business at all, for example the volume of calls coming into our contact centers. By raising customers' awareness that they're able to self-serve, with the likes of a suspicious transaction they might see in their statement and not be too sure what it is, they don't need to ring the call center anymore; they can self-serve through the website. And on the growth side, we reach out to our fulfillment systems to see whether we can trace where a customer has engaged with one of our next best actions. What's the impact on applications? What's the impact on drawdowns?

And the bottom point, the doubling of the call to action: what we see is that when we extend our review period from 30 days to 120 days from when the customer first clicked on a certain NBA, we can see a doubling of the volume of customer fulfillments that we can trace. So that's key to us learning and understanding the interaction with the NBAs, and using those to manage our suppression rules and our contact policies, trying to find the optimum length of time and number of times to show an NBA to a customer. Thanks. And honestly, I think it's great just to see an example of what a client is doing, so you can get an idea of what other people are doing. And I do want to say: we've got Action Performance, but I don't think it gets rid of these external dashboards, where you're joining in lots of rich business KPIs that aren't to do with the day-to-day running of CDH, really tying in other things like net promoter score or customer lifetime value.
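To make Paul's review-window point concrete, here is a toy sketch of the attribution arithmetic: count the fulfillments that land within N days of the customer's first click. The data is invented, but it shows the shape of the effect, where widening the window from 30 to 120 days doubles the traceable fulfillments.

```python
# Toy sketch of attribution windows, with invented data.
import pandas as pd

clicks = pd.DataFrame({
    "customer": ["a", "b", "c"],
    "first_click": pd.to_datetime(["2024-01-05", "2024-01-10", "2024-02-01"]),
})
fulfillments = pd.DataFrame({
    "customer": ["a", "b", "c"],
    "drawdown": pd.to_datetime(["2024-01-20", "2024-04-15", "2024-07-01"]),
})

joined = (
    clicks.merge(fulfillments, on="customer")
          .assign(days=lambda d: (d.drawdown - d.first_click).dt.days)
)

for window in (30, 120):
    traced = (joined["days"] <= window).sum()
    print(f"{window}-day window: {traced} traceable fulfillment(s)")
```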

Okay, we're doing okay for time, but we'll move through. So the next bit: we've looked at the performance of the KPIs; now, what about how the AI is working? We've said that propensity is a key part of that arbitration logic. And again, the first place to look is Impact Analyzer. One of the tests that ships out of the box measures the health of the propensities of those adaptive models. What we do there, again for just a sliver of those decisions, is rather than use the propensity from the adaptive model, we replace it with a random propensity. And we should be able to see a much better lift from the real models. And we can dig in.

Maybe the lift is weaker for certain actions and treatments, or issues and groups, and we can dig into that. Impact Analyzer tries to tell me where it thinks I should go next for my first step, and here it's pointing me to Prediction Studio. Prediction Studio is the portal for the entirety of Pega where you can build, maintain and monitor all of your AI models, adaptive, predictive or whatever else. So I could go in, find those particular issues and groups, and drill in. You can see the standard reporting here, with bubbles sized by volume, plotted against success rate and performance. But the point is: I can see something in Impact Analyzer, which is in CDH, and I can then dig into Prediction Studio, the portal designed for looking at and managing your models. And we can go one step further, which is looking at this in Python or R, something I've mentioned a couple of times.

There's a library and framework of code for querying all of the underlying data in the adaptive models, and it's available on GitHub. It was initially built by the Pega data science teams; the product guys themselves built these tools so they could query the models. Now it's shared and, like I said, available on GitHub, and they keep it up to date. There are just a few things to think about there. The first is that all of this underlying data that we've got with the adaptive models is not proprietary and hidden away in some black box. It's all there. You can export it.

And then we've got the tools. When you get to the final level and you really want to dig into these adaptive models, it's probably your data scientists that want to do it. And what we really wanted to do was meet them at their place, which is using tools like Python and R for this stuff, rather than saying, go fiddle around in Prediction Studio. So where we've got to is: I can get a high-level view from Customer Decision Hub, I can dig in further in Prediction Studio, and when I want to go that next level further, I can bring my data science team in and let them query the same information using the tools that they know and love.
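The toolkit referred to here is the open-source Pega Data Scientist Tools (github.com/pegasystems/pega-datascientist-tools, published as pdstools on PyPI). As a minimal sketch, and noting that method names vary between pdstools versions, loading a datamart export and producing the standard bubble chart looks roughly like this:

```python
# Minimal sketch using the open-source Pega Data Scientist Tools (pdstools).
# Method names vary between pdstools versions; check the repo's example
# notebooks for the exact API of the version you install.
from pdstools import ADMDatamart

# Point the reader at model/predictor snapshots exported from the ADM
# datamart (the directory name is a placeholder).
dm = ADMDatamart(path="./datamart_exports")

# The standard view: model performance (AUC) vs. success rate, with
# bubble size proportional to response volume, one bubble per model.
dm.plotPerformanceSuccessRateBubbleChart()
```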

So I'm just going to follow the same pattern again. We've looked at the adaptive models overall; the next question is, how about my actions and treatments? When you're looking at your actions and treatments in Customer Decision Hub, I can dig in and see a spider radar chart of the predictors and their performance, so I can see, at the lowest level for an action and treatment, what's driving that model's success. At the same time, in Customer Profile Viewer, when I'm running a simulation for an individual customer or persona, I can choose to drill into not just all of those actions, but the propensities and how they're being calculated. And again, I can hover over them and start seeing the individual attributes that are driving those models. And then finally, both of those tools will hotlink me back into Prediction Studio to look at that individual model, if I want to take it to that next level further again. The final thing on this topic of AI: we've looked at the overall set of adaptive models, and we've looked at actions and treatments. The next question would be, what's driving that performance?

Before I get into this, one thing, just depending on the audience here: in the adaptive models we have predictors. Say you've got a customer data model with 1,000 attributes; it may be that you have 500 predictors in your adaptive models. Those predictors are the properties from the data model that are driving and feeding those adaptive models. So imagine I've got these 500 predictors, and maybe one of them is age, with values between 15 and 115. As the models build, for every action and every treatment, or multiple treatments in a channel, every one of those has its own model that we build automatically, and at the lowest level we're binning those values. So maybe ages between 15 and 21 all perform the same. And you can see all of that data that we're calculating and creating, that's feeding the models, by going into Prediction Studio.
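A toy illustration of that binning idea, with made-up data: group a numeric predictor like age into value ranges and compare the observed click rate per bin. ADM derives the bin boundaries itself, per model, and merges adjacent bins that behave the same.

```python
# Toy illustration of value binning. The bin boundaries here are
# hand-picked and the data is made up; ADM derives its own per model.
import pandas as pd

df = pd.DataFrame({
    "age":     [16, 18, 20, 25, 33, 41, 52, 60, 67, 74],
    "clicked": [ 1,  1,  1,  0,  0,  1,  0,  0,  0,  0],
})

df["age_bin"] = pd.cut(df["age"], bins=[15, 21, 35, 55, 115])
print(df.groupby("age_bin", observed=True)["clicked"].mean())
# The (15, 21] bin behaves uniformly, the "between 15 and 21 all
# perform the same" case mentioned above, so it stays a single bin.
```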

But again, following my pattern here, if you really want to get to the next level of what's driving the performance, there's a health check tool in the Pega data science toolkit. So again, using Python, have your data scientists run this. It shows you all of your predictors, how they're performing against each other, and the correlations between them. Yeah. And we've started using these now within our own team. The first time we looked at this, there were some predictors that we expected to see in there that actually weren't present, and on the back of that we were able to make changes and see them coming through. Even just having this view, we may not have otherwise picked those up elsewhere. So that in itself is a good view to have, as a sense check and health check that the NBAs, and the models supporting them, are operating as we'd expect.

Yes. And you're not the first, and you won't be the last, to look at this and go, that's not what I expected, for good or bad. Right? But again, we just want to have that transparency, and especially with the data science teams, we want to try to engage them at their place. So here's another slide that Paul was gracious enough to share. This is an idea; I'm going to sound like a sales guy a bit.

It's a bit statistical, so ignore all the numbers. But if you just look at the chart at the bottom, with the red and the green: what this is showing, for a particular action in that channel, is the last territory where that customer made a purchase. In the red are the territories where the adaptive models figured out that people from those areas have a negative likelihood of clicking on that action, and the people in the green have a positive likelihood. And it's doing this across all of the properties in the data model, for all of the actions and all of the treatments and all of the channels. So we have, honestly, a ridiculous amount of data, and half the challenge is actually how to present it. That's why I'm sharing the Python tools here, because this is a treasure trove of information for your data scientists. So then I think the next thing would be: how do I identify opportunities?

And by that, we've got these three topics here, and we'll follow the same pattern again: how can I improve my framework? How can I improve my actions and treatments? And then, how can I get bigger, more general learnings? So, first one, first tool. There are many ways you could figure out how to improve your framework in CDH, but we built a tool specifically to solve that problem, which is Value Finder. What Value Finder does is run a simulation on those sample customers again, and because we know the framework, we can watch what happens as each of those customers progresses through it in turn.

We can keep count of what's going on, and we can show it back to you in ways that make sense. The first thing is the pie charts on the left. For all of those sampled customers, the red shows which customers are getting no actions at all, because of engagement policies or whatever. The yellow shows which customers are only going to get irrelevant actions, and by that we mean actions where they have a low propensity to actually accept. The green shows customers who are going to get at least one action they're interested in, and the gray shows customers getting new actions, where we're not sure yet whether they're relevant or irrelevant. Another interesting piece: as Paul said, you can follow this through eligibility, applicability and suitability into the arbitration scoring. Here everyone's going through happily and nothing's changing; but if we had really heavy levers, that green bar at the end under arbitration would suddenly become yellow, because I'm now most likely forcing customers to see something that they don't have a propensity to click on.
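The counting behind that pie chart can be sketched as a simple classification. The threshold and propensities below are invented; the real tool derives what counts as irrelevant statistically.

```python
# Hypothetical relevance threshold; Value Finder derives its own.
RELEVANCE_THRESHOLD = 0.02

# Propensities of the actions that survived engagement policies,
# per sampled customer (invented numbers).
customers = {
    "cust1": [],              # nothing survived the policies  -> red
    "cust2": [0.004, 0.011],  # only low-propensity actions    -> yellow
    "cust3": [0.009, 0.055],  # at least one relevant action   -> green
}
# (New actions without enough evidence yet would form the gray slice.)

def classify(propensities):
    if not propensities:
        return "no actions"
    if max(propensities) < RELEVANCE_THRESHOLD:
        return "only irrelevant actions"
    return "at least one relevant action"

for cust, props in customers.items():
    print(cust, "->", classify(props))
```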

And then the other thing here: if I want to know where to go directly from that, what would be the next thing to look at? I can go back to Customer Profile Viewer. I can pick one of those marketing segments that are important to me, or a sampled customer, and I can run that simulation again across different channels. But here I can choose the dynamics and actually explode out all of the business levers that I'm able to create in Customer Decision Hub. So if we're having a conversation, maybe the first conversation with the business is: look, this is the impact of the business levers that you're asking me to create. And then you can dig into some sample personas that make sense to them and say, at an action level for an individual customer, here's how it's playing out. It's forcing these things on people; or, you know, we're losing these actions because we're having to show something else. Yeah. And we're starting to use this ourselves, again with the commercial insights guys, feeding back into the marketing squads, and not just on what I'd class as poor-performing NBAs. Even in the NBAs that are performing well, we can still see elements where customers might be seeing them as an irrelevant action.

So it's about understanding what those are, and understanding the components that are driving that classification. Cool. Excuse me. So that was the framework. The next question, if I follow my usual pattern: okay, what about my actions and treatments? How can I improve these? Again, the first place we can come to is Value Finder, but this time it's kind of interesting. When Value Finder runs, it also looks at the people that are getting irrelevant actions.

And it tries to create segments, cohorts of people, with their definitions, who are only getting irrelevant actions. Statistically, like the examples we've got here, you can see it's actually the properties in the data model and their values. And then you can take this and say, well, here are some people that I could clearly engage with in a better way. Yeah. And we've worked this through on various different ones. One example we've looked at is around lending, and what we found is that one of the detractors has been the customer's profession. It's not something that we would typically use in our criteria against the NBAs.

But looking at customer profession, what we're working on with the squads is saying: okay, what additional treatments can we put in place with that NBA to address that, and focus on specific professions at a certain point in time or with certain content? An example might be using a message specific to school teachers when it's coming towards the end of term, around the May or June time frame. Right. And, I mean, you might not know it, but this is a scripted conversation. But when we did have this conversation initially and went through this, it was actually nice and surprising to see what you're describing; we thought it was useful too. And in version 23, we kick the Value Finder process off automatically as well, on a scheduled basis.

And we take those same cohorts, those definitions, and the first thing we do is give them to GenAI and say, turn these into marketing definitions. I think they're the same ones you maybe saw in the keynote from Kerim. Again, it's that same logic; that's how it's working under the covers. We're taking these definitions of people that are getting irrelevant actions, giving them to GenAI, and saying: make these into marketing personas.

And then what we do next is take those and drop them into Operations Manager as a backlog of work. We don't build them, but we put them in there. So this thing runs, and you can just look at them and say: actually, yes, I can see the students, or whatever the definitions are (I'm not very good at the marketing side); these people need some treatments. And you don't have to go and create everything in Operations Manager all the way through; we've already pre-configured everything. It's just dropped there, and all you need to do is create the creative that feeds that treatment. So we're trying to speed that backlog process up. Which then brings me to the final piece.

If creating more of these treatments is maybe going to cause a backlog, there is a feature for GenAI as well, which was touched upon in the keynote and was in version 23, where we can help create some of that creative for you. So again, ChatGPT-style GenAI. And we're not creating it and keeping it from you; these literally are the ideas that have come back, and you can tweak them, change them, adjust them, take them or leave them. Okay. But the idea would be: I can find people who aren't getting treatments, we're going to build the treatments for you, we're going to put them in the backlog, and hopefully you get them out to market.

And again, we know from Pega that having lots of actions and more than one treatment in the same channel gives those models a much better opportunity to find something relevant for each customer. Okay. And so the final bit on the topic, which we'll end with: how can I find areas to improve my adaptive models? There's a feature in version 24 called Feature Finder, and this is kind of interesting. It looks at your adaptive models in production, and it looks at all of the data in your data model in production, and it asks: which of those attributes that you're not using would be useful as predictors? We actually do some back-end correlation and a whole bunch of simulation. It runs in the background, and when it comes back you can log in, and it will say: okay, we think you should get rid of these predictors, because they're not doing anything, and you should replace them with these.
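As a loose illustration of the kind of screening this automates (not Feature Finder's actual algorithm), you can score each currently unused profile attribute by how well it alone separates clickers from non-clickers; here with invented data and attribute names.

```python
# Loose illustration of univariate predictor screening, with invented
# data. This is not Feature Finder's actual algorithm.
import pandas as pd
from sklearn.metrics import roc_auc_score

profile = pd.DataFrame({
    "tenure_months":  [3, 48, 60, 5, 72, 12],
    "app_logins_30d": [10, 2, 30, 8, 40, 5],
    "clicked":        [0, 1, 1, 0, 1, 0],
})

# AUC near 0.5 means the attribute carries no signal on its own;
# far from 0.5 (in either direction) makes it a candidate predictor.
for col in [c for c in profile.columns if c != "clicked"]:
    auc = roc_auc_score(profile["clicked"], profile[col])
    verdict = "candidate predictor" if abs(auc - 0.5) > 0.2 else "weak"
    print(f"{col}: AUC = {auc:.2f} ({verdict})")
```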

And then the final piece, coming back to the Python data science toolkit. Again, this is something that you could have looked at, and it would have highlighted that there was some missing data. We have dozens and dozens of reports in there. You can start with your data scientists; you'll see there's a little caret with code.

They can just click on that to see the actual code, copy it, put it into Python, fiddle around with it and make it work for them. But the last piece I really wanted to leave you with, when we talk about areas to improve my adaptive models (I couldn't quite find a way to make it fit in the title): you can improve the adaptive models, and you can have your data scientists come in and help you find new data aggregates and improve the quality, but there's also that final idea of getting value back into the business. As clients of Pega, if you're using adaptive models, you're sitting on a gold mine of customer information. You are literally polling your customers in every channel, all the time, and we're capturing who likes what and who doesn't. And using the data science toolkit, you can really get into that to the nth degree. And I think there are two bits here.

One: it's really good to talk to your data science people; they can take this, learn from it and improve the models. Two: you can talk to your business stakeholders, and we've done that. I've literally done this with a few clients, going through these with product owners. And some of them are just like: that's kind of impressive, a bit scary, but I could do that myself. These are good; these are what I would call marketing personas. But I'm like, well, then you don't have to do the targeting.

That's the point. You've kind of answered my question here, which is: you've just said to me, we can figure out your customers, and that looks pretty much like what you do. So then let's remove some of the targeting, let the adaptive models work, and then we can use this as a business case with them going forward. So that's just my final piece: if you are using CDH and you've got adaptive models, there really are a lot of insights that you can glean from them. So today it's really been about reviewing the KPI performance, the AI performance, and looking for opportunities. Okay. And we have potentially around three minutes for questions, if anyone has any.

I know I always seem to get the session at the end, around cocktail time, but if anyone does have a question, you're more than welcome to ask now or ask us offline. All right. So on Value Finder: you talked about how you can use Value Finder to identify irrelevant actions. But some of the actions would have been filtered out by eligibility, applicability or suitability. So can you throw more light on how Value Finder is able to identify the irrelevant actions that are being offered to our consumers? It's figuring that out based on what was actually presented. Some actions will have been filtered out by eligibility, applicability, suitability; but of the ones that people did get presented, it's going to look at those and say, actually, the propensity was below a threshold, so you're presenting them something they had no likelihood of accepting. But it's not able to figure that out for things you didn't present to them.

Again, at the end of the day, we're counting at a crazy level of detail, and we're just looking to see, of all the ones that had a low propensity, did they actually accept it? Does that make sense? Got it, makes sense. One more question I have: we don't have a BOE environment right now, but we have a UAT environment, and we go through a lot of pain to stage data and create dummy data to test the new releases. Can some of that be done in the BOE environment as well? BOE would have a statistically relevant population of the actual production data, so whenever a new release is happening, we could just test whether the actions that we really want to trigger are working fine or not.

The business operations environment is really designed for the business to run their business. So no, it's not a shortcut for UAT or something like that. The main thing here is that we have a data migration pipeline which handles all of that: the data that's in production gets encrypted, and it's all a bit technical, but it works in a specific way to keep the data secure. If you use Deployment Manager, it will bring the data from production into that business operations environment. You could potentially use the mechanism to bring it into another environment, but we'd have to think about what you're trying to achieve there, because the main purpose for us is to get all of that visibility. Historically, if you wanted some of this, you would have to sneakily log into production and hope you weren't impacting performance by doing some testing there, or you'd export all your data and mess around with it offline. So we're really trying to give you visibility and availability of the production data, but in a safe business space. Okay.

Thank you. Sure. Of course. Any question over there? Thank you for sharing and explicating; a full meal, it was great. My question is about your dependent variable. What counts as a positive? Is it just a click-through, or is it actual dollar signs, confirmation of an actual purchase?

Building on that question: the title was value. Is value just that we're getting more clicks? I saw very few dollar signs on the slides actually saying, we made 4 million, or we made 2.5 million, or 600,000 was at risk. I'm just curious about that part. Maybe it's just proprietary and you didn't want to put dollar signs on the slides? No, not necessarily. Without getting into the specifics of it, it will very much depend on what the next best action was about and what its purpose was. And that's why we reach out to various different systems and different areas across the bank, to understand what we're trying to achieve with a certain next best action. The service ones can simply be about building that engagement and building that trust with the customer, and that might show up in terms of an improved NPS or improved SES.

Equally, on the growth side, it is looking at whether we can see growth: do we see revenue up, do we see improved conversion rates? We may not actually see an increase in the bottom line, but we might see a better conversion. And the more targeted the next best actions are in terms of achieving those goals, the more we free up space to talk to customers about something else that is more relevant. So reducing the number of people we might talk to around a certain NBA might not necessarily drive increased revenue, but it frees up the opportunity to talk to the customer about something more relevant. And do you have some models and treatments where the ultimate dependent variable is dollar signs, and you're able to fill that in, or is everything just conversions or clicks? We don't use the value today, but we could do so: within the context, within the business levers and within the value, we could allow that to be one of the factors that helps in that arbitration process around what to present to the customer.

But we're not sophisticated enough today; we don't put a target in there per se, saying this is what we're trying to achieve specifically with this NBA. And I do think there are two parts to your question. One is: what's the outcome that I'm measuring against? That could be clicks or conversions, and we can feed back that outcome; it could be conversion, for example. But there is also, as Paul said, in that arbitration, for every action you can define a value, and that's the value. So if you go to Scenario Planner, which I didn't dig into too much: I know how many people are going to see an action, I know what their likelihood to click on it is, and if that action has an individual value, then I can multiply those together and say what my projected value is.

And again, you can do that in Customer Decision Hub. You can break that value down, and you can extend it: if you can calculate something like a customer lifetime value or a net present value for that action for that customer, you can use that at decision time as well. So that's balancing what the customer wants with the business value. But again, it's not optimizing the click; it's optimizing the priority of the action based on the value. Thank you, I feel double fed. Yeah. There you go, tech teams.

Thank you for the presentation, it's really interesting. So my question relates to the Feature Finder that you mentioned, that's coming in 24. You mentioned that there's a profile segment, right? So I'm just trying to get clarification: do we need to have a back-end profile where we want Feature Finder to check to see, hey, these features could be important to the existing action sets that we have? Or how does that work? Because we have a profile, right?

And where do these new features come from? Do we have to create a back end somewhere? No. If I understand your question: you've got a customer profile, which is all your data properties, and if you're not using all of them as features in the adaptive models, it's those. You'd already have them. We're not going off and looking outside of the profile Pega has; we're looking at all of the properties that are in production in your profile and saying, you should use these as predictors. Okay, so it's still from the existing profile. Exactly.

Okay. But, you know, potentially that could be extended, I don't know, but we'd have to have access to those sources. The main purpose of it is to keep the set of predictors focused on good-quality predictors, do housekeeping on those predictors, and really say: actually, maybe you're missing a trick over here. Absolutely. And a follow-up question: in 24, do we have a limit on how many predictors we can have? There's not a technical limit, but the standard sort of service health guideline is about 500. And it just depends; you could potentially have more.

But the more actions and treatments you have; I mean, at the end of the day, as we said, within those few hundred milliseconds, for every action for every customer in every channel, we're building a score. And the more predictors you have, and if those predictors have lots of values in them, it kind of grows exponentially. So there's no hard and fast rule, but our sort of service recommendation is about 500. Thank you. I'm excited to learn that you have a Python SDK. What's the minimum version of the platform that you can use that with? Is it just any?

Yep, anything. Awesome. Yep, glad you're excited. And if you go down to the Innovation Hub, all the product owners for all the bits I've talked about have been roped in.

They're all there on the booths. There is one for Process AI; he's the guy that owns all of that stuff. And then there's another one. You can find them, and I can literally point you to those guys; you can go talk to them. It's really cool. All right. I think we're done.

We're keeping you from happy hour or whatever it is. Okay, well, thanks for your time. Okay.
