PegaWorld | 45:08
PegaWorld iNspire 2024: Sparks of AI-Driven Autonomous Operations
How do you keep control of AI while reaping benefits for your business and your customers?
Join this panel discussion to hear how existing clients are leveraging Pega to transform their customer experience and business operations with the latest in AI. You'll hear lessons learned, best practices, and other useful tips for unlocking autonomous operations with AI.
Welcome to our panel on Sparks of AI-Driven Autonomous Operations. My name is Peter van der Putten. I'm the director of the Pega AI Lab, so I'm responsible for AI innovation both at our clients and within Pega. I'm joined here by a number of esteemed guests who are going to shine their light on this particular topic. The name of the panel is a bit of a play, a slightly geeky joke: Microsoft wrote a paper called "Sparks of Artificial General Intelligence." I think we're extremely far away from that, but I thought it would be nice to borrow the title for our panel and tap into that AI story. There's a lot of talk about AI, but walking the talk is a different story: really being able to use smart approaches in operations is something you can take step by step.

And in that sense, having a very mythical view of intelligence doesn't really help. There are small steps you can take that already deliver some benefits. We have a range of people on this panel who can speak from different perspectives, and who are perhaps also at different levels of maturity. I think that will give a nice idea of how you can not just talk the talk but walk the walk, and in particular how you can get started, even with very specific, concrete things. So, let me see if we have the speakers on the next slide.

We do, actually. So let me start by giving everyone a chance to introduce themselves: tell us a little about who you are, but also how Pega is being used in your company at large, not so much specific to today's topic just yet. And maybe, given the topic of today, what you find interesting about it. I'd like to start with you, Martin.

Yeah, thank you, Peter.
Martin Hawkins. I'm a tech domain manager, or head of IT, responsible for all the applications we build for digital customer processes, all our digital customer interactions. I do that within Rabobank, a Dutch bank. Alan mentioned this morning that the Dutch are always ahead with implementing; I have to agree with him, so I hope he also had Rabobank in mind. We are a bank with 9 million customers in the Netherlands, where we are mainly a retail bank: number one in mortgages, savings, SMEs, et cetera.

Internationally we have a smaller presence, mainly focused on the big transitions happening in the world; originally, and still, we are a cooperative focused on financing food and agriculture around the world. If you look at Rabobank with its 9 million customers, as a cooperative we want to be relevant to our clients. We have always been close to our clients and their communities, and in the digital era that is becoming more and more difficult. Within my role, it's up to me and my teams to maintain that digital presence in these times. Pega is helping us with that, but we can elaborate a little more on it later.

We started with Pega more than 15 years ago in the payments domain, where we implemented Smart Investigate; we just upgraded that to Pega Cloud, and there's another presentation about that somewhere these days, by Ricardo Bichi. We also use Pega within the business lending domain and within housing. And five and a half years ago we started with Pega Cloud in the customer service, CRM, and marketing domains.

Yeah, thank you very much, and I can really relate to that, because I'm a customer.
Thank you. You were telling me it's a cooperative bank with roots in small communities. I grew up in a small farming village in the south of the Netherlands, where you would walk into a branch, do your banking transaction in one minute, and then spend 20 minutes talking about everything that had happened in the village. The nice thing, of course, is the question of how you maintain that relevance and that personal approach in the modern day and age. So I recognize that as a customer as well. Thank you. Jamie, maybe from your side?

So my name is Jamie Moorhouse. I'm a business architect working for Lloyds Bank in the UK, and Lloyds Bank is the largest bank in the UK. I specifically work on the general insurance platform. Pega is used widely across the group, across a number of business areas, but as I said, the work we've been doing within our general insurance business has been on the home insurance claims journey. Obviously we'll go into a little more detail later, but I'm really excited to share and continue the journey of using AI- and data-led processing to really improve outcomes for our customers and our colleagues as well. So yeah, happy to go into more detail later.

Yeah, thanks. Alex?

Hey, I'm Alex. I'm with Accenture Federal Services, and I sit within our platform elevation lab as our platforms AI lead. That means I help make sure we take what we've built at other clients and turn it into composable assets, accelerators, prototypes, and thought leadership that we can bring to other clients, and that we stay upskilled and at the forefront of all of those platforms and the different AI capabilities that are coming out.

And on the delivery side, I'm the functional lead for a large investigative case management implementation at a US federal agency. So I'm excited to talk a little bit about both of those today.

Awesome. Maybe we can jump straight in from the perspective of that federal agency. Of course we cheated a little bit and had a chat beforehand, but it struck me as a story where, when people hear AI, they think robots are taking over the world, whereas there are also ways to start in a very specific, concrete way, on a very specific pain point or problem. And this project struck me as a very good example of that.
So maybe you can elaborate a little on that particular engagement.

Yeah. So we're a little further down the AI value chain than taking over the world, like you mentioned. We support a client that investigates cases of waste, fraud, and abuse. Those can be anything and everything, from someone calling in to say "my neighbor is on physical disability, but I see him mowing his lawn every day," to "my mom is in a nursing home and I think they're stealing her benefits," to "Nicolas Cage is running around trying to steal the Constitution." That one comes in a lot. Some of these are more actionable than others, and they all come in through a public form; anyone can submit them, and the agency gets hundreds of thousands of them a year.

The complainant, the submitter, can enter a little information about who the suspect is and who the victim is, and then just one giant text blob describing what is happening. What was happening in our system is that those allegations would be created and go into a single work queue to be triaged, which meant a growing backlog, because people just couldn't get to them fast enough. Every time the help desk workers log in, they pull from the top of the queue, read the allegation, and fill out five or six required fields. And even if it's the exact same allegation, two people will interpret it slightly differently and use different codes. So what we did was apply natural language processing to help expedite that. The overall goal is to give minutes back to the mission, so they can spend that time on higher-value items, while also decreasing friction in the experience and getting more data, and higher-quality data, as those allegations move through. It started with a business process re-engineering effort to first look at what an allegation is and what the topics are, how you would even categorize an allegation.

They didn't have that to start with. We ended up with eight different categories of allegations, so we could start tagging them and talking about them with a common understanding. Then we took the most recent fiscal year's worth of those text blobs and turned it into a structured, labeled data set to start training the models on. From there it was a very iterative process: combining topics, taking some out, rinse and repeat across iterations, and working with our clients to decide that even if something isn't scoring well right now, it's such a high-value allegation that they still want to move forward with it.

Can you give some examples of that? Because that's interesting, where you might be overruling these machine learning models. It could be an indication, in both directions, that something can be ignored, or that, regardless of what your machine learning model says, it has to be treated with high priority.

Yes. So we ended up with a topic detection model and a keyword model. We have different confidence thresholds for each of the topics to either route an allegation, close it, or just flag it for someone, and the keyword model acts as one of our safety nets, a safeguard. Overall we were looking for high value and low-hanging fruit in what we defined as topics. The high-value items are things that might not be the most frequent, but that we really want to get into the hands of a specialist: things like threats, or a child's benefits, we want those kicked out right away. Whereas some others, like the Nicolas Cage example, or someone submitting an allegation thinking we're a different agency completely, we can just resolve right away; no one needs to spend time looking at them.
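To make the mechanics concrete, here is a minimal, hypothetical sketch of that kind of threshold-based triage. The topic names, thresholds, and keywords are invented for illustration; they are not the agency's actual categories or configuration, and in the real solution the models and routing run inside the Pega application rather than in hand-rolled code.

```python
# Hypothetical sketch of threshold-based allegation triage.
# Topics, thresholds, and keywords are illustrative only.

ROUTE_THRESHOLDS = {"threat": 0.60, "child_benefits": 0.60, "benefit_theft": 0.80}
CLOSE_THRESHOLDS = {"wrong_agency": 0.90, "non_actionable": 0.90}
SAFEGUARD_KEYWORDS = {"threat": ["hurt", "weapon", "kill"]}  # keyword model as a safety net


def triage(text: str, topic_scores: dict[str, float]) -> str:
    """Return 'route:<topic>', 'close', or 'flag_for_review' for one allegation."""
    lowered = text.lower()

    # Safety net: certain keywords force routing to a specialist regardless of model confidence.
    for topic, words in SAFEGUARD_KEYWORDS.items():
        if any(word in lowered for word in words):
            return f"route:{topic}"

    best_topic = max(topic_scores, key=topic_scores.get)
    score = topic_scores[best_topic]

    if score >= ROUTE_THRESHOLDS.get(best_topic, 2.0):
        return f"route:{best_topic}"   # high value: straight to a specialist
    if score >= CLOSE_THRESHOLDS.get(best_topic, 2.0):
        return "close"                 # confidently non-actionable: resolve immediately
    return "flag_for_review"           # everything else still gets human triage


# A low-confidence allegation stays with a human reviewer.
print(triage("My neighbor is on disability but mows his lawn every day",
             {"benefit_theft": 0.42, "non_actionable": 0.31}))
```

The point of the pattern is simply per-topic confidence thresholds for routing and closing, with a keyword safeguard layered on top; in practice the thresholds are tuned iteratively with the client, as Alex describes.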
Do you take into account the history of allegations that people make, or that kind of thing?

We haven't looked at the history, because it's a public form that anyone can submit, and it's not tied to a user ID.

Right. Or do you have a duplicate check?

Yeah.

Interesting. And can you say something about results, maybe the initial results? People focus a lot on initial results, but I'm also wondering whether there's an element of continuous optimization ultimately. Maybe you can reflect on that.

So we started with a POC, and before we went live we ran a lot of simulations and put safeguards in place to make sure everyone felt good about it before deploying. Once we did, we immediately saw a 30% reduction in the backlog, from allegations that could be resolved right away. Another 5 to 10% were routed directly to a specialist rather than sitting in the queue waiting for someone to pick them up. Being able to see that result so quickly was awesome. And the way we built it was pretty modular, so as we've gone on we've been able to keep iterating.

We now refresh the model every few months with more data, to avoid data drift in the way people describe things, different vernaculars, and to keep it fresh. We've also become more granular in the topics: as better data has come in, we've been able to split one topic into two, which gives us more high-quality allegations there. And we've increased correct identification from 88% in that first iteration to around 96% now. The policy improvements have been really interesting to see as well: the keyword model I mentioned has been very easy to update, which keywords should or should not flag certain topics, as our organization's policy changes.

Yeah. Any questions from the other gentlemen on the panel about this particular case? No?
If not, we'll move on. I had maybe one question. I can imagine that in the public sector, and in the private sector as well, but especially in the public sector and with sensitive topics like fraud allegations, you also get into the ethics of AI, or how to deal with bias. Can you reflect on that? I can imagine that played a role in this project as well.

Yeah, that was a big point of concern in the beginning, and a big emphasis throughout: making sure we introduce no bias, or as little as possible, so that people with noteworthy allegations still get them looked at.

Our solution was to start with the historical data sets and to trust the SMEs we have at the organization as the first level of validation. We didn't just take the data, dump it into the model, and look at the results. Every time we looked at it, we would ask: you initially said it was this, but the model said that, which one should it be? That's how we were able to differentiate. One of the things that jumped out to me was the term "abuse." Initially we were going to flag it as a threatening word, but in the end we couldn't, because it would over-flag that topic: one of the most common phrases people use around government benefits is "abuse of benefits," or people abusing someone else's paychecks.

So little things like that, we relied on the SMEs for.

Okay, thank you. So, Jamie, I think there are some commonalities with a project you're working on. Maybe you can tell us a little more about how Lloyds is using, or planning to use, decisioning-led AI in the home insurance journey.

Yeah, of course. As you alluded to, the use case we've been working on together is within our home insurance journey, and particularly our claims journey. To give some context on where we were before we started the work: we have upwards of 150,000 claims or inquiries every year, and around 450 colleagues in our contact center dealing with those inquiries. I think it's fair to say we relied very heavily on colleague judgment to understand how a claim should be dealt with, settled, and covered, and what the next best steps were.
And that proved, as anybody can imagine, quite an inconsistent and at times inefficient journey, almost treating every claim as if it were the first time we had ever seen that type of claim, and relying on colleagues to deal with it.

Yeah, because it is a pretty important decision from an insurance perspective. Can you give an indication of the volume of claims we're talking about?

It's around 150,000 a year. But they range from a really simple claim, a customer who has accidentally dropped their television and just needs a replacement TV, to a customer whose house has completely burnt down and who has no money, no clothes, and nowhere to stay. So the reliance on colleague judgment was really heavy, and colleagues didn't have the tools they needed to support them in understanding the next steps and the right action to take for customers.

And what were some of the decision points in the journey where you were looking to use more intelligence?

A really big one for us was introducing Pega not only as the UI in which colleagues log and capture information about claims, and the workflow and process management thereafter, but also integrating it into our digital journey via our internet banking app, and really structuring the data we capture, whether from a colleague or a customer, so we can then use that data as input. We've introduced a single intelligent decision engine within Pega, with fairly simple rules-based parameters that understand the input being captured by a colleague or a customer, and it does three really effective things for us. The first is that it assesses the complexity of the claim, which lets us get it to the right place, to an expert straight away, and that expert has the training, skills, and capability to deal with the claim in the right manner. That also drives a good chunk of our workflow management, whether automated or proactive. The second thing the decision engine does is determine the routing of the claim. There is still an element of colleague judgment, but it comes with rules-based guidance in the system: this type of claim should go to this particular type of supplier, or this is a simple claim, let's just look to cash settle it for the customer. There are also things like fraud and various other elements. And the final piece of decisioning it does for us is identify the best settlement route and the best settlement cost. Combined, those three things not only give colleagues a good level of system support, they also provide a really consistent and effective claims journey for our colleagues and our customers, and they help us reduce the end-to-end time. Because ultimately, when you register a home insurance claim, it's because something has gone wrong. As I say, that can range from something as simple as a TV to something quite catastrophic, and customers just want to get back to normal as soon as possible. So having the system really drive that proactive and consistent journey has proven really valuable for us.

And is the assessment fairly one-dimensional, in the sense of whether we pay it out or not, and for what amount? Or are you also looking at how you can help customers, maybe with particular vendors?

Yeah, so it does range. It ranges across all the routing options we have. As you say, a simple claim can be something we just pay out, or we have our own in-house people, we call them personal claims consultants, or loss adjusters, who can go out and really hold the customer's hand through the claims process. In terms of the use of it today it's fairly simplistic; we're really at the beginning of the journey, using these rules and the system itself to get the data captured. But it has proven really valuable and really consistent so far.
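As a rough illustration of the three rules-based decisions Jamie describes (assess complexity, route the claim, suggest a settlement route), a simplified sketch could look like the following. The fields, rules, and amounts are hypothetical, not Lloyds' actual decision logic, which in their case is configured as rules inside the Pega application.

```python
# Hypothetical sketch of rules-based home claim decisioning:
# complexity -> routing -> settlement. All rules and values are illustrative.

from dataclasses import dataclass


@dataclass
class Claim:
    peril: str             # e.g. "accidental_damage", "fire", "escape_of_water"
    estimated_value: int   # estimated cost in GBP
    customer_displaced: bool
    fraud_indicators: int


def assess_complexity(claim: Claim) -> str:
    if claim.customer_displaced or claim.peril == "fire":
        return "complex"
    if claim.estimated_value > 5_000 or claim.fraud_indicators > 0:
        return "standard"
    return "simple"


def route(claim: Claim, complexity: str) -> str:
    if claim.fraud_indicators > 0:
        return "fraud_team"
    if complexity == "complex":
        return "personal_claims_consultant"   # in-house expert holds the customer's hand
    if complexity == "standard":
        return "approved_supplier"
    return "straight_through"                 # simple claims need no hand-offs


def settlement(claim: Claim, routing: str) -> str:
    if routing == "straight_through":
        return f"cash_settle:{claim.estimated_value}"
    return "supplier_managed_repair_or_replacement"


claim = Claim(peril="accidental_damage", estimated_value=600,
              customer_displaced=False, fraud_indicators=0)
complexity = assess_complexity(claim)
routing = route(claim, complexity)
print(complexity, routing, settlement(claim, routing))  # simple straight_through cash_settle:600
```

In this sketch, the dropped-TV example flows through the simple, cash-settlement branch, while a fire claim with a displaced customer is classed as complex and handed to a personal claims consultant.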
Okay, cool. And can you say anything about the kind of targets, the kind of benefits you hope to drive?

There are a number of benefits. As you can imagine, if colleagues are using a system that helps them, colleague engagement is always going to go up. End-to-end time, as I've alluded to, is a really big one for us: how quickly we can get that customer back to normal. And as you can imagine, we are a business at the end of the day, so the amount of money we spend on claims is also a big factor; within insurance it's the biggest outlay in terms of cost.

But that isn't about turning us into an insurance company that's just really good at saying no. It's about turning us into an insurance company that gets the claim to the right place first time, so it isn't bouncing around and you're not spending money that way. Getting it to the right place first time will really drive good indemnity savings as well.

Yeah. And how confident are you now in your models? Are you confident enough to also offer these workflows to customers directly in the digital channel, without a human in the loop?

Yes, and that's one of the big things for us in 2025. One of the big-ticket items we're looking to deliver is straight-through processing of claims, end to end, where it's possible.

So a customer can log on to our internet banking app and explain the circumstances. Let's use the television example: they can upload a picture of the TV they've dropped, we have the capability to identify a replacement item and its cost, and because we have their information we can make the payment to them straight away, with no human in the loop. A good chunk of that will come with time: getting the system in, using the data to make sure the rules are the right rules and optimized to the right level. But ultimately that's where we want to be.

Great.
Okay, thank you. And Martin, in your introduction you already explained that Rabobank makes quite broad use of Pega across many different domains, and depending on the domain there are different maturity levels: in some places the use of intelligence is relatively new, in others it's quite mature and there are big deployments already out there. Maybe we could start with an area where the use of intelligence is more mature; I'm immediately thinking of the messaging hub. You could start with that, and then we can talk about some of the other areas where you're looking to use more intelligence in your processes in the future.

Yeah. I think it was about seven years ago that we were rethinking our whole CRM and marketing landscape. At that time we saw that, with the functionality and applications we had, we could not be relevant enough for our clients.

We were looking for an application, preferably a SaaS application, that could help us, and in that selection process we chose Pega, both for case management and for the Customer Decision Hub. With the Customer Decision Hub we've been on a long journey over the past five and a half years, and it has really helped us to be relevant for our customers.

Today we have 2 billion interactions with our clients every year. Fortunately, 97% of them are digital. But we still have, let's say, 3% of those 2 billion where an agent is in the loop, or people really are still going to branches, and we want to be relevant for those customers too. The Customer Decision Hub helps us make the right decision for every interaction we have with a client, whether in our app, on the web, in our own channels, chat, or video, and also for those employees: we serve them the right next best actions when they are helping their customers.

And I can imagine those could be commercial messages; at the end of the day you need to hit some targets. But there are probably also...

It's both, actually. We have more service-related messages and actions than commercial actions, but both of course ultimately help with the satisfaction of the customers we're serving. Commercial actions help in the end, but service actions also help to reduce churn, clients leaving, and so on, and we do measure that. I would need slides to give you all the figures, but we really did see quite an improvement once we implemented Pega. And of course we optimize the models year by year; we're looking at what data we can use, not only internal data but also external data.
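For readers less familiar with next-best-action decisioning, a highly simplified, hypothetical sketch of the underlying arbitration idea follows: score every action a customer is eligible for, whether service or commercial, and present the best one for the channel at hand. The actions, propensities, and scoring formula are invented for illustration; they are not Rabobank's configuration, and in the Customer Decision Hub the propensities come from self-learning adaptive models rather than hard-coded numbers.

```python
# Hypothetical next-best-action arbitration sketch. Actions, propensities, and
# values are illustrative only; real propensities come from adaptive models.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    kind: str           # "service" or "commercial"
    propensity: float   # modeled likelihood the customer responds positively
    value: float        # business value if accepted (retention, revenue, ...)
    channels: tuple     # channels where the action may be presented


ACTIONS = [
    Action("warn_about_phishing",   "service",    0.55, 3.0,  ("app", "web", "agent")),
    Action("mortgage_checkup",      "commercial", 0.08, 40.0, ("agent",)),
    Action("activate_savings_goal", "service",    0.20, 5.0,  ("app", "web")),
]


def next_best_action(channel: str, eligible=ACTIONS) -> Action:
    """Pick the highest-scoring eligible action for this channel (propensity x value)."""
    candidates = [a for a in eligible if channel in a.channels]
    return max(candidates, key=lambda a: a.propensity * a.value)


print(next_best_action("app").name)    # a service message wins in the app
print(next_best_action("agent").name)  # the agent may be shown the commercial action instead
```

The same kind of decision is made for every interaction, across the app, web, chat, and agent channels Martin mentions.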
So we're improving quite well in that area, the part of our bank we almost call traditional AI.

And all those offers are driven by self-learning models, and there are hundreds or maybe thousands of those models?

Yeah, a couple of thousand of them. I don't know if that's too much, but it's quite a lot of models, with quite a lot of data points that we use, and they're self-optimizing. At first that was pretty scary for the people who used to build these models by hand; they didn't really trust them.

But over the years they saw that these self-optimizing models were working a lot better than their own models.

So that trust comes from looking at results.

Yeah, and it's also a culture thing, of course, with the employees in the marketing department. Not something we realized in the beginning, because as tech guys we thought these people must be enthusiastic about it, but they were of course also very confident in their own skills.

Yeah. And the panel is called Sparks of Intelligence, so there are other areas where it's definitely more of a spark, where it's still early days, and still more on the left-brain decisioning side. For example, can you talk about some other areas where you're looking at using intelligence? It could be more of the left-brain decisioning type of work, but it could also be more in the direction of generative AI.

Yeah. If you look at the left brain, we of course also have AI models in our fraud departments, much like your examples: we have a lot of transactions to monitor and we've built up a lot of models for that, and more generally for fighting financial and economic crime. In that area Pega helps us especially with the case management and workflow management of all the hits we get in that department. And if you look at the right brain, which is of course the most interesting these days, we've seen a lot of examples from Pega that we were already experimenting with.
A lot of our teams have already experimented with Blueprint over the last months. We've seen a lot of enthusiasm, but also efficiency, especially in the early phase of a project, sitting together with the business and working out a blueprint.

Which part of the bank was trying out Blueprint?

We did it in housing, but also in my own customer processes domain, and in some other domains; I think you can use it everywhere. We even experimented with whether Blueprint could help us with, let's say, an engineering journey. We had Alan over for lunch a couple of months ago, and our CTO, the managing board member responsible for IT, asked that question. So we experimented with that too, and it came up with quite amazing workflows, also for the engineers.

In our customer service department we have also already implemented some niche products, for call summarization and Agent Assist within live chat. 40% of our text is now suggested by GenAI, and that really helps: we see more than a minute of time savings in customer service, especially on the small and medium enterprise calls that we have. We started experimenting with these niche applications, which is a good place to start.

And we don't want to build these things ourselves; we really see ourselves as a taker of technology that's already out there. It's also complicated to do it with all these small niche parts, so we're very glad we can step up and see that Pega is offering these new capabilities. I think that will really help our bank.

Yeah, and I think in general, I call it sparks: you want to start small, but you also want the ability to scale up and make sure it plays into the strategy, maybe even at board level. I don't know if it gets that high within Rabobank?

It does. A couple of months ago our CEO said: all this GenAI is a complete buzz to me, and I do believe in it, but everybody's talking about it and I haven't seen any results of it within the bank.

With him saying that, myself and Finbarr, who is my business partner, made a presentation for the board, where we showed the real results of GenAI and had a discussion with the board about the opportunities. That really helped them understand what GenAI is and whether we have to be afraid of it.

Exactly, demystify it a little, and not just talk, but ask: what are we actually doing, and what could we do for real?

Yeah. And officially we still have a ban on GenAI within our bank. We do encourage people to use GenAI, but we don't yet have all the rules in place for it that we have for traditional software. So we still have a small commission that has to agree, and that asks questions about all the things we haven't written down yet.

Well, as a Rabobank customer, I appreciate that. It's a good thing that you follow governance standards and go through a proper process to make sure these things are used in the right way.

That's also why it's important to start early: to discover all the things you have to arrange within your organization, in contracts, that kind of stuff.

Yeah.
Awesome. So in the second part of the panel we can talk a little more about future plans, where the panel members want to take this, and maybe general lessons learned. But let's also see if there are one, two, three questions from the room. Are there any questions people would like to ask the panel members? Don't be shy. Yeah, there's a mic over there. A little round of applause for the first person asking a question.
Yes, in the other room. My first question is about the generative AI integration you're doing right now at the banks. I know that from a federal services perspective it's probably an offline execution of a generative AI model, from a topic modeling perspective, but more for the Lloyds Bank perspective: are you integrating all of that within the Pega Platform, or are you taking the data out of the Pega Platform and executing the generative AI somewhere else, say AWS or Azure?

As I say, it's still quite early in our journey, so currently it's very much within Pega. It's rules-based decisioning, and the models and rules sit in back-end tables within Pega. The ultimate plan and roadmap is to take that data out of Pega into GCP and then integrate it with machine learning models and the like. So at the minute it's all self-contained within the Pega Platform and application, but we do have a roadmap to lift it out to GCP and integrate it with machine learning platforms to optimize further. So we've touched on some of the future stuff there as well.

Okay. How about you, with the federal services you work with?

Largely the same. Not quite there yet.

Okay. All right. Thank you. Yeah, I think what you're doing at the moment is mostly, let's say, the left-brain AI decisioning type of work, not so much the generative side of the house. Yeah. Okay.
Any more questions? Go for it.

What sparked my interest was the use of the word autonomous in the name of this panel. In my mind, 2024 is the year of agents, and the use cases you described, impressive as they are, were more on the natural language processing side. Do you have any experience with implementing, or at least doing a proof of concept of, autonomous agent systems at a client? Or have you been looking into that from your Accenture point of view?

Yeah, I think it depends a little on how you're defining agent. I would quibble with where we are at the moment. Especially looking at industry trends, as things move from software as a service to service as software, you expect Pega not just to be a place where you perform tasks and log your work, but to help do the tasks and the work for you, and we're still on the first step of that journey. For us, with those allegations coming in and getting processed, it used to be a person making that initial assessment, and now it's not; it comes to the person with the outputs and insights from those models. So it's autonomous in the sense that we're shifting everything to the right of the point where a person has to start decisioning.

Yeah. At the risk of adding something content-wise as moderator: this morning I hosted a breakout where we talked about an agentic future, giving more agency to GenAI, not just as a service you send a request to, but where you give GenAI tools and a goal it needs to achieve, and it goes off and tries to achieve that goal with those tools. So people watching the recording of this session later on can also check out that GenAI session.

Thank you.

Yeah, and I think GenAI can help optimize the self-service journeys you have as an organization, with Process Mining and Process AI. And on the other hand you have the more futuristic GenAI agents. We are also going to start small: we have a lab version of our app that's used by only a couple of thousand real customers, and there we're going to start an experiment with a GenAI chat without a human in the loop. That is of course pretty scary, which is why we do it in a very confined area. But we hope to slowly learn how to implement those kinds of autonomous agents or chats.

Great question. Thank you.

And sorry, we're in a very similar place to Alex in terms of some of the work we're doing; you could say it's autonomous. But where we really want to get to is the place where humans are there for when a human is needed, and where we can use the data we have within the system, and the capability of GenAI and adaptive and predictive models, to have that sort of autonomy and confidence within the system to drive the processes.

Yeah. Thank you.
Thank you. Well, for the final five minutes, let me throw out a little challenge to the panel, one by one. It would be interesting to hear either where you want to take this in the future, or some general lessons learned you would like to share. Jamie, maybe we can start with you, then go to Alex, and then to Martin.

Yeah, happy to. I think we've touched on it already, but from a lessons-learned perspective, one of the things we've learned is that we probably shot for the stars a little too much; we thought we could do everything all at once. In reality, we need to start using the system. We need the demand going through the system, and the data going through the system, to build confidence in the models and the optimization they deliver, and from there that sort of autonomy. I think we expected that by this time we'd have a fully fledged, end-to-end claims decisioning platform, integrated with machine learning models, where no humans ever need to touch the system. That's not the case: we need to learn to walk before we can run with the data and the capability.

So get it to a stable place, then start gathering data, and then you can learn from that.

Yeah, exactly. And that's where we want to be: using further capability within the Pega Platform, adaptive and predictive models, to enhance that customer experience.

Yeah. And Alex, you're a self-proclaimed not-quite-pure techie, somewhere in between business and tech. I can imagine that resonates with you, in the sense that what you don't want is an AI lab project, says the director of the AI Lab, sorry. But anyway, what is your take on that: technology first versus business first, or some mix of the two?

Yeah. So I'm admittedly the glue guy for our team, which I take pride in. I'm not a hands-on-keyboard developer, but I can talk Pega with people. I'm not a PhD statistician, but I have a background in that. I'm not the client, but I know their business well enough. Being able to tie all the pieces together helps us make sure that, to Jamie's point, we bring things up at a pace that makes sense and that we can scale. One of our biggest lessons learned was that it really helps, technology agnostic, to have an innovation framework that makes sure the use cases you go after are business-led and value-driven, and that they solve a real problem aligned with the mission of your organization.

Being able to approach it from that perspective, regardless of the technology, has helped us a lot. Having the resources to do that gets a little tricky sometimes, so the fourth pillar we talk about is being failure-tolerant: fail fast. The implementation I talked about was great and helped a lot of people. We have others, like some automated testing innovation work, that didn't go so well, but we only spent two to three weeks finding out it wasn't worth pursuing at a larger scale. So that's our approach.

Thanks.
Maybe then the final word for you, Martin: any general lessons learned, or what a more strategic approach could look like, whatever you would like to share.

Yeah. If you look at challenges, you talked about data already, and of course we knew that beforehand, but it really is still a challenge. Even if you have the data, getting it available in the right, let's say, vectorized database (you are more the technology person than I am) is not always that easy. We also have to ask other departments in the bank to expose their data through APIs, and we have to have the right data in the first place.

We also see that we have quite a lot of old knowledge in our knowledge systems. When we first started with GenAI and you asked a question in Knowledge Buddy within Pega, for example "how do I block a credit card?", you would get an answer from four years ago that is no longer valid. So you also have to find ways to improve your data quality within the processes you're implementing now.

So it's the data, the content, and the integration that count, not just the smarts in the middle.

Exactly. I think that's still challenging. I do think there are a lot of opportunities in the customer service area; I think that's the biggest area to benefit from GenAI. But I'm also very enthusiastic about what we can offer our engineers to be more efficient in their work, so they can focus on the more intellectual things and have GenAI work out the basics: all the unit tests you need, creating test data, et cetera.

Yeah, endless opportunities.

Endless opportunities. But the most important thing is: do start with these technologies, start experimenting, start doing something.
Yeah, because it's moving that fast, and before you know it, you're running behind.

All right. Well, thank you very much to the panel, Martin, Jamie, and Alex, for sharing all these wonderful stories. Thank you for attending the panel, and enjoy PegaWorld.