PegaWorld | 46:14
PegaWorld 2025: What the FEC? Boosting the Efficiency of Rabobank's Financial Economic Crime Unit Through Pega GenAI
So welcome everyone. A very warm welcome to this session. By the way, I think it's the best title I ever saw at PegaWorld: What the FEC? Boosting the efficiency of Rabobank's Financial Economic Crime unit with, you probably guessed it, Pega GenAI. Yes, surprise, surprise. Before we dive in: this is a continuation of the keynote that you've been listening to just a few moments ago.
Erica introduced some of the challenges and how the journey of innovation unfolded for the Financial Economic Crime unit at Rabobank, and we'll dive a little deeper into the details of how that happened. But before we start, a few practical points. We'll be saving time for Q&A at the end of the session, so make sure to stick around and note your questions down as they come up during the presentation, because I'm sure there's going to be a lot of inspiring conversation. And when you have a question, just make sure you come forward to the microphones that are positioned in the aisle so we can hear you. Now, without further ado: my name is Chiara Gelmini. I'm Industry Principal for customer risk and due diligence, everything financial crime and KYC related, at Pega. And it's my absolute pleasure to introduce Mart and Himanshu to share their story and journey with Pega GenAI at Rabobank. Guys, welcome on stage. Thank you.
Hey there folks! Today let's dive into the world of analysts. Imagine this: mountains of documents, endless reports, and data pouring in from every direction. For an analyst, this is just another Monday. They spend hours sifting through information, looking for that one nugget of gold buried in a sea of words. It's like finding a needle in a haystack, and it can be exhausting. But what if I told you there's a game changer in town? The FEC knowledge assistant. This piece of tech wizardry is designed to read those mountains of documents and boil them down to the essentials.
No more endless scrolling. No more getting lost in pages of data. So how does it work? The engine ingests articles and uses generative AI to analyze the text, identify key points, and generate a concise summary. Think of it as your personal research assistant, working around the clock to keep you updated. It's all about efficiency and precision. Now let's talk about the impact. With the knowledge assistant in play, analysts can breathe a sigh of relief. Instead of spending hours reading, they get straight to the point.
This means more time for critical thinking, strategy planning, and decision making. I've seen it firsthand: the transformation is remarkable. They can now focus on what they do best: analyzing, interpreting, and providing insights. And it's not just about saving time, it's about improving the quality of work. With accurate summaries, analysts can make more informed decisions faster. It's time to embrace the future of analysis. Big applause for everyone who made this possible.
Thank you. Thank you. And welcome to this session. That video gave us a short summary of what we built with Knowledge Buddy, and we will tell you everything about it during this presentation. My name is Mart Gombert, and I'm not alone in the world: I have a family with my wife, my three kids, and a lot of chickens. Sixteen years ago I started at Rabobank building the data center, and now it's getting a little bit more empty because everything is going to the cloud. And in my private life I make videos.
So maybe the first video was not a surprise. And when I combine that with my technology passion, you get videos like the one you just saw, with pictures like the Apple campus in San Francisco. It was really nice to be there. And with that, over to you, Himanshu. I was wondering who made that amazing video, but now I know. So with a raise of hands, how many of you have used Copilot, ChatGPT, or Gemini in the past week? Amazing, amazing. So one thing is clear: we have all been using AI in our everyday work.
At Rabobank, we wanted to enable our analysts to use the AI assistant so that they work more efficiently and more focused. I'm Himanshu Upadhyaya, solution architect by profession and a problem solver by personal obsession. I live in the Netherlands with my wife and a four-year-old daughter. Outside work, when I'm not designing enterprise-wide solutions, you will find me on the squash court hitting the ball, running in the woods, or probably running around after my daughter in the park. But yeah, Mart, do you want to give a quick sneak peek of the innovations in the bank? Yeah, sure. So at Rabobank we divide our GenAI use cases into three parts.
You can have a knowledge assistant, but also summarization. We see summarization a lot in all the workflows: think, for example, of generating a summary of the previous case, or a summary during the case, or maybe a summary of a lot of cases within one sector. And also generation: generating a report so that an analyst doesn't have to write it anymore but can just check it afterwards. And how it all started: at Rabobank we have around, or more than, 4,000 analysts, and they are all working on investigations for CDD, transaction monitoring, fraud, and sanctions. They all have to go through all the work instructions and policies, which live in an external system within Rabobank. So all day they Alt-Tab: search for some knowledge, then go back to the workflow. And we thought: no, that's not the right workflow for the analyst.
So we thought about a chatbot, and we started last year. Before that, and I think many banks had this too, there was a ban on using GenAI. We had one as well. As soon as the ban was lifted, we thought: let's start experimenting with GenAI. And this is what we've done with Knowledge Buddy. We formed a team and asked them: can you make something like a PoC in two weeks with Knowledge Buddy? And it was possible; I think with four or five people working very hard, we had a PoC in two weeks. Then I asked the team: can you also build it into the workflow, because it was standalone, in another two weeks? Then I'll invite people from the business, we'll do a presentation, and we'll see what their reaction is if we bring this live.
And it was really funny: on the 14th of July, I will never forget, Alan Trefler was in the Netherlands, and I had planned the presentation on that very day. Alan came by and sat in the front row, and we did a presentation, four weeks in, about Knowledge Buddy implemented in the workflow. We got so many positive responses from the business that we said: okay, we will take this to production. It took us about six months, and we brought it live to 150 people. And then, in April or May, we brought it live to 3,000 analysts. But it was not that easy, of course, because Knowledge Buddy was new and we didn't know whether it was the right product. So what we did: we built it twice. We also did a track with Rabobank's own chatbot, which we integrated into Pega as well, alongside Knowledge Buddy, because we didn't know where the Knowledge Buddy product was going.
In our own track we built conversational AI very fast, for example, so you can keep asking questions after you receive an answer. Knowledge Buddy was not able to do that yet, so maybe Pega also learned from us: in the next release it was there. And it's funny, I have a screenshot of the A/B testing. This is what we brought live for 15 analysts, and they could do A/B testing. There were really two buttons, A and B, and they could ask their question to A or B, and we could see the feedback the analysts gave to system A or system B. So what did we learn from this when we went live and during this journey?
We received a lot of feedback from the analysts, not only in the system, where you'll see the feedback mechanism in the demo later, but also in real life. And because we started experimenting with a GenAI use case, we got a lot of traction from the business. First we went to the business: do you want to have this? Now the business comes to us: hey, we see you can do this, maybe you can also do this and this. It was also a strong foundation to develop further on. So we built two solutions, and we still have both. The other solution we use for, let's say, the non-Pega applications within the bank. And by doing it twice, we of course had a lot of engineers building on GenAI, and everybody wants to build on GenAI.
So that was a great journey. Himanshu, can you maybe give a short demo of how we did it? Yeah, sure. Thank you. So before we go into the architecture and the demo part, let's just understand: why a knowledge assistant? Before the knowledge assistant, the FEC analysts were logging into the workflow application, and at the same time, if they had to browse through work instructions, they logged into Delphi; if they had to browse through policies, they logged into SharePoint. Right.
And many other applications. But after the knowledge assistant, this changed drastically, because we embedded an AI assistant within the workflow application itself. All analysts had to do was ask their question, and the AI assistant, using semantic search, would bring back the most relevant answer, be it work instructions, policies, or any other data. Okay, but I'll not let you wait any longer. Let's go to the demo. This is our application, and we have embedded the AI assistant as part of our customer review journey. You just click it on the right and then select what type of profile you want.
Business client, sanctions, onboarding, different kinds of profiles. Then you start typing your question. For the demo's sake, we start with a simple question: how would you reach out to a customer, or basically, how do you perform an outreach? And yeah, I'm a bit faster than the video, but okay: you just click on Ask, it goes and does the semantic search, and then brings you the result with step-by-step instructions on how to reach out to a customer. If you look at the bottom, you also see different references, and if you click on one of those references, you see the article that was used to frame that particular answer. So we give the analysts the capability to validate the answer within the application itself, so that they are not surprised: oh, what is this answer? And if they like the answer, we enable them to give feedback and record whether the answer was helpful for them or not.
All this feedback is recorded within our application, and we use it later on in our analysis to understand whether we have to change something in terms of prompts, or in terms of chunking strategies, to make sure we give the most relevant response to the analysts. Okay. I want to ask everyone: who here knows about RAG, with a raise of hands? We have a pretty smart audience here. You want to explain? No, no, please go ahead. Yeah. Okay. So RAG stands for retrieval-augmented generation.
As the name says, it has two parts, retrieval and generation. Retrieval is used to get some data out of the knowledge base, and generation takes that data, passes it on to the LLM, and gives you the most relevant response based on the data collected during retrieval. For the AI assistant, we use a RAG architecture to give the most relevant response to our analysts right in the application. But let's understand it: we use Pega Knowledge Buddy to enable RAG within the application. We have different applications: CDD, TM, fraud, sanctions. And we created one AI assistant module, and all these applications can build upon this module.
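For the curious, the retrieve-then-generate flow just described can be sketched in a few lines of Python. This is a toy illustration only: the embedding function and the LLM call are stand-ins, not Knowledge Buddy internals.

```python
# Toy sketch of the RAG flow: retrieve the most similar chunks from a
# knowledge base, then pass them with the question to a "generator".

def embed(text: str) -> list[float]:
    # Stand-in embedding: a character-frequency vector instead of a real model.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    # Retrieval step: rank chunks by similarity to the question.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

def generate(question: str, context: list[str]) -> str:
    # Generation step: in the real solution the retrieved chunks plus the
    # instruction set (prompts) are sent to the LLM; here we just echo.
    return f"Answer based on {len(context)} retrieved chunk(s)."

kb = ["How to place a sales block", "How to remove a sales block", "Customer outreach steps"]
answer = generate("How do I remove a sales block?",
                  retrieve("How do I remove a sales block?", kb))
```

The two halves map directly onto the description above: `retrieve` is the knowledge-base lookup, `generate` is where the LLM would be called with the question, the instructions, and the retrieved chunks.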
Analysts from these different applications ask a question. The question is passed via the API gateway to Pega Knowledge Buddy. We have created an FEC buddy, which retrieves different embeddings based on the question asked, with a similarity score you can define; I'll explain later that you can set a minimum similarity score. It retrieves the relevant embeddings, and those text chunks, as we call them, are sent together with the question and the instructions to the LLM. The LLM uses all this information to give you the most relevant response, which is passed back as the answer to the users in the application. Okay, but before we go ahead, let's understand: one of the key design principles while we were building this solution was that we did not want to build it again and again for each and every application within Rabobank. So what we did: we created an AI assistant module, and it just works plug and play.
Irrespective of the application, you just plug this module in, put it as a built-on application, and you can reuse it anywhere within our tribe, or other tribes as well. This helps us keep one unified user interface; we don't have to make UI changes again and again. And governance was super easy, because it doesn't have to be maintained by different teams in different areas: one team maintains the whole solution. Okay. Now, data is key for any GenAI implementation, right? Like I explained, we have a work instructions application, Delphi, and we use it to give us the data.
All these work instructions are fetched and then inserted into the vector database. But before inserting, we do some pre-processing on this data: make sure the data is in the right order, and make sure all the attributes we need are present and correct. Then we start ingesting: we split the data into chunks, and all these chunks are ingested into the vector store using Pega out-of-the-box APIs. But okay, how do we do it, right?
You need to understand how we do it in Pega as well. When we started, we created two different case types: one is the article listing, the other is the ingestion cases. Article listing basically takes care of all the scheduling; you do not want to run it manually every time. So we schedule it: we define which data source it is, from and until what date the articles have to be fetched, and when the scheduling has to run. It runs, and then it creates the ingestion cases, which in turn ingest the chunks into the vector store.
If you look at the image on the right, the parts marked in red are the different chunks being inserted into the vector store. Chunks are nothing but smaller sets of data for efficient processing: you convert a big, large piece of text into smaller texts and insert them into the vector store. Okay, you must be thinking: this guy is talking about all things architecture, but he's not explaining how it's actually done. So basically, this rule is the key. You create a Knowledge Buddy rule in Pega, and it's as simple as creating a new case type, right? You define who would use it; you can define different access roles for manage, use, and view. And once that is done, this is the key configuration you do in the Knowledge Buddy rule itself: you start by putting in different instruction sets, and instructions are nothing but the prompts, right?
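The size-based chunking just described, converting one large article into smaller overlapping pieces before ingestion, can be sketched like this (the window and overlap sizes are illustrative, not the values configured on the actual buddy rule):

```python
# Split a long article into overlapping word-window chunks, the kind of
# "chunk by size" strategy that precedes ingestion into a vector store.

def chunk_by_size(text: str, max_words: int = 50, overlap: int = 10) -> list[str]:
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # last window reached the end of the article
        start += max_words - overlap  # slide forward, keeping some overlap
    return chunks

# A 120-word dummy article yields three overlapping chunks.
article = " ".join(f"word{i}" for i in range(120))
chunks = chunk_by_size(article)
```

The overlap keeps a sentence that straddles a window boundary retrievable from either chunk, which is a common reason to prefer overlapping windows over hard cuts.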
Pega already gives you certain out-of-the-box instructions, something as simple as: answer in the same language in which the question was asked. And if you need to, you can add more on top. After that, you connect this Knowledge Buddy with a data source. You define what your chunking strategy will be, whether you want to chunk these articles by title or by size, and you define the right size of the chunks. You define how many chunks will be retrieved from the vector store. And last but not least, you have to define the minimum similarity score. Let's say I set a minimum similarity score of 80: that means all the chunks scoring below 80 will not be fetched from the vector store. This is key, because the higher the similarity score of these chunks, the better your result is. But make sure you don't raise it so much that you don't retrieve any articles from the vector store at all.
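The minimum-similarity cut-off works roughly like this sketch. The chunk texts, the scores, and the 0.80 threshold are made-up illustrations of the "80" setting described above:

```python
# Drop retrieved chunks whose similarity score is below the configured
# minimum, so only sufficiently relevant text is sent on to the LLM.

def filter_by_similarity(scored_chunks: list[tuple[str, float]],
                         minimum: float = 0.80) -> list[tuple[str, float]]:
    return [(text, score) for text, score in scored_chunks if score >= minimum]

retrieved = [
    ("How to remove a sales block", 0.91),
    ("How to place a sales block", 0.84),
    ("Customer outreach steps", 0.52),
]
kept = filter_by_similarity(retrieved)

# Raising the threshold too far leaves nothing at all to send to the LLM,
# which is the failure mode warned about above.
assert filter_by_similarity(retrieved, minimum=0.95) == []
```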
Once you have this set up, you can just play around with it and iterate based on the feedback received, until you understand: okay, this is the right set of instructions, and this is the chunking strategy that works best for us. And then just start asking questions to the Knowledge Buddy. Okay, Mart, I've told everything about the solution, but maybe quality? Yes, yes, quality is also very important, especially in GenAI. First a question: are there maybe partners in the room that specialize in quality assurance? I see some hands. Great. This is for you.
Because I think we haven't done the best possible job here yet, so I hope to get some advice from the experts in the room. So first, quality assurance. The first line of defense is of course the analysts themselves: they guarantee the quality, because they ask the question, see the answer, and can mostly tell when the answer is not correct. For example, the first question we asked was: how do you remove a sales block? And the answer was: how to place a sales block. The analyst of course knows that answer is not correct.
So they can give us feedback, and we can improve the system so it gets better and better. We also do sample testing: we get all the questions and answers into Pega Insights, and there we can see what the analyst asked, what answer was generated, and whether it is correct. Every Dutch-speaking person who can read the work instruction can check: ah, this is the right answer to the question that was asked. And then we also have automated testing: we built our own framework with, let's say, the ground truth placed in it.
We have a thousand validated questions and answers. What we do is regenerate the responses, for example when we upgrade the Pega system or move to another LLM, and run an automated test to see if the answers are the same or almost the same. If the score drops below, for example, 95%, we say: hey, there's something wrong. Then there are the dashboards I spoke about: we have an out-of-the-box dashboard from Pega, in Pega Insights, where you can see how many questions the analysts asked, what the feedback score was, et cetera. This helps us monitor performance and usage. But everybody is probably asking: how do you measure success? Because we did this use case to reduce the manual work for the analysts and to speed up their investigations.
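The ground-truth regression test can be pictured like this sketch. The token-overlap metric and the identity "model" are stand-ins purely for illustration; a real comparison might use embedding similarity, an LLM judge, or human review:

```python
# Regenerate answers for a set of validated question/answer pairs
# (e.g. after a Pega upgrade or an LLM swap) and score them against
# the approved ground truth; a low average score flags a regression.

def overlap_score(expected: str, actual: str) -> float:
    # Crude stand-in metric: fraction of expected tokens found in the answer.
    exp, act = set(expected.lower().split()), set(actual.lower().split())
    return len(exp & act) / len(exp) if exp else 1.0

def run_regression(ground_truth: dict[str, str], generate) -> float:
    scores = [overlap_score(answer, generate(question))
              for question, answer in ground_truth.items()]
    return sum(scores) / len(scores)

ground_truth = {
    "How do I remove a sales block?": "Open the client file and remove the sales block",
}
# Identity "model" for the sketch: it returns the approved answer verbatim,
# so the run scores a perfect 1.0; a real threshold might sit around 0.95.
score = run_regression(ground_truth,
                       lambda q: "Open the client file and remove the sales block")
```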
What we do is use another system, called Nexthink, to monitor what the analysts are using on their workplace. This is completely anonymized, but we can see how much time people are spending in the work instructions application, Delphi. We keep monitoring that, and it should go down. But analysts are also using the knowledge assistant, and we have a solution for that.
It's called PiS. There we can see how long they stay in the chat window. So if they stay there for one minute, we take that together with the time spent in Delphi, and then you can calculate the time saved. But we are not only doing Knowledge Buddy, Himanshu? No, we are Rabobank, we do a lot of innovation. So yeah, we have conducted a lot of hackathons in the past year, and here are a few results we want to present as well.
This should be interesting for most of you. What we have done is a summarization use case where, within the application itself, we have used GenAI Coach to answer the follow-up investigation questions automatically. You just click a button, and it goes through all your previous historical interactions and generates the most relevant answer for that question. In fact, you can even ask questions about whatever documents have been uploaded, and it will give you answers by going through all the documents attached to the case. So that was the first one. The second one is interesting for agentic AI; I think we conducted that hackathon three or four months ago.
Yeah. In this use case, we passed in an alert, just a JSON. The information in this alert, things like device ID and country ID, is used, and we use the GenAI capabilities to analyze this particular alert and give us a trust score. While we pass the alert into the case, it searches through all the historical transactions done by this customer, where one party could be a victim and the other a beneficiary, and then it provides a trust score: red, green, or amber. It also gives you the reasoning why: for example, the device ID or the country ID didn't match for this particular transaction, and that's why it gives you this trust score.
Based on that trust score, we use agentic AI capabilities to orchestrate the next action. So what it has done is create a case here, and that case has been assigned to the analyst we wanted. The GenAI part was used to do the analysis of the incoming alert, and the orchestration was done by the agentic AI capabilities. And recently we worked with the latest version, Pega 25.1, which I think is not yet released, in a small hackathon. You can see it in the Innovation Hub, at the agentic AI booth; we have already had some play around with it. Here is what we want to do: an analyst is searching a lot in Google, for example to get a client profile. So this is what we did with agentic AI in 25.1.
We defined the profile: what do we want to see about a client? You see that here in the screenshot. We get data from some sources, for example Google, but we are also looking into validated data sources and internal data. Then you can, for example, ask: what is Rabobank in Utrecht, what kind of company is that? And you get a lot of results about Rabobank: what their address is, and so on. Back then one thing was not yet working, but now it is: you can also add pictures, for example a Google Maps picture, or maybe even a Google Street View picture, so that you immediately see the company you are investigating.
So that brings us almost to the end of the presentation, and we want to leave you with some key takeaways. What did we learn? We learned that we need to fail fast. And how do you fail fast? By experimenting. If it's not the right choice, just throw it away. Don't be afraid, start something else, and learn faster.
We also did hackathons, and yeah, we can even build something while playing a FIFA game; playing PlayStation is not that bad, you know, it helps. At Rabobank we do around one hackathon a quarter, of 24 or sometimes even 36 hours. We get a lot of teams; they can subscribe and work on anything they want, and we see if it leads to a solution. And it brings real solutions within the bank. But this is not everything, of course. We are both tech guys.
We're talking a lot about technology, but user adoption is the most important thing you need to consider. You can push all the technology you want to your business, but if they can't adopt it, it will fail. We also saw that at the beginning. So we built trust: go to the location. You see here a location in the north of the Netherlands. We went there and visited the analysts.
And we talked about what GenAI is. The first time we went there, they said: ah, I can find another job. They were a little bit angry, because we were talking about GenAI that maybe could replace their job. But after we shared that we will make their job more interesting, they thought: oh, that's great. And now they come to us asking: can we help? And, this is really funny, there are tech guys among the analysts who never had the chance to be, let's say, an IT colleague. They now come to us:
Can we build GenAI? Can we help you? So involve them in what you do. Do a PoC fast, with a small group, and learn from it. And also add knowledge: knowledge is very important, because sometimes they don't know what GenAI really is. If we explain it, they understand it better and see that it really makes their work more interesting. And phased rollouts, that's the last one, and it's very important: don't try to roll out to 3,000 people in one go. We started small, with 15 people, around September last year.
Then by November we were at 150 people, and then we saw a bug, but there were only 150 people, so only they had that bug, and we fixed it. Three months later, we had 3,000 people live. In fact, these people helped us a lot by testing our application rigorously. Yeah. So thank you for your attention, and this was the end of the presentation.
Well, guys, thank you. This was really, really interesting. For myself: everyone is talking about GenAI, everyone is sort of experimenting with it, but you guys did it, and did it in a short amount of time, with technology that was reliable and allowed you to move fast. So that's great to hear.
Well, with that, I think we can open for questions. So don't be shy, just walk to the microphone and ask. I know there's some. There you go, the first brave ones queue up. Hi, Justin [unclear] from Sun Life. Great presentation. Thanks.
You talked about pre-processing the documents and cleaning the documentation before you loaded it into your vector store. Did you do that in Pega, or outside of Pega? We loaded all the articles into the Pega vector store itself; it comes out of the box with the Knowledge Buddy product. But did you change the articles before you loaded them into the vector store? No, we validated the articles, because these articles are part of another application and we are not maintaining them.
So we need to validate that all the attributes we need in the response are also part of the articles themselves. That's why the pre-processing step was important for us: during pre-processing we validated whether all these attributes are present. If not, we have to go to the application team and say: you have to update or refine your articles. And then we ingest them. Okay, one more question, because I'm greedy: you also said you connected it to Delphi, to SharePoint, and so on. How did that connection work? It's simple: it comes with out-of-the-box capabilities.
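The attribute validation described here can be pictured as a small pre-processing check. The attribute names are hypothetical, purely for illustration:

```python
# Pre-ingestion check: every article must carry the attributes we need in
# the responses; articles that don't are flagged for the owning team.

REQUIRED = {"title", "body", "department", "last_updated"}  # hypothetical names

def missing_attributes(article: dict) -> set[str]:
    return REQUIRED - set(article)

articles = [
    {"title": "Sales block", "body": "...", "department": "FEC",
     "last_updated": "2024-05-01"},
    {"title": "Outreach", "body": "..."},  # missing department, last_updated
]

# Titles of articles that need refining before they can be ingested.
to_refine = [a["title"] for a in articles if missing_attributes(a)]
```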
We use the Pega out-of-the-box APIs to connect to the vector store. For SharePoint, to load the policies, there's already a rule, I forgot the name, which you can use as-is to connect. And it's good to know: for Delphi there was no API when we started. Because we want to load it very fast, and every day we do a refresh, we asked them to build an API. But for the first, I think, two months, we used web scraping: just scraping the pages and loading them into the vector store. That was our first PoC. Okay.
Brilliant. Thank you. You're welcome. Hey, so my question is: how are you protecting the customer information from being exposed to the AI, especially the PII, but also the non-PII? Because there could be a lot of such information as part of the case that we don't want the AI to be exposed to. How is that protected? Yeah, good question. It's really important for us as a bank that we handle PII information in the right way. What we currently do, and that's also the reason why we started with the knowledge assistant: there's no PII data involved.
We had a long journey to get everything approved, because it was GenAI within the bank, but because we didn't add any PII data it was a little bit easier than if we had started with a use case involving PII data. Now we are starting with PII data, for example for summarization. What we do there is mask it before we send it to the LLM: we remove the names, the events, and everything else from the text that we send to the LLM. We also made contracts with, for example, Pega, but also with OpenAI, that they don't use any Rabobank data to train the models, and also don't use it for abuse monitoring, et cetera. So the data never leaves, let's say, the premises of Rabobank.
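The masking step described for the summarization use case can be pictured like this sketch. A real pipeline would use a proper PII/NER detector; these regex patterns are illustrative only and are not Rabobank's actual rules:

```python
# Replace obvious PII with placeholder tokens before text is sent to an LLM.
import re

def mask_pii(text: str) -> str:
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)   # e-mail addresses
    text = re.sub(r"\b\d{2}-\d{2}-\d{4}\b", "[DATE]", text)          # dd-mm-yyyy dates
    text = re.sub(r"\b(?:Mr|Mrs|Ms)\.\s+\w+\b", "[NAME]", text)      # titled names
    return text

masked = mask_pii("Mr. Jansen (jansen@example.com) opened the account on 01-02-2024.")
```

The key property is that only the placeholder-bearing text crosses the boundary to the model; the mapping back to real values, if needed at all, stays inside the bank.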
All right. Thank you. You're welcome. I saw a slide where you mentioned the time saved when analysts are looking at Delphi. Yes. Or the Knowledge Buddy, right. Why were they still looking at Delphi when they can use the Knowledge Buddy as well?
Because in the Knowledge Buddy, within the workflow, we only present a summary. So a junior analyst, for example, may want to see some more text, and they can click on the Delphi link; we always place a reference link in the summary. They can click on it and see the full text in Delphi, and then they still spend time in Delphi. There can also be analysts who don't use the Knowledge Buddy at all and go directly to Delphi. That's why we want to see how much time is spent in Delphi. So you could gain even more efficiency, right? Yeah, yeah.
It's also a matter of trust, right? When you start with a new application and give something to the users, it takes time to build that trust. In the first four weeks they were like: okay, what is this? Then over the next six months it became: oh, this is helpful. And then we got feedback like, during the alpha testing: we are actually missing that feature, why have you disabled it? So it just takes time to build that trust.
And slowly they are getting used to it. Hi Mart and Himanshu, great session. Thank you. My question is on the data sources: how do you manage the data sources when you're getting the data from SharePoint and Delphi, for example? Is it one unified data source, or do you have different data sources? It actually depends on you and on how you want to split them. For us, we wanted two different sources: one for work instructions and one for policies. But depending on the use case you have, you can define whether you want to split it into two data sources or one.
And for the Knowledge Buddy, you can connect multiple data sources to it; that's not a problem. But yeah, it totally depends on how you want to do it. Based on your experience, just a follow-up question: how much of it was out of the box, and how much did you have to customize? We did not do much customization. I think more than 80% was already out of the box.
It's just that before the implementation, you need to think about the security aspects. You need to think about which APIs you need and how the data will flow in. So more than the Knowledge Buddy itself, we wanted to understand how to get the data from our workflow application and our work instructions application, Delphi. We needed to know whether they expose APIs to get the data into the vector store. But once the data is in, those APIs are available out of the box. It comes with the product.
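The ingestion flow described here, pulling documents from a source system and loading them into the vector store behind a data source, can be sketched roughly as below. This is only an illustrative sketch: the chunking parameters, data source names, and payload fields are assumptions, not the actual Pega Knowledge Buddy API.

```python
# Hypothetical sketch of preparing source documents for ingestion into a
# Knowledge Buddy data source. Field names and chunk sizes are illustrative
# assumptions, not the real product API.

def chunk_text(text: str, max_chars: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks for the vector store."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap preserves context across chunk edges
    return chunks

def build_ingestion_payload(doc_id: str, source: str, text: str) -> dict:
    """Assemble one document's chunks into a payload for the ingestion API."""
    return {
        "dataSource": source,  # e.g. "work-instructions" or "policies"
        "documentId": doc_id,
        "chunks": chunk_text(text),
    }
```

In a real setup this payload would be sent to whatever ingestion endpoint the platform exposes; the point of the sketch is only the split into separate data sources (work instructions vs. policies) the speakers describe.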
Okay. Excellent. Thank you. You're welcome. No, I'm not going to discuss testing now, so no worries. Just a short question on the tool itself. You provide an AI-generated answer. Do you also provide references to the original documentation, for example? Yes.
So people can actually verify whether it's correct or not. Yeah. That's what you also saw in the demo. You click on the reference link, and you always go to the original source. So you can find the original resources. Okay. Yeah. Cool.
Nice. And because they can go to the original source, they can also give feedback. The feedback is handled by our team, and they can send it, for example, to the work instruction department. They can change the work instruction in the system, and then the next day we load it again into the system, and then the analysts get the new answer. Anyone else? Hi. Thanks very much for sharing. Great session.
I'm just interested in: with the Knowledge Buddy, you've effectively allowed the analysts to query their operating instructions and know the procedures they must go through. Any thoughts on the next steps around that, and whether you can then take that to build the rules for automating that process and the workflow even further? Yes. What we are thinking is that a lot of times analysts are asking, for example, what do I need to do here? But most of the time we can also automate that step. So it would be great, maybe in the next version, for example the example that I shared about the sales block:
how do I put the sales block on this customer? That you immediately get a button with an API behind it, so they can place the sales block without going to the CRM system and doing it themselves. That's the kind of automation we are thinking about now, but also combining the case data with the Knowledge Buddy data, so combining Coach with the Knowledge Buddy. Now an analyst asks, what do I need to do in this step? But we already know which step they are in. That intelligence is not yet in the Knowledge Buddy that we use, but I think that will come in the future. Anyone? Come on. I see someone debating over there.
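The idea just described, combining case context with the Knowledge Buddy so the assistant already knows which step the analyst is in, could look something like this. The field names (`caseType`, `step`) are purely illustrative assumptions about what a workflow might expose.

```python
# Hypothetical sketch: enrich an analyst's question with case context before
# sending it to the assistant, so the answer targets the step they are in.
# Field names are assumptions, not actual Pega case properties.

def contextualize_question(question: str, case: dict) -> str:
    """Prefix the analyst's question with case type and current step."""
    return (
        f"Case type: {case['caseType']}. "
        f"Current step: {case['step']}. "
        f"Question: {question}"
    )
```

For example, "How do I put a sales block on this customer?" asked from a review step would arrive at the assistant already scoped to that step, which is roughly the intelligence the speakers say is planned for a future version.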
No? Don't feel shy. So maybe I ask a question now, if I may. Yeah. I was interested in asking how you see this technology evolving to catch new criminal trends. Yeah, that's a good question. With GenAI, I think it will be possible to, let's say, see more data. For example, right now an analyst is working on a case for one particular client, one investigation, but he is not seeing what his neighbor is doing, or the next neighbor. Sometimes they have a team daily and they talk about what they are doing.
But we have so many analysts, so we don't see what all the analysts are doing. How can we, for example, derive trends from the investigations that all the analysts are doing? Maybe there's something going on with car dealers, or whatever, that we don't know about, because all the analysts are doing their investigations by themselves. So that's also a next step we are thinking about now: to see more of the broader landscape instead of helping only one analyst, instead of silos. Yeah. Sorry. Go on.
Thank you. Great presentation. I have a question around the regression tests that you implemented, on frameworks: how you integrated that into your CI/CD pipelines and made sure it's running on every deployment, on the automation side. The automated regression testing is where you're checking prompts and then responses, just to see that they haven't changed? Yeah. So we all know that when you ask a question to the Knowledge Buddy, the response is not always the same.
And that makes your automation testing difficult. So what we did: we created a set of 1,000 or so sample questions, recorded the answers we expect from the knowledge assistant, and then validated those answers with the users. Then, any time you want to run your automated testing, you ask these questions again and receive some response. You take that response, pass it to the LLM again, and ask it to compare the recorded answer against this response: what's the similarity score between them? If that similarity score is pretty high, then you know your chunking strategy and prompts are still good to go. But if you see that similarity score dropping, then you have a problem. Then you need to think: okay, what has changed? Do I need to update something in the instructions?
Does anything need to be updated in the chunking strategy, or anything else? That's how you would know. You use the LLM to do the validation for you. Okay. So do you have that automated at all, or is it a manual process that you check each time? We have scheduled it, but we run it manually at the moment because we have not embedded it in the pipeline yet. But that's the next step.
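The regression approach described above, replaying recorded questions and flagging answers whose similarity score against the validated baseline has dropped, can be sketched as below. In the real setup an LLM judges the similarity; here a simple lexical ratio from Python's `difflib` stands in so the sketch is self-contained, and the 0.8 threshold is an arbitrary assumption.

```python
# Minimal sketch of LLM-assisted regression testing for a RAG assistant:
# replay baseline questions, score each fresh answer against the validated
# answer, and report questions that fall below a similarity threshold.
import difflib

def similarity(expected: str, actual: str) -> float:
    # Stand-in judge; in production this would be an LLM comparison call
    # returning a semantic similarity score.
    return difflib.SequenceMatcher(None, expected, actual).ratio()

def run_regression(baseline: dict[str, str], ask, threshold: float = 0.8) -> list[str]:
    """baseline maps question -> validated answer; ask(question) returns the
    assistant's current answer. Returns questions that regressed."""
    failures = []
    for question, expected in baseline.items():
        actual = ask(question)
        if similarity(expected, actual) < threshold:
            failures.append(question)
    return failures
```

A drop in the failure list's size over time would suggest the chunking strategy and prompts are stable; a sudden growth is the signal the speakers mention that something in the instructions or chunking needs revisiting.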
Okay. Thank you very much. Yeah. And we built this framework ourselves, with our engineers at Rabobank. But I think Pega, maybe Peter Yu, knows about it; maybe this can also be out of the box in Pega later on. Straight into the backlog. Good.
Thank you. That was a great presentation. A couple of questions. How did you decide which model to use? Did you play with multiple models and decide this was the best one? And the follow-up on that: do you have any issues with the Buddy's performance, like it taking more time? Any challenges like that that you have faced? To answer your first question: we started with the OpenAI GPT-4 model itself. And both our streams, including the other stream, which Marte mentioned, within Rabobank itself,
and the one with Pega, used the same model. The other stream evaluated other models and said this is the one that works for us. So we did not play around. And back then, when we started with the product, the integration was only with GPT-4, or maybe, I think the first pilot we did with 3.5, right? Yeah, 3.5. But then it was OpenAI only.
It was not any other one. And then later came the capability that per Knowledge Buddy you can actually change the model. So now we are doing some POCs to see if we can change these models and whether the results vary or not. Okay. Can you also touch on the second question that I had, on the performance? Sorry, I didn't catch that. The second question I asked was on the performance of the Knowledge Buddy: have you faced cases where it took a lot of time to return the response?
So far, so good. Performance testing is still to come, but so far we see the performance is pretty good. It is not taking minutes to get the answers; it's just a couple of seconds. Yeah. Okay. That's great to hear. Thank you.
Yeah. In the demo it was sped up a little bit, so there it was about one second. In reality it's 4 or 5 seconds when an analyst asks a question. Very last question. Thank you very much. It was a nice presentation.
Quick question: this Knowledge Buddy, do you have to enable it at the platform level or at an application level? It's on another platform. It's also running in Pega Cloud; we have a new stack and it runs there. So from all the, let's say, case management environments or other stacks, we connect with an API to that new platform. In that scenario, each application has its own protocols, as somebody was saying: PII, or let's say GxP or SOX compliance. Some information we don't want to share with other applications' SMEs or users. So how do you restrict the Knowledge Buddy
if I'm trying to use it for one application, so that person should not ask about or get insight from the other application? How do you segregate those? That's already segregated. The AI assistant is just a module; applications are built on top of it, but the information is not held within the AI module itself. It just takes in the information via the API call, passes it on, and sends a response. And if you don't want to share any information, or if you want to mask some information, you do it in the application itself, if you need masking. I understand, but I'm talking about two different applications I have on the same platform, one platform. Somebody should not be able to make an inquiry about the other application.
They should be focusing on their own application. Then you can have different Knowledge Buddies and enable each Knowledge Buddy per role; you can do the segregation on that side. Or you segregate it at the data source level: you create two different data sources and use those data sources with the right access rules. Understood. Thank you very much. Well, I knew it was going to be a very packed question session here, but unfortunately I have to close up. Thank you everyone for your time.
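The role-based segregation just described, restricting each user's questions to the data sources their role may query, can be sketched as a simple entitlement check before the query is routed. The role names and data source names here are illustrative assumptions, not actual Pega access rules.

```python
# Hypothetical sketch of data-source-level segregation by role: a query is
# only routed to the sources the role is entitled to. Names are illustrative.

ROLE_SOURCES: dict[str, set[str]] = {
    "fec-analyst": {"work-instructions", "policies"},
    "kyc-analyst": {"work-instructions"},
}

def allowed_sources(role: str, requested: set[str]) -> set[str]:
    """Intersect the requested data sources with the role's entitlements.
    Unknown roles get nothing, failing closed."""
    return requested & ROLE_SOURCES.get(role, set())
```

The same effect could be achieved with one Knowledge Buddy per application, as the speakers note; the design choice is whether segregation lives in the buddy configuration or in the data source access rules.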
I hope this was as informative and exciting for you as it was for me. If you are passionate about fin crime and due diligence and everything around this realm, be sure to head over to the lunchroom, grab your lunch, and you will find tables with a fin crime and KYC sign on them, where you will have the opportunity to exchange with your peers. Have a nice discussion; see where agentic AI is taking us. So thank you so much once again, and have a great rest of the day. Thanks, everyone.