

PegaWorld 2025: Exploring the Future of Agentic AI - and Beyond

AI is gaining more autonomy, and Agentic AI is a prime example of that. Pega is uniquely positioned to empower agents with an extensive kit of processes and tools, ensuring their operations are effective, governed, and transparent. Join this AI Lab session to explore the future of Agentic AI and boldly go where no one has gone before.


My name is Peter van der Putten. I'm Pega's lead scientist and the director of the AI Lab, and it's great to see you in such great numbers here today. I'm going to talk about the present and the future of agentic AI, and even beyond. But before I do that, let me explain a bit. Agentic AI is of course a very exciting topic, potentially a very powerful technology. But I also want to demystify agentic AI a little. To many of you, it might seem like something that comes out of nowhere, but actually it stands in a tradition, an evolution of different forms of AI. And to make agentic AI really sing, we need to connect it to those other forms of AI and, ultimately, to have responsible and governed impact, we also need to connect agentic AI to automation. That's what I'm going to talk about here today.

I'll start off with a broader overview of our vision of AI and how the field is developing, and then I'll zoom in on this whole topic of where agentic AI is actually coming from and how it fits into the evolution of other fields of AI, like generative AI. I'm known for many things, but not for bringing small slide decks, so I'm going to jump straight in. It's going to be a little bit of a roller coaster, but I hope you all enjoy it. Okay, let's get started. Nowadays it's amazing what AI can do.

We can just type in a little prompt saying we want a video of a snow monkey playing with a sailing boat, and it will just generate the video. And then people go, of course: these models have been trained on snow monkeys taking a bath. But the AI has probably never been trained on a koala surfing the waves, and even then it is able to generate this kind of thing. This is an example from Meta; about two weeks ago Google launched Veo 3, and similar big tech companies are all developing these technologies.

It's pretty amazing. And this might be a bit of a fun application, but we can go to more serious ones. Before going to business, let's focus on science. I have a question here: who won the most recent Nobel Prizes for AI in science? Does anyone have an idea? Yes, that is correct. It's a bit of a trick question.

Why is it a trick question? Well, because there is no Nobel Prize for AI, so we had to steal them from the other fields. One of them is the Nobel Prize in Physics, which went to Geoffrey Hinton and John Hopfield, primarily for pioneering different types of neural networks, machine learning systems loosely inspired by how the brain works. And then indeed Demis Hassabis of Google DeepMind, together with his colleague John Jumper and David Baker, received the Nobel Prize in Chemistry, primarily for the work on predicting the 3D structure of proteins. And you go, okay, that sounds interesting, but how important is that? Well, all living systems are based on DNA. DNA and RNA code for sequences of amino acids, and such a sequence codes for a protein.

And if you can predict the 3D structure of a protein just from the sequence of amino acids, you can already get a hunch of what that protein is going to do. So all life is governed by DNA and proteins, and the missing link is to go from the sequence to the actual structure and function of the protein. They're using AI for that, so it's pretty important. But it's not just about creating fun videos or doing science; in business the expectations are also sky high. For example, Gartner predicts that by 2026 AI will cut labor costs by $80 billion.

So the million dollar question, or maybe the billion dollar question, is: what kind of AI are we going to need for this? When you think of that video model example, on one side it's pretty impressive what AI can do, but these models are also a bit of a passive couch potato. What do we mean by that? We have to feed these models millions and millions of hours of video for them to learn something, and then they will only respond to very specific instructions we give them: give me this video of a snow monkey taking a bath.

So it's very expensive, because we need to spoon-feed them all that data and all that knowledge, and they're also quite passive: we give them a prompt, they give us a response, and that's it. I think that for AI to really have this big impact, we need to turn AI from a passive service into an active agent. And for that, we can actually go back to some of the early roots of AI. Here we have Norbert Wiener sitting in his MIT office in 1949 with his own autonomous car. One of the things Wiener is known for is being the grandfather of cybernetics.

Cybernetics is this whole idea that we need intelligent systems that are very action focused. They operate in a complex environment that changes dynamically all the time; they can sense the environment; they have very simple brains, but those brains decide what action to take; and they can gather their own data and their own feedback and learn from it: did we take the right action or not? So that's a different approach, a different way of thinking about making intelligent systems.

I think that's also what's required to really go from AI as a passive service to AI as an active agent that will unlock the autonomous enterprise. And for that we actually need all kinds of AI. I often talk about left-brain and right-brain AI. Right-brain AI is maybe the creative, generative AI, but we also need to connect it to left-brain AI, where we're making rational, optimal decisions (or at least fooling ourselves that we are). We're making plans. We're sensing the environment.

We're making predictions, turning them into decisions, memorizing things, and learning from it. Then we combine that with the creative brain, and ultimately we need to connect those two forms of AI to automation as well. Only then can we really implement that endless cybernetic cycle, that feedback loop where we sense information, turn it into predictions, decisions, maybe responses, connect that to automation to take the actions, and then observe: did we do the right thing, and can we learn from that, closing the loop? That will ultimately, like I said, unlock the autonomous enterprise. Now let's go back to agentic AI.

So what's happening here? I have this cute little picture. This is the fairy tale of the Frog Prince, where the princess kisses the frog and the frog turns into a prince. And you might recognize the frog's face here, because that's Clippy. Some of you in the audience may remember Clippy, a not-so-successful agent application from the 90s.

This annoying little paperclip would just jump up on your screen trying to help you with different types of tasks, but it couldn't really do it, because a knowledge worker just working in their office applications is such an open environment that it was more annoying than actually helpful. And why am I using the Frog Prince picture here? Because in a way, generative AI has awoken this whole field of agent-based systems. We can use generative AI to sense context, understand what the goals are in a particular situation based on user requests and everything else we know, turn that into plans to take particular actions, use its creative powers for that, see how well we're proceeding toward our goals, and then ultimately reach a conclusion to a problem. That's the agentic thinking, and LLMs are the perfect tool to really breathe new life into this field of agent-based systems. Now, if you hear all of that, you'll also hear companies saying, oh, just get thousands of agents, like Alan said this morning, and all will be just fine. So what do you do?

If, as an AI guy, you don't really trust that particular statement, you create a benchmark. So these people here created a benchmark of a simulated company with simulated workers that have particular tasks. You can forget about all the details on this slide; if I just focus on this particular column, you see that even the best models in the world at the moment can only solve 24% of tasks, and it's way less for some of those other models. Now, this is a little bit of an exaggerated benchmark, because they did a very naive thing: just create tons of agents and then hope and pray that it will work. That's obviously not the way to go about it.

So how should we then think about agentic AI? Should we see agentic AI maybe a little bit like a druid, where it's all about wisdom? Or does it go beyond that: we take that wisdom and we make it more action oriented.

We make it a warrior, who can also take action based on that wisdom. Or should we maybe see agentic AI more like a page, or in a business context, a helpful intern that could help us solve particular tasks? Or is it ultimately a bunch of orcs that are going to rip your enterprise apart? So how should we really see agentic AI here? I think you can sense which of the four I identify with the most, but you can pick your own character here.

Now, like I said, let me demystify this a little. What I want to talk about is where this agentic AI technology is actually coming from, because it's not coming out of nowhere. It's not really an agentic revolution; it's more of an agentic evolution. That's what I'm going to focus on in this particular section. And after that I'll show you.

Well, as part of this section, I'll show you a lot of examples of real agentic demos. If you go into the tech pavilion, every single booth will have an agentic aspect. If you find one booth that doesn't have any, just let me know, and ask them: why don't you have an agentic demo? So pretty much all of the demos you see there have some form of agentic angle. But there is a natural evolution behind where this is coming from; it's not coming out of nowhere. Let's go back a couple of centuries in AI time, that is, six months, to January of this year, and see this post here from Andrew Ng.

This was when DeepSeek came out with their new model, and everyone was like, oh, what happened? Five Chinese Boy Scouts created a new model for a hundred bucks, and now it's beating OpenAI. Now, of course, that wasn't really true. If you look at the paper published by the DeepSeek team, there are around 200 authors associated with it, so that whole framing was a little off. They also spent hundreds of millions of dollars on Nvidia hardware. But the main point Andrew makes here is that GenAI is not just about that ecosystem of GenAI services; that's actually a market that's being commoditized really quickly, where there's lots of competition and you get better models that are cheaper, or faster, or you name it. The really important place in this market is: what are you going to do with that intelligence, and how can you put it into a workflow, into an interaction, so that it has meaningful impact?

That's where the value gets created, or where the real damage can be created as well. So the application layer is the really interesting place. And guess who's in the application layer? This layer of AI and automation is exactly where Pega's sweet spot is, with our Center-out architecture. So this fits really well with Pega. Now, like I said, there's a bit of an evolution, and I want to show you a little consistency of message here, because last year at PegaWorld I showed you a similar picture and talked about what that particular evolution is.

But this year I can actually fill in the third part, where I'm going to talk about agents with a lot more real examples. So let's start with that first phase: engineered prompts. What do I mean by that? Let's cycle all the way back to 2023. We were really early in getting on board with generative AI, and we built all these different use cases across application development, customer engagement, customer service, back office operations, you name it.

Really useful capabilities and features, and of course you can use the low-code capabilities of Pega as a platform to build your own features as well. That said, technically what's happening behind the scenes is something really simple: we basically use dynamic prompt templates. We see an example here from a marketing application where we're generating a particular treatment to engage a customer, and behind it is a prompt template like you see on the right-hand side. So in a way it's elegant: with a simple dynamic prompt like this coded into your applications, you can create all these different types of things, like summarizing a customer service call, or whatever it is.
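As a rough illustration of that first phase, a dynamic prompt template is essentially string substitution over application data. The field names and wording here are hypothetical, not Pega's actual template syntax:

```python
from string import Template

# Hypothetical dynamic prompt template, in the spirit of the marketing
# example above: placeholders are filled in from case data at runtime.
TREATMENT_PROMPT = Template(
    "Write a short, friendly offer message for $name, who has been a "
    "customer for $tenure years and recently browsed $product. "
    "Keep it under two sentences."
)

def build_prompt(customer: dict) -> str:
    """Fill the template with customer context before sending it to the LLM."""
    return TREATMENT_PROMPT.substitute(
        name=customer["name"],
        tenure=customer["tenure"],
        product=customer["product"],
    )

prompt = build_prompt({"name": "Anna", "tenure": 7, "product": "travel insurance"})
```

The same pattern covers summarization or any other single-shot GenAI feature: only the template text and the data fed into it change.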

But technically, as you can see, it's quite straightforward. Now, fast forward from 2023 to 2024. Then we released some glimpses of things where you think, ah, what's happening here? GenAI is going from this passive service into something that has a little more agency, just an inkling of agency. I call it basic tool use. Tools are the things that the agents can actually use. Now, what is a good example of basic tool use in Pega's capabilities?

I think a really good example of that is Knowledge Buddy. This is what the industry calls a retrieval-augmented generation (RAG) system; we like our titles. The idea is: let's say you have a particular stack of documents that could be helpful in a particular domain. It could be all the Pega product documentation, or your company's self-service documentation that you want to give to your customers, or some internal documentation. Rabobank, for example: all their financial economic crime analysts have working procedures for what they need to do with certain accounts.

They need to look into those accounts as part of KYC or onboarding, or maybe a transaction monitoring system fired an event they need to inspect. Now, if you have those stacks of documents that give you guidance, you know how it works: if you start to search, you get all kinds of hits. How do you know what the real answer to your question is? That's where RAG comes in, because the trick essentially is to say: why don't we just take your question, do a search in the corpus, find the search hits of the documents most relevant to your question, and then ask GenAI to provide an answer to the original question based on the search results?
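A minimal sketch of that retrieve-then-generate flow, with a naive keyword scorer standing in for a real search index and the LLM call stubbed out. All names here are illustrative, not Knowledge Buddy's actual API:

```python
import re

def tokens(text: str) -> set:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, corpus: list, k: int = 2) -> list:
    """Naive retrieval: rank documents by shared words with the question."""
    q = tokens(question)
    ranked = sorted(corpus, key=lambda doc: len(q & tokens(doc)), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Stub: a real system would send this grounded prompt to a GenAI service.
    return prompt

def answer(question: str, corpus: list) -> str:
    """Compose the RAG prompt: the question plus the most relevant documents."""
    hits = retrieve(question, corpus)
    context = "\n".join(f"- {doc}" for doc in hits)
    return call_llm(
        f"Answer the question using only the sources below.\n"
        f"Sources:\n{context}\nQuestion: {question}"
    )

corpus = [
    "KYC onboarding procedure: verify identity documents within 5 days.",
    "Expense policy: receipts are required for claims above 25 euro.",
    "Transaction monitoring: escalate flagged accounts to an analyst.",
]
hits = retrieve("What is the KYC onboarding procedure?", corpus)
```

The key property is that the generation step only sees the retrieved passages, which is why the answers can be grounded in documents the model was never trained on.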

And that's incredibly powerful. In the Rabobank example, which they have spoken about publicly (you'll hear more about it tomorrow as well, so I can talk about it), the system is even able to answer questions about the best working procedure in a given instance, whereas that data has never been used to train these generative models. Magic. So it's a great example.

It makes these GenAI services a bit more active, a bit more of an active tool, but it's still quite simple. You might ask yourself: why give only one tool and one corpus to the GenAI, and why is it still fairly scripted how these tools need to be used? Why can't we have agents where we potentially give them access to lots of different tools and let them figure out how to use those tools in a particular situation? That's the general idea here. But of course, then the AI becomes more autonomous, and you immediately get the flip-side question: how do you keep them under control?

How do you make sure that they're actually predictable? That's really, really important. What's happening under the covers is that we take our GenAI models and give them access to a whole bunch of tools: tools to sense information, to get customer information from here, to maybe ask a particular Knowledge Buddy a question, to call APIs and other types of data sources, but also tools that can take real action beyond just reading information. And what's really important here, as we said in an earlier press release about our predictable agents, is that these tools are generally predictable: workflows, business rules, automations, you name it, anything that exists, for example, in a Pega Platform application. Only then can these agents work in a reliable manner. And that's also what I meant by connecting agentic AI to other forms of AI, or connecting it to other elements of automation. We'll see a lot of examples in a moment.
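In code, the idea of handing an LLM a constrained set of predictable tools might look roughly like this. The planner is stubbed, and the tool names and loop are hypothetical, not Pega's actual implementation:

```python
# Registry of predictable tools: each is a governed automation (a workflow,
# a business rule, an API call), not something the LLM improvises.
TOOLS = {
    "get_customer": lambda args: {"name": args["id"], "segment": "retail"},
    "ask_knowledge_buddy": lambda args: f"Top answer for: {args['question']}",
    "start_case": lambda args: f"Case opened: {args['case_type']}",
}

def plan_next_step(goal: str, history: list) -> dict:
    """Stub planner: a real agent would ask the LLM which tool to call next,
    given the goal and the feedback gathered so far."""
    if not history:
        return {"tool": "get_customer", "args": {"id": "C-42"}}
    return {"tool": "finish", "args": {}}

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):          # hard step limit keeps the agent bounded
        step = plan_next_step(goal, history)
        if step["tool"] == "finish":
            break
        if step["tool"] not in TOOLS:   # governance: unknown tools are refused,
            raise ValueError("tool not allowed")  # never invented on the fly
        result = TOOLS[step["tool"]](step["args"])
        history.append((step["tool"], result))   # observed feedback for the planner
    return history

history = run_agent("look up customer C-42")
```

The autonomy lives in the planner; the reliability lives in the fact that every action it can take is a deterministic, pre-approved tool.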

And then there's the idea of the case, the binder that binds it all together. It gives a context within which humans, workflows, and agents operate and share information. That's incredibly powerful for keeping them grounded and constrained to a particular problem or situation. So we can use the creative power of generative AI to understand what the user wants and create a plan: GenAI will create a plan for how to use these tools and start to execute it. But most importantly, sensing the results, it will also understand: what is the feedback so far? Did I do the right thing? Did I get the right result? What other tools do I need to use now?

Do I need to stop, because this is looking wrong or I don't know what to do, and escalate back to a user? Or maybe, as an agent, I decide that I want to take a particular action but need to ask permission to do it. So that's the general idea, and we can implement all the elements of agents that you see on this page. Okay, let me bring this to life with a couple of examples. I'll tell you a little secret: this Agent Service, we've had it in the platform for somewhere between a year and a half and two years.

We just didn't tell you. And actually, we also had a couple of GenAI features, like the CDH intelligent assistant, using that agentic infrastructure, because we wanted to gain experience with which agents work, which use cases work, and which don't. One of the very early applications we built is Iris, an internal application. Iris is our intern; she lives on a wonderful island in the north of the Netherlands. That's her backstory. But essentially we can ask her any type of question, like: how can Pega agentic AI transform operations, service, and customer engagement?

Keep it to two sentences for each area, and close it off with a sentence on the specific edge that agentic AI can bring. You can see the answer here on the right-hand side. This is just one example; we can ask her many, many different questions. And by the way, we can also get insight into how she got to her conclusion. If I send her an email, you see a little attachment on top of the reply, and that's actually the plan of how she got to the final answer. In principle the agent has over 20 different types of tools available, which you see here on the right-hand side. The team of Marco Loy developed this.

This allowed us to get a lot of hands-on experience with building these agents and learning what they can and cannot do. That's the way to learn. It also informed our vision of how we want to approach agentic AI. And that vision boils down to the following. Here is one example, a survey by Accenture, which found that only about a third of companies (32%) think they can enable agentic AI, even in the next three years. And why is that? On one side, they're concerned about whether they can make the AI powerful enough: it's cool that you have some intelligent AI, but where are the tools that the AI should actually be able to use?

Do we need to develop all those tools from scratch? But even more important, they're concerned about governance and risk: how can we control these agents and make sure they don't run amok? So that was the starting point for our agentic strategy, and it led us to this statement: what if you could combine the power of agents with the predictability of workflows? I say workflows here, but it also means business rules or any other types of automations you may have in your Pega Platform. That's something that can give you reliability and governance, and also give power to the agents by giving them the right tools with the right level of governance.

So that means you can think about what it could mean for your business, for your operations and customer service, to be instantly conversation-ready; for your intelligent automations and operations to orchestrate both structured and agent-driven work in a seamless fashion; and to assist employees with a form of always-on AI assistance. We do that in many different ways, but to single out two approaches: one is leveraging our agent experience interfaces to turn any workflow, any collection of cases and case types, into what we call agent fuel. You can decide to make those workflows available to the agents, and because these workflows are very predictable, that will give you an overall controlled, predictable experience.

The other is, for example, calling these agents from particular agent steps in our case life cycle. That allows you to implement a whole range of different agents. Here are some examples: the design agents we use ourselves with blueprints, conversation agents, automation agents, knowledge agents, and coaching agents. I'm going to show you a couple of examples to bring this to life a little more. Now, let's say, as we saw earlier today in the presentation, we indicated some form of expense application in Pega Blueprint. The moment you take that blueprint and import it into Pega Platform, you already get an out-of-the-box agent, and that agent will actually understand the different case types you implemented and give you an agentic way to interact with those case types and workflows.

We see that here. "How can you help me?" is the first question. The agent lists out the different case types: employee expense reports, corporate credit card management, travel requests, vendor invoice processing. So, what do you want to do? I want to file an expense. Okay. And now the agent, it's not dreaming something up.

It will call this very structured workflow that we have for filing an expense, but we can engage with it in an agentic fashion. So: yes, let me submit an expense report. It has the context of who I am, and it has created this particular case. It also tells me I may need to attach a receipt, so I grab my receipt as well, and we can do some multimodal processing. You can see here I went to a nice dinner with my colleague Arun in Amsterdam, and it has processed the receipt.

Even all the different line items from the receipt have been extracted, and we can see that when we dig into this particular case, as you can see here. Now, another example, which is also multimodal. Being Dutch, I thought, let me bring a Dutch example here. This is a car claim example. In Europe we have a standardized form, the European accident form; I apologize for my handwriting, it's really poor. So let's start up this particular claim.

We're going to feed it the form, and we're also going to feed it this picture of my Tesla, which shows the actual damage to the car. It will start to extract all that information, and you can see here what was extracted so that we can double-check it, because I sometimes make errors, so we need that ability.

Then it maps it all onto the structured case we already have. Now we've passed the first phase of the claim: we took all the formal information, extracted it, and mapped it. Next, let's have a look at that picture of the car. It will actually recognize, hey, this is a Tesla, and the type of damage is a small dent on the right-hand side. If I had filed the claim the other way around, saying in my form that I got damage on my Tesla while attaching a picture of my 25-year-old Toyota, that could already have raised a red flag here.

And the agent could have highlighted that. Okay, some more examples. In sales automation we already introduced some coaches in the 2024 release; coaches are just one particular example of agents. Now we're introducing sales agents that build this out a lot more. Here I'm a salesperson. I just log in, and you can see the agent can create tasks, create appointments, and send an email. And obviously we don't necessarily want the agent to do that directly.

Instead it just kicks off these structured cases that we have. Or I can look at my entire portfolio, and it will figure out which opportunity I really need to pay attention to at this moment in time. Or maybe I'm the sales manager: which of my salespeople are not engaging well? I don't know if there are any Pega salespeople in the audience getting a little nervous now. Or we can look at a particular sales opportunity and ask: what do we need to do to progress it? It's giving me all kinds of suggestions, and the last one is to create a quote.

And if I say yes, this is where I really want to give that permission, it will actually generate that particular quote, which again of course needs to be a structured process: we don't want an agent to just dream up a random discount for this client out of LLM fantasy. So here you see a real interplay between the agentic side and the cases, workflows, and other elements we already have in the system. Now, you might wonder how that is configured. Here we see an example of an agent rule; well, this is still an in-between version somewhere between the '24 and '25 releases.

But you can see here that we can formulate different prompts, different forms of knowledge, different types of data sources that we make available, and different types of actions that can be taken. One of them is to generate a quote, which then ultimately just executes a case type for generating that quote, because we want strict business rules to define what level of discount is associated with this client. Again, that's not something we want to leave to the LLM, because then we get into trouble. So these are the various elements of that agent rule: we can provide prompt instructions and guided questions that define the goal of the agent and its personality, how it interacts; different knowledge sources, where we can call out to all kinds of information APIs through data pages, but also call out to Buddies; particular case types or agents to use; what defines the scope of the agent; and the various actions that are actually available.

There's a one-to-many relationship to a variety of tools, and these tools can actually take action. Taking action could mean reading data, but it could also mean doing something: calling a workflow, starting a case, running some business rules, ultimately accessing anything that exists in Pega through automations. So virtually anything in Pega can become a tool, and now you'll understand why that's important. You see another example here: life insurance underwriting. As part of that underwriting there's a medical review check: do we need to do an additional medical review?
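To make the quote example concrete: the discount the agent may offer comes from a deterministic business rule exposed as a tool, never from the LLM itself. A sketch with made-up tiers and thresholds, not actual Pega rule syntax:

```python
def discount_rule(client_tier: str, order_value: float) -> float:
    """Deterministic business rule: the discount amount the agent may quote.
    The LLM can call this as a tool but cannot change its outcome."""
    tiers = {"gold": 0.15, "silver": 0.10, "bronze": 0.05}  # hypothetical tiers
    rate = tiers.get(client_tier, 0.0)
    if order_value < 1000:        # small orders never get the full tier rate
        rate = min(rate, 0.05)
    return round(order_value * rate, 2)

quote = discount_rule("gold", 20_000.0)
```

Run the same inputs a thousand times and you get the same discount, which is exactly the predictability the agent inherits by calling the rule instead of guessing.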

Would you ask ChatGPT, "do you think this customer needs a medical review?" No. You want to follow very strict guidelines, calling out to business rules to decide whether a medical review is necessary. And that medical review, again, should not be some dreamed-up agent going off to do something; it's a very structured process where particular people need to get involved. So that's how the risk agent is defined: it provides another layer of evaluation.

Wait a sec, I'm still here. The point I wanted to make is that it's actually calling out to the real medical review rules at that point in time, a set of business rules that are being used. We can see that here because we have the scope of the agent, the prompts, and so on. But the more interesting bit is that we can see we're calling out to medical review rules, a set of business rules that we defined in Pega. And then ultimately, based on that, we can decide to create a medical review case. But again, that's the case type we want, and we want a governed process to actually execute here. Or another example, maybe a final example of calling out to internal Pega capabilities: this is a real estate underwriting example. There's a commercial client with real estate that they want to insure.
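Going back to the medical review check for a moment: that rule-gated decision can be sketched minimally like this, with made-up thresholds standing in for the real business rules:

```python
def medical_review_required(applicant: dict) -> bool:
    """Deterministic business rules -- the LLM never makes this call."""
    return (
        applicant["age"] > 60
        or applicant["coverage"] > 500_000
        or applicant.get("preexisting", False)
    )

def handle_application(applicant: dict) -> str:
    # The agent may gather and summarize information, but the review
    # decision and the follow-up case are rule-driven.
    if medical_review_required(applicant):
        return "create MedicalReview case"   # governed case type
    return "proceed to underwriting"
```

The agent can still explain the outcome in natural language, but the outcome itself is reproducible and auditable.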

So we're looking at this, but we're calling out here to Process AI, more of the "left-brain" AI, to see: if we go through the entire process, is this something we would likely underwrite, yes or no? What is the likelihood that we would close a sale on this? You can see that early on the likelihood is very low, because there are particular risks associated with this building. And based on that, of course, you can take a different course: you can ask additional questions that would mitigate the risks so that we can still underwrite this particular insurance. But the key point here is that we're calling out to Process AI decisioning to get that particular input. Again, we don't leave that to some LLM to hallucinate an answer. Okay.
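Branching on a predictive score rather than an LLM answer might look like this in code; the scoring function is a toy stand-in for a real Process AI model:

```python
def close_likelihood(features: dict) -> float:
    """Stand-in for a predictive (Process AI-style) model score."""
    score = 0.8
    score -= 0.5 if features.get("flood_zone") else 0.0
    score -= 0.2 if features.get("building_age", 0) > 50 else 0.0
    return max(score, 0.0)

def next_step(features: dict) -> str:
    # The agent branches on a model score, not on an LLM guess.
    if close_likelihood(features) < 0.5:
        return "ask risk-mitigation questions"
    return "proceed with underwriting"
```

In the demo, a low early likelihood steers the conversation toward the mitigating questions rather than straight to a quote.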

And now the question is: how can we then... Pega is of course not the only company that's working on agents. How can we call out to other tools or other agents that we could use outside of Pega? Of course we have our regular APIs, but in the agentic world people are also working really hard on standards. One of them is MCP, from Anthropic. The other one is Agent2Agent, from Google. And you see an MCP example here, where we're calling out to a tool that's hosted on some external MCP server. In this case it's a simple technical POC.

Can we get information about the weather? It's just to prove the point that technically we're calling out to that external MCP server. And likewise, we can flip it around and expose Pega capabilities to the outside world through an MCP server as well. And it's of course crucial, whether we're designing these agents or using them as end users at runtime, to really understand what these agents are doing and what steps they're taking. For us, this is not something new. We were doing that with workflow. We're doing that with case management.
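For reference, MCP rides on JSON-RPC 2.0, and a tool invocation like the weather POC is a `tools/call` request. The tool name and arguments below are invented for illustration:

```python
import json

def make_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' request (MCP messages are JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. asking a remote weather tool, as in the POC described above
msg = make_tools_call(1, "get_weather", {"city": "Las Vegas"})
parsed = json.loads(msg)
```

A real client would send this over an MCP transport (stdio or HTTP) and first discover the available tools via `tools/list`; this sketch only shows the message shape.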

We're doing that with human users. All the different steps that are being taken, by definition we log them so that they're auditable: in the moment, to understand where we are and how we got to a particular state, but also if you get a nasty email or phone call three months later and you need to go back and show exactly what happened, and why, and how. And this is another example of why this whole Pega environment, almost...

Yeah, I don't want to say by accident, but from the start it was actually built to deal with those situations: first an environment where you could have that structured workflow, then case management, then maybe adding in some decisioning. But all the time we were thinking: how can you do that in a case context? How can you make sure that every process step or decision execution actually gets logged so that it can be audited? How can we make sure that these elements in the application have the right level of access? And that turned out to be a perfect ecosystem, a perfect environment to deploy these agents safely in. So that instrumentation is key.
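That per-step logging discipline can be sketched as a small wrapper around every tool or step execution; this is illustrative, not how Pega implements it:

```python
import datetime

audit_log: list[dict] = []

def audited(step: str, fn, *args, **kwargs):
    """Run a step and record who-did-what-when for later audits."""
    entry = {
        "step": step,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "args": args,
    }
    try:
        entry["result"] = fn(*args, **kwargs)
        entry["status"] = "ok"
    except Exception as exc:
        entry["status"] = f"error: {exc}"
        raise
    finally:
        # Every execution is recorded, whether it succeeded or failed.
        audit_log.append(entry)
    return entry["result"]

audited("discount_rule", lambda tier: 0.10 if tier == "gold" else 0.0, "gold")
```

The point is that the trail exists whether the caller is a human, a workflow, or an agent, so three months later you can reconstruct what happened and why.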

And then, ultimately, multi-agent collaboration: where you have multiple agents, maybe with different roles, working together to solve a particular problem. You can see here that in an agent rule we can also indicate additional agents. You can almost see them as agents being used as tools by a main orchestrator agent. So we have a main orchestrator; an insurance claim comes in; and that agent can call out to a claims insight agent, an adjudication agent, leakage and fraud agents, you name it.
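A toy sketch of sub-agents exposed as tools under a main orchestrator; the routing here is hard-coded for clarity, where a real system would let the orchestrating LLM choose which sub-agent to recruit next:

```python
# Hypothetical sub-agents: each takes the claim and returns its findings.
def claims_insight_agent(claim: dict) -> dict:
    return {"summary": f"claim for {claim['amount']}"}

def fraud_agent(claim: dict) -> dict:
    return {"fraud_risk": "high" if claim["amount"] > 10_000 else "low"}

SUB_AGENTS = {"insight": claims_insight_agent, "fraud": fraud_agent}

def orchestrator(claim: dict) -> dict:
    # Recruit each sub-agent for its part of the problem, then decide.
    findings = {name: agent(claim) for name, agent in SUB_AGENTS.items()}
    findings["decision"] = (
        "refer to investigator"
        if findings["fraud"]["fraud_risk"] == "high"
        else "auto-adjudicate"
    )
    return findings
```

Each sub-agent has its own role, information sources, and allowed actions; the orchestrator only sees their outputs.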

So these are different agents that can be recruited to solve part of the problem. And we saw another example of that earlier today, when we got a demonstration of the 1:1 Customer Engagement Blueprint. But in my final minutes I really want to hone in on the multi-agent aspect here. With the 1:1 Customer Engagement Blueprint we can ideate Next Best Action strategies and so on for one-to-one customer engagement and marketing. And what we're doing here is combining different agents, initially to really understand what the main problem is that we're solving. How does it relate to current performance in the system? Can we generate interesting creatives, but can we also make sure that they're actually compliant? So there are different types of agents with different roles working together here.

So this is a telco example where we upload some supporting content, but we can also chat with the overall Blueprint agent. Based on that content and whatever we tell the overall Blueprint agent, and we're dragging in some more files as well, it says: okay, thanks for sharing all the information you have. Let's investigate this a bit further. The first agent that goes to work is a marketing analyst agent. It's really going to look at the information that was provided, but also at information within your own CDH system. This is, by the way, the agent that we still need to develop further, looking into the current situation. And based on that current situation it creates a report. So the marketing analyst agent, we just see a summary here, but it wrote, let's say, an entire report, and we can then feed that to the other agents. You can see there's some additional analysis it did here in Value Finder to find some internal context.

And now maybe the creative agents get to work. The brand compliance agent says: yeah, I'll make sure that we apply our brand policies as well. And then ultimately the strategy agent kicks in, and it makes an overall analysis of the situation and of the key problem that we need to solve. That may then kick off a further process, where creative agents start to work on those various creatives, and the compliance agents check whether the creatives that have been built are actually compliant with brand policies on one side and maybe your legal rules on the other. So it's a real example; you can see it in the tech pavilion as well, where these different agents are actually collaborating with each other.

So ultimately, what this shows is that these agents are not coming out of nowhere. There's a natural evolution from passive services to simple tools to real agentic systems. And if we look at the further development of this field, where is it going? I think there will be a transition from single-task agents to complex orchestration, as you can see here; from single-agent execution to collaboration across agents, but also across humans. So throw humans into the mix, and that's not completely trivial: how do you communicate, how do you negotiate?

How do you understand each other? Then there's grounding of those agents. This is one of our key points: we want to ground these agents in your existing artifacts, business rules, processes, and so on, and also ground them in reality, in the feedback you get from the outside world. Am I on the right path or not? And also have an element of learning, because GenAI models, in principle, don't learn. So how can you create an agentic system that can ultimately learn and self-optimize, where we use not just agentic AI but combine it with other forms of AI and action to really get to that autonomous AI? This is where, research-wise and technically, we see the field moving.

If you think about how to apply this in your organization, then don't think in terms of these dichotomies, where agents are either never going to work, always trainwrecks, or they're silver bullets. If there's one thing I hope you picked up from this presentation, it's that the truth is somewhere more in the middle. Also, don't see it as two very different things, workflows on one side and agents on the other, where you either only use agents or only use workflows. No, the really interesting thing is when you start to merge and blend these approaches, for example when you give agents very predictable tools, like workflows, to use and execute.

And that's the way to move from these agentic science projects to real at-scale deployments of agents. So hopefully next year at PegaWorld we will have many breakouts where people have deployed these agents at scale. And to wrap up, here's the agentic goodie bag. We wrote a nice white paper on how to harness the power of AI agents with the predictability of workflows. We updated the AI Manifesto with particular rules around agentic use. And, one day a week, I'm also an assistant professor at Leiden University, where we wrote a very in-depth paper on what's happening in agentic AI research. All the QR codes for those papers are here.

So if you're interested, just take a picture. Feel free to connect with me on LinkedIn. And with that, I'm at the end. I might have time for maybe one question, and then I'll just stay around. So if there's one question from the audience, let's take it right now. Yeah, you can. Here's a microphone. Speak closely into the microphone, please. A little bit closer.

Yeah. When you're developing agents, do you think of personalities within those agents? When I looked at those subagents talking to each other, do you build in some sense of the way they behave with each other? Yeah, so the question is: when you build a multi-agent system, do you give these agents different roles, personalities, and so on? The answer is yes. Otherwise there's no reason to have a multi-agent system. Like you saw with the brand compliance, creative, and strategy agents,

they will have access to different sources of information or different actions they can take, and they are steered by different kinds of prompt instructions. Okay, with that I'm at the end. Thank you very much for your attention. If you have any questions, just come up.

