PegaWorld | 41:22
PegaWorld 2025: Agentic Customer Service isn't Magic: How to deliver the power of agentic automation safely and successfully
Customer service is poised to be transformed by a new wave of agentic systems that aim to take on much of the work traditionally delivered by service staff. The promise is real, but the path to adoption can feel fuzzy or futuristic. In this session, we'll talk about real-world, practical steps you should be taking today as you engage the next big wave of automation.
PegaWorld 2025: Agentic Customer Service Isnʼt Magic – How to Deliver the Power of Agentic Automation Safely and Successfully
Thanks for showing up. You can see our title is that agentic customer service isn't magic. You've probably heard a lot of things the last couple of days. You've heard Blueprint, Blueprint, Blueprint and agentic, agentic, agentic. Right. This is not a session about Blueprint, and it's barely a session about agentic.
So welcome. You've probably seen lots of fantastic agentic demos. If you haven't yet, please go to the Innovation Hub and the customer service booths. There is some amazing stuff out there. If you haven't done the Customer Service Simulator, go do that. Everybody saw Karim's demo, I hope. I'll repeat what some other folks have said.
That was real. That was live. I was sitting next to the product manager who owns that, who was very nervous. Karim actually did pick up his phone and call into something they just made on Blueprint, and he talked to it. That was real. Go down to the demo booth and see that stuff. We are not here to pitch product.
We are not here to tell you about the great new features. Lots of other people have done that. We are here to try to talk to you about how you frame thinking about adopting some of these new technologies, and why that maybe isn't as scary as you think it might be, or why it might be even scarier. Rob Walker is an amazing speaker.
I always get slightly nervous at the end of his sessions. All right, who are we? I'm on a blank screen. Hey, we are a tight, well-run ship here. I'm Jeremy Kembel. I work on our strategy and go-to-market team for the customer service and sales automation products, our CRM portfolio basically. I'm nearing my fourth year at Pega.
Somewhere between not quite new and not having been here forever. I will admit a bias up front, which is that for most of my career before this, I was a customer service guy through and through. I've been on the product side, and I've owned a lot of the self-service and digital sides of the applications.
I'll straightforwardly say I'm biased: I think the most exciting stuff that's happening in the agentic world, broadly, across all applications and all technologies, is customer service automation for self-service. That's my bias. It'll come through. I don't really apologize for it, but just know that it's there, right.
So I think customer service self-service, again, the Karim demo, is where all the action is at. And I'm joined today by Brian who, I don't know if you know, has an amazing superhero chin. This is why we like AI, and why we really trust it and shouldn't be scared. Because I could be him. That's amazing.
So I'm Brian Daly. I'm a senior manager on our business excellence team. What that really means is I focus on customer adoption of our new technologies, specifically for customer service and the agentic and AI products. What's interesting? Well, Jeremy's tenured at four years. I've actually been at Pega just over nine and come from a different background.
I was in the field, I was selling, and customer service was one of my specialty areas. While I don't love customer service as much as Jeremy, I felt the pain of customer service, and that's what we're going to talk about today as we get into, okay, agentic, agentic, agentic. How does it apply to the pain? How do you actually start addressing that, and take a step back and look at it? And just as a random aside, I think Brian won the AI-generated image contest.
If you guys don't do this for yourselves, it's fun. I'm apparently a lot older than I thought I was. And if you're presenting, you have to have a hand up. Yeah. That was actually the takeaway. Every image comes back like this. So apparently, for all of us, when we do this, all the training data has a hand up like this.
All right. This is both a real pitch and me having to bring a little bit of, what was Rob's line this morning? If you think it's overhyped, you're right. But if you think it's just hype, you probably should rethink your thinking as well, right? This stuff is real. Again, that Karim demo was real.
I've been in this space a long time. I did a lot of the original chatbot stuff in my previous jobs. Right. I'm very familiar with the complexity of where this stuff has come from. The power of the LLMs to front a bunch of customer service interactions in very natural ways, with very little work, is legitimately powerful, right? So I hopefully don't need to convince people that these technologies are legitimately going to influence and change some of the ways customer service organizations are run.
I'm curious, are most people here CS ops people, customer service folks? Is that a fair assumption? Right? I join some of our client QBRs and some of the Executive Briefing Center meetings, and I'm amazed at the way our customer service organizations represent the pressures they are under from their C-suite, from their board, to cut costs.
Right. I hear some pretty remarkable stuff. "Our board thinks we're going to lay off 90% of our contact center agents in the next three years." Right. I don't think that's going to happen, but they're feeling that pressure. And I suspect, to whatever degree, however hot that dial is, you're all feeling a lot of those pressures as well.
From my point of view, a lot of those pressures are directionally correct, but they probably feel slightly overstated and maybe a little bit vague. "Go save a bunch of money with AI," and that's the level of detail. And so we're going to try to help you figure out how to make something productive out of that pressure.
How to respond to that, how to think through it. Does that ring true to folks? Are people reacting to that kind of stuff? All right. So we'll spend a few minutes talking about how we define this stuff. Yeah. Go, Brian. And if you're feeling that pressure, that might be at a leadership level or all the way down to a call center agent, right.
Everyone is hearing this, and we've heard it for 48 hours now: agentic, agentic, agentic, agentic. So what does that actually mean? And if somebody is asking for agentic, do they know what agentic means? Are you meeting them at their level of conversation? A lot of this is understanding and setting expectations of what the technology is, which can be kind of ambiguous: LLMs, RAG, MCP, KFC, IHOP. You never know. So knowing that there's this ambiguity and a very broad umbrella that is agentic AI, starting to understand where it comes from gives you the power to have the discussion of, okay, what are we actually using? So how do we go so quickly from the fun ChatGPT to an autonomous army of bots coming at us? There was also some alarming imagery in some of the keynotes this morning.
Right. We didn't know that Rob was going to do the left brain, right brain thing this morning, but we have a slightly different take on it, so we'll go with it. The take I want here is a little bit of a historical one. I did some of these sessions last year and the previous year. And, you know, Alan said this in the opening keynote as well: there are a variety of AI technologies that need to be applied for purpose.
Right? There are things that are very good at certain solutions and some things that are sort of a misfit, and that's okay. And we need all of those things to deliver autonomy, deliver automation, deliver value. Right. We spent a lot of time last year and the year before talking about the distinction between traditional predictive, analytical AI tools and these new creative tools.
As Rob talked about the left brain, right brain stuff. And I think those distinctions are really valuable. I also think for those of us sort of in the trenches on the CS side, they can feel a little academic. And I think the agentic wave, whatever that really means, actually starts to blur this stuff again.
Right? That the distinctions matter as you're solving very specific problems, as you're solving one step in one of your customer service journeys, you need to apply the right technology and the right AI. But the whole arc of sort of agentic promise is that you're deploying these things at the right time, sort of in a coherent pattern and a coherent plan.
Right? So I want to take the point of view that this stuff is interesting. It matters as you get into the implementation of the technology stories. I don't care about the academic definition of which parts of these need to be part of agentic. I think agentic is just a great way to talk about things that are capable of delivering coherent, connected work to get something done.
I'm glad you're talking coherent, because it's Tuesday, baby. Yeah, there we go. So if that definition is ambiguous and everybody's using AI, why not ask AI what agentic is? And we look at this, and I know now we've seen tons of examples of how AI works well or how AI doesn't work well, and you get crazy octopuses and robots shooting lasers.
But I think this does a pretty good job: just asking Copilot, "What is agentic AI?" An AI agent is a software entity that performs tasks autonomously. Great. I think we all know that. And so what if we prompt a little further and say, just give us a little bit more detail? So again, this is just Copilot's definition of itself, in a sense.
And the first bit is generic, not wrong. The second bit is maybe a little better, right? What is it doing? It's perceiving its environment. It's aware of context. It's making decisions. That's really interesting. How is it making decisions, based on what inputs and within what boundaries? It's taking actions. I think that's probably a given.
That's one of the big changes from the ChatGPT just-play-around world, and it's aligned to achieving an outcome. Right. We'll put on our Pega hats for a minute and say workflows are designed to achieve outcomes, right? This is the stitching-them-together. So I think this is actually, excuse me, this is actually not a bad definition.
I also think, if you unpack that a tiny bit: a lot of us are working on software projects right now to bring optimization to some of our customer service journeys or flows. Things like recognizing context, making a decision, routing a unit of work, taking some kind of action, achieving a goal. That doesn't feel new, exactly, right? Those are the things all of our work usually involves when you're implementing a new software project. Right. So when you start to break this down, hopefully it starts to feel a little bit less threatening. The specific technologies you might use at each of these points might be in a little bit of flux, but what are we actually trying to do? We're trying to do all the same stuff we've always been trying to do.
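To make that concrete, here is a minimal sketch of the perceive-decide-act loop described above, in Python. Every name in it (ServiceAgent, current_state, can_handle, and so on) is our own illustration, not any real product API:

```python
# Minimal sketch of the perceive / decide / act loop described above.
# All names here are illustrative, not a real API.

class ServiceAgent:
    def __init__(self, tools, goal):
        self.tools = tools  # actions the agent is allowed to take
        self.goal = goal    # the outcome it is aligned to achieve

    def perceive(self, environment):
        # Gather context: who the customer is, what they asked, case state.
        return environment.current_state()

    def decide(self, context):
        # Choose the next action within defined boundaries.
        for tool in self.tools:
            if tool.can_handle(context):
                return tool
        return None  # nothing fits: a signal to escalate

    def run(self, environment):
        # Loop until the goal is achieved or a human has to step in.
        while not self.goal.achieved(environment):
            context = self.perceive(environment)
            action = self.decide(context)
            if action is None:
                environment.escalate_to_human()
                break
            action.execute(context)
```

The point of the sketch: each piece of that Copilot definition maps to a step you have implemented in software projects before.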
All right. So that was the sort of theoretical answer of what an agent might be. For my own definition, and this is sort of my personal take, for what it's worth, I think of them in three ways. And I think this has been relatively well reflected throughout the course of the last couple of days as well,
and in some of the top-level keynotes, which is always good when I align with that stuff. The first is agents as actors in a more atomic sense, right? This is the take-action thing. And this, I think, we all intuitively get. There are examples of document ingestion agents.
Right. That might have been called OCR a while ago, and maybe it's got some new technology in it as well. We've got email generation agents. We talked a bunch about Iris, which we all use internally. Right. These are all little atomic things. In previous generations, for those of you who spent time in the NLP or chatbot world, we called these things skills, right?
You would do these little atomic units of work, and I think we all get that. We are going to be offloading little atomic units of work to these capable but relatively narrowly focused automation tools. And the visual here, you can see we put them inside of a coherent workflow. And over time, if you know what you're doing end to end, you can sort of selectively automate each of these things as you develop the automation tools to do that.
So I think one of the new things, or one of the maybe rebranded things, that has come in the last year and a half is that agents can take action in very narrowly defined ways. For those of us who've been paying attention, particularly to the way the LLM models are positioning themselves these days as reasoning engines, this is the big change in the last 18 months, right? Instead of: I'm going to send you a prompt, I'm going to get one response,
and that's the end of our interaction, it's: I'm going to send you a problem and you're going to send me a plan. You're going to go through and assemble multiple potential steps. So again, if we Pega-size this a little, you could imagine trusting an agent enough to do a couple of things within a bigger boundary.
I need to go through a review process; here are some guidelines; I trust you to assemble what that review process is going to look like, given some constraints. That might take you 4 or 5 individual steps. It might take 4 or 5 other individual agents that collaborate together to get that done, but agents that are more capable than just a single response in the traditional ChatGPT sense.
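A rough sketch of that shift from one-prompt-one-response to problem-in, plan-out. This assumes a generic llm_complete callable and tool objects with can_handle/execute methods; none of these names come from a real library:

```python
# Illustrative only: a planner-style agent that turns a problem into a
# multi-step plan and works through it, instead of returning one response.
# llm_complete stands in for whatever model call you actually use.

def plan_and_execute(problem, guidelines, llm_complete, tools):
    # Ask the model for a plan, constrained by the guidelines we trust it with.
    plan_text = llm_complete(
        f"Given these guidelines:\n{guidelines}\n"
        f"Write a numbered plan to accomplish: {problem}"
    )
    steps = [s.strip() for s in plan_text.splitlines() if s.strip()]

    results = []
    for step in steps:
        # Route each step to the first tool that claims it; in practice that
        # tool might itself be another agent collaborating on the work.
        tool = next((t for t in tools if t.can_handle(step)), None)
        results.append(tool.execute(step) if tool else f"needs a human: {step}")
    return results
```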
And then lastly, and this is a bit of the, if we go back two and a half years, initial ChatGPT magic for all of us: agents as conversationalists. You can talk to these things and they mostly talk back. Sometimes in a nice way, sometimes in a mean way, sometimes in a who-knows way.
Right? And this is really, really interesting to us as people who communicate with language. Right. These things feel sentient. They feel really amazing and capable. But this is actually really powerful, and Alan talked about this too. When we think about the product strategy and the things we're doing with this, it's the separation of concerns between the language-management and conversational side of things.
Don called it a conversation agent this morning. And then there's a set of technologies that might sit behind that, that the conversational agent is capable of interrogating and working with to do more structured work. That separation is actually really interesting. And if we Pega-size it again, what that means for us, and I'm going to put on my self-service bias again for a second:
if you've worked with Pega or some other system and you've defined your customer service processes, those become essentially part of the data set you can feed into an agent. That agent can also take account of other agents that might have a certain set of skills or actions. It can take some knowledge, some policy guidance, some other things, and you put that all, essentially, behind a conversational interface with some reasoning to help recognize the intent of what the person is trying to accomplish and then invoke the appropriate actor, all driven by an understanding of what the process is at every stage and step to get that done. That's what Karim demoed. That's what you've been seeing in the demos the last couple of days. Again, it's real. And what's so exciting from a CS perspective is you can now expose that to your end customers, and they can interface with it pretty naturally. And you didn't have to go design oodles of dialogs and do all the NLP testing, because the LLMs handled that for you.
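As a hedged sketch of that separation of concerns: a conversational layer in front that only recognizes intent, and predefined workflows behind it that do the governed, structured work. Everything named here is invented for illustration:

```python
# Illustrative sketch: a conversational agent in front, structured
# workflows behind. The language layer only recognizes intent; the steps
# that actually run are the predefined process, not model output.

WORKFLOWS = {
    "address_change": ["verify_identity", "collect_new_address", "update_records"],
    "card_dispute": ["verify_identity", "collect_dispute_details", "open_case"],
}

def handle_utterance(utterance, llm_complete):
    # Let the LLM map free-form language onto a known intent.
    intent = llm_complete(
        f"Classify this request as one of {list(WORKFLOWS)}: {utterance}"
    ).strip()
    steps = WORKFLOWS.get(intent)
    if steps is None:
        return "escalate to a human"
    return f"running workflow '{intent}': {steps}"
```

The design point: the model handles the language, while the workflow stays deterministic and auditable.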
Now let's get away from your customer service self-service bias and really talk about... actually, can we go back a slide? Because I do want to pick up on this: Don mentioned three letters on the stage this morning, MCP. And this is my slight nerd coming out, if you know what that is. It's the way an agent can actually use tools and resources in the background.
And if you think about it, looking at that graphic, that kind of looks like an MCP, but with a human driving it. So as you start thinking about where these agents apply, you can't leave out the human. We've heard that message over and over. It's: how do we empower the humans? How do we make their jobs easier? How do we make them less error-prone? And understanding, at base, that it's really built around the human.
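For the curious: MCP is the Model Context Protocol. The sketch below is not the actual protocol, just the shape of the idea it standardizes: tools are registered with descriptions, the model asks for one by name, and the host, not the model, controls what actually executes:

```python
# Toy sketch of the tool-use idea behind MCP. This is NOT the real
# protocol, just the shape: tools are registered with descriptions,
# the model asks for one by name, and the host executes it.

TOOLS = {}

def tool(name, description):
    # Register a function as a callable tool with a description the
    # model can read when deciding what to use.
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("lookup_order", "Fetch an order's status by order id")
def lookup_order(order_id: str) -> str:
    return f"order {order_id}: shipped"  # stand-in for a backend call

def call_tool(name, **kwargs):
    # The host stays in control of what runs, which is where governance lives.
    return TOOLS[name]["fn"](**kwargs)

print(call_tool("lookup_order", order_id="A123"))
```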
And Jeremy started with the customers: knowing that we can work to resolve those issues, it's a better experience. We are all customers ourselves, so we know what it means to accomplish what we need to, easily. No hassle, not repeating myself, making life good. But as a business owner, as somebody driving technology, don't forget about your employees.
We're empowering them. There is this fear of AI that's across the board, and you've heard the value prop, right: we're making them more efficient, we're getting them better prepared. But I heard an interesting point in the U.S. Bank breakout yesterday, where the employee felt more appreciated by the company because they were being invested in.
And it wasn't about cutting them out. It was: we're securing your job, because now you're better at it. And it made their life easier. That was just an interesting perspective; you talk about the humans and the work they're doing, but don't forget about their human experience. And then the last part: how do those humans interact with the workflow? That, as we get to that MCP model, or agentic, is the real power.
And so actually, Jeremy, I'm going to hand it to you. I'm going to sidebar for just a second. And again, I'm putting on the old chatbot hat. Do people believe that this stuff is going to really work, like, as a conversational front end to stuff? Are people bought into that at this point? Because I'm an inherent skeptic, but even I'm kind of coming along to that.
That was like a solid four people. Yeah. Stay strong. See you in five years. Okay. That's enough. There's some buy-in. All right. So every GenAI or AI session has to have a joke about AI gone bad, and so we're going to do that now too. This happened internally; we were working on playing around with some of the RAG tools.
People know what retrieval augmented generation is? It's the: use your knowledge articles within the prompt, and then the LLM says, given this data set, I'm going to provide the best answer to the question. So it's a mix of a little bit of the unpredictability of an LLM, but it's mostly grounded in your content.
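A minimal sketch of that RAG pattern. A real system would use embeddings and a vector store; a naive keyword retriever keeps this self-contained, and the knowledge snippets are invented:

```python
# Minimal RAG sketch: retrieve the most relevant knowledge snippets,
# then ground the prompt in them. Keyword overlap stands in for a real
# embedding search so the example is self-contained.

KNOWLEDGE = [
    "After marriage, register the name change with the city registry office.",
    "Update your property records when your household changes.",
    "Review medical insurance coverage after major life events.",
]

def retrieve(question, docs, k=2):
    words = set(question.lower().split())
    return sorted(docs, key=lambda d: len(words & set(d.lower().split())),
                  reverse=True)[:k]

def grounded_prompt(question):
    context = "\n".join(retrieve(question, KNOWLEDGE))
    return (
        "Answer ONLY from the context below. If the answer is not there, "
        "say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(grounded_prompt("What do I need to do after getting married?"))
```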
So we are grabbing some content from a public sector entity trying to provide recommendations to newly married couples. What do you need to do next? You need to go register with the city. You need to change your property records. You need to maybe look at your medical insurance, whatever the sort of recommendations might be.
So we loaded that data set in, and then we went and asked: okay, hey, I'm a newlywed. I'm not, by the way; 21 years this year. That's my big applause line. What to do after getting married? This actually happened. What do you guys think it said? Not wrong. I mean, wrong, really wrong, but not hallucinated, right? Rob mentioned this in his keynote: these are tools that make associations, and they can be unpredictable in the associations they're making.
Right? This is a thing that happens after people get married, as is death or all sorts of other things. I don't remember which LLM was in operation behind this one, but for whatever reason, this is what came back as the recommendation. This is mostly a throwaway gag, but, you know, governance matters.
These things can still be pretty unpredictable, even when you use some of the best-practice grounding techniques that back up a lot of these systems, that back up our own systems that we offer at Pega. You need to be thoughtful about some of the downsides of this stuff. Well, and this is also a good point, too, because you hear the hallucinations, you hear the fear, you hear how you have to wrangle AI.
But this is a great example of the opposite. It didn't have enough context to say, oh, these are people who can do other things. So from a sequential standpoint, after you get married, you get divorced or die. And so as you start considering where you're using this, how you start tuning these prompts, how you start tying it into your workflow, it's not only let's make sure we're safe and not stretching and causing more errors or those hallucinations.
It's: does the bot, does the agent, actually have enough information to do what we want it to do? Are we putting it into a position to fail? And so that leads right into our next section, and why you're probably here: how do I get started? We talked about this not being a Pega product pitch. AI isn't magic.
How many people here have implemented technology before? For good, right? That's why we're here. It's still technology. It's a new way. It's a different way. It's kind of like learning a new language. But there's still basic structure that you have to go through to make sure your project's going to be successful.
Now these projects are moving much faster, so you have to be better prepared and ready to be flexible and adapt to what's coming in. And so we've got five quick steps. I know everyone loves a list. We're going to rip right through it, and you're going to learn exactly how to do AI. But step one is assessing that readiness.
So if you go to build a house, you need to know: is the land there? This doesn't have to be a six-month project or a year project. This can be days, or even minutes, if you have access to the right resources. It's the basic understanding of, you know, is your technology able to adopt the AI? If you have a process that's broken because an API doesn't work well, an agent isn't going to go in and fix it.
And so as you step back and start looking at what we're doing, that's the most important question: what are we doing, and why are we doing it? Do you have a business objective? Do you have baseline metrics before you're chasing new metrics? And what is your reason to use agentic? Now we're back to that ambiguous definition.
Why are you actually doing this? Once you understand that why, you validate that the technology is ready to use. Do you have people that are experienced enough to do this? Going back to learning a new language: it could be a verbal language or a coding language. Yeah. I mean, the COBOL example that Karim was doing, right.
You can't hire someone to change one of your COBOL programs if they don't know COBOL, right? Like, you need some basic skill sets here. I'm curious again, back to the pressures question: are the pressures you're feeling so vague that people are having a hard time even understanding what the business goals are, or what to do first? It feels sort of overly simple sometimes to say, understand what you're trying to do.
But I find in a lot of these conversations people come in with, you know, "the goal is use AI," and that's the end of the goal. And I think we really need to unpack that. We wouldn't do that with any other software, I don't think. Right? "The goal is use expense management."
Right? I think the goal is something other than that. And so I think unpacking, really understanding, challenging back, and breaking it down into the wins you could get early and often is really important. So I know it sounds ridiculously simple, but again, I've been in a lot of those conversations that are not yet there.
They're simply driven by this sort of theoretical idea that AI is going to change the world, and it may do so, but it's going to do so in steps, and hopefully it's going to do so with business purpose. So really, really take the time to do this work upfront. And so you've got your business objective.
You've figured out that this could probably work. We've vetted it a bit. So now we're getting into developing that plan. And again, it's just technology. It's going to be faster. It's going to do some amazing things. You have to control it. Be ready for a phased implementation. We heard that awesome story from Vodafone in under 40 hours.
Well, what comes after those 40 hours? What they're amazing at is the ability to keep iterating and making it better. You have to change the mindset: are you actually agile? Can you show up and say, we do have a six-month goal or a year goal, but we have deliverables at week one? I mean, with AI you're talking hours sometimes, and being able to understand: okay, we have this long-term vision.
What are the small steps to get there? That's really how you're going to de-risk the project. I mean, the question of: are you agile? Who here is agile? Who's been told, like really told, they're agile? Whose tech team is agile, right? It's an easy concept to say, or to be told: yes, we're agile, we're moving quick.
But at the end of the day, there's change management, there's call centers, there's customer service. These are long projects, whether it's because of the technology or the people. So setting the expectation of shorter cycles and showing progress through them is, in my experience with the AI projects we've run, the best way to articulate that quick value. It is also, and I'm curious about people's experiences here, the best way to de-risk a bunch of what can feel like amorphous risk objections, right? With a lot of AI stuff, there are a lot of different AI technologies in play. A lot of people react differently to the supposed risk profile of those, and it can be very hard to get started on something big, because there are a lot of checkboxes, a lot of reviews, a lot of objections.
Sometimes those objections are crisp and clear, and a lot of times they're kind of just nervousness. I'm broadly a huge agile proponent, but I think there might be something very specifically aligned to the need to prove value quickly. A lot of this stuff sits in an interesting tension between a commitment and an experiment.
You need to understand that up front. Are we committing to this, or is this a research project? And that's fine. But just know that, and then use the possibility of rapid cycling to convince people to come on board and to de-risk the projects. Right. So I would say people were surprisingly not enthusiastic about being agile yet.
I'm going to make an agile pitch, which is: use the specifics of the risk profile of AI projects as a way to adopt some of these principles internally. We're actually doing some of that ourselves, right? The LLM stuff is changing so fast that if you're not adopting the change quickly and adapting to it quickly, a two-year project here almost doesn't have meaning, right? So there are characteristics of these projects that lend themselves to, almost require, being iterative in these cycles: de-risk them, do them in small pieces, and then learn quickly, because this stuff's changing really quickly.
And as we progress from step two to step three, this question becomes very important, because the next step, you'll see, is building a foundation: understanding commitment versus experiment. When you shift to something actually happening, it is fine to go through and do that experiment and see if it's a fit, right?
That is part of de-risking. Making sure that doesn't turn into shadow IT on an unsupported project is the hard part. We've seen it several times now, where it started as a great proof of technology, and then they took that proof of technology and started iterating right away without this concept of building a strong foundation: taking what was that phased plan and actually building the structure around it, to make sure you have the proper governance, to make sure that you're able to integrate those systems, both the data and the applications that are running, to make sure that people are empowered. And the theme through all of this, going back to basic technology and not magic: it's people, process, and data. As you start setting this up, it's not rocket science. It's not magic. Make sure you have your plan, make sure you understand where you're going, and then go through the basic steps one after the other.
The people part is a very important thing, though. We heard about it, I mentioned it, right: empowering the person. Or your agent? Your employee? Sorry, it's a representative now; we can't say agent anymore. But there are pieces of leveraging AI we sometimes forget. Some people don't understand how prompts work.
They understand how Google searches work, or they might understand how to navigate their current knowledge management system, and so they have their little shortcut searches they know will get them to articles. We had one implementation where we were getting terrible feedback in our user acceptance testing because the agent never had an answer.
And one of the searches she was using was "credit card table." Yep. Because of the way their attribution worked in their KM, it got her to the one little article she needed, perfectly. She figured it out five years ago and has done that search every time to get to the article she needed to send.
And then we got terrible reviews because the AI couldn't get her to that table by searching "credit card table." Things to think about. It's not just the training and tuning of the prompt and the AI and how it functions. Make sure the people have a basic understanding of what it is doing, so they can use it appropriately.
I know we already called back to the U.S. Bank session, but it was a really good session; if you weren't in it, I hope you watch it. And that was just another point that she made, which was: hey, we started rolling some of this stuff out and it was really good. And then it made some people nervous, and we actually paused it for a little bit.
Right. And we went and heard their concerns, and we took those into account. We did a little tuning, and now we're scaling it much more successfully. Right. And it's really easy, as a bunch of technology-centered folks, to be really focused on the tech side. But that's not the only thing that matters, right? A little sidebar story.
And again, this is a secret pitch for the Customer Service Simulator down in the booth. When we built that, we first stood it up two years ago, a year and a half ago, and it was explicitly a hackathon project. It just came out of some great developer's head. Right? It wasn't a strategy. It wasn't a pitch.
And we knew it at the time. It was an experiment, and we nursed it as an experiment for a while. And '25 is actually the major commitment release, where we leaned way in and rebuilt the whole thing from scratch, because we now think it's awesome. But there was that distinction between: yeah, it was a cool experiment,
yeah, we weren't sure how we were going to use some of those LLM backends yet, we'll sidebar it for a little bit. And then we nursed it along, and we made that commitment six months ago, eight months ago, to really develop it. Again, secret pitch: if you haven't done it yet, please go down to the Innovation Hub and do it. All right. Next up: we've built our foundation, so things are plugged in, things are integrated, technically turned on. At this point you're starting to use it. Now it's time to get into iterating and tuning. How many times have you heard the word tuning in the past few days? Does anyone know what tuning is? I'm a musician.
I tune my guitar all the time. It does not help my wife's happiness, I can tell you that. So when we talk about tuning, there are a couple of distinctions we need to make in this iteration, because you hear about LLMs constantly, and about tuning the model. The majority of us in this room, I would say all of us, are never going to tune a model. That is somebody else,
really smart people in a room doing a thing, and you go leverage them. Great. The tuning, or what we could call optimization to make it a little more comfortable, not magic, is around your prompt. It's around your data. It's going back to this rapid feedback cycle in four short phases. We're down to projects that we're doing in weeks and days.
And what's amazing is that in those projects we will have performance issues. It's assumed we're not going to chunk that data right, right out of the gate. The catch is, to get started you have to know, especially in your first couple of iterations, your first couple of use cases, that things are going to be wrong.
You have to learn how to work with the model. Everybody's data is different, the model is great, and you have to learn how they can work together to accomplish the goals you're driving toward. It goes back to that balance between giving it too much free will or too little information, so that you're getting the wrong answer as you start going through those iterations.
Chunking is something you'll do with your data team, hopefully, unless you're enabled yourself. Do we have any data people here? No, it's a CS crowd. Some of us nerd out. So: when your data's right, that's one piece. When your prompt is right, that's the other. And in agentic processes, there's a chance you might not care about data at all.
If you're going to do work and you're just taking action on very structured data points, a template, you don't even have to worry about it. You may never hear of a vector database in your life. So now we're getting into tuning: if you're optimizing those prompts, it's understanding the subtleties of how a model or an agent will walk through the instructions you're giving it.
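On the data side, "chunking" just means splitting documents into retrievable pieces. A minimal sketch, with size and overlap numbers that are purely illustrative; tuning those numbers is exactly the iteration being described:

```python
# Minimal sketch of document chunking for retrieval: split text into
# overlapping windows so an answer isn't cut in half at a boundary.
# The size and overlap values are illustrative; tuning them is the work.

def chunk(text, size=500, overlap=100):
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "Your knowledge article text goes here. " * 60
pieces = chunk(doc)
print(len(pieces), "chunks; first chunk starts:", pieces[0][:40])
```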
If you have a math-based brain, you'll be okay. I don't know if you've seen the videos online; there's a YouTube video where a dad who's an engineer says, okay, kids, write me directions for how to put peanut butter on this bread. And they do great things: oh right, open the peanut butter, scoop, put the knife in the peanut butter, put the peanut butter on the bread, and then you're done. And so the dad takes the knife, opens the peanut butter, sticks the knife in, and then takes the jar and puts it on the bread. And that's what we're dealing with. While AI is incredibly powerful, it's also very simplistic.
You have to think in baseline steps, and that's just muscle memory you'll learn as you go through it, not being afraid to make those mistakes and really just getting started.
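As a hedged illustration of thinking in baseline steps, here is the same instruction written two ways; the wording is ours, not a recommended template:

```python
# Illustrative only: the same instruction written vaguely and in baseline
# steps. The explicit version leaves far less room for the
# jar-on-the-bread reading.

VAGUE_PROMPT = "Help the customer update their address."

EXPLICIT_PROMPT = """You are a customer service assistant.
Follow these steps in order, one at a time:
1. Ask for the customer's account number.
2. Verify identity with the last four digits of their phone number.
3. Ask for the new address, including postal code.
4. Read the new address back and ask for confirmation.
5. Only after confirmation, call the address-update workflow.
If any step fails, stop and hand off to a human."""
```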
And then as you take that, the tuning of how the data works and how the prompts work, you get into how you're aligning it with your workflows. And this again came out in the keynotes a bunch today. I was actually just in the Forrester session in the previous hour, and his primary definition of agentic was that it basically ties to workflow, right? The difference is it's multi-step. But back to the right-tool-for-the-job question: sometimes you're tuning a prompt, and sometimes you're just fixing the step.
And you'll start to figure that out. We'll all start to figure that out. Right. But in the early days, especially in regulated spaces where most of our customers live, you're probably fixing a lot of steps, or just updating a bunch of steps, or handling the exception in a workflow, right? You're tuning some prompts for certain sorts of things, but again, you're probably not yet at throw-the-whole-problem-at-an-LLM-to-do-multi-step-reasoning-over-it.
You're probably backing most of that up; we would certainly recommend you're backing most of that up. So there's the tuning scenario. But again, to Brian's point, tuning more broadly, right: I play the piano. It's a much less frequently tuned instrument. I don't do it every time I sit down and play.
But pay attention to the backing workflows that we hope you're using behind your agents. Blueprint them. Iterate them in Blueprint. Get together as a design team. Keep working them in Blueprint, right? That's where your consistent changes will take place. Back to the demo this morning, the musical score demo, right? I thought that was a great drumming demo.
Right. You need to have the core sort of guide, the core workflow behind the stuff, if it's ever going to be consistent. Otherwise, testing and tuning hits a sort of dead end. And then the last step, which we all love, is scale and innovation. And this is wide open. There needs to be good governance.
Make sure you're understanding the one-journey-at-a-time idea. Right. Sprint before print? Print before sprint? What's the order on that one? Who's got the tattoo? "No sprints without a Blueprint." That's what it was. There it is. Tattoo last night. Yeah. Me and Alan. It's a little sore. But also understand that one journey at a time isn't a literal single journey.
That's just making sure you're taking the proper steps every time. Every time you go to iterate or scale or start that new project, don't just assume everything's ready. Go through the quick assessment. Go through developing that plan. Make sure your foundation is set. Make sure you're actually getting through that iterative phase before going to your next one.
Resources now become a very interesting thing. If you're able to use AI so fast, so easy, anybody can do it; well, you have to make sure you don't overextend, that there is control and governance around the projects that are expanding, and that by leveraging those workflows you're staying within the realms of what you need to accomplish.
And I think this is your line, but I love it: success begets success. The iteration thing, right? Prove value. There's real value in this stuff that can be very low-hanging, right? It doesn't all have to be giant, multi-step agentic. There are a lot of automations that can be done with little atomic actors across your workflows, right? Get buy-in. Get people to tolerate the risk.
Get people to sign off. Get people to invest. Right. There's nothing about that that's AI-specific. This is just a generic thing, right? Which is kind of the whole point. Look at what we just walked through. Right. Yes, there are flavors of AI that are interesting.
There are flavors of risk that are interesting and distinct here, but there's nothing on here that says "this is an AI project," right? This is just what you hopefully do when you look for investment, when you look for outcomes, when you look for success in implementing something. Again, not very many of you were on the agile side yet.
Broadly speaking, though, this is what we do. This is how we implement projects. AI isn't different, really, in a structural sense. And if people are running around with their heads cut off because they think it's wildly different, we all need to take a breath and say: it's technology. It's software.
We've all been through a lot of cycles of software adoption before. It is revolutionary and it will drive real change. It is also software. I love how you said running around with their head cut off; or you have the fear of having your head cut off because you're supposed to be using AI. That's your quick checklist.
This is your leverage to very easily, without much knowledge of AI, say: okay, yeah, absolutely, we'll adopt agentic, but we need to go through these first couple of steps to make sure we can do it. And it's your safeguard to de-risk and make sure this is real and not just magic. All right. Are you ready? Everybody's ready.
You just need to plan for it, right? It's not magic. It feels like magic because it can talk to us. The projects aren't magic. They're the kind of projects we all do. They need to be accelerated and iterated, which, again, you should be doing anyway, but AI will probably force it. We probably didn't hit as much of the high-value stuff in this session as we could have, because we weren't really trying to pitch product.
But again, go down to the Innovation Hub. If you've done some material thinking here, or had to react to some pressures and start developing plans, hopefully it's clear. Again, I'll put my self-service hat on: there are really obvious dollar signs here, right? And it's not necessarily way off in the distance.
This stuff is here today. And as Brian was saying, just run it like a project, right? Do we all feel better? Just run it like a project. Projects are easy. Just run it like a project. That's all we do. That was easy. All right. We've actually got not very many minutes left. We'll take questions if you have them.
I think there are mics, or any other jeering or applause, or if anybody has a great song. Yeah.