
PegaWorld 2025: Contextualizing Customer Experience: Bringing together the power of CDH and LLMs

Discover how organizations are transforming marketing by integrating Pega's Customer Decision Hub (CDH) with the evolving intelligence of large language models (LLMs). Our thought leaders delve into the roles of CDH and GenAI in delivering highly contextualized, intelligent customer experiences that provide significant value to clients. Through strategic insights and case studies, you'll gain a deeper understanding of the rapidly evolving AI landscape. Join us to be part of the future of marketing.

PegaWorld 2025: Contextualizing Customer Experience – Bringing Together the Power of CDH and LLMs

We are ready to start. So first of all, welcome. Really excited about this session, and those of you who are here today are in for a treat, because what we are going to be talking about is how you contextualize the customer experience in this new world of agentic AI, of large language models, and of the existing AI that's out there as well.

We're going to talk about how you use agentic, how you integrate it, and where it makes sense to use Pega CDH: where does it fit, and where does it not fit within this broader structure? And how are companies executing? With me today I have Giles Richardson from Wells Fargo. Giles is an EVP and head of personalization at Wells Fargo.

Wells Fargo serves over 70 million customers across all channels, and he has a tremendous amount of expertise implementing CDH within Wells Fargo, as well as implementing AI and GenAI, so he'll bring that perspective. And also next to me I have Ren Zhang. Ren is a managing director for advanced AI at Accenture.

She also leads AI for financial services at Accenture, and prior to joining Accenture, Ren was a director of data science at Amazon, where she led deep learning and personalization. So this is just going to be a really interesting session, as you can tell from the folks I have next to me.

So I'm going to jump in, not do a lot of talking here, and actually let Ren and Giles do the talking. But I'll start with you, Giles. I'm not sure if everyone here knows the story; I know you've presented before at PegaWorld, but for folks who haven't heard it, maybe you could talk a little bit about the context and the story of what you're doing over at Wells Fargo.

Morning, everyone. I hope you've enjoyed the sessions so far. We've been working with Pega for about five years. We implemented CDH because we wanted to achieve next best conversation for our customers. It probably wasn't the only thing we needed to do, but it was the thing we wanted to start with first, because it was going to be so complex.

We were looking to modernize our whole martech stack. We wanted to be able to develop audiences simply, and for every form of marketing we wanted to be able to activate those audiences, whether that's a message to a customer, a digital experience, or a piece of paper. And we wanted to be able to measure the whole thing.

So we had a vision of where we wanted to go, and this key central plank of "can we put a message in front of a customer that's relevant and timely" was crucial to the overall success of the thing. So we got in early with Pega. We spent a little bit of time looking around to see who would best fit our needs on that.

And we've had a good and fruitful relationship with them, so it's at the very heart of everything we do. Fantastic. And maybe it's worth jumping into the topic of personalization. Giles, as you said, you were implementing Pega and the solution around it, but what do you see happening in the world of marketing and personalization right now, and how are AI and agentic impacting that? Yeah, sure. Good morning everyone. Really great to be here and to be on stage talking about this topic. So if we're thinking about AI in marketing, it's not really new by itself.

Digitalization of marketing has been going on for decades, right? When I talk to my clients, a lot of the time they say, we have been a digitalized bank for God knows how long. So how have we been doing with that whole digitalization journey? We started digitalizing because we wanted personalized experiences, and we wanted to bring scale and efficiency.

But after decades of transformation, customers are telling us, and our research shows, that they find many of the interactions they have with the bank, or with any firm, have become increasingly transactional. They don't feel the emotional connection they used to have before digitalization.

When there was the mom-and-pop shop, when they went to the branch and saw their personal banker, all those kinds of things. They find many of the digital transactions we have with customers are functionally correct but emotionally void. So that's what we are trying to change. And with GenAI and agentic,

we actually now have an opportunity to restore that emotional connection they used to have. That's what is unique about GenAI and agentic: GenAI is able to mimic the human, intuitive interaction the customer used to have with a personal banker or their personal shopper. And agentic,

what agentic can do is bring in the autonomous execution we wouldn't have been able to get with traditional AI. So traditional AI is still there, continuing to provide pattern recognition, reliable outcomes, and operational efficiency. Adding GenAI on top brings that emotional connection, and agentic adds the automation.

We now have a system that can be always on, that can anticipate customer intention, and that can give the customer effortless execution. That's something customers have been asking for all along. I would say the customer ask has never changed; they want the same things they've always wanted: remember me, personalize for me,

let me trust that you are acting in my best interest, and make it effortless. Now we can deliver that, so all of those become possible. So it's really fascinating where we are. We haven't achieved that yet. Yes, that's what I'm saying: we haven't got there yet, but now we have the tools that can truly make it possible.

I love the way you say traditional AI. It's been around five minutes and it's already traditional. So, we had the opportunity to meet in person and talk about this yesterday, and we largely squandered our time because, as fellow small-dog owners, we talked about how useless they would be in saving you from a potential bear attack.

Well, they are cute, I will give them that credit. An attack by a bear? Don't have a small dog. But we did actually talk about AI a little bit, and I think you sort of set out with a question like, how does it affect CDH? It doesn't replace it.

It helps it deliver on its promise. The tech itself could always run at this scale. We looked at what we wanted to do with personalization and said, right, we're going to move towards a more journey-orientated approach, so we can use the messages and experiences we put in front of customers to either initiate something or, if they're already doing it, to help advance it.

So we thought, we can do that. Then we got into the nuts and bolts of what that actually means. We started looking at whether we're looking at one audience for a message, or perhaps three. We're a bank; we sell financial products.

So if we have a message, say we want you to get a credit card, then if we use the engine, it can work out the top-propensity people for getting a credit card, and they probably will get a credit card. The bottom half won't, right? But we still want to talk to them, and talking to them about "just go get a credit card" is going to be really ineffective.

So we want to talk to them about doing the things that will make them a high-propensity person: it could be understanding their credit, or rewards. So you've got this thing where you haven't got one audience anymore, you've got three. And we haven't got one journey stage they're on, either just starting, halfway through, or nearly finished:

we've got three. And then creatively, with creative variants, we know there are probably better ways to speak about the same thing to the same person, so we've got three. So you go three, by three that's 9x, by three that's 27x. If you look at the amount of marketing we need to do, we start looking at our whole stack and ask: what can handle roughly 30x the volume without a problem?
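
To make that arithmetic concrete, here is a minimal sketch in Python; the segment, stage, and variant names are made up for illustration, not Wells Fargo's actual ones.

```python
from itertools import product

# Hypothetical illustration of the combinatorics described above: three
# audiences, three journey stages, three creative variants per message.
audiences = ["high_propensity", "mid_propensity", "nurture"]
journey_stages = ["starting", "midway", "finishing"]
creative_variants = ["benefit_led", "how_to", "reward_focused"]

variants = list(product(audiences, journey_stages, creative_variants))
print(len(variants))  # 3 x 3 x 3 = 27 distinct pieces of content per message
```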

And it's not CDH; CDH will do that. The challenge is upstream: how can we build that much marketing, that fast, to feed it? And one thing I do want to clarify, taking a step back, because you mentioned agentic and GenAI and traditional AI, which, as you said, has been around for, you know, six seconds.

I think sometimes people get a little confused and think agentic equals large language models, agentic equals GenAI. Maybe you can talk a little bit about what agentic is. What does it do above and beyond GenAI? Yeah, this space is moving so fast, and everybody sometimes uses the terminology with their own interpretation.

So in my view, and I think this is probably recognized across the broader industry, when I think about agentic, the key thing GenAI lacks is action. GenAI does not take action. A lot of the time I like to think of GenAI as a brain: it has intelligence, it can interpret and give insights.

However, it cannot take action. Agentic, the way I describe it a lot of the time, just gives GenAI hands and feet. Now it can use its hands to grab additional tools, get data and additional insights, and instead of just telling you what it believes you should do, it can do it for you.

Without that, end-to-end autonomous execution is not possible, so GenAI by itself will not do the magic; agentic will. That's why, when I was looking at the title, where we talk about LLMs, I felt it would be a little amiss if we didn't talk about agentic in this whole journey.

Yeah. That's the acid test. Go around the platforms that make up your stack, and the people that operate them, and ask that 30x question and see what answer you get. So if in CDH we say we've got 200 conversations, what if it's 2,000 or 6,000? Does that give you any problems?

And there will be a little bit of discussion about, well, volume, and it will take longer to learn, but it's fine. Look at our analytics: could we analyze much more complex journeys from customers? Yeah, we could. Could we concurrently run all those audiences? No problem. It's the part where you turn to, and we did,

we turned to our content-building people and said, could you build 30x the content? And they said, yeah, as long as we do nothing else for three months and only do this one thing. And you go, all right. It's not a great answer, but that's the reality of the situation. That's the unscalable element. So that's where we need to apply the hands and feet, to automate and do things.

And it also changes the dynamics. When we're building marketing, we're a regulated industry, and a lot of you will work in that; even if you're not regulated, it's reputationally important to get things right. So it means you have to spend a lot of time making sure there are no errors and things look right.

Because of that, you do all this planning, review, and sign-off of what is effectively the mockup, and then you spend all your time trying to build something that looks like the mockup, and it's incredibly labor intensive. But if you're automagically building stuff, the cost of goods is basically zero.

So you could just stare at the production line coming out and go: no, change it. The machine changes it. And yes, now it's correct, I'll work with that. It changes the whole dynamic of how you think about and build things. And just very quickly, I'm glad you said "now it's correct,"

because then we'll be able to do it. I would say when LLMs first came out, we could not do this kind of end-to-end automation, because the error rate was too high; the accuracy wasn't there. And if you're truly going to do agentic, you're really talking about accuracy that has to be over 95%, otherwise you just have to have too many humans in the loop.

I'd love 95% to be fine. It's actually possible; it's happening right now. We did ask. We said, what's the risk appetite for marketing error? And the answer was zero. Well, there's a risk appetite for everything we do in the bank. Even with humans you will not have zero. You lend money; do you expect to get it all paid back?

I don't think so. You've got a dollar figure you anticipate for risk. Well, let's talk a little bit about contextualization for a moment. This gets into the role of GenAI around contextualization, and even the role of CDH around contextualization, and this again causes some confusion.

So what does GenAI do from a contextualization perspective, and what are some of its limitations? By contextualization I mean understanding where the customer might be within their journey, understanding who that customer is, or whether they're happy or sad at the moment.

Right? Yeah, I can start. I think there are multiple dimensions to that question. If you're thinking about a successful execution, it has several stages. First, you really need to understand your customer. I don't think we talk enough about the customer understanding engine. Before, we understood the customer by asking them questions and then looking at their behavior. But I keep saying you need to understand the customer's implicit preferences, not only their explicit ones. Now, with GenAI and agentic, and with an always-on system continuously monitoring, you will probably have a much deeper understanding of the customer,

of the many things they actually prefer. That understanding is key. Another part of agentic I haven't talked about yet is shared memory. If you have a central memory shared across the customer journey, across not only marketing, I know we talk about marketing, but really the ecosystem of marketing, advice, and service all together,

then you will be able to serve the customer better and continue building that understanding as well. So I think understanding the customer is a continuous journey. At the beginning your understanding is probably limited, so what you can do is probably going to be more generic, right?

As you deepen the understanding, you can do more. Now, you talked about a kind of funnel. At different stages the customer's needs vary; even though it's the same customer, their needs vary, and how you should talk to them is going to be different. I think LLMs are going to help with that conversation.

For example, at the upper part of the funnel, customers are still exploring. Some of them may know what they want; some of them may just want to see what's out there. So generating that brand awareness, that excitement, is going to be key, and LLMs are good at doing that.

You can do that not only with language but with images or even video. We actually did it with one of the major banks out there, using these tools to transform their marketing, and we noticed it is especially important for the new, young generation. They want to be self-reliant on this kind of digital journey:

hands on, really personalized, "give me the keys to completely change my experience." And we found out together that 25% of new acquisition from that younger generation came through the new app, where you give them more control and a more personalized experience with LLMs, all that kind of excitement.

It is so important. But when you move down the journey, it's a little different. Now the customer feels, I'm interested, so give me an offer, give me a product potentially tailored to me. That kind of tailoring becomes important, along with letting me ask questions so I understand it better.

And then: OK, I've made up my mind, I'm going to buy the product. Operations become extremely important. You want seamless execution; they don't need to do much, hopefully they don't even need to lift a finger. Voice, right? And everything is done. Now you're onboarded; you don't need to do much.

Your product is active. And once you are my customer, my service is this whole ecosystem I'm talking about: one orchestration able to handle marketing, advice, and service in one place, so you don't need to go hunting all over the place; I will handle it for you. Yeah, interesting, and terrifying. So we have tried to expand the aperture of what you consider to be marketing, because you've really got the messages you put in front of someone, where you're either trying to initiate a new customer journey or advance one they're already on.

That really is their role: use the messages to do that. But then you're also increasingly either pushing customers to digital experiences or they're finding them themselves. So you have to look at it holistically: there's no point having a really great message if, game over, your digital experiences suck, right?

You have to look at it, and I think that's often the harder thing to see, or the harder thing to make changes on. But there's an interesting thing: everyone will sit and nod at customer centricity, at talking about what the customer is interested in and how they want to be talked to. That's great, but you have to get there by making sure the business is ready to be customer centric.

Because you work in businesses where there are a number of stakeholders, and everyone's got products to sell, and growth targets, and all the rest of it. And you end up with people saying: that's interesting, however, I need to talk about this. Forget what the customer is interested in,

I need to talk about this. And you want those lines of business to be intentionally myopic. You want them to say, I want customers to adopt my service or buy my product, because that makes them brilliant at their job. But you have to use CDH, or as we call it, the customer engagement engine, to do that arbitration role.

Otherwise it ends up with customers giving you all these great inputs, and you're all set to give a great experience, but you don't: you override it with what you want to say as an organization. So that journey is tough, and we've ridden it for a while, using data to explain how, if you start to try and override the system, you get fewer outcomes, because customers won't do it just because we tell them to.

They tend to be willful things, these customers. But you need to do that, and we've had occasions where we've said, right, we can twist it one way and have it talk about something we feel is strategically important, but we will get fewer outcomes.

We've simulated it, and sometimes we've lost the battle, done it anyway, and then had to come back with the figures and say, look, we told you so, in a nice way. Unless you do that, it remains just talk: everyone nods to it, but you don't do it in reality. So I think the customer's will is the thing that has to be respected, for sure.

What the customer comes here to do, you have to serve. Everyone will nod to that statement and then not do it. And another thing I'll mention: I like what you said about the organization. We always say the customer really should not have to learn your org chart. You really should be customer centric.

They shouldn't have to worry about which organization does what. We have so many different voices because our organization is structured that way, and I know it's really hard. A lot of the time I feel the science, yes, it's challenging, but sometimes it's the easier part, because most of the time you have a relatively black-and-white answer.

When it comes to the organization and people, that's the most challenging part, and if you don't take care of that part, it doesn't matter how beautiful your science is. As long as you can eradicate the people, we'll be fine. There's another angle, particularly in CDH, and I think this is where GenAI could help as well: we are generating a fantastic amount of data from these interactions. We call it our AI explainability endeavors: we literally pull every interaction we have, every decision we make for the customer.

So every time someone logs in to the mobile app and is shown a message or a banner, or a branch interaction, email, app push, all of that is an interaction, and we do 5.5 billion interactions a month, so we're running at scale. With the explainability tool we developed, we pull all that data out and we show, for that decision, what was the propensity for the customer to do that thing and the value of them doing so,

and what were the propensities of everything we didn't say. So we've got this enormous data set that we pull out, really to win the explainability argument, if you like. But it represents a fantastic asset: how do we mine it, and how do we perhaps use GenAI to say back to the marketer, this is what you need to do differently?

That's a real advantage. How do you turn it into a recommendation source, rather than just "I proved the engine was working properly," into "this is how you make it better"? Do you have a propensity problem? Do you have a value problem? Do your conversions suck because your digital experience is terrible downstream?
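
As a rough illustration of the kind of record being described, here is a hedged sketch in Python of what one logged decision might contain: the action shown, its propensity and value, and the alternatives that were considered but not shown. The field names and structure are assumptions for illustration, not CDH's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScoredAction:
    action: str        # e.g. "credit_card_rewards_banner" (hypothetical name)
    propensity: float  # modelled likelihood the customer acts on it
    value: float       # expected value to the business if they do

@dataclass
class DecisionRecord:
    customer_id: str
    channel: str                      # mobile app, email, branch, push...
    shown: ScoredAction               # the winning treatment
    not_shown: List[ScoredAction] = field(default_factory=list)

    def explain(self) -> str:
        # Surface the strongest alternative that was suppressed, so the
        # marketer can see what the engine chose against.
        runner_up = max(self.not_shown,
                        key=lambda a: a.propensity * a.value, default=None)
        msg = (f"Showed {self.shown.action} "
               f"(p={self.shown.propensity:.2f}, v={self.shown.value:.0f}).")
        if runner_up:
            msg += f" Closest alternative: {runner_up.action} (p={runner_up.propensity:.2f})."
        return msg
```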

That type of thing. So I think that's where we'd like to push it. Yeah, I'm glad you mentioned explainability. A lot of the time people ask me, what is the barrier to AI adoption? I always feel the biggest barrier is trust.

And it comes from so many fronts: the customer trusting that you have their best interest at heart and that the recommendation makes sense to them, and the organization asking, can I trust this engine, is it doing the right thing? There are so many angles, and without the explainability you mentioned, I think adoption is going to suffer,

and the customer is not going to follow your message or your suggestion, even though you have their best interest at heart. That's a big thing we did, actually. Recently we launched an app with an international skincare brand, and one big thing they wanted to focus on was trust.

I assume... trust in skincare? What sort of trust is that? Is that why they've been using 19-year-old models to sell face cream for the last 40 years? That's exactly right. I think many of the ladies here would know that even though you follow all the reviews and buy the best product at the best price, you still have a huge graveyard at home

of all that skincare. It's such a challenging problem. But see, skincare is just one example; no matter which industry you're talking about, it's about trust. If you're able to demonstrate that the customer can trust the AI system you have, you have a path. You should find any of our decisioning strategy team, who are sitting in the third row here, I notice.

Just go to them and say, it's kind of a black box, isn't it, and watch them attack you. Yeah, it's absolutely key. Well, that last comment is interesting. There are a lot of things you've talked about here, right? The concept of a black box in decisioning, and some people certainly say that.

Well, GenAI is sort of a black box, right? More so than maybe what Pega is doing from a decisioning perspective. But some of the things you've touched upon that I find very interesting are that today there are different swim lanes you can put, let's say, Pega CDH into, versus what GenAI and, to some extent, agents can do as well.

So Giles, for example, you brought up creating content: you can't create as much content as you need, and GenAI can help you create more content, more messaging, and help drive that explainability. As you said, even though you're doing CDH, it certainly can help to do that.

What you're using CDH for is making billions of decisions, what was it, 5.5 billion decisions per month, at that scale. And I'm sure you have guardrails around that, right, for eligibility and also for compliance.

You'd hope so, yeah. I'll ask your team in the front if that's the case. And I think it's going to be interesting to see, in the future, what role GenAI and agentic are going to play. Maybe I'll ask you, Giles, because you've been working so many years with Pega: would you see a future where GenAI, or the LLMs, could start to do more and more and take that on?

Or are there going to be limitations? I mean, today, I guess there are some limitations, right? I think, yeah, the explainability of it kills it. I can't see a world now where you just let it go, chat with it, and see what happens. I think you need to be able to say: I can be assured of the eligibility to have that conversation with that customer,

and I can be assured of the outcome. So we see it very much as the thing that enables; it doesn't replace CDH, it enables CDH to work at its true capacity. So for us, if we look at that side of the stack, we're happy with it. The investments we've put into it will start to pay off when we can start feeding it more. Our investment in the agentic space is on the other side of the house, which is: how do we just produce enough to feed it?

Because there's so much work to do there. It's not just about changing the way we do stuff; we have to reengineer how all the teams work. We're in a model where we have platform teams who build the platforms, and then operations teams who use the platforms.

So there are people, hands on keyboards, who say: right, I'm configuring this conversation, I'm building this thing in the content engine, or I'm building this audience from a selection of dropdowns. They all become an agent, so that team becomes an agent development team.

So instead of an operational team of people, you have an agent development team and a few human QAs. That's a huge shift. We've also talked a little bit about how we review and approve things from a legal and compliance standpoint: that moves from the start of the process to the end, and we need to move it into the tool that's actually producing the work, the place the customer will see it from, rather than this rendering upstream. Literally in a meeting fairly recently, someone was describing this step in the process: well, then we print it out and we put it in front of a lawyer. You do what? You print it? That has to move to the end, to the screen.

That's old school. Somewhat. But I do want to allow a little bit of time for folks to ask questions; we've got a little over ten minutes left. Maybe I'll turn it over to the audience. Does anybody have questions they want to ask about agentic, CDH, contextuality, GenAI,

or small dogs and bear attacks? Over here. Hey, thanks for doing this. This one's for Ren; I'll leave you alone, Giles. You were talking about the LLM's ability to synthesize data about the customer for the purpose of understanding that context. If we're doing real-time decisioning,

what is your advice for the amount of curation and structuring of that data in order for the LLM to provide that synthesis in time? Yeah, that's a great question. I talked about accuracy earlier; I haven't talked about latency. For this to be truly autonomous,

you need high accuracy and you need low latency, so being able to help the LLM give a timely answer is very important. I would say, still bring along your traditional AI. That's the first thing I would say. I sometimes hear people say, well, just have the LLM give me all the recommendations. Please don't give it all the tasks.

Your traditional AI actually can help you a lot in bringing those insights and giving structure. So that's the first thing I would say. And two, when you do get into the LLM space, obviously a lot of the time we talk about agents; they get all the glamor. But don't forget about the knowledge layer below.

That's the most important part, and we don't talk about it enough. How do you really bring in your structured and unstructured data and create that semantic layer? We talk about training LLMs for agents; we don't talk about training them for the embedding layer. I don't want to get too technical, but those are the parts you really need to get ready.

And there are ways to ensure that when you try to retrieve information, you retrieve it efficiently, if you build the embedding layer well. Otherwise you're going to say, I'm going to search all of my sources, and it's going to take the LLM forever. Especially now that we're talking about reasoning models:

there are things happening every day, it's really exciting, and a reasoning model can do a lot. However, it's also really slow if you think about it. So structure that, and use the right model for yourself; sometimes you don't need a large language model. I see the trend toward small language models, trained for certain domains or certain tasks.

They may be much, much better for your purpose, and you don't need as much computational power either. So I think: find the right tool for your problem rather than always going for the most advanced one, pay attention to your knowledge layer, and don't forget your traditional AI. That's probably the advice I would give.
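
As a rough sketch of that embedding-layer idea, and assuming a stand-in embed() function rather than any particular model, documents are embedded once up front so retrieval becomes a vector lookup instead of a scan of every source:

```python
import numpy as np

# embed() is a placeholder that returns a unit vector for any text; swap in a
# real embedding model in practice. The dummy vectors keep the sketch
# self-contained but carry no semantic meaning.
def embed(text: str, dim: int = 64) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # stable within a run
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

class SemanticIndex:
    def __init__(self):
        self.docs, self.vectors = [], []

    def add(self, doc: str):
        self.docs.append(doc)
        self.vectors.append(embed(doc))  # embed once, at index time

    def retrieve(self, query: str, k: int = 3):
        # Dot product of unit vectors = cosine similarity; take the top k.
        scores = np.array(self.vectors) @ embed(query)
        return [self.docs[i] for i in np.argsort(scores)[::-1][:k]]

index = SemanticIndex()
for doc in ["credit card rewards FAQ", "mortgage pre-approval steps", "checking account fees"]:
    index.add(doc)
# With a real embedding model, the card-rewards FAQ would rank first here;
# the placeholder embed() only demonstrates the mechanics.
print(index.retrieve("how do card rewards work?", k=1))
```

Retrieval quality depends entirely on the real embedding model behind that semantic layer; the index structure itself is the cheap part.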

Yeah, I think Alan landed a good point yesterday, when he took time away from boasting about his chess thing. He said he was putting that chess problem into ChatGPT and it was coming up with bad answers.

Then he went to a specific chess engine. That was a nice way of making the point about those sorts of small language models. If we want to build an audience, that semantic layer matters, so that when they're specking it out and saying, I want an audience of customers who've been with us for five years, that type of thing, to develop growth in my area,

then we need to do some things, like defining these terms: what do we mean by growth? Does the engine understand what that means? And also the individual data elements we can pick: that sort of data dictionary the agents can access, that's relevant and made easy to understand, is really, really key.

And we want to connect that a little bit to the constant battle we have when we're building audiences and then doing targeting. The marketer would say: right, I want people who've been with the bank a few years, don't have this product, look like they'd want this product, and have blue hair.

And you're like, whoa, wait, what was the last one? Why blue hair? I just feel they would. And we say, well, okay, but we've got hair color as a variable within our decisioning model, so if it is predictive, the model will pick it up. You don't need it as a target; you don't need to lock it in as one of the targeting criteria.

You can just leave it open and let the engine decide whether it's predictive. So that sort of thing, when you're specking audiences and then modeling the targeting, having that interchange where you challenge it: do you need that in there? Just leave it to the model; we can perhaps leave it out.
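
A minimal sketch of that contrast, with made-up customer fields and a toy propensity function: hard targeting excludes people outright, while the model-led version keeps the audience broad and lets the attribute in only as a feature the model can use if it turns out to be predictive.

```python
# Hypothetical customers; fields and weights are illustrative only.
customers = [
    {"id": 1, "tenure_years": 6, "has_card": False, "hair": "blue"},
    {"id": 2, "tenure_years": 7, "has_card": False, "hair": "brown"},
]

# Hard targeting: anyone without blue hair never sees the message at all.
hard_audience = [c for c in customers
                 if c["tenure_years"] >= 5 and not c["has_card"] and c["hair"] == "blue"]

# Model-led: keep the audience broad; hair color is just one feature the
# (toy) propensity score can weigh, not a gate.
def propensity(c):
    score = 0.3 + 0.05 * min(c["tenure_years"], 10)
    score += 0.02 if c["hair"] == "blue" else 0.0   # small, learnable effect
    return min(score, 1.0)

broad_audience = [c for c in customers if c["tenure_years"] >= 5 and not c["has_card"]]
ranked = sorted(broad_audience, key=propensity, reverse=True)
print([c["id"] for c in hard_audience], [c["id"] for c in ranked])
```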

I think that would be super effective. Yeah, that's a really important point, right? Having it in there as an attribute versus using it as a hard filter. You've hard-chosen upfront that anyone who has non-blue hair is not seeing this thing. And we don't want to be hair-ist.

Real world. Are there any other questions? Looks like... you could just generate questions. I have some questions of my own; I just wanted to see if other people had any first. I could ask ChatGPT. In fact, you all should be asking ChatGPT or Google your questions. So, what are some key takeaways as we start to get to the end of the session?

Maybe I'll start with you. What should people take away if they're looking at how to bring contextualization, how to bring all these technologies to bear within their organizations, from what you've learned at the bank? I'm not sure if it's a takeaway.

I think it's a position we walked in with, and perhaps it's been confirmed: I think we're better at ideas than execution. I think we know these things, and if we see them on a screen, be more contextual, be more genuine, we're like, yeah, yeah, that's fine.

We know all that, but we lack the ability to actually execute on it and put it into market. So, to reiterate that point, this gives us the ability to add that scale; we call it marketing velocity, and we can actually add that marketing velocity with this. I think it's going to be interesting: we see a future where there's almost a surface that the marketer will interact with, which then calls on a series of agents underneath it to perform specific tasks, like build me an audience.

And that would be a very audience-centric agent that would just build that audience, and then it would call something else for the next task. The questions we have to answer, working within our organization and with our marketers to see whether it's what they want, are things like: we always assume that we're the start of the process.

As a martech platform, you come to us, we ask, right, what do you want to do, and then they start specking it out with us. But we might be the end of the process for them. They might have already done this work offline on their own, and they're already thinking: I know what I want to do, I've worked it out, and I feel it's a good idea.

So we need that surface to be able to take information that's already been worked out, rather than just saying, right, start from scratch with us. I think that's going to be interesting. And I think a takeaway is that Pega is kind of well placed, and it's going to be interesting to see whether it capitalizes on it.

It's got years and years; I mean, it's a 30-year-old company. It's all business process management, it's all workflows, and agentic AI is kind of powerful, so it'll be interesting to see how they do that. Could Blueprint be that surface? Maybe; it could be in with a shout. We'd have to have a look.

So yeah. No, that's great. Ren, takeaways from you? Yeah. When I think about it, I will reiterate trust, for one. I truly feel there are so many angles to it. There's also the capability side: how do you bring your organization along and allow them to build outcomes your organization can trust, so they can invest in it and make it real?

That's really important. And make sure you earn your customers' trust along the way. There are so many things that are possible, and there's a new possibility coming out every day. So that's one. Now, when it comes to Pega, I do agree Pega is in a unique position to harvest all that power.

They are already at a place where they can become an even stronger enabler. How do they integrate with more capability providers, for content generation and other things? The whole thing about agentic is not about you doing everything, even as a system or platform; it's how you provide access to tools and data to enable your users to build agents.

And how do you integrate with other platforms to present your experience, either within Pega or outside of Pega? I think having a system that is connected, that creates an ecosystem beyond yourself, is probably going to make Pega even more powerful. I think agents will then be able to leverage all the connections they've built and make it shine even better.

So I'd love to see Pega do more of that, to empower their system. They need to do less of the content generation, because that is terrible. I don't think they need to do more of everything; they just need to figure out what they should focus on and bring in other expertise. The connective tissue part is the good bit. Yeah, and you brought up a really interesting point and a really interesting takeaway: what agentic allows you to do is really take advantage of lots of different solutions within the ecosystem, lots of different capabilities that many of your organizations have already developed over the years,

or being able to modernize them so you can take advantage of them. Well, that's the risk. The risk is you look at it and go, cool, finally we can paper over the cracks. So we're trying to temper that by saying this doesn't allow us to just paper over a load of bad systems that don't really work properly; we have to make some choices.

I think GenAI doesn't solve the problem, it moves the problem, right? Agentic potentially fixes the problem. Yeah, and you're right, you don't want agentic to be RPA on steroids; that's not what it was built for. But the idea, especially with SaaS solutions like Pega, is figuring out where they fit, because you're not going to be able to do everything, right?

It's not just any one company; no one is going to be able to be everything to everyone. So where do you fit within... Obviously you haven't been in any of the vendor pitches we've been in over the years, because I can assure you that's the opening gambit of everyone.

Thank you very much, everyone. Fantastic discussion. Thank you.
