PegaWorld | 42:46

PegaWorld iNspire 2024: AI as the Next UI – Driving Intelligent Straight-Through Processing with Intent-Led Interactions

Discover how Pega's AI-powered solutions are revolutionizing user experience. Explore the replacement of traditional interfaces with AI for seamless interactions. Learn the intelligent by design & manual by exception strategy for effortless task automation while maximizing ethical and responsible human-AI collaboration. Gain actionable patterns for incremental Pega Gen AI adoption within your organization.

Thank you, everyone, for joining the session today. We're looking forward to great interactions, because we decided to make this session more interactive. You had a huge amount of information this morning, with a lot of great keynotes and learnings, and we'll try to live up to that and make sure you also get great insights from this session. Before that, I'll start with a short video while people are coming in. So that was a quick introduction to the Infosys Topaz brand. We have invested in AI for some time now, creating the assets that are part of that overall brand.

What we are going to discuss today builds on that foundation. We'll look at how you can leverage AI across your enterprise ecosystems, with a pragmatic approach to getting it rolling inside your enterprise. First, a quick introduction to the topic, AI as the next UI. This session focuses on how you can bring in AI to replace the user interfaces you have today. Those interfaces come in many forms, whether you use them for data capture or for decision making. Through the session we'll introduce the concepts, the approach, and the patterns for driving that change. Before that, a quick introduction.

So we introduced the Infosys Topaz brand, which is about bringing AI assets and trained models together. As a Pega practice, we have been in this relationship for more than 16 years. We have around 7,000 BPM and RPA experts, plus around 3,000 Pega resources globally, which is something we are proud of as a practice. We have quite a lot of successful customer engagements behind us and look forward to engaging with you further. Without delaying things, I'll hand over to Brett to introduce himself and give the Telenet overview. Hi, I'm Brett. I'm director for Pega delivery at Telenet, which means I have a couple of teams who develop, test, and bring Pega Customer Service to life within Telenet. I like this slide because it shows we've been really busy, and I think we have been.

First, on the right-hand side, just so you know, we're part of Liberty Global. Our flanker brand, Base, is not on here but is equally important, especially nowadays. I like to say we're the best telco in Belgium; some people in the room might disagree with me. If you look at our Pega journey, I like the rocket, because it didn't always feel like that. We started back in 2017, which reminds me of how old I'm getting, with a huge transformation program that took us a long time to deliver. We started off with CDH and Pega Customer Service, back then on Pega 7. So if you look at where we are today, it's taken us a while. Here are some highlights that are really important for us.

First, the virtual contact center. Kate, can you raise your hand? This is Kate. She has been, and still is, responsible for the VCC program, which is revolutionizing how we handle interactions with our agents and where we integrate with Pega Customer Service as well. We've also done a bunch of cool stuff in the B2B space: a small implementation of Customer Service within B2B, sales automation in B2B, and then a very important one for us, internally marketed as "segment of one," the implementation of CDH, which we have now rolled out across all of our channels. We actually have a really nice use case now in our app where CDH is driving customer loyalty, something we're really proud of.

Now, as anyone who's been dealing with Pega for a long time will know, we're not on cloud yet. We're still on premises, so we've been upgrading a lot. To give you a bit of insight, because 2024 is not on here: we're hopefully upgrading this year to 23.1, and next year we hope to start our cloud journey and catch up with all the latest and greatest. By the way, I'm happy there are so many people here; it might have to do with the fact that AI was in the title. So this is a bit about us. If you have more questions, feel free to reach out to me.

As you can see, we have been busy, and we're happy with where we are today. Sure, with that we'll open the floor. What I wanted to take some time to discuss is this: with AI adoption being so rapid, and demand in the market growing just as fast, do we think a full business case is critical for AI adoption, or should we keep experimenting until we see results? That's the interaction we want to have in this part of the session. Do you want to add your experience on that? So, if you look at Telenet as a company, we like to experiment. Within our architecture team, we all love the new gimmicks.

We love the marketing talk. I heard GenAI four times in Karim's session; I think it will be repeated quite a lot here. However, as the visual here shows, I like to do POCs, but I also want to get value out of the platform and bring direct value to our customers. You need to look at that value and craft your roadmap accordingly. Experimentation is great and fun, but you need to balance it against actually making money, saving costs, and generating revenue. I was wondering: how many people in the room are actually experimenting today with GenAI use cases towards your customers? Oh really? Okay.

You got some hands? That's good. And how many of you actually have a value-driven roadmap for it already? Okay, you're not surprising me, Kate. So from our perspective, and Kate is the one helping drive this roadmap, the way Telenet approaches it is really based on value, because we're in a tight market.

We want to fight for every customer to stick around, but also look at how we can make more money and do more sales, and whatever automation capability can help with that. That is how we approach it. Absolutely. And we should not go by the color of the balls, because gold is something people will pick quite often. There has to be a balance between how many experiments we run and ensuring that value actually comes out of those experiments, and then there comes a time when we move into projects and deliver. Moving to the next topic, this is one Brett and I will discuss further. In today's world, we have seen how the past played out: we had a lot of manual documentation that we moved into the likes of SharePoint, and the document workflows we knew.

So there was an era of automation over roughly the past decade and a half. Then we saw a complete shift into process management and RPA, and that's where digital came in more: whatever you had in the physical world became digitized inside the system. But I don't believe we got enough automation out of that, automation in the sense of being more clever. What we did was move the physical steps of a process into a digitized ecosystem, with the process flow driven by a somewhat smarter rules engine. But our overall requirements, design, and execution were still based on manual intervention. When we do business process management, we start by thinking about which UI screen to build, what the screen should look like, what the next flow screens are, and so on.

So every discussion is about the next manual interaction, and I think that limits our ability to innovate toward getting the process fully automated. It was the right thing at the time, because AI and the surrounding technologies were not mature enough to bring that necessary cleverness or intelligence. But I believe now is the time to think differently and bring AI in, not just in vertical use cases but horizontally, to make the UIs we see today better and more intelligent. What do you want to add on that? Yeah, if you look at this slide, there's a lot on there, but it shows a bit of where we are as well. In our BPM, in what we're doing in Pega Customer Service, we've built our cases starting from an agent's perspective. Obviously you start from a customer perspective, but you focus only on that one piece of work that goes to your agent.

That's how we approached it. Now look at how the world is changing today: we get competitors on the market who are digital first, so we need to catch up. I think there are two things. One, you want to evolve this picture toward automating a lot of those flows, offering them in whatever channel you want, and also using AI in your human-assisted channels so the agent can focus on the conversation and have all the knowledge and use cases at hand, plus all the data in your system. The second piece I'm curious about: look at the scale of this, and the slide Karim showed with those huge architectures. It could have been us, right? It looks the same. So I'm very curious about our software development lifecycle.

How are the GenAI capabilities offered in Pega, but also in a myriad of other platforms, to be honest, because everybody's shouting about it, going to assist us in that transformation? That's one thing. And the second thing: how much change management will you have to do in your organization, with regard to architecture and road mapping, but also reskilling your people, and what benefits will that bring? So yeah, I feel like we're in this era where tomorrow the world is going to change; it's changing today already. I'm just curious how to catch up. Thank you. So what lies in the future?

What we saw is the era we know today. Looking at the future, we want to start looking at how interactions are going to change. For example, I have a touch screen on my iPad and my phone, and when I open my laptop I'm tempted to touch that screen too. That's the level of habit you see on the consumer side when it comes to natural interaction. Today's screens are quite mechanical in nature: clicks and drags, text boxes, fields. But I believe there will be a time, in the near future, when interactions become more natural. People will come to a website expecting a natural conversation: you type and ask, "What can I buy for my new home?"

And you see all the products come up. That kind of interaction is coming. Along with it, you will also see a lot of movement into IoT and system-to-system interactions, which is all about raising a set of events that come into your systems and on which you have to act. When this happens, when these industry forces come in, we don't believe there will be time and effort for manual interactions at the back-office or front-office level. What it means is that we need applications that are truly AI native, applications that drive as much straight-through processing as possible. If you don't have that, then with the amount of change happening at the user-interaction level, we will not be able to keep up with the experience users will demand. When we look at the layers in terms of how things would change, the first part is the interaction. The interaction layer is about user experience, and we believe intent capture is the layer that will come into play for the enterprise application.

Capturing the intent is more important than just giving a UI to the user and asking for one piece of information after another. Capture the intent, and let the system reach the goal from the intent alone; that's the level of automation we are looking at. When events come in, through interactions or through other system channels such as chat, you also have to build context understanding: events plus AI give you context, situational awareness. The system tells you why things are happening at a particular time and in what context, rather than you trying to analyze sets of data and reports to make a decision. So you have a context-understanding layer.

Then you have the rules and orchestration we still see today, combined with AI, which means autonomous action. AI combined with the UI and with events gives you context, and AI combined with orchestration and rules gives you an autonomous actions engine. Then you have data plus AI, which gives you knowledge hubs and a knowledge ecosystem. For example, survey results from your customer service side sitting as bare data in a data lake don't mean anything beyond an NPS number; the knowledge is what matters. Data plus AI continuously gives you the knowledge required to run your autonomous action engine and satisfy the context and the interaction. That's the change we would see. And from the API standpoint, the integration part of the world, we would also see API plus AI.
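The layered model described above, intent capture, context understanding, and autonomous action, can be sketched in a few plain functions. This is purely illustrative: the function names, the keyword matching standing in for an intent model, and the event list are assumptions, not Pega's actual API.

```python
# Illustrative sketch of the three layers: intent capture, context
# understanding (events + AI), and autonomous action (rules + AI).
# A real system would use trained models; keyword rules stand in here.

def capture_intent(utterance: str) -> str:
    """Intent capture: map a free-text message to a known goal."""
    text = utterance.lower()
    if "internet" in text and ("down" in text or "slow" in text or "issue" in text):
        return "troubleshoot_connectivity"
    if "buy" in text or "purchase" in text:
        return "product_advice"
    return "unknown"

def build_context(intent: str, events: list) -> dict:
    """Context understanding: combine the intent with recent system events."""
    return {"intent": intent, "recent_outage": "outage" in events}

def decide_action(context: dict) -> str:
    """Autonomous action: rules plus context pick the next step, no UI needed."""
    if context["intent"] == "troubleshoot_connectivity":
        return "notify_known_outage" if context["recent_outage"] else "create_troubleshooting_case"
    if context["intent"] == "unknown":
        return "route_to_human"  # manual by exception
    return "start_" + context["intent"]

action = decide_action(build_context(capture_intent("My internet is down"), ["outage"]))
print(action)  # -> notify_known_outage
```

The point of the sketch is the shape of the pipeline: no screen asks the customer for a category or a product; the intent and the event context are enough for the system to act, and a human is only pulled in when the intent cannot be resolved.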

Now what does that mean? Today you have APIs that are mostly XML and data contracts being exchanged. We expect that tomorrow you would have prompt interfaces: applications exchanging not XML data but prompts. When that happens, your application still needs to deliver that straight-through processing, because the experience is changing rapidly and demanding a lot of execution on the application side. That's the change we expect. Brett, anything you want to add on this? I'm really intrigued by your last remark on how APIs will change their behavior, because it means that in your architecture you'll have to figure out completely new ways of working. Looking at this picture, and coming back to what I said earlier, I think this will enable much better business outcomes through better interactions with your customers. If you can use your data and have better automation using AI in multiple shapes and forms, whether decisioning or GenAI, I'm pretty sure you can not just save costs, as I mentioned before on call handle times, but also offer a very personal and automated experience to your customers, which will increase your NPS.
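The "API plus AI" idea above, a prompt interface replacing a rigid data contract, can be sketched as follows. Everything here is hypothetical: the function name, the request shape, and the keyword routing that stands in for an LLM are assumptions for illustration only.

```python
# Hypothetical sketch: instead of exchanging a fixed XML/JSON schema, the
# caller sends a natural-language prompt plus minimal structured context,
# and the provider returns a structured result. A real implementation
# would put an LLM behind call_prompt_api; this stub routes on keywords.
import json

def call_prompt_api(prompt: str, context: dict) -> dict:
    """Stand-in for a provider whose interface is a prompt, not a schema."""
    if "balance" in prompt.lower():
        return {"status": "ok",
                "answer": {"account": context["account"], "balance": 120.5}}
    # The provider can ask for clarification instead of failing on a
    # malformed payload, which a fixed contract could not do.
    return {"status": "needs_clarification", "answer": None}

request = {"prompt": "What is the current balance?", "context": {"account": "A-1"}}
response = call_prompt_api(request["prompt"], request["context"])
print(json.dumps(response))
```

The design point is that the contract becomes a conversation: instead of a breaking change when a field is renamed, the provider interprets intent and can respond with a clarification turn.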

So there's a lot here that everybody wants to do. I'm just very curious about how we can actually implement it, and that's going to be the next big thing for me. Indeed. And that's where we want to move. When we discussed the center-out application or architecture approach, we would expect AI to change the game at the core. And when we look at the components at the core, events, rules, orchestration, data, and APIs play a very significant role. Now, what does it mean when we adopt that architecture and those layers, and then ask what it really affects in the applications we have today?

And how does it change them? We are bringing a paradigm shift: intelligent by design and manual by exception. What I mean is that when you're designing your screens, you have to think about how intelligent your application has to be. When you have a screen, you challenge yourself: do I really need a UI in this fashion, one that just captures data without enough intelligence? Then you bring AI into the picture. Let the AI do the required decisioning, data analysis, and data capture on behalf of the agent, and only when the agent is genuinely required, in a human-in-the-loop fashion, do you bring that angle in. That makes your application more intelligent.

So you have a complete workflow, process, or experience driven by intelligence, rather than a UI in front with process automation behind it. When that change happens, your applications will behave very differently: more straight-through, more responsive, with consistent results, fewer errors in what the agents work on, and a completely new experience, not just for your knowledge workers but also for your customers. That's the change we would see. So intelligent by design and manual by exception is the paradigm shift that goes with the architecture change we just talked about.
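"Intelligent by design, manual by exception" can be reduced to a very small sketch: the system acts on its own when the model's confidence is high and only creates a human task otherwise. The threshold and names are illustrative assumptions, not a real Pega configuration.

```python
# Minimal sketch of "intelligent by design, manual by exception":
# high-confidence decisions flow straight through; everything else
# becomes a human-review task. Threshold is an illustrative assumption.

CONFIDENCE_THRESHOLD = 0.8

def handle(decision: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{decision}"        # straight-through processing
    return f"human_review:{decision}"    # manual by exception

print(handle("approve_refund", 0.95))  # -> auto:approve_refund
print(handle("approve_refund", 0.40))  # -> human_review:approve_refund
```

The design inversion is the point: the default path is automated, and the screen for the agent only exists for the exception branch, rather than every case passing through a screen by default.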

Anything you want to add? Well, I have a question. Is anyone starting an entirely new project today, someone who could start adopting this? I was just wondering, because I wanted to talk to that person. But this is basically an architecture that a lot of companies will want to adopt, especially when dealing with customers, like I said before. There are a lot of opportunities there to get this going, because it will bring a better experience for sure. Does anybody have a comment on this one? This is a very key transition point in our session today. If anyone has a question, please feel free.

Raise your hands. So yeah, go ahead. Yes, we will be. I know patience is wearing thin for some more practical material, and that's okay. But this was meant to build the context of where we are going and why, because we are not going to show you a lot of vertical use cases; we are going to show a more horizontal one instead. Right?

So get ready to be amazed. We're about to revolutionize customer service, replacing interfaces that need manual intervention with the power of artificial intelligence. The journey begins: a customer, frustrated with their internet connection, reaches out to customer support via WhatsApp. Pega's AI-powered chatbot springs into action, instantly responding and prompting the customer to explain their issue in detail. The customer explains that they are experiencing internet connectivity issues at home. Pega's AI listens attentively, understanding the nuances of the problem. AI takes charge: based on the customer's input, Pega's GenAI identifies the issue and product type automatically, creating a troubleshooting case with all the relevant details.

Pega Knowledge Buddy, a vast repository of knowledge, is tapped: Pega's GenAI extracts the top five troubleshooting suggestions tailored precisely to the customer's situation, empowering them with self-service options. If the suggestions don't work, Pega's AI doesn't miss a beat: it uses sentiment analysis to detect the customer's frustration and automatically routes the case to the technical back-office team. When the case lands with a technician, they're not starting from scratch. Pega's AI-driven analysis provides a comprehensive view of the customer's journey so far, including the self-service steps already taken. The AI goes a step further, providing the technician with a list of advanced checks and actions, guiding them toward a quick and accurate resolution. This eliminates guesswork and streamlines the troubleshooting process. Once the root cause is identified and fixed, Pega's GenAI automatically generates a detailed case summary, saving the technician valuable time and ensuring accurate documentation. Pega's AI then crafts a personalized resolution-summary email for the customer, keeping them informed and reassured in a timely manner.
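The routing step in the demo, escalating to the back office when self-service fails and the customer's sentiment turns negative, can be sketched like this. The word-list "sentiment model" and the route names are assumptions for illustration; a real deployment would use a trained sentiment service.

```python
# Sketch of sentiment-driven routing: if self-service did not resolve the
# issue and the customer's message reads as negative, escalate the case to
# the technical back office automatically. The word list stands in for a
# real sentiment model.

NEGATIVE = {"frustrated", "angry", "still", "not", "broken", "worse"}

def sentiment_score(message: str) -> float:
    """Crude negativity score in [-1, 0]: fraction of negative words."""
    words = message.lower().split()
    hits = sum(1 for w in words if w in NEGATIVE)
    return -hits / max(len(words), 1)

def route(message: str, self_service_resolved: bool) -> str:
    if self_service_resolved:
        return "close_case"
    if sentiment_score(message) < -0.2:
        return "technical_back_office"
    return "offer_more_suggestions"

print(route("this is still broken and I am frustrated", False))
```

Note the order of checks: resolution status first, then sentiment, so a satisfied customer is never escalated no matter how their earlier messages read.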

With the issue resolved and the customer satisfied, the technician closes the case, confident that Pega's AI has facilitated a seamless and efficient customer service experience. This is the future of customer service, powered by Pega's AI: faster, smarter, and more empathetic than ever before. As you see, AI at multiple interactions in the journey does the necessary decision making on behalf of the agent and eliminates the need for user interface screens. This is how AI is the next UI, driving a paradigm shift in process automation with intelligent straight-through processing. You too can create applications that are intelligent by design and manual by exception. Get ready to experience the difference. So, who wants this? Yeah.

Go ahead. So, that is something where you need to fine-tune the models. Sorry, could you repeat the question?

Can you repeat the question? Sorry, go ahead. The Pega AI, in the context of the client: for example, the client might have five data centers, and other context relevant to the client. How does it get that? Yeah, so there are two parts. One is that Pega's GenAI is already fine-tuned for a certain set of troubleshooting journeys.

But the enterprise context has to be further fine-tuned into the models, and that will depend on the organization. Take a telecom organization, for example: as you mentioned, there would be assets and inventory associated with it, and those have to be fine-tuned in. The Knowledge Buddy we showed was populated with the operational instructions for that organization, and that's how it was tuned to respond. Same thing elsewhere: we are doing this for one of the largest banks in the UK as well.

There, we are taking their knowledge instructions and putting them into a copilot mode, which tunes it, and then when an agent is working, they actually get that. So our models only work at the base foundation level, for example understanding a balance sheet or a trade exchange; the customer invoices and so on still need to be provided. That is something you need. Any other questions on this?

Because we wanted a practical demo: it showed a lot of places where Pega AI is being used to augment the traditional user interfaces, but also to replace them. One key thing you will have seen through the demo is that the agent is not actually typing anything. You saw that, right? It was always a selection. We are moving the knowledge worker into a mode of mainly decision making, rather than lots of data-capture tasks, moving things from one screen to another. It was always insights being shown by the system, and the agent was only selecting, saying "this is fine" or "this is not fine." That's the mode we want to take the agents to, rather than focusing on a lot of manual actions.

Right. Sorry, go ahead, please. Training the model? I think it's the same question, right. On the training, I don't know if you heard: how much data? Yeah.

It depends on the use case. In the one we just saw, you train the models to troubleshoot a particular customer journey. You have a complaint about an internet fault, so you first get that information into your Knowledge Buddy, and then it produces the responses. Similarly, you would have multiple sets of operational instructions for different journeys that need to be tuned with the model. But you can do that incrementally.
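The incremental point above has a practical corollary worth sketching: only answer from the knowledge base for journeys that have actually been populated, and refuse the rest. The dictionary stands in for a real Knowledge Buddy or vector store; the journey names are made up for illustration.

```python
# Sketch of a coverage guard for incremental knowledge ingestion: journeys
# that have not been populated are refused rather than answered, since an
# un-tuned journey is exactly where hallucinated answers would appear.
# The dict stands in for a real Knowledge Buddy / vector store.

KNOWLEDGE = {
    "internet_fault": ["Restart the modem", "Check the cable", "Run a line test"],
    # "billing_dispute" journey not yet ingested -> must not be answered
}

def suggest(journey: str, top_n: int = 3) -> dict:
    if journey not in KNOWLEDGE:
        return {"status": "not_covered", "suggestions": []}
    return {"status": "ok", "suggestions": KNOWLEDGE[journey][:top_n]}

print(suggest("internet_fault"))
print(suggest("billing_dispute"))
```

The explicit "not_covered" status is what lets the surrounding workflow fall back to a human, manual by exception, instead of letting the model improvise an answer for a journey it was never tuned on.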

But you will have to be careful that your journeys don't start using the Knowledge Buddy where you have not fine-tuned it; that may actually create more hallucinations. Yes, sir? How would the agent and customer interactions work? Sorry, it was a WhatsApp channel. While the natural conversation went on over WhatsApp, the agent was getting automatic notifications, and the interaction was happening on WhatsApp. But that's a very fair question.

We still had to show the demo with some screens; if this were fully automated, I would have had no demo to show. It would have been one click of a button and then just a natural conversation. The idea was to automate as much as possible. Going further, even the selections you saw could become scoring-based, and then you would not have any agents working on this at all: a direct WhatsApp conversation, with AI creating the case, running the process, and responding on WhatsApp, without any agents. I had to do it this way because I had to show something on the screen. Do you want to add something? I think the demo as shown maps well onto what we have at Telenet; we have digital messaging as well.

So this demo was quite relevant. I'm very curious how we can bring these kinds of things to life, obviously, but it's super impressive to see how, by using automation, you can keep the focus on decisions and on doing the right things for your customer, whether it's selling or servicing. This will take a lot of the cognitive load away from the agents, and you can train them on the relationship, which is quite important. Most agents, I don't know how it is for people in the room, but how many systems was it again? Fifteen in front of them. They're already switching between systems, so the more you can automate there, and the less they have to fill out manually, the better. That's a very good point, because I don't think the agents in our organizations really want to do a lot of manual activity.

If you ask them what they most want to do, they will say: I want to interact with the customer. The interaction with the customer, bringing the right empathy and helping them, is the bigger reward for them. What we can do is put them in that driver's seat, where you only guide them with insights and let them have the discussion with the customer, rather than fiddling around with a lot of applications to get things moving. In the example you just saw, while the customer was responding, there was a moment when the sentiment analysis kicked in: even before the agent was given a particular insight, the system had already analyzed that the customer was not happy with the result, and it drove into a troubleshooting use case. That kind of change in the flow is dynamic in nature. Sorry, I can't hear you. There's a mic here if you want to use it.

Sorry about that, I can use it. Yes. The agent is responding to a set of insights the AI offers as options, and moving things forward; that's where things stand now. You have an option to go a step further, where you put AI in to evaluate those insights, take the best one, and move ahead. That's why I said I did not show that, because then I would have had no screen to show.

But you are absolutely right; there is an option to do that. Okay. So now, when we discuss how the UIs can be replaced or augmented by AI, that is the mindset we should bring to requirements, design, and analysis. Being practical about it, we tried to put up some patterns in which UIs are used today, so you get some direction: say you have data capture through observation, a decision-making UI, flagging a status update, looking up information, initiating an action, trying to understand the context, or understanding complex data. Your screens and user interactions may not be limited to these patterns; if you have more patterns, fair enough, and then we can discuss how AI can augment those too. But we thought it useful to put up some patterns here.

And then there are the AI solutions, which can help you either replace those UIs completely or augment them in a copilot mode. That is the way we think you should go through the user interfaces in your program: look at the patterns they fall into, so you already know which AI solutions can be applied. Then, when you are doing your business analysis and requirements, if a business analyst is asking for a user interface for data capture, you know you can challenge it: why are we not using channel analytics, sentiment analytics, or a voice-to-text kind of thing? Just to give you an example: if an agent gets a call and has a screen in front of them where they capture the data, the call can be converted from speech to text, the text can go straight into the screen, and the agent just has to click a button. So there are multiple options where AI can cut out a lot of keystrokes and work on behalf of the agent.
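As a rough illustration of that speech-to-text data-capture pattern, here is a minimal Python sketch. The transcription step is mocked (a real deployment would call an actual speech-to-text service), and the field names and extraction rules are invented for illustration:

```python
import re

# Hypothetical form fields an agent would otherwise type in by hand.
FIELD_PATTERNS = {
    "account_number": re.compile(r"account (?:number )?(\d+)"),
    "issue": re.compile(r"problem with (?:my )?(\w+)"),
}

def extract_form_fields(transcript: str) -> dict:
    """Pre-fill form fields from a call transcript instead of keystrokes.

    The transcript would normally come from a speech-to-text service;
    here it is just a string, and the extraction rules are illustrative.
    """
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(transcript.lower())
        if match:
            fields[name] = match.group(1)
    return fields
```

In practice the extraction would be driven by an NLU or GenAI model rather than regular expressions; the point is that the agent reviews pre-filled fields and clicks once instead of typing everything.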

We just need to keep enhancing these patterns, and these patterns become part of your regular requirements and design discussions. If you look at those different types of solutions, it is a myriad of new options to improve your journeys. I think it will also enable us to move the classical IT-versus-business discussion towards IT-with-business, because by doing this you will automate a lot. We saw Pega Blueprint as well; if you put that underneath to automate your blueprinting process, you can very quickly prototype with your business, or even have it done by your business, which will then increase your delivery speed and value to market. And I think the combination of all of those things is potentially setting everybody up for success. That's how I see it. Superb.

Sorry, yes. And here is a quick view of the different types of AI use cases you can look at. While we looked at some of the patterns in which you can replace the UI with AI, we also put up some concrete use cases. Case summarization, which you saw in the demo, saves the keystrokes of creating a completely new case summary. Conversational web forms use AI so that the UI screens change dynamically based on the natural conversation that is going on. And agent training can also be transformed: you drive the instructions so that agents are trained much faster. These are some that we added; is there anything you would add from your experience? I think the ones that we are very interested in, and again it is very early days because we are still crafting our roadmap.
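The "conversational web form" idea just mentioned can be sketched as a loop that asks only for what the conversation has not yet yielded. This is a simplified sketch, not a Pega API; the field names and prompts are hypothetical:

```python
from typing import Optional

# Fields the case needs, in priority order (hypothetical names).
REQUIRED_FIELDS = ["customer_name", "product", "issue_description"]

PROMPTS = {
    "customer_name": "Could I have your name, please?",
    "product": "Which product is this about?",
    "issue_description": "Can you describe the issue?",
}

def next_prompt(collected: dict) -> Optional[str]:
    """Return the next question to render, or None when the form is complete.

    Instead of one static screen with every input, the UI asks only for
    the fields the natural conversation has not already supplied.
    """
    for field in REQUIRED_FIELDS:
        if field not in collected:
            return PROMPTS[field]
    return None  # all fields captured; the case can be submitted
```

In a real implementation an NLU or GenAI model would populate `collected` from the dialogue as it flows, so fields the customer volunteers unprompted are never asked for again.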

But like I said on, I think, the third slide about the balance between value and experimentation, the things that have to do with agent experience are high on our list. How can we make sure we make the life of our agents better? Like I said multiple times today: how can we make sure they can focus on the conversation instead of the system? So if you club them together, summarization and agent training are the really big ones for us, where we really want to investigate how to bring them to life on our platforms. I don't know if anyone else in the audience has a viewpoint; we would like to hear it as well. Yeah, go ahead.

Sorry, go ahead. Generally, how do you balance it? If you are saying we are not going to build these screens anymore, how do you balance that with the fact that every once in a while I might want to audit what the AI is doing and how it is going? I think it is great that you don't have those screens anymore, and taken that way, there are two parts to it. One is the auditing of the process in terms of how it has progressed.

That is part of the Pega foundation anyway, so you can always audit the execution. Then we are also bringing in, and we will cover some part of it, explainability of the AI decisions. When the AI takes a decision, you also want to know why and how it decided, through what approach. That is the AI explainability that you will have to build in as part of the design. I will explain that on the next slide as well, but it is a very key point. When you remove the screen, you previously knew what the agents were doing; now you need to know what has happened after a certain set of results, and why a certain set of actions has been taken. So that comes down to two things.

One is the auditing part, which Pega already has as an out-of-the-box capability; then you also have to bring in the explainability part as part of the AI. As for the effort, I cannot comment on a number or a quantification, but the complexity will depend on how much enterprise context you are bringing in. As somebody just mentioned, if there is a lot of enterprise context, your models have to be tuned that much more. But if you are using a lot of Pega's out-of-the-box capability, then it is much easier to get going. So it all depends on how complex you want to make your journey and the decisions you want to take; that determines how much tuning is required, and more will be required if you are also interacting with other applications that participate in the overall GenAI flow. Okay.
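One way to design in the explainability just discussed is to persist a structured record alongside every automated decision, so auditors can later see what the AI saw and why it acted. A minimal sketch with invented field names; in a Pega deployment the platform's own audit trail would be the real system of record:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Audit entry written whenever the AI acts instead of an agent."""
    case_id: str
    decision: str
    rationale: str   # model-supplied explanation of why it decided this
    inputs: dict     # the context the model saw when deciding
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_decision(log: list, case_id: str, decision: str,
                    rationale: str, inputs: dict) -> DecisionRecord:
    """Append an explainable audit record; `log` stands in for real storage."""
    rec = DecisionRecord(case_id, decision, rationale, inputs)
    log.append(asdict(rec))
    return rec
```

Capturing the rationale and inputs at decision time, rather than reconstructing them later, is what makes it possible to answer the "why did it do that?" question raised above.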

So, last but not least: being responsible. We deliberately don't put it all on the website, so that you come to the booth and discuss it with us; but jokes apart, please do come to the Infosys booth and we can help you with that. Everything that we do today with an AI implementation has to be responsible, and we got a very good question earlier on the explainability part.

At Infosys, within the Topaz brand, we have the complete framework and guardrails to create that explainability, to ensure that hallucinations are kept to a minimum, and to ensure that we are not taking any drastic anti-human decisions. That is very core to our design and to our ability to work with organizations and bring AI in responsibly. Also, in case anybody does not know, there is an EU act coming on the AI side of the world that is going to bring a lot of regulatory monitoring. That means we need something like this in our applications, so that we can respond to the regulations and to your own auditing ecosystem as well. That was the last sales pitch from my side, but it was something I still had to cover. I think the European legislation launched recently, and it takes a risk-based approach.

Looking at this slide, you have captured it very well. I think it is important that we use this responsibly, because you are talking about data, and about the capabilities that GenAI, and AI in general, has, not just in this space but also in politics and other areas. So I think it is important that there is legislation that at least addresses those things, for example by requiring disclosure that something was built by AI, which will already help a lot, I think. Absolutely. All right, we are a bit over time.

That's fine. I think we are done with the session, but we can take any questions right now, so please feel free. Yeah, please go ahead.

(An audience member asks, partly off-mic, about analyzing customer feedback and sarcastic comments.) Okay. I think there are two parts to it. One is that the feedback analysis can also be AI-driven: the call recordings that hold the feedback can be analyzed through AI. We did not touch on that topic, but that is one kind of feedback analysis we can do. And sarcasm can also be detected with natural language understanding; that is where the sentiment analysis kicks in.

And then you can take the next course of action. So thank you, everyone. Thank you for your time; we really appreciate it.

Related resources

Product

A revolution in app design

Rapidly optimize workflow design with the power of Pega GenAI Blueprint™. Enter your vision, and a workflow is generated on the spot.
