PegaWorld iNspire 2024: How Can Generative AI Improve Learning at Pega?
How do your developers get the most out of Pega? Join this session to see how generative AI is being used in Pega Academy to create personalized learning – by tailoring the content, pace, and application of concepts to your reality, with feedback-driven learning experiences.
Our goal is to provide learners with relevant and challenging material, realistic scenarios to apply their knowledge and skills, and constructive feedback on their performance. Hear how we are innovating in this breakout session.
You've heard a little bit about Socrates on the main stage, and that's really what we're going to be talking about today. In fact, a year ago, when we launched the big initiative to double developer productivity, we had a lot of capabilities in the product that would help. But the next piece was: what can we also do from a learning standpoint to help drive developer productivity, so that you all, as our audience, can have faster and more effective implementations with the teams you've got working on your delivery? So, a quick introduction: I'm Kate Lepore. I'm in charge of learning strategy and solutions as part of engineering, and that includes working on our content, our certifications, and documentation for the Platform and robotics teams. And I'm just thrilled to be able to show you how we think we're profoundly changing how you're going to learn Pega. You can hit the next slide if you want; you can see our big picture there.
Hi. Is this good? No, the other way around, maybe. Okay, there we go. So, yeah, I'm Marco, I'm the senior director for AI innovation and enablement, based in Amsterdam, hence my accent, or little accent.
I've been with Pega for a long time, since 2000 basically, and I really love the enablement space and also all the AI stuff that we're doing. My first encounter with AI was back before 2000, when I studied computer science. And what we started to do at Pega is develop the Next Best Action. If you're familiar with our other product, Customer Decision Hub, that's basically the decision engine that we develop in Amsterdam. Actually, we're still developing it, but I'm not personally doing that anymore. So, as everybody knows, ChatGPT took off basically a year ago, a bit longer than that. And that really was a game changer for us in enablement as well, because AI and enablement are a perfect match, if you can imagine: AI generates text, but also images and videos and voices, and the other way around. And that's even cooler nowadays. So we started to experiment.
And the first thing we created was Self-Study Buddy. I'm not sure if you're aware of it; if not, check out the Knowledge Buddy, because that's now the product we have. Last year we did the demo, and now we've productized it. And we're very excited to share the next innovation we built using gen AI. So what we have for you today is a little introduction. Kate will explain the problems we're trying to address using Socratic learning and the new methodology. Of course we'll give you a full demo, and after that I will drill down into the technology to explain a little bit of the architecture we built to make this happen. Excellent.
So, as we've talked about, what we think we're doing is really revolutionizing learning. And we will relate this a little bit to the research we did, which looked across the spectrum of learning: education, meaning primary, secondary, higher ed; corporate learning; and what's going on in customer education, as it were. What were some of the major problems where we thought we could apply GenAI? The first thing we looked at, to be honest, is that there's a skills gap. We are trying to drive the best knowledge in the field on how to use Pega, and this is actually across the board. Most learning today is delivered in time-based formats: you have a one-day workshop, you have a five-day class, you take a one-hour certification. All of these things are time-based.
But the reality is, some people don't perform so well in that same time period. Some people like to go faster, and they're able to; they know a lot. Other people need more time. So it's not so much that people are better or worse at learning; it's really a matter of time. But the structure of our learning and our certifications doesn't allow for that. So one of the things we're trying to address is the time constraints and the pace at which students prefer to take their training. The other piece we feel we can address is the fact that, because of the time, people end up on this bell curve: you have people at the low end of the curve because they didn't have enough time to really master it, and then other people who were probably done way ahead of everybody else and ended up at the high end.
What we'd like to do is get everybody up to the mastery level, so that they are all capable of being expert on Pega. As we look at that, we also recognize that in our current format, we tend to forget a lot. You go through, particularly eLearning, and it's sort of a binge education: you've gone through it, you get out, a week goes by, and you're trying to remember exactly what you did and how you did it. So how can we improve retention for our learners? One of the challenges with eLearning, if you've taken either pega.com or pretty much any eLearning, is that it's kind of passive. You read text, you watch a video, you listen to a lecture; in all these formats, you're not engaging as heavily. And, you know, we're adults. We have partial attention.
So at a certain point, you're only as engaged as you can be. And even some of the ways that traditional eLearning has tried to add interactions: you click on the little plus sign and you get a little bubble; think about compliance training if you haven't taken anything recently. So how can we get past this forgetting curve, which, if you're familiar with it, basically says that within a day you've forgotten the majority of what you just learned? We're hoping that, today being the last session, you're going to remember ours. At any rate, one of the things we're doing is building good learning science into how we're teaching and prompting the LLM to create a more engaged experience. So you've heard us talk about Socrates; you've heard a little bit about it on the main stage. Effectively, the Socratic conversation is a dialog with the student: it asks questions such that the student has to think, apply the information, and answer the question.
If you think about any form of conversation you've had, whether you're debating or explaining something to somebody, you're having to organize your thoughts, think about it, and make it relevant to the conversation. And that's much more memorable to you, so that at a later point you'll be able to recall it and apply it in a scenario. So we're using some of that technique in what we're doing. The third issue we're trying to address is that a lot of times we teach almost like we would teach children. When you're teaching children, the pedagogy basically says they don't know anything, so we're going to teach them everything, even if they don't have an immediate application for it. Think about square roots: I don't remember how to do a square root by hand, but we learned it way back when. Adults are different. We are more motivated to learn something when we have a problem, or when it addresses a curiosity, but we're not terribly patient about learning information just in case.
So we're thinking about the motivation: how do we make it important and relevant to you? One of the techniques is to actually make it relevant by tailoring the scenarios, the examples, and the discussion to what you already know. Sometimes I hear people who go through our training classes say, well, I kind of get it, but it would have made more sense if it could have been applied to healthcare, or whatever industry they came from. When we do eLearning, we're often creating these big missions that try to address the widest audience possible, so you get a generic scenario that may or may not resonate with you. We're making this much more personalized, so that when you go to remember it, it's like, oh yeah, that's right, when I thought about a claims case, now I understand exactly what I was talking about. So we're using these three areas to try to drive better retention and capability.
So, what is Socrates? If you haven't come by the booth, there's still a little bit of time after this session; we'll be there. But we're basically having a conversation with the students, and this will effectively work on any of the Pega Academy missions. We've started with the System Architect mission, where we are driving a conversation rather than that passive engagement, guiding the students and giving them nudges when they need help. As you're going through, it's going to give you a tailored scenario and help you discover the answers through various learning techniques that help you through the process. We will give some guidance to students who get stuck, because basically we recognize that everybody has things they're really strong on and things they're weak on. We want you to get through the things you already know rapidly and just focus on the areas where you're weak. So when you've got that weakness, fine.
We'll give you some assistance: a little video, some other multimedia or whatnot, to get you through that, and then get you right back into the conversation. I think the other big thing we're doing here is moving away from multiple-choice questions. You may have heard that this morning. A multiple-choice question is a form of assessment, but I think most of us would agree it's not our preference and it's not the best way to really assess our knowledge. This is much more realistic in terms of how you would normally assess something: you're listening to the conversation and seeing how thorough a comment was, how accurate it was, and so forth. That's what the LLM is allowing us to do: really evaluate those comments and measure them against proof points and learning objectives that we've created in the missions. So now it's a much more natural state. You're not feeling the time pressure.
You're not feeling this "well, I know it's one of four answers, I'm just going to guess." And we're also driving you to make sure that you understand it, not just letting you pass through and get a passing grade, or what we would set as the cut score on a certification exam. So with that, let me switch into our demo. This is, just so you know, a little bit of a recorded session. I'm going to ask Marco if he wouldn't mind helping drive, because it sort of jumps ahead on me. So this is real; we can show you the real live one. But just for the sake of this session, we've done a little recording to make it go as smoothly as possible.
When you get into the learning on Pega Academy, you will access it the same way you would normally access any of our other missions. But you're going to get this nice big, wide space to have a conversation, just like you would in other chat-type scenarios. When it starts, it's going to go through the preferences. So you're going to set your language; we have about ten languages right now. We're going to leave it in English, just for the broad audience. You can set the conversation tone: casual is for when you're brand new and don't want to hear terms you don't yet understand, and expert will be a little bit more technical.
And then you can pick your industry and toggle on audio. If you want a fully voice-driven conversation, it will do text-to-speech back to you in your language, and you can use the microphone on the bottom to do speech-to-text. So you could have a fully voice interaction, or you can go in the text format. As you can see, we had a little challenge with our slides. The learning objectives are really how we're evaluating the student, and they follow pretty prescriptive learning principles, things like Bloom's Taxonomy. So we're thinking about lower-level and higher-level thinking, and how we're evaluating against that.
In this example, we started with a person who already knows something, so we're jumping right into a scenario that's relevant to the industry they picked and giving them a series of questions. In this case, it's the data model. We've asked them some things, they give an initial answer about what we think it's asking for, and it asks a few follow-up questions just to drill into it. This is probably the longest part. As you can see on the right, the learning objectives panel has now checked off the ones that you've adequately answered, and it's just going to focus on the two that are left. So you can see that this is going to rapidly push you through the program, not dwelling on the content you already know, but really giving you that opportunity to focus on what you don't know.
And when you complete everything, you can continue the mission. Let me just show you the other demo; that's an example of when you don't know something. In this case, you have the same preferences and so forth to choose from. Here we used a simpler example, Platform Fundamentals, which is part of the System Architect mission, and we're going to say: I don't know anything about this. So what it's going to do in that case is give you some more information to get you started. It'll give you some videos, and if we have longer videos, it'll jump you right to the point in the video that's relevant to this part of the conversation. It might give you some images or some other material to ground you, and it's going to keep going through this a little bit just to get you set on the concept.
So it leans a little more toward our traditional format, but then it's going to quickly get you to the questions, where it prompts you for further information. I think we've gotten to our first question here. Oh no, it's asking: do you need any clarification on what we just covered? It'll go through a few bits of this to get you through, but then you're going to start answering those questions. And again, similarly, you can ask it questions. You can say, you know, I'm not too clear on this; could you give me an image, or can you give me some other additional help?
Or you can actually use analogies of your own, saying, well, is that something like this in such and such a situation? You don't have to be prescriptive and definitive, like giving it an absolute definition; you can treat it like a conversation, the way you might say, is that what you mean? And it understands all that. So when you launch it, it's just coming off of the main screen. And that's just, I think we looped, so I'm going to keep us going here. So, to wrap this piece up, what we've really done is looked at the flow of this: letting people set their preferences so it's much more relevant and targeted to you, and providing an industry scenario. And these are things that we can continue to enrich.
But to start, this is where we're at. Then it starts to assess you against these learning objectives. So no more quizzes within the modules; it's all purely based on your conversation. When you need enrichment to help you through places where you're weak, it will give it to you. It'll give you constructive feedback: if you've got areas where you maybe got part of the answer, it will help you and say, well, what about this? You want to think about this as well. And then at the end it gives you the full assessment and allows you to keep proceeding. So right now this is available on our System Architect mission.
It is equivalent to the traditional format, and it will also prepare you for the certification. So we feel that we have a kind of groundbreaking opportunity to really change how you are going to experience learning with Pega. With that, I'd like to hand it over to Marco and have him go through a little bit of what's behind this. Sure. Thank you. Loved the demo. Okay, so that was the front end, and what I would like to share with you is basically the back end, the technology that's driving it.
Since we're a technology company, after all. GenAI technology is used very heavily within Pega, as you know, both in the product and in enablement. We also do additional experiments, especially in our group, because we have the freedom to experiment a lot. So let me first go to the prompting side of the house. I'm not sure how many of you are familiar with it: GenAI is basically large language models, and the way you engage with them is to create a prompt. I assume most of you know ChatGPT, but ChatGPT is an end-user application: you type something in and it generates an answer. What it actually does in the back end, like everybody else, is build a full prompt.
So you give the prompt, and then the large language model will generate the answer for you in whatever format you want; in this case, it's text, right? The other thing that's pretty cool is that they're very good at role-playing, as you can see here. This is kind of a generic way of doing it: you give the model a role, a goal, and a backstory. So basically you create a persona, and you can create multiple personas, and then, and this is really fun, you can actually have multiple personas talk to each other and play with each other. But that's not what we're doing here. I'm just showing you the basics, so you understand what needs to happen: we need to prepare the prompt, and then we call the large language model to come up with an answer. Okay, hopefully that's clear, because those are the basics.
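To make that concrete, here is a minimal sketch of what persona-style prompting can look like. The template, the field values, and the function are illustrative assumptions, not Pega's actual prompts:

```python
# A minimal sketch of persona-style prompting; not Pega's actual prompts.
# The role/goal/backstory values below are illustrative only.

PERSONA_TEMPLATE = """\
Role: {role}
Goal: {goal}
Backstory: {backstory}

Stay in character and respond to the student below.
Student: {student_message}
"""

def build_persona_prompt(student_message: str) -> str:
    """Assemble the full prompt that would be sent to the LLM on each call."""
    return PERSONA_TEMPLATE.format(
        role="Socratic tutor for the Pega platform",
        goal="Help the student discover answers by asking guiding questions",
        backstory="You are patient, encouraging, and never give the answer outright",
        student_message=student_message,
    )

print(build_persona_prompt("What is a case type in Pega?"))
```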
Then basically what we built is like a layer cake. We built different layers, right? It's not just a single prompt that you call and everything magically happens; there is a lot more to it. First of all, you need language processors. It's not just one LLM; we actually use multiple LLMs. There are LLMs from different vendors, as you might know, and also in different sizes. The trick is to use the right LLM for the right task. I will go deeper into each layer later on.
So let me just go to the next layer. The language models themselves have no knowledge. That might sound a bit weird, because you can go to ChatGPT, set the temperature very high, which means it starts making things up, also known as hallucination, and people actually love playing around with that. I think that's where most of the popularity came from: you ask it something and it just starts generating stuff that doesn't exist. For poems, that's fine; for real business use cases, that's not really what you want. So the trick is to feed it the right knowledge, and also to make sure it remembers previous conversations.
So that's our second layer; that's what we built in. The third layer is the reasoning and action, because it's not just one engagement: the bigger models are able to reason over what's really happening. When you start a conversation with Socrates, it actually starts building up a learning plan in the back end, using its reasoning capabilities to decide what the next thing to do in your engagement is. And as you've seen, there are certain parameters influencing that, and that's actually a whole additional layer: the personalization layer. In the personalization layer, we have the option to personalize the engagement even further.
And I will explain more about that later. So this is the main thing you need to remember: layer one, layer two, layer three, layer four. When you understand those layers better, depending of course on what your background is, you will also understand the capability and what else we can do in the future with Socratic learning. I think that's why it's important to understand this technology better. So, layer one, I call it language processors. I guess we're all familiar with CPUs and GPUs, and I think the future is that we'll have LPUs. You already see it: Microsoft is also putting dedicated chips in the hardware.
And basically they're running these large language models. It's also coming to your mobile phone, by the way, very soon. So there are different vendors, and as was already announced, we will also be supporting all the other vendors in our official product. We've been playing around with these models for a long time, and my favorite one is Claude 3; it's one of the best models out there. One of the fastest ones is Gemini 1.5 Flash. Basically, every model is trained on different data and has different capabilities. For us, the trick is to find the right model for the right task. So you can see there's a lot of variation.
As you might know, if you follow this market, there are new models coming out almost every week, and these are just the main vendors. There is a website called Hugging Face where there are hundreds of models; you can build your own model, you can do your own training, and so on. So the possibilities are endless, and this is just the beginning. We set up a framework so we can keep extending it further. I have two drill-down slides per layer. The first thing I want to share is that everybody still believes that OpenAI is the best out there.
They're not. They're expensive, they're slow, and they don't even have the best model. I'm not saying I don't like them; I just want to be honest. There are different vendors out there; I put them all there. The top graph basically shows the reasoning capabilities, and basically they're all on par.
It doesn't matter if you go with Amazon, Google, or Microsoft; the models are equally performant. The one I like most is actually the table at the bottom. That's from the Chatbot Arena; I'm not sure if you're familiar with it, it's from LMSYS. What's pretty cool is you can go there and ask a question, and automatically two models will give you an answer, but you don't know which models are being used, and you're asked to select which answer you like best. Based on that, they compute the Elo score, which is the third column. It's like the models playing chess against each other.
The winner gets the points and the loser loses points; if you're a highly rated player and you lose, you lose more points, and if you're a low-rated player and you win, you gain more points. And as you can see, all three of them are completely on par at the moment. The other thing I want to share is that there are different sizes of models. The Claude 3 family has different sizes; there's the Haiku model, which is the smallest and very fast. I deliberately put the table on the left because the free version of ChatGPT is GPT-3.5, and like I said, it's slower,
while Claude 3 Haiku is the cheapest one, it's super fast, and it's actually one of the best models currently out there. We use that for simple tasks like classification and scoring, all those kinds of things. The Opus model is bigger in size, which means it has more connections, basically, so it takes longer to process because it does more calculations, but it's also better at reasoning-type tasks. So this is a little bit on the modeling; maybe it goes too deep, hopefully not, but I just want to share this information with you.
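To illustrate the "right model for the right task" idea, here is a hypothetical routing table. The model names are real vendor models, but this particular mapping is an assumption for the sketch, not Pega's actual configuration:

```python
# Hypothetical task-to-model routing: the "right LLM for the right task".
# The mapping below is illustrative, not Pega's actual configuration.

TASK_MODEL_MAP = {
    "classify": "claude-3-haiku",    # small, fast, cheap: classification
    "score":    "claude-3-haiku",    # scoring a student's answer
    "chat":     "gemini-1.5-flash",  # fast conversational turns
    "plan":     "claude-3-opus",     # large, slower: reasoning and planning
}

def pick_model(task: str) -> str:
    """Route each task to the smallest model that can handle it."""
    return TASK_MODEL_MAP.get(task, "claude-3-opus")  # default to the big model

assert pick_model("classify") == "claude-3-haiku"
assert pick_model("unknown-task") == "claude-3-opus"
```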
So, the second layer is knowledge and memory. These models are trained, and you might have heard things like "I have a knowledge cutoff of August 2020," or whatever the cutoff is. In the end, it doesn't matter what the cutoff is, because in an enterprise it doesn't work like that: every day, new content is created. We create, develop, and publish content every day; multiple documents are being published. Those documents are now also fed into the system. So if something is broken or something is new, we ingest that knowledge.
And from that point on, the model uses that knowledge. So the models we use don't use their own knowledge, because actually they don't have any knowledge. The only thing they have is the capability to do a certain task: generating a summary, generating a title, those kinds of tasks. The second part is the memory. Models are stateless; they don't have any memory. Some people believe they do, but they don't. Every time you ask the model a question, it has no memory of whatever happened before. Lots of people are afraid of that.
Like, when I send my data to Microsoft, will it remember it? No, because the models can't, right? Models have no memory, so there's no reason to be afraid of that. The memory is in the system that you build around it, so we do the memory part on our side. Going next. So how does the knowledge part work? It's a design pattern called RAG, retrieval-augmented generation.
I don't want to go too deep, but basically it means you take your whole library, like all of Pega Academy or Pega Docs, and chop it into smaller chunks, and those chunks are then stored in a vector database. The reason we do that is that we store the meaning of the chunks. So when you ask a question about, I don't know, Voice AI, just to call out something, we go and search the database for all the chunks that are about Voice AI, and then we build up a context in the prompt, right, what I was showing earlier, and that's what we give to the model. So, just in time, we give the model all the information it needs about the topic, and then it will do its magic for us.
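To make the pattern concrete, here is a toy sketch of that retrieval flow. The embed() function is a stand-in for a real embedding model, a plain list stands in for the vector database, and the chunk texts are made up:

```python
# A toy sketch of the RAG pattern: chunk, index, retrieve, build the prompt.
# embed() is a placeholder for a real embedding model; the "database" is a list.
import math

def embed(text: str) -> list[float]:
    # Placeholder: a real system would call an embedding model here.
    vocab = ["voice", "ai", "case", "data", "model"]
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# 1. Chop the library into chunks and store each chunk with its vector.
chunks = ["Voice AI transcribes calls as they happen.",
          "The data model defines the fields a case captures."]
index = [(c, embed(c)) for c in chunks]

# 2. At question time, retrieve the closest chunk and build the context.
question = "Tell me about Voice AI"
best_chunk, _ = max(index, key=lambda pair: cosine(pair[1], embed(question)))

prompt = f"Answer using only this context:\n{best_chunk}\n\nQuestion: {question}"
print(prompt)  # this is what gets sent to the LLM, just in time
```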
Up to a year ago, the context size was pretty small, like 8K or 32K tokens. With the new models, it's at least 128K, and the Gemini model is even at 1 million tokens. So basically, if we wanted to (we're not doing it), we could take the whole Pega Docs website and load it into the model, and the model would do its thing. The main reason Google has that is that it allows you to take a one-hour video, load it into the model, and do all kinds of fancy tricks with video. We're not doing that right now, but it's maybe something we can do in the future. So that's the knowledge ingestion and retrieval side. Then the memory side of the house: when you engage, you know, Socrates asks a question and you give an answer, what it then does is combine the information, so it looks at the chat history, or even goes further back in memory.
There is a smaller model that combines the information, and then we have a couple of steps where we enrich the prompt further with additional knowledge. There's a reasoning step, and then it ends up with a certain output. In this case, you just see two LLMs, two models; in reality, we have more than 25 different prompts. So you can see it's already more sophisticated than what I'm showing here, but it's just to give you an example. Okay, then we have the third layer.
So we have the models, we have the knowledge, and now the reasoning and action. This is basically where the agentic part comes in. What we have discovered, we as in the whole world, in the last two years or so, is that the prompting is key; it's called prompt engineering. And there are certain techniques that were discovered, at universities for example, and they publish that information, so I can read it and we can all apply it. One of them is chain of thought, and the other one is ReAct.
I will say a little bit more on that on the next slide. In addition, we can give the models tools, as in, we can make the model aware that certain tools exist. When it's aware that the tools exist, it can say, okay, my next step would be: please use this tool, which can be a calculation, or looking up information on the internet, or anything else. And then it will use that information in the next step of the reasoning process. So let's drill down into those two. Chain of thought is what you see on the top left. Basically, you have the language model, and you ask it a question.
But before it answers, you tell it to think about what it's going to say; basically, the prompt is "think step by step." Instead of doing what they call system-one thinking (when I say two plus two, everybody immediately says four), when it's more complicated, you need to do more thinking, which is system-two thinking. And this did miracles, basically. So instead of asking a question like "what's the capital of France?", well, actually, that question is way too simple. But with a more complex question, it will break the question down into smaller steps, try to figure out the answer to each step, and then think, okay, now I have enough information, and formulate the answer. And that magically happens. That's what we all discovered, and that's pretty cool.
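As a minimal sketch, the technique is just an added instruction in the prompt. The exact wording below is an assumption; any phrasing that elicits step-by-step reasoning plays the same role:

```python
# A minimal illustration of chain-of-thought prompting: the only difference
# from a direct prompt is the added "think step by step" instruction.

def direct_prompt(question: str) -> str:
    return f"Question: {question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    # Nudges the model into system-two thinking: decompose the question,
    # work out each step, and only then give the final answer.
    return (
        f"Question: {question}\n"
        "Think step by step. Break the question into smaller steps, "
        "work out each step, and only then give the final answer.\n"
        "Answer:"
    )

print(chain_of_thought_prompt("How should a claims case route its approvals?"))
```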
The next one is the observation. Basically, the language model can look at what's going on in its surroundings: a student is responding, we need to make an API call to another system, we need to look up information. The model gets feedback from the environment, and based on that, it decides what to do next. That's the ReAct model; it's kind of the cycle you see on the right-hand side. It can actually go through that cycle multiple times, like four, five, six, seven times, until it decides what it wants to give back to the student. And it goes super fast, so you don't even notice how often it's actually going through the cycle.
The tooling, to put it immediately in context: the student comes in, Socrates assesses what it needs to do, it grabs the necessary knowledge, it checks the plan that it came up with, and then there's its capability to use certain tools. Language models have no capability of doing any calculations at all. So the only way to do it is to tell the model: if you require a calculation, like the square root of whatever number, don't try to guess it, because that's what it will do, it will guess it. Of course, that calculation is not in our documentation. So what it will do is say, okay, now I need a calculator to help me out to give the right answer. And that's a very cool way to do that.
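Here is a toy sketch of a ReAct-style loop with a single calculator tool. The llm() function is scripted so the example runs standalone; in a real system, that function would call an actual model, and the Action/Observation line format is our own assumption:

```python
# A toy ReAct-style loop with one tool. llm() stands in for a real model call;
# here it is scripted so the example runs on its own.
import math

TOOLS = {"calculator": lambda expr: str(eval(expr, {"math": math}, {}))}

def llm(history: str) -> str:
    # Stand-in for the model: a real LLM would emit these lines itself.
    if "Observation:" not in history:
        return "Action: calculator: math.sqrt(1764)"
    return "Final Answer: the square root of 1764 is 42."

def react(question: str, max_steps: int = 5) -> str:
    history = f"Question: {question}\n"
    for _ in range(max_steps):  # the cycle can repeat several times
        step = llm(history)
        if step.startswith("Final Answer:"):
            return step
        _, tool, arg = [s.strip() for s in step.split(":", 2)]
        history += f"{step}\nObservation: {TOOLS[tool](arg)}\n"  # env feedback
    return "Gave up."

print(react("What is the square root of 1764?"))
```

The real loop has more step types (planning, retrieval, scoring), but the observe-then-act cycle is the same shape.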
It's the same with translation. You can have a language model do the translation, but you can also use any of the machine translation services that are already out there, like Google Translate, Amazon Translate, or any other translation service. And then the last layer is where we apply the personalization, and this maps more and more to what you've seen in the user interface. So of course, the first personalization is one Pega already has: we have content specific to roles, right? There's the System Architect track, the BA track, the data scientist track, all these things.
That means that in different roles you use different terminology, different lingo. So basically we tell Socrates: when it's a business person, use this lingo, and when it's a technical person, use different words, in order to make it more personal. And then we have the additional guidelines: you select the language, as you've seen in the example, the tone of voice you want to use, those kinds of things. So here are some of the examples. For personalized learning in terms of the student's level of experience, right now we basically ask the student: what's your level? Honestly, we actually already know it, because we build profiles. So in the next step we will make it more sophisticated, and we can have more granular levels than just beginner or expert.
We can really monitor where you are, and you can basically continue your learning journey where you left off. I think that's where we should be going. So that's the different roles, and then there are the learning objectives that we give the module. And then this is the last one: you select a language, and, linking it back to the prompting, if you select a language like Dutch, we add an instruction to the prompt that says: communicate with the student in this particular language. From that point on, everything just happens in that language. And this is just an example.
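A small sketch of how those preferences might become prompt instructions; the function and its parameters are illustrative, not the actual implementation:

```python
# A sketch of the personalization layer: student preferences become extra
# instructions appended to the prompt. Parameter names are illustrative.

def personalization_instructions(language: str, tone: str,
                                 role: str, industry: str) -> str:
    """Turn the student's preferences into plain-text prompt instructions."""
    instructions = [
        f"Communicate with the student in {language}.",
        f"Use a {tone} tone of voice.",
        f"Use terminology appropriate for a {role}.",
        f"Draw your scenarios and examples from the {industry} industry.",
    ]
    return "\n".join(instructions)

print(personalization_instructions("Dutch", "casual",
                                   "System Architect", "healthcare"))
```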
And you can, of course, give multiple instructions to these models. Okay, so to wrap up: if there's one thing you need to take away, it's this. We have built a layer cake, with different layers with different capabilities, and what we have just released is basically just the beginning. We have eight minutes left for questions. If you have questions, please feel free to go to the mic, because then it's properly recorded as well. And if you would like, the QR code will take you right to the Socratic mission so you can go ahead and give it a try.
Hello. Thank you for the excellent presentation. One question: you mentioned that none of the language models actually save any information, yet right now you can have a conversation with multiple messages. How do they do that? Yeah, so the way you do that is you use a framework, and one of the popular open-source frameworks is LangChain, for example. That's not the one we're using, but I'm just calling it out because everybody knows it, like OpenAI.
That basically means that every time you make a call, you build up a memory. You record: the human said this, the system said that, the human said this, the system said that. That's the chat history, and you give the whole chat to the language model every time in the conversation. So the model is stateless, and you need to take care of the state: every time you do an API call, you bring the whole state in and out. So that's where having that large amount of input context really helps have that conversation, because... Absolutely. Okay.
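To illustrate that pattern, here is a minimal sketch of application-side chat memory; call_llm() is a placeholder rather than a real vendor API:

```python
# Keeping state around a stateless model: the application stores the chat
# history and replays all of it on every call. call_llm() is a placeholder.

history: list[dict] = []

def call_llm(messages: list[dict]) -> str:
    # Placeholder: a real implementation would call the vendor API here.
    return f"(model reply, given {len(messages)} prior messages)"

def chat(user_message: str) -> str:
    history.append({"role": "human", "content": user_message})
    reply = call_llm(history)  # the whole state goes in every time
    history.append({"role": "system", "content": reply})
    return reply

chat("What is a case life cycle?")
chat("And how do stages fit in?")  # the model "remembers" only via replay
```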
Thank you. Yeah. I echo that, great presentation about GenAI. With this new learning model, how quickly can I build a team of developers, versus the previous generation? Because we are investing a lot in Pega, and we want people to have the capability to get the most out of our systems. Just generally: if it took two weeks before, is it now one week, or what's the comparison? Yeah, that's a great question. I think it will be a little bit dependent on the individual and where their existing starting point of knowledge is, but we think if you've got a level of experience, you could probably cut it down to about two days from eight days.
So the goal is to do it rapidly, but granted, it's going to be very different for each individual. Okay, thank you. Thanks. Hello, thank you so much for the presentation. I have two questions. One is: is this going to impact the certification program? So, to answer the first question:
It will prepare you adequately for the certification, but certification will stay as it's currently formatted, so you'll go to the third party and take the multiple choice. Okay, great. Then the second question is about the chat preferences: there's a range from professional to casual, right? What is the difference between those? Yeah.
So, we recognize that there's been a sort of expression, like "talk to me like I'm a five-year-old," with the sense of: can you put it in simpler terms? Casual gives you "I'm talking to you at a lower grade level, so that anybody could understand it; I'm not necessarily going to use all the terms that you might start to be familiarized with once you get into Pega." Then professional and expert start to ratchet you up to that next higher level and get more technical. An expert setting also sort of rounds out your profile: okay, I think I'm an expert, so fine, it's going to expect that level of conversation back from you. Okay, thank you so much.
Are we good? No questions? Now is the moment. Actually, we will still be around. Yeah. Doesn't matter. Yeah. Hi, thanks for the demo.
So I see that it is scanning only the Pega docs, and that is controlled by Pega. But out there on the internet, there are so many people writing articles about Pega, and they have their own blogs. Is there any plan to extend outside, to scan that content and also provide suggestions from there? So, at present, you know, this is day two of it being live, so we went with what we know and what we can control. From our standpoint, we know how accurate it is, and we've got immediate access to the writers, because they report to both of us, to ensure that it's right.
I think over time, we can certainly look at other ways to enrich the conversation. One of the things we're also doing with our docs is working with the field teams to capture best practices, so that we're enriching it beyond just the how-tos to include best practices as well. So I think it's something we will continue to look at: how do we keep rounding it out? So you're not planning to limit it just to Pega Academy or Pega Docs? You're not opening it to the internet? We're not necessarily limiting it, but we're not necessarily looking at that yet; we're seeing how it's performing. And so this is our big opportunity to really get your feedback and experience with it. Okay.
And I did not see any thumbs up or thumbs down to give feedback on the responses. So actually, it is there: on the bottom left, you can give feedback. Yeah, in the lower left there's "contact us," and you can also do the thumbs up, and it'll actually prompt you with a box and ask for any comments that you would like to share.
So if there are things that you would like to see, or that you would expect in it. Not the general feedback, but feedback to the system, so that it can improve the responses the next time. Yeah. So all those answers, all the interactions, are dynamically created through the LLM. What we would have to do is look and say: okay, if there was something that seemed a little unclear, go back into the documentation as the source and ask, where is that gap? Or adjust the prompt, if the prompt didn't clearly drive the conversation the right way. So we'll look at the learning objectives, the documentation, and so forth, to try to address anything that was maybe a little bit off. Yeah, I know you said it's day two, but do you have any plans in the roadmap to extend it to scan application-level documentation at the enterprise level, and come up with similar conversational responses?
Yeah, for the moment, actually, it's been interesting; we've had lots of conversations at the booth with people asking about different ways that would get it even more tailored to their circumstances. I think right now we're going to get it going with our content, but it absolutely came up multiple times, and so we will definitely be thinking about those things, in fact, next week. Thank you. Thank you. Yeah.
Good question here. As you were talking about rounding out the content and getting more content out there: if I took some of the courses and things now, do you envision tracking that, like, hey, you took this before and you got to this level of expertise, but you might want to come back because we've got even more content to make it richer for you? Do you expect to track that sort of thing? Yeah. So it is now tracking your progress against these objectives, in both the traditional format and in this new format. So we definitely see, and hopefully I'm answering your question the right way, that as you continue on with Pega and there's new information and so forth, it will know where you've left off, but it's also going to keep assessing you during the conversation, to say, okay, you're really good on the UI, or you're really good in these areas, and really just drill into the areas where you're showing that you don't understand it. The other thing is that, because it's pointing to our documentation: typically, when we wrote a formal eLearning, it was very static.
We wrote it, we published it, and it sat there for a while, other than bug fixes. Now, with documentation, we can make updates in real time, and that feeds right in and is automatically reflected in the conversation. Okay. One more question. Yes. So, looking at the working example, right now it's very much a chat-response kind of setup. I was wondering if there are any plans on integrating it a little more with visual aids, a more visual way of learning, and whether it's going to pull from documentation to show you diagrams or anything like that.
Yeah, absolutely. Right now there is an initial set of material that it does recognize, so it's pointing to articles or chapters in the documentation, as well as videos and things that we've already curated. I think over time, as the models improve and as we refine the prompts, it will get a little broader and more open-ended, so that it can find more visuals. But you can actually go through and say, hey, could you show me a visual? And if it has a visual, it will surface it to you. Thank you. I think we're at time. Do we need to? One more question.
Let's do one more. Thank you. Last question, sorry. You mentioned the solution architect track; what's the plan for the business architect? So that's been our conversation for the last two days. We're looking to expand across all the foundation roles as our next step, and then go up to the higher-level roles as the second step. Do you have a rough timeline for that, or not yet?
Okay. But stay tuned. Thank you. Okay. Well thank you all. Thank you all.