
PegaWorld 2025: Powering Transformation: How AWS and Pega Enable Businesses to Extract Real Value by Deploying Generative AI Responsibly

AWS and Pega's partnership helps organizations leverage generative AI responsibly, balancing innovation with risk management and compliance. Learn how our combined technologies enhance customer experiences and operations while maintaining ethical standards and transparency. Join us to explore practical use cases demonstrating AI's transformative potential.


Good morning, everybody. Thank you for attending this session; I think you're going to find it unbelievably informative. A real quick show of hands: how many people in here are either an AWS customer or partner?

Just a show of hands. Pretty amazing, right? So think about the transformational work we've done here. My name is John McKinnon, by the way; I run the AWS relationship at Pega, and I was formerly at AWS for eight and a half years before coming to Pega.

And I think the interesting thing you see here is that our most transformational announcement, one that many of you have used yourselves, is Blueprint. Blueprint is 100% built on AWS, as is Pega as a service. Why does this matter? Because the speakers you're going to hear today, Anubhav and Daniel, helped us take advantage of all the services and the agentic AI that put us in this position today. Why does this also matter?

Because as you develop services on Blueprint, we're going to put them together, partnered and powered, on the AWS Marketplace. And when we do, there are advantages for the AWS rep, and advantages for the AWS customer, whose Marketplace purchases burn down their committed spend. So as you look at the transformation of how we did this and how you can leverage it, think of this as more than a partnership of convenience. This is two organizations with adaptive change written all over them getting together and putting this together in a way that customers can leverage.

But more importantly, partners can design solutions end to end that will distinguish them and their customers. So we really do appreciate both of you being here at PegaWorld and being part of the AWS ecosystem. And I'm going to turn it over to Anubhav and Dan. Thank you.

Thanks, John.

Thank you for that warm intro. Awesome. So welcome, everybody. Just a brief overview of the agenda we'll be covering today: we're going to look at some of the innovation opportunities that you should be considering in your industries.

As you look at generative AI, we'll also discuss how data is still foundational, as it always has been. Then we'll dive into, if not all the ways, most of the ways that we're using generative AI together in partnership with Pega. Finally, we'll look at what's next, along with some suggestions to take back, either as leaders, as you organize teams and work around generative AI, or as implementers and builders, on how you should go about building. Really quickly, though: we've had about a three-year time span, and we're in year three now.

Each of these years has been marked by a set of goals. In 2023 it was mostly POCs; you were asking fundamental questions like: What is generative AI? Is it secure? Whereas when we got into 2024, you were starting to organize teams and build out those capabilities.

Whereas you might have started with a chatbot in 2023, which was the primary use case, in 2024 you started to ask: How do I take this to production? How do I prioritize my use cases and identify what the ROI is going to be? And then you started to build. By the end of 2024, a lot of us building on generative AI had fully fledged, robust production solutions. And now, in 2025, executives are saying: okay, you have put this in production; where's the business value, to the company's bottom line, to our stockholders, or to our customers, from the solutions you have built?

And that's what we're seeing in 2025: we're actually extracting real business value through these generative AI use cases. That's where we are today. Later on, I'll talk about some of those ways to track real business value; for now, I'll hand it over to Anubhav. Cool.

Thanks, Daniel. Before I begin, I actually want to thank John and the team for having us on stage. I have been working with multiple customers over my past six years at Amazon, and I can confidently say that Pega is doing some unique and impressive things. So thank you so much for having us. Now, on to the generative AI opportunities.

As Daniel mentioned, when our customers started to implement GenAI, the initial use cases were mostly chatbot-style interfaces, and that's where most of the initial innovation happened: just simple chatbot interfaces. But what we have seen over the last two or three years is truly outstanding.

What we are seeing is a lot of coding assistants, and a lot of models that can generate images, which can really improve the overall productivity of your customers or your organization. These days we are looking at large language models that can take your data and produce insights from it. That's something we're seeing a lot in terms of productivity increases. But more important to me is the creativity aspect of large language models and generative AI.

And this is where I think something like Blueprint really fits into the big picture. The way Blueprint is able to take your information, your documentation, your requirements, and create an end-to-end application is unique, and it fits very well into this whole spectrum of generative AI business value. I can confidently say that any use case you can think of today will benefit in some way from some sort of generative AI service, whether it's in a compliance-heavy industry or in financial services.

I won't go through the use cases one by one, but if you have a use case today where you need some sort of enhancement or productivity increase, you should definitely look at generative AI as something that can help. One example we worked with very closely was Nasdaq. Nasdaq was able to leverage AWS services; again, it's a regulated industry, and security is paramount.

But they were able to reduce investigation time for their customers by 33% just by leveraging generative AI to get insights out of their data. That really helped them enhance the overall productivity of their investigations. Now, the key is that when you build a generative AI application, it's not the foundation model or the large language model that differentiates it; we have who knows how many models these days from different vendors. What really matters is the data.

It's your data that really differentiates your AI application. That's why we feel that when you're building a generative AI application, the model is just the tip of the iceberg. You have to build all those workflows and integrations into your application to build a unified view of your data. If you don't, I can almost guarantee that your application will not be as beneficial for your particular use case.

You might be able to ask some sort of random question, and it will give you some sort of response, but it won't be a predictable response, because you're not really using your data as part of the generative AI application. This goes back to the announcement, which I think Alan made yesterday, around fabric: the fact that Pega is now building mechanisms for you to consolidate your data into one centralized place. Now, let's talk about data and how these foundation models can actually use it.

There are different ways; I'm going to go a little bit under the hood here. Typically, there are three different ways a foundation model can leverage your data to produce insights.

The first and easiest is what we call retrieval-augmented generation. In this case, you take your data, create some sort of vector database, and ingest the data into it. Your model is then able to reference that vector database when it has to answer questions or produce insights.
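The retrieval step just described can be sketched in a few lines. This is a deliberately tiny illustration, assuming hand-made three-dimensional "embeddings" and an in-memory corpus; a real system would use an embedding model and a managed vector database, and would pass the assembled prompt on to a foundation model.

```python
import math

# Toy corpus: (embedding, text). The 3-d vectors are illustrative assumptions;
# a real pipeline would compute embeddings with a model.
DOCS = {
    "refund policy": ([0.9, 0.1, 0.0], "Refunds are issued within 14 days."),
    "shipping times": ([0.1, 0.9, 0.0], "Standard shipping takes 3-5 days."),
    "warranty terms": ([0.0, 0.2, 0.9], "Hardware is covered for 2 years."),
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """Return the k (name, text) pairs closest to the query embedding."""
    ranked = sorted(DOCS.items(), key=lambda kv: cosine(query_vec, kv[1][0]),
                    reverse=True)
    return [(name, text) for name, (_, text) in ranked[:k]]

def build_prompt(question, query_vec):
    """Prepend retrieved context so the model answers from your data."""
    context = "\n".join(text for _, text in retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"

# A question about refunds, with a query vector that sits near the refund doc.
prompt = build_prompt("How long do refunds take?", [0.8, 0.2, 0.1])
print(prompt)
```

The grounding step is the whole point: the same question asked without the retrieved context leaves the model guessing, which is exactly the unpredictability described above.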

This is typically the easiest and fastest way of integrating your data with your large language or foundation model. Then there are techniques like fine-tuning, where you take a data set and a large language model and fine-tune the model to your specific domain. And finally, you can do continued pre-training, which basically means you keep updating the model so it always stays current with your existing data. The funny thing is, when I was watching Alan's keynote yesterday, he was showing how ChatGPT wasn't able to checkmate in two moves.

That's where I was thinking: hey, if we could provide all the data to the model, it probably would have gotten to the output in a more predictable way. And that's exactly what I think Stockfish was doing: it had the data about how chess is supposed to be played. It could tell that white is white and black is black, and it could produce a more predictable output, because it was able to combine the data with the generative AI model.

One more thing to mention here: in the case of Pega, the reason Blueprint is able to create applications that are really specific to your use case is that it also has industry-leading data, which it can combine with your data to produce that kind of output. So my point is that the power of data is really, really important when you're building generative AI applications.

Now, once you have data, the next most important question you will ask is: what about the security of the data? When you're building security considerations for generative AI, there's actually a lot to consider. You need to make sure the generative AI application is built in a way that is compliant, with all the legal and privacy frameworks built into it.

You need all those controls and guardrails, and you need to make sure it's tested the way it's supposed to be tested. You need to make sure the architecture itself is resilient: all those five-nines or four-nines SLAs. All of that still has to be considered while you build your generative AI applications.

Now, the good thing is that the services we provide at AWS, and the services Pega leverages to build its applications, are built on this foundation. At AWS we make sure we give these foundations to our customers and to partners like Pega, so they can build on top of them, and that adds value for their end customers. So what service do we use?

The service we launched a couple of years back to offer all this functionality is Amazon Bedrock, and Pega has been using Amazon Bedrock to build multiple generative AI features. This service sits in the middle of our overall generative AI stack, which has other layers that Daniel will cover later in the presentation. The idea is that with Amazon Bedrock, you have access to multiple foundation models and all the features needed to build applications on top of it. So what are those features?

The first is the choice of foundation models. There are various options and models available: you can choose from Anthropic models, Amazon models, Meta models.

There are multiple models available on the Bedrock marketplace which you can use to build your generative AI story. There are features like model customization, which basically means you can take your data, take a model, and customize it for your own domain. As I mentioned, Bedrock also has the ability to use retrieval-augmented generation, or RAG, which again allows you to use your data as part of your generative AI application.

We have security, privacy, and safety features built into the service itself. And there are multi-agent capabilities, which a lot of customers are using these days to handle multiple workflows. All of these features are part of the service, so when partners like Pega leverage Bedrock, they have access to all of them and can build and reimagine workflows on top. In terms of the security and privacy features, I won't go into detail on each one.

But the idea is that if you're using Bedrock, your data is unique to you. We do not use any customer data to train any of our AWS services. It's your data; it's unique to you.

All of the data is encrypted at rest and in transit; we use leading encryption mechanisms to make sure your data is encrypted as you use this stack. If you fine-tune models, those models live inside your VPC, and the data you use to fine-tune them also remains in your VPC. That's an important thing to consider. And finally, Amazon Bedrock is integrated with AWS IAM.

What that means is you get all the identity and access management features that are fundamental to building any secure application on AWS. And yes, it has compliance standards built into the service itself: GDPR, HIPAA, PCI, and SOC 1/2/3 compliance, already built in.

And finally, these are the choices of models you get as part of Amazon Bedrock. As you can see, you have a choice from AI21 Labs. Amazon has its own model family, called Amazon Nova, which you can leverage; these are among the leading models in terms of price performance. Because of our strong partnership with Anthropic, we also have all the Anthropic models available, usually on the day they are released: recently Anthropic released Sonnet 4, and it was available on Bedrock that same day. We also have open-source models from providers like Meta, and beyond that there are other models, like DeepSeek and more, available through the Amazon Bedrock marketplace. This choice of models helps our customers and partners like Pega use them according to their needs and use cases. Having said that, I'll now hand it over to Daniel, who will talk about how to use some of these models in a more responsible way and how Pega is working with us to do that.

Great. Thanks. All right, so I'm going to cover how AWS thinks about responsible AI. We think about it in eight dimensions.

They are fairness, explainability, controllability, safety, privacy and security, governance, transparency, and the veracity and robustness of outputs. Let's cover a few of these in detail. One of them is fairness, and fairness is not one metric, right?

For example, we have a service called SageMaker, at the lower level of that generative AI stack, and it has SageMaker Clarify, which has 21 metrics for fairness. So don't think of fairness as a boolean, yes or no. It is a combination of factors, and they impact each other.

What counts as fairness is going to be unique to your use case and to the personas that are using it. So as you look at fairness within your own organization, you really need to know those personas and the purpose of the AI in order to understand which components of fairness are important.

Something else important is explainability and transparency. These often get confused: you can think of explainability as the ability to understand how the AI model arrived at its decision, whereas transparency is more like stating "this was generated by AI." The EU AI Act, which was drafted and signed in June of 2024 and comes into effect 24 months after that, requires that things generated by AI, such as images, carry some level of watermark or digital signature showing that they are AI-generated, and that you disclose what copyrighted data you used to train these foundation models, etc.

Those are examples of transparency. Explainability is a more nuanced and complex topic that is an active area of research; it's about the model's thought process. There are a few ways you can approach it. You can actually have the model tell you how it arrived at a suggestion in its output, but that's not always reliable.

We've sometimes seen it give less than the full answer of how the output actually came about, so there are more complex techniques around chain of thought, etc. Also keep in mind that explainability is not perfect in the real world either. I like the example of when you have young kids, or when you were young, and the sign said you have to be this tall to ride. It never told you why.

You just either got upset, or you decided you were big enough to go ride that roller coaster. But there could have been reasons: maybe it wasn't safe, or they didn't want the insurance or liability exposure. Sure, they could have built in controls and made different decisions; they could have changed the fairness or certain aspects of that ride to allow you on. But I wouldn't expect explainability to become perfect, just as the real world isn't perfect in this same space. And then there's governance, which is the one I'll touch on. Bedrock has Guardrails, for example: you set up these guardrails, and they get included with every single request to Bedrock. You can also set it up to do automated reasoning checks; in financial services, this is used extensively to provide some verifiability of the output. The European Parliament, for example, is using AI responsibly.

They've made over 2 million documents available to their citizens using Amazon Bedrock and Anthropic. What I like about this story is that not only are they using AI in a responsible and transparent manner, they're applying it to a transparency use case.

So this is increasing the transparency they provide their own constituents. The last point I'll make about this is: don't think of it as "I need to determine all of this for my organization." Some of these dimensions, like safety, are going to be built into the foundation model itself.

The model is going to avoid certain harms, and you can always build in additional safety considerations as well. But you have to think about responsible AI at the use-case level, at the agent level, or at the level of the specific model you're choosing. These aspects are going to change depending on that, so the solution or the partner you choose needs to be able to address them at a use-case level.
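To make the idea of layering additional safeguards on top of a model concrete, here is a minimal local sketch of a guardrail pass over text. The denied topics and the PII pattern are illustrative assumptions, and this is not the Bedrock Guardrails API; in Bedrock, guardrails are configured in the service and applied to requests for you.

```python
import re

# Hypothetical policy: topics this assistant must refuse, and a simple
# US-SSN-shaped pattern to redact rather than block.
DENIED_TOPICS = {"investment advice", "medical diagnosis"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def apply_guardrail(text):
    """Return (allowed, text), mimicking a pre/post-invocation check."""
    lowered = text.lower()
    for topic in DENIED_TOPICS:
        if topic in lowered:
            return False, "Sorry, I can't help with that topic."
    # Allowed, but mask anything that looks like an SSN.
    return True, SSN_PATTERN.sub("[REDACTED]", text)

print(apply_guardrail("Which stock should I buy? Give me investment advice."))
print(apply_guardrail("Customer SSN is 123-45-6789, please file the claim."))
```

The same check can run on both the user's input and the model's output, which is why it works as a layer independent of whichever model you choose.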

And it all gets back to Alan's point yesterday about the right AI. So again, for responsible AI: put your people first, and assess the risk at a use-case level. We have a whole session and blog posts on how to do risk assessments for use cases, but basically you look at likelihood and you look at impact, and that can help drive your decision-making on how you prioritize your use cases and iterate across the entire lifecycle of the product.
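The likelihood-times-impact assessment can be sketched as a simple scoring function. The 1-5 scales, the band thresholds, and the example use cases below are assumptions for illustration, not an official AWS rubric.

```python
def risk_score(likelihood, impact):
    """Score a use case on 1-5 likelihood and impact scales."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be in 1..5")
    score = likelihood * impact
    band = "high" if score >= 15 else "medium" if score >= 8 else "low"
    return score, band

# Hypothetical use cases, listed in descending risk order.
use_cases = {
    "internal code assistant": (2, 2),
    "customer-facing credit decisions": (3, 5),
}
for name, (lik, imp) in sorted(use_cases.items(),
                               key=lambda kv: -(kv[1][0] * kv[1][1])):
    print(name, risk_score(lik, imp))
```

Re-running a table like this as assumptions change is the "iterate across the lifecycle" part: a use case that scored low at POC time may move bands once it becomes customer facing.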

So take a product mindset when you're implementing these, and be willing to go back to your original assumptions and make changes to them. And then finally: test, test, and test again. Using automated reasoning checks, or setting up automated regression testing, gives you some verifiability: if you're in financial services and you're increasing a credit limit, for example, you can verify that the system is always giving the right answer about whether those activities should be allowed. Okay, so now we're going to talk about some of the ways we're collaborating together.

I spoke about AWS Transform for mainframe yesterday. Can I get a quick show of hands: how many people attended that? Okay, a good portion, maybe a little less than half.

So I'll spend a little time here; Kerim also talked about it in his talk. At the top of our generative AI stack, we have services such as Amazon Q Developer, Amazon Q Business, and AWS Transform.

Q Developer is a coding assistant, intended to accelerate development in your code repositories. Q Business is meant to provide a chat assistant for your business users. And finally, AWS Transform, which we recently announced, is intended to accelerate the migration of workloads to the cloud. 70% of workloads are still on premises, and the statistic that really shook me is that 70% of the software in Fortune 500 companies was written 20-plus years ago. These migrations take a long time, and while there's a lot of ROI available, there's a lot of tech debt and a lot of inertia you have to overcome to do those transformations and migrations. And there are a lot of challenges, which would be fairly obvious from that previous slide.

You may have a lack of expertise in the AI space, or a lack of expertise in those legacy systems; you might have to bring people out of retirement to get information about them. You often can't justify hiring a whole new organization, separate from the one running your business, just to do a migration to the cloud. It all results in an overall slow transformation speed. So AWS Transform has a few different components.

One of them is AWS Transform for mainframe, but we also have solutions for .NET and VMware. AWS Transform for mainframe is the first AI service purpose-built for mainframe transformation. It's built upon 19 years of experience here at AWS doing these migrations.

It deploys a set of agents that handle complex tasks such as decomposition, validation, etc. Toyota Motors was on stage with us at re:Invent, our large annual conference, to talk about how they used this service. They were able to cut the time they were spending on the assessment and documentation of their legacy systems by 75%, going from weeks or months down to days. AWS Transform for mainframe outputs code at the end, but along the way it also outputs a lot of great documentation.

And so we've partnered with Pega to leverage AWS Transform: we provide that documentation, it goes into Pega Blueprint together with additional data sources you can provide, and the result is a Blueprint that is then translated into the Pega Platform. I'll keep moving, because Kerim did an excellent demo of this. The other area we are collaborating on is the Q Business Index. I talked about Q Business earlier.

It's a kind of enterprise chat assistant that allows you to connect your data sources. One of the big reasons we created it is that people spend more time in more different systems to complete the same tasks than they used to. Does anybody else?

Show of hands: does anybody else feel that way? Compared to five or ten years ago, you have to use more tools to do the same thing. Yeah.

We're up to 11 systems, from just 6 in 2019. So we've created Q Business and the Q Business Index, which is a vector store. The way I like to think of a vector store, an index, is like when you go to a library: the index tells you where the information is. It might contain some metadata, and our index is somewhat similar.

It's a representation of your data. It's not like a data lake, where you copy in all of the data from your source systems in full. It is a representation of that data that allows large language models to access that information, provide some value, and tailor the AI to make it specific to your needs. We've got a lot of customers using Amazon Q Business.
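The library analogy can be made concrete with a toy index that stores only terms and pointers back to the source systems, rather than a full copy of the documents. The source IDs and text below are made up for illustration.

```python
# Made-up source documents, keyed by a "system:doc" identifier.
SOURCES = {
    "sharepoint:policy-42": "Travel expenses must be filed within 30 days.",
    "confluence:onboard-7": "New hires complete security training in week one.",
}

def build_index(sources):
    """Map each term to the set of documents that contain it."""
    index = {}
    for doc_id, text in sources.items():
        for word in set(text.lower().rstrip(".").split()):
            index.setdefault(word, set()).add(doc_id)
    return index

def lookup(index, word):
    """Like a library catalog: return *where* the information lives."""
    return sorted(index.get(word.lower(), set()))

idx = build_index(SOURCES)
print(lookup(idx, "expenses"))
```

A real Q Business index is a vector store with metadata rather than a keyword map, but the principle is the same: it is a representation that points back to your systems, not a second data lake.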

The piece that is important here is that the Q Business Index connects with your source systems. This whole concept of connecting to your data turns generic AI into actually useful AI for your business. To do that, you need data connectors, so we've provided a set of 40-and-growing data connectors to various systems that I'm sure a lot of you have.

That data gets stored in your Q Business Index, a vector store that is a representation of your data. It allows you to get continuous updates, and you can then grant access to external parties through your identity provider. So if you want to provide access to Pega, for example, you can do that, and you can also decide which data sources they should have access to. There are fine-grained security controls here that let you control both what your internal users inherit as they log into Pega and, at the service or partner level, what data the partner should have access to.
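The fine-grained access decision described above (which data sources a given internal user or external partner may query) boils down to intersecting a request with that principal's grants. The principals, source names, and grant table here are illustrative assumptions, not real Q Business API fields.

```python
# Hypothetical grant table: principal -> data sources they may query.
GRANTS = {
    "user:alice": {"hr-wiki", "sales-crm"},
    "partner:pega": {"sales-crm"},  # partner granted only a subset
}

def authorized_sources(principal, requested):
    """Allow only the requested sources this principal is granted."""
    allowed = GRANTS.get(principal, set())
    return sorted(set(requested) & allowed)

# The partner asks for both sources but has been granted only one.
print(authorized_sources("partner:pega", ["hr-wiki", "sales-crm"]))
```

An unknown principal gets the empty set, which is the safe default: access is opt-in per data source rather than opt-out.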

So what does the future hold for generative AI? We see a few different areas. One is obviously agents. We started out with these large language models, which are generic and useful, but again, we need to bring data, and the right data.

We also often need to provide certain prompts, depending on the use case. So we started out with this really general AI concept, and now we're starting to go back toward narrow AI, except we're having a bunch of these narrow AIs work together to solve use cases. One of your agents might be an orchestrator agent that reaches out to a set of other agents, which might just be calling APIs that you already have today. So you can integrate these agents with your existing systems.
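A minimal sketch of that orchestrator pattern: a router that dispatches a request to narrow "specialist" agents, each of which could wrap an API you already run today. The agents and routing keywords are illustrative assumptions; a production orchestrator would typically let a model, not keyword matching, decide the route.

```python
def billing_agent(request):
    """Stand-in for an agent wrapping an existing billing API."""
    return "billing: opened a case for: " + request

def shipping_agent(request):
    """Stand-in for an agent wrapping an existing shipping API."""
    return "shipping: looked up tracking for: " + request

# Keyword -> specialist routing table (the orchestrator's knowledge).
SPECIALISTS = {
    "charge": billing_agent,
    "refund": billing_agent,
    "package": shipping_agent,
    "delivery": shipping_agent,
}

def orchestrate(request):
    """Route to the first specialist whose keyword appears in the request."""
    lowered = request.lower()
    for keyword, agent in SPECIALISTS.items():
        if keyword in lowered:
            return agent(request)
    return "no specialist available; escalating to a human"

print(orchestrate("Where is my package?"))
print(orchestrate("I want to dispute a charge"))
```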

That's another key piece: generative AI is not going to be a replacement for your applications. It's going to be a layer, an enhancement, that can often just be plugged in on top; you tell it how to interact with your existing systems. Another area we see growing in this space is multimodality: the concept that I should be able to provide whatever type of input to a model and get a response.

Whether that's an image, some text, some code, etc. And then the other one is multiple models. You saw in Bedrock that there's a large list of models available, and you want to be able to choose the right model for the right purpose.

And finally, as we talked about with the EU AI Act: they're the first, and I don't expect they'll be the last, to adopt AI regulations that place certain obligations on customers as well as providers and partners around how they should be thinking about this. AWS was the first to adopt an ISO standard called ISO/IEC 42001, and we're also working on the EU AI Act; we've joined various boards to make sure that we're both building responsibly and helping educate our customers. And that is actually one of my strategic recommendations.

Education. You don't have to get down to the level of "what is a neural network," but you should have the basics if you're making decisions within your company about where to use this. Go into things with eyes wide open; you can't really do that if you don't know enough to at least know what you don't know. So build your company culture around continuous experimentation, because that is, I would argue, the best way to learn: encourage experimentation within your organization, and prevent early dependencies as you do it. I like to say there's nothing more permanent than a temporary solution.

So prevent some of those early dependencies as you build out these use cases. The second recommendation is that flexibility is key: make sure you keep the ability to switch approaches for a use case. Another is that no one size fits all. I like to bring this back to the use cases we talked about earlier: you might find partners out there that have a generative AI solution that already integrates with Pega.

One example is Tech Mahindra, which has a solution called PhDs, a cloud-native solution built on AWS that integrates with Pega. It's built for the automotive aftermarket area, to handle digital transformation and business process improvement. So find a partner with a specific use case who might be able to accelerate you faster than you could on your own. And then finally: listen and enable. Listen to your domain experts, listen to your customers, and gather feedback; especially if you have something generative-AI-powered that's customer facing, have a way to capture feedback on whether the experience was good for them.

And use that as an input to how you go and improve your use cases. So with that, we'll open it up for questions, and I just want to say thanks to everybody for attending.

If you don't mind, can you go to the... So, I have a two-part question.

Okay. The first one is about data security assurance. You talked briefly about the data being secure. Is there a white paper published by AWS so that when a customer asks, "Why are you putting my data into Blueprint when it's not in my instance?", we can say it's still secure? Some of the customers we talk to are still iffy about data going outside of their environment. Is there something available for us to reassure them, to say: hey, this is still secure,

Don't worry about it. The second part is an extension to what you talked about: Amazon Q, right? So if we were building a Blueprint.

I mean, the more data there is, the more efficient and effective it becomes. You did talk about the connectivity, but does that data connectivity exist today? And more importantly, what do we tell customers? Is there a connection from the existing Pega instance into the customer's AWS instance? Is that the approach, or how is it going to be planned?

I do have a third question, but that one is more for Pega: is there an instance of Blueprint that can run in the customer's own VPC, so that they can be even more assured that the data stays there? It is being run on my own instance; it is not going out. I know it's a long question, but each part is very interconnected when you're talking to a customer. Thank you.

Yeah, I can take the first one. When we talk about data, there are multiple aspects to it. First of all, at AWS we make sure we are not using customer data to train our models. We don't even store it. And obviously, if the customer stores the data in their VPC, we do not use it for training. But coming back to your question about Blueprint: that follows a shared responsibility model.

So it's the responsibility of the Blueprint team to follow the standards we provide for protecting customer data, and there are different ways to do that. Exactly how Blueprint enforces them is something the Blueprint team can answer better. Yeah, I think that's why I said it's important that we have something like a white paper jointly produced by Pega and AWS.

It's a reassuring factor. Most of these customers are already on AWS and Pega, so there's no problem with that. It's just about reassurance: before they even start developing their applications, they want to know that the data they use for Blueprint is still secure.

That's a very common discussion we have with the Pega team as they build their applications. We provide them documentation and all the legal material, so they have access to everything from the AWS perspective, and then they publish on top of it. They take the documentation, they know what kind of tuning they have done for their requirements, and then they publish on top of that.

I'm not really aware whether that documentation exists or where it is. It might. That's something we can follow up on with the Blueprint team if you need any contacts. We work very closely with the Blueprint team there at the booth.

Go meet Sam Tremlett. He's the product manager for Blueprint; he should be able to answer that question for you. Sure. So typically the way it works is that you upload your documents to the service, and then you have to be specific about what you're asking it.

What's your use case, right? So typically you create a prompt: "Here is my data. Based on these parameters, can you recommend what the value of this data should be?", as an example.

Right. And then you can keep chatting with your agent, and it will keep making recommendations. If at some point it's not going in the direction you want, you can just course correct. You can say, you know what?

This data is from a financial company; act like you are a financial assistant or financial analyst, and then make recommendations. So there are ways you can build your prompts, and that's where the whole field of prompt engineering comes into play. The key is that you provide some sort of instructions; otherwise the model will start hallucinating and giving you information that may not be relevant.
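That "act like a financial analyst" instruction is typically carried as a system prompt alongside the user's question. A minimal sketch, assuming the request shape of the AWS Bedrock Converse API; the model ID and prompt text here are illustrative, and the actual network call (commented out) would require AWS credentials and model access in your account:

```python
import json

def build_converse_request(model_id, system_text, user_text):
    """Assemble a request body in the shape the Bedrock Converse API expects."""
    return {
        "modelId": model_id,
        # The system prompt steers the model's persona before any user turn.
        "system": [{"text": system_text}],
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
    }

request = build_converse_request(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    system_text="This data is from a financial company. Act as a financial analyst.",
    user_text="Here is my data. Based on these parameters, what is its likely value?",
)

# With credentials configured, the call itself would be:
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
print(json.dumps(request, indent=2))
```

Course-correcting mid-conversation, as described above, amounts to appending further turns to the `messages` list while the system prompt stays fixed.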

Does the data need to be structured in a specific format? No, you can upload PDFs, Word documents, Excel files; it actually does a really good job in general. The only differentiator is whether you need a multimodal model. If you have images as well as text and you need a single model, make sure you choose a multimodal model that can take both.

That's a good point. Certain models can take only text as input; others can take text, video, and images. So you can even upload an image to a model and ask, "Can you extract this information for me?"

In fact, that's one of the things Pega is also doing: they have a use case for intelligent document processing, where you upload documents and get insights from them. That's a common feature you can look for from generative AI. Okay, well, thanks everybody for your time.
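Mixing an image with a text instruction, as in the document-extraction example above, means putting two content blocks in a single user turn. A minimal sketch, assuming the content-block shape used by the Bedrock Converse API; the PNG bytes and the question are illustrative stand-ins for a real scanned document:

```python
def build_image_message(image_bytes, question):
    """A single user turn combining an image and a text instruction,
    in the content-block shape the Bedrock Converse API uses."""
    return {
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            {"text": question},
        ],
    }

# Illustrative placeholder: just a PNG header, not a real scan.
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16
message = build_image_message(
    fake_png,
    "Extract the invoice number and total from this document.",
)
print([next(iter(block)) for block in message["content"]])  # → ['image', 'text']
```

Only a multimodal model will accept a message like this; a text-only model would reject the `image` block, which is the model-selection point made above.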

Thank you so much. Thank you.

