PegaWorld iNspire 2024: Powering Transformation: How AWS and Pega Enable Businesses to Extract Real Value by deploying Generative AI Responsibly
Generative AI is revolutionizing industries, but responsible deployment remains a challenge. Join us to learn how AWS and Pega are partnering to help customers harness generative AI's transformative power while mitigating risks and ensuring ethical, regulatory, and governance compliance. Dive into practical use cases that illustrate how to leverage generative AI and other AWS and Pega technologies to drive innovation, enhance customer experiences, and streamline operations while prioritizing transparency, fairness, and accountability.
Hi everyone, I am Dan Bell. I am the Product Partner Manager for AWS here at Pega, responsible for the go-to-market relationship with AWS. With me are my friends Daya Thakker and Lakshman Seethamraju, and today we're going to be discussing transformation with AWS. So without further ado, I'm going to introduce these guys. Just some housekeeping: we have two microphones, and they are not wireless, so if we have questions later, just walk up to a microphone when you're selected for your question.
And outside of that, without further ado, my friends Daya and Lakshman. Thanks, Dan. Thank you all for coming to this presentation and taking the time to understand how we are innovating for our customers along with Pega. My name is Daya Thakker, and I'm a Senior Partner Solutions Architect at AWS. Along with my colleague Lakshman Seethamraju, I'm going to talk about how you can deploy generative AI responsibly using AWS and Pega and drive business value. Some of the topics we are covering today: we will talk about how you can get started on your innovation journey using generative AI, and we will touch upon some of the responsible AI dimensions that we use in our products and solutions. Then Lakshman will talk about the AWS generative AI stack and some of the co-innovation we are driving with Pega.
So before we get started, let's watch a quick video. Well, that was exciting. As you might have seen going through the sessions here at the conference, the theme is generative AI. How many of you are familiar with generative AI or have used it? And how many of you are planning to use generative AI in your production workloads in the near future? Well, that's pretty good. As we embark on the journey to generative AI, what we are seeing is that many of our customers in 2023 were in an experiment-and-discovery phase. But 2024 and beyond are going to look completely different. Per Gartner, almost 80% of enterprises will have used generative AI APIs or deployed generative-AI-enabled applications in production by 2026, and there is a good reason for that.
If you look at another piece of research by Gartner, they have estimated that a human worker supported by generative AI is 30% or more productive than an unsupported worker. So think of all the possibilities and all the business value that can be unlocked by that extra 30% productivity. If we take the example of a call center or contact center, you can serve more customers, reduce wait times, and improve overall customer satisfaction. While talking to our customers across industries, there are a couple of questions that always come up. They ask, how can I get going with generative AI? The first thing we tell them is that data is your differentiator. You don't want to deploy a generic generative AI capability; you want to unlock business value through the use of your own data. So you need a solid data foundation to get started.
Using your data, you can move from generic generative AI to generative AI that understands your data, your business, and your customers. Another couple of questions that come up are: how can I move quickly, and how can I generate value from generative AI? As I said on the previous slide, data is the key to maximizing the potential of generative AI. The second point is that you need to identify use cases as early as possible, and those use cases need to be relevant, viable, and impactful for your business. You need to engage all your stakeholders and come up with a roadmap of use cases. Lastly, you need to empower your entire workforce with generative AI, regardless of their AI expertise. One of the most exciting things about generative AI is its ease of use.
It's not restricted to data scientists or machine learning engineers anymore. Your operations, sales, marketing, and developer teams can all use generative AI to create engaging experiences for your customers and reimagine your business. If you attended the keynote this morning, Amazon Bedrock, which is a generative AI service from AWS, is now available on Pega Platform. What that means is you are able to consume the foundation models available within Amazon Bedrock in your Pega workflows and directly create automations that can, for example, summarize your documents or generate text for emails to your customers, along with a lot of other use cases. Now I want to shift gears and talk about a use case. Most of us are familiar with mortgage processes, so we'll take the example of a mortgage underwriting process.
For the sake of simplicity, we'll say there are only four stages involved in this process. To begin with, a customer comes in and fills out an application for the mortgage, providing their personal details, financial history, and details about the mortgage they're trying to get. Once the application is filled out, it is sent to a loan processor, who has to take all the documents and information the customer submitted, process them, and make sure everything is verified. They create a loan package, which is sent to a mortgage underwriter, who reviews these documents to make sure the customer is compliant with the lender's lending criteria. Once they review all these documents, they make a decision and communicate it to the customer. So this is a very high-level process, but even so, the two stages in the middle can take a lot of time, and the reason is that there are a lot of manual steps involved.
For example, there can be a multitude of documents a customer needs to submit, and the loan processor has to manually review each of those documents, collect the data from them, and input it into different systems. Even the underwriter has to scrutinize each and every aspect of the application to understand the risk involved in granting that mortgage, and to do that they might have to read a ten-page document just to find a single piece of information. There are other challenges that make it even more difficult. The loan package can be incomplete; data might be missing when the customer submits the application. And as we move forward in the process, there can be additional documentation requirements, and then you have to restart the entire process, going through those manual steps again. The biggest bottleneck in this process is the stare-and-compare work.
The loan processor has to look at a document and then enter the data, and for verification the mortgage underwriter might have to read through all those documents again. All of this causes fewer loans to be processed and adds cost for the lenders. Now, if we have to improve or reimagine this process, what can we do here? A simple answer could be to automate data collection, but that's a loaded statement: how do you automate everything? How do we reduce the stare-and-compare work? Another option could be to empower the underwriters and loan processors by providing them with productivity tools.
And the answer is yes, we can do that. We will show you how you can use capabilities within Pega, as well as services from AWS, to completely reimagine these stages and turbocharge your loan processing. By the way, this is just one example; you can apply these concepts to any process that involves manual stages and steps. In this scenario, let's talk about the first stage, where the loan processor has to collect the data and documentation. For ingestion of the data, we can create a very engaging experience for the customer where they are able to enter all the details required, and you can design it in an intuitive way, using all the business rules defined in Pega to make sure nothing is missing. Once the data and documents have been collected, you can run them through a classification step, where you can use Amazon Textract's OCR capabilities to extract data from the documents and then pass it on to a large language model available within Amazon Bedrock, which can classify each document. Let me talk about why this is important: in a mortgage process, you might have hundreds of different types of documents.
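The classify-then-route idea can be pictured with a small sketch. This is illustrative only: in the real flow, Amazon Textract OCRs each upload and a model in Amazon Bedrock classifies the extracted text, so the keyword-based `classify` stub and the three-document taxonomy below are assumptions, not actual product behavior or a real lender's document list.

```python
# Sketch of the classify-then-route step. The Bedrock model call is
# stubbed with keyword matching so the routing logic itself is clear.
REQUIRED_DOCS = {"bank_statement", "pay_stub", "tax_return"}

def classify(extracted_text: str) -> str:
    """Stand-in for an Amazon Bedrock classification prompt."""
    text = extracted_text.lower()
    if "account balance" in text:
        return "bank_statement"
    if "gross pay" in text:
        return "pay_stub"
    if "taxable income" in text:
        return "tax_return"
    return "unknown"

def check_package(uploads: list[str]) -> set[str]:
    """Classify each upload and report which required documents are missing."""
    found = {classify(text) for text in uploads}
    return REQUIRED_DOCS - found
```

With this shape, an incomplete package surfaces immediately: a submission containing only a pay stub reports the bank statement and tax return as missing, which is exactly the exception you would raise back to the customer.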
When customers attach those documents, you don't know what is inside them. They might say it's a bank statement, but it could be tax advice or something else. So it becomes very important for the system to understand what data and which documents are available. Using classification, you can identify exactly which documents have been submitted by the customer, and you can go back to the customer, or throw an exception, saying that certain documents are missing. Once the documents are classified, you can store them in Pega as categorized attachments, as well as in a knowledge base where they can be queried later. In the next step, you can derive key insights from these documents using Amazon Textract. For example, if a document has a very complex table structure, you can use Amazon Textract to get that data out of the tables and into your digital systems. And using Amazon Bedrock's generative AI capabilities, you can summarize these documents or normalize the data you have collected. For example, some documents might split a name into first name and last name, while others might have just a full name. You can define all of that and ask the Amazon Bedrock generative AI models to normalize the data for you, so you can use a standardized data model in your downstream processing. Amazon Textract also gives you confidence scores. For example, if a very low-quality scanned document with handwriting was uploaded, and the system is not able to confidently say whether a value is the name or the address, it will give you a confidence score: it might say this is the name, but the confidence level is 50%. In those scenarios, for the low-confidence extractions, you can bring in humans to verify that information, using a human-in-the-loop capability.
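The two post-extraction ideas just described, field normalization and confidence-based human review, can be sketched in a few lines. This is an illustrative sketch only: the alias table, the 80% threshold, and the tuple format are assumptions, and in the real flow the confidence scores would come from Amazon Textract and the normalization could be delegated to a Bedrock model.

```python
# Normalize differently-labeled fields into one data model, and route any
# extraction below a confidence threshold to human review.
FIELD_ALIASES = {"first name": "full_name", "last name": "full_name",
                 "name": "full_name", "full name": "full_name"}
CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff for auto-accepting a value

def normalize(extractions):
    """extractions: list of (field_label, value, confidence) tuples."""
    record, review_queue = {}, []
    for field, value, confidence in extractions:
        key = FIELD_ALIASES.get(field.lower(), field.lower())
        if confidence < CONFIDENCE_THRESHOLD:
            review_queue.append((key, value, confidence))  # human in the loop
            continue
        # Concatenate split name parts into one standardized field.
        record[key] = (record.get(key, "") + " " + value).strip()
    return record, review_queue

record, queue = normalize([
    ("First Name", "Jane", 0.99),
    ("Last Name", "Doe", 0.97),
    ("Address", "123 Main St", 0.50),  # low-quality scan: verify manually
])
```

The high-confidence name parts land in one standardized `full_name` field, while the 50%-confidence address goes to the review queue instead of silently entering the loan package.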
Once that information, or any issue with it, is fixed, you can move to post-processing validation and review. We can also provide a conversational interface for the loan processor here. In this case we are using Amazon Q, a conversational agent that can help you query the information in your knowledge bases, or even in your existing enterprise data sources. Using Amazon Q, a loan processor verifying a piece of information can simply ask a question like, what is the value of the name or the address? It will give them the answer so they can validate what was extracted, and it will also show them the exact location the information was extracted from. Instead of reading through a ten-page document, they can go directly to the text snippet the value came from. And of course, all of this is not possible without Pega's UI, business rules, low-code integration, and orchestration capabilities.
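The "jump straight to the source" behavior just described can be pictured with a toy sketch. In production this would be Amazon Q answering over a knowledge base; here a plain substring search over a "label: value" document stands in, purely to show the idea of returning the answer together with its location and surrounding snippet.

```python
# Illustrative stand-in for a grounded answer: return the value, its
# character offset, and a context snippet so a reviewer can jump to it.
def grounded_answer(document: str, field_label: str):
    """Return (value, char_offset, snippet) for a 'label: value' field."""
    idx = document.lower().find(field_label.lower() + ":")
    if idx == -1:
        return None  # field not present in the document
    start = idx + len(field_label) + 1
    end = document.find("\n", start)
    end = len(document) if end == -1 else end
    value = document[start:end].strip()
    snippet = document[max(0, idx - 20):end]  # context around the hit
    return value, idx, snippet

doc = "Applicant details\nAddress: 123 Main St\nIncome: $85,000\n"
value, offset, snippet = grounded_answer(doc, "Address")
```

The loan processor gets not just "123 Main St" but the offset of the `Address:` line, which is the transcript's point: validate the extraction without rereading the whole document.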
Now, moving on: what can we do for the underwriter? For underwriters, we can again provide Amazon Q and conversational capabilities. Instead of them going and reviewing each document, they can simply ask questions; for example, they can ask for a summary of the loan application, and Q can give them a very structured summary outlining each and every important piece of data relevant to making the decision. Now, you might be wondering: we talked about so many services integrated with Pega.
So what can we use to make this integration easy? I'm happy to announce that there is a Master Key accelerator available from SoftServe, built in collaboration with AWS and Pega, and it is available today in both the AWS and Pega marketplaces for you to look at. Using this connector, you can integrate with more than 200 AWS services without the heavy lifting of writing all the authentication and authorization code. So I would encourage you to visit the AWS and SoftServe booths, where we have a demo of this accelerator showing how you can use it to integrate Pega with AWS AI services. Now, Pega and AWS are both committed to bringing AI to customers in a responsible fashion. Into all the tools and solutions we work on for customers, we embed responsible AI dimensions. The definition of responsible AI is still being debated, but at AWS we consider it to be made up of eight dimensions, as you can see here: fairness, explainability, controllability, safety, privacy, transparency, veracity, and governance. As you embark on your own AI journeys, I would encourage you to start thinking about how AI interacts with the stakeholders in your environment, how AI uses data from your organization as well as your customers' data, how AI provides its results, and whether there is a way to monitor the responses from AI. You have to think about all of these. In the next part of the presentation, Lakshman is going to talk about how AWS is bringing generative AI to customers and how we are embedding these responsible AI dimensions in those products. So now I invite my colleague Lakshman to talk about the AWS generative AI stack. Thank you for that. Hello everyone.
My name is Lakshman Seethamraju. I'm a Senior Technical Account Manager at AWS. At AWS, we are always reinventing on behalf of our customers, and we've learned that to bring generative AI to your customers and employees, you need the right set of capabilities to build with. It starts with performant and cost-effective infrastructure: at AWS we have Trainium and Inferentia custom silicon chips, which offer some of the best price performance available for training and inference. Next, you need a safe, private, and easy way to build scalable applications using foundation models, and that is where we'll talk a little bit about Bedrock.
Next, generative AI applications can be enriched with context about your business, and this is what Amazon Q does; we'll talk a little bit about that as well. All of this is built on AWS with our high bar: we are delivering a set of products that are easy to use, secure, and enterprise-ready. It's still very early in the generative AI business, and we are very excited to see what we can do together. Amazon Bedrock is a fully managed service from AWS that allows our customers to choose top-performing foundation models from a wide variety of leading AI providers.
It also provides a broad set of capabilities for building secure and private applications. It allows you to customize your models in a private and secure way, using techniques like fine-tuning or retrieval-augmented generation. It also provides agents that can execute multi-step tasks on your behalf, for example to complete a travel booking or to process a claim. And all of this is done in a secure, private, and safe way. Most of our customers tell us the most important feature in Bedrock is how easy it is to experiment with, select, and combine multiple foundation models. We know it's very early days in generative AI, and customers are moving very fast.
They are building applications, but at the same time they also want the ability to pivot quickly and choose a different model to meet their business use cases, and this is the feature they love most about Amazon Bedrock. We built Amazon Bedrock with security from the ground up, from day one. Like most AWS services, such as EC2, RDS, and S3, it has enterprise-grade security features, and we baked those features into Bedrock. You can trust that any data you put into Bedrock is protected.
One key thing: none of your customer data used for fine-tuning is used to train the original model. We make a secure copy of the original foundation model and then use your data to fine-tune that copy. This way you can be sure that your data is secure. Next, all the data is encrypted at rest and in transit, and all the API calls stay within your VPC and region, so you can meet data residency requirements, for example. We also know that most enterprises have compliance and security requirements, so when we built this service from the ground up, we made sure it supports GDPR, SOC, ISO, and other compliance frameworks, and it is also HIPAA eligible. When we build generative AI applications, we want to make sure the responses are safe and not toxic.
The best way to do that is to put safeguards in place within the platform while you're developing these applications. Bedrock Guardrails allows you to put these safeguards in place at the application level or across the organization. You can do things like add guardrails that avoid toxic language; you can filter these things out. When a user interacts with these applications, behind the scenes the service looks at both the user's request and the response, and if either falls into a category you marked as not acceptable, it blocks it right there. This supports one of those responsible AI dimensions: your organization can have a policy about what your brand voice is
and what your leadership's generative AI goals are, and you can incorporate all of these things using guardrails. Harmful content can be filtered, you can block topics, and very soon we'll also be introducing a feature to redact sensitive information. For example, when you have a call center chat, after the chat is over we can scrub the PII data from that chat. Although we have all these features, there are still challenges with generative AI, because generative AI models are trained on general knowledge; they don't have the context of your business.
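Before moving on, the guardrail behavior described above, blocking denied topics and scrubbing PII, can be pictured with a toy filter. Amazon Bedrock Guardrails does this as a managed service; the single denied topic, the canned refusal message, and the SSN-shaped regex below are simplified assumptions, purely for illustration.

```python
import re

# Illustrative guardrail: block a denied topic outright, otherwise redact
# SSN-shaped PII before the text reaches the user.
DENIED_TOPICS = {"investment advice"}  # assumed org policy, not a real config
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def apply_guardrails(text: str) -> str:
    if any(topic in text.lower() for topic in DENIED_TOPICS):
        return "Sorry, I can't help with that topic."
    return SSN_PATTERN.sub("[REDACTED]", text)
```

In the managed service the same two checks run on both the user's request and the model's response, which is the "looks at both sides" behavior described above.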
Because generative AI models are trained only on general knowledge, that lack of business context can give you answers that are not relevant to you. The other concern we heard from our customers is security. Within an organization, there are policies about who has access to which data, but these models don't know who you are, what your organization is, what role you play, or what level of access you have. That is a big security concern. Similarly, data privacy and compliance are requirements most enterprises depend on. We knew this was something that needed to be fixed, so at AWS we went ahead and created a service called Amazon Q. Amazon Q is a generative-AI-powered assistant, as I mentioned in the underwriting workflow examples, and it has been built from the ground up with safety and security as the foundation. For example, you can point Amazon Q to your knowledge bases.
We know that most customers have a lot of information spread out across the organization in multiple locations: wikis, emails, S3 buckets, and other sources. You can simply point Amazon Q at any of these sources; it will take in the data, analyze it, and let you have conversational dialogs with the assistant, so it now has your additional business context. In the case of the underwriters, as Daya mentioned, let's say your organization has guidelines and policies for approving or declining a particular loan. You can feed that data into Q, and Q will use it along with the large language model to give you a more meaningful response. What we've seen is that when you ask Amazon Q a question, under the hood it knows the best-performing model available for that particular task and routes the request to it. Now, here is a quick overview of Amazon Q.
As I mentioned, it delivers quick, accurate, and relevant information, because it has your business context, in a secure and private way. It can also execute actions out of the box and has custom plugins, so you can automate certain tasks from the output you get from Amazon Q. It respects access control based on your user permissions: if a user cannot access a particular drive outside of Amazon Q, they will not be able to access that data through Amazon Q either. We have built security into Amazon Q from the ground up; it is enterprise-class, which is what our customers are looking for. And I talked about connectors: we have around 40 popular enterprise connectors to various document repositories from which you can take in data.
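The permission behavior just described can be sketched in a few lines. Amazon Q enforces access control against your actual identity source and document permissions; the hard-coded ACL table, file names, and group names below are purely illustrative assumptions.

```python
# Illustrative sketch: the assistant only retrieves from documents whose
# ACL intersects the requesting user's groups, so an answer can never be
# grounded in content the user couldn't open directly.
DOCUMENT_ACLS = {
    "underwriting-policy.pdf": {"underwriters"},
    "hr-salaries.xlsx": {"hr"},
}

def visible_documents(user_groups: set[str]) -> list[str]:
    """Return only the documents the user's groups are allowed to read."""
    return [doc for doc, allowed in DOCUMENT_ACLS.items()
            if allowed & user_groups]
```

An underwriter would see the policy document but never the HR spreadsheet, which is the enterprise-class behavior the transcript describes: the retrieval filter runs before any model sees the content.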
Many organizations have data spread out across many locations, but with Amazon Q you simply point it at those repositories and it can absorb that data. Next, it enables administrators to easily apply guardrails, as we discussed with the responsible AI dimensions; you can make sure that all generative AI application development follows guardrails decided by your organization. And finally, it streamlines tasks with user-created lightweight applications, which can easily be integrated even with Pega workflows.
Let me quickly talk about how AWS and Pega have been innovating. Pega and AWS have been partners for many years, and together we are bringing the most impactful business applications to life in the cloud. Together we deliver a low-code platform powering AI-driven customer engagement and workflow automation across our clients' cloud transformation journeys. Pega Cloud is delivered as SaaS, powered by AWS, and is available through the AWS Marketplace. And here are some joint solutions we have worked on alongside Pega.
As I mentioned, we recently announced a generative AI connector from Pega Platform to Bedrock, which you can start using. We also worked with our partner SoftServe, together with Pega and AWS, on the SoftServe Master Key accelerator; I would encourage all of you to stop by the AWS or SoftServe booth for a quick demo of that. There are also solutions like enhanced intelligent document processing: the workflow I was talking about is available as a demo built on Pega Platform, and you can take a look at that too. Pega has long used machine learning with SageMaker: with Pega Process AI you can bring your SageMaker models into Pega, and the Pega Voice AI integration can be used along with Amazon Transcribe to work across languages.
We continue to work with Pega and our other partners to bring more joint solutions to all of you. Thank you very much. Here are a few resources; I hope you can take a quick snap of this slide and visit us at the booth for a demo of our joint solutions with Pega. Thank you. And now we'll open it up for questions. If you have any questions, we'll be happy to answer them.
Please, there's a mic there if you want to use it. Hello there. Hi, Lakshman. It was really illuminating seeing the partnership. We are a company that's heavily invested in Pega and AWS, and our call center is living and breathing in Amazon Connect, Q in Connect, Amazon Transcribe, and Amazon Lex. I wanted to ask: on the joint roadmap of Amazon Connect and Lex with Pega call centers and customer service, are there any unified solutions coming? Because we are seeing some issues with the placeholder integrations between these platforms, and it's not been the smoothest sailing. So, are there any plans for that?
We are working with the Pega Customer Service team to identify the areas where we can innovate together, and in the future you will definitely see things that streamline this experience. I would encourage you to provide that feedback to your AWS account manager, or reach out to your Pega account manager, as that will help us deliver a more streamlined experience. And we're willing to help you today: if you come to our booth, we can have a detailed discussion, understand your use case, and see if there is something that can be done in the near term, and then we can even talk about some of the strategic things we can do in the long term. Sure. Thank you.
You can go ahead; I'll wait. Yeah. So, thinking about implementing generative AI responsibly, like the title of the session: you read some of these things, privacy, security, explainability, governance, transparency, controllability, and a lot of them, from the outside world, don't scream generative AI. Thinking about large corporations that have big roadblocks with generative AI (even the word "generative" sometimes gets people worked up), outside of the key fundamental points you shared on screen, what would you say is the best way to have those conversations, or what are your recommendations for moving past some of those things that can be, quote unquote, scary? So, on security: I have worked in cybersecurity in my past life, so I know some of the concerns. People just say, I have security concerns, I don't want to move to the cloud. What helps is peeling the onion. There are definitely legitimate concerns behind those questions, but there are always solutions available to address them. What we saw last year, when customers started adopting generative AI, is that they started with internal use cases, because there is always a lot of friction when it comes to applying generative AI to your external customers. It is a little bit easier to start with internal use cases, learn from them, showcase those use cases and the work you do to your security organization, and then come up with a roadmap around the security concerns and how to address them.
Have a conversation with your cloud partners, your AI partners, and even Pega. It starts with the conversation, to be honest, and with understanding each and every concern and diving deep into them. I think you heard the question. Today, if I have to deploy a Pega application as a Kubernetes service, we get the Docker images, we create the Helm charts, and we set up everything ourselves so that we can run Pega as a service and handle scalability, but we have to do all of it in a silo today. Is there any solution AWS is thinking of to make deploying Pega as a service easier, whether it is EKS, ECS, or whatever it is? So, Pega Cloud, that's the answer. It is a managed service.
With Pega Cloud, it's a SaaS service; you don't have to worry about managing any of the infrastructure, and you can start working directly on your business applications on top of it. But what if we have to deploy it in a cloud account that we own, like AWS? So, I think Pega has published guidance around how you can deploy Pega on EKS. OK, but because it supports only EKS, my question was: is there any plan to expand that to support ECS, Fargate, or anything like that? We can definitely take your feedback, but I'm not aware of one, because that's the Pega roadmap.
So I cannot share that here; you'd need to have a conversation with the Pega folks. Thanks. I think a lot of the implementation questions we can certainly redirect to one of our global SIs, and SoftServe, being one of our partners on this connector and the Master Key accelerator, is certainly one to approach with questions like that, because they're the real experts on deploying into an AWS customer cloud. Sure. So I have a question about document analysis. How effective is the AWS document analysis?
Some of the submission documents we get in the insurance industry are handwritten, and we couldn't find a comprehensive solution that analyzes handwritten documents; sometimes it's hard for us even to read them. So, Textract has the capability to extract handwritten text. In terms of accuracy, it will depend; you'll have to run tests and see if you're getting the accuracy you need. On top of that, there are ways you can customize some of those features to meet your requirements. But definitely, if you come to our booth, we have a demo for document processing and we can show you some of the handwriting capabilities there. Right? Yeah.
Our documents are stored on S3. Okay. So are you familiar with Amazon Textract? We tried putting some of our documents through it, and it didn't work, so I don't know. So, if you are an AWS customer, you should work with your account team, and they can bring in the intelligent document processing specialists to work with you, because sometimes it's just the way you use some of these services. If you are just directly calling the APIs, you may not get the results, but there might be cleverer ways of using those services to get the result you want.
So definitely, I would encourage you to talk to your account team and work with the specialists. All right. Thank you. Any other questions? All right, thanks for your time, and please visit the AWS booth in the Innovation Hub.