PegaWorld | 39:17
PegaWorld iNspire 2024: Pega and Google Cloud: Driving Innovation with Generative AI
Discover how Pega and Google Cloud synergize to empower businesses to unlock data-driven insights, optimize operations and drive customer-centric solutions. Explore Google Cloud’s Gemini, Vertex AI platform, and CCAI alongside BigQuery, Feature Store and Workspace. Core benefits highlighted include intelligent, personalized customer engagement, process automation for increased efficiency, and cloud transformation for scalability and innovation. Specific industry use cases will demonstrate the value of these technologies in practice.
Welcome, everyone. Thanks for coming to our session. I'll let the team introduce themselves in a moment; I just wanted to say hi. I'm the Pega partner manager for the global Google relationship. We're at booth number 14, so please stop by and see us if you haven't already. One question we've been asked throughout the last 24 hours is: is Pega Cloud on GCP available?
And yes, it was announced last year at PegaWorld, and we're going strong and doing a lot of innovation together. I'll let the team carry on and talk to you about it. — Thank you so much, Despina. We're super excited to be here, and I promise I will not say anything that will embarrass Hakeem. Thank you so much for joining the session. We're excited to be sponsoring the conference and to be here together with Pega, who is a valued partner of ours — but not only a partner: we actually use Pega inside of Google. I see some of my peers here at our session, and we're happy to share best practices on how we use Pega.
Hakeem and Rob are going to talk a little bit about what's coming and how we're transforming our collaboration with Pega. As Despina said, if you have any further questions, feel free to interact with us — come to our booth or talk to the fellow Googlers in the audience. So without further ado, you guys take it away. — Awesome. Hi everyone, I'm Hakeem Garcia, principal cloud architect with Google. I've been with Google for four years, and for the last two years I've been working really closely with Pega to innovate and work together. — And hi everyone, I'm Rob Smart.
I'm a solutions consultant at Pega, based in the UK, working with our customers. I've been with Pega for about eight years. — Awesome. The session has two parts. In the first part we want to highlight that, as mentioned, Pega Cloud runs on Google Cloud Platform — it has been running, supported, for about a year — and showcase the capabilities both solutions can give our customers on the innovative technology side. So the first part of the session will cover some building blocks customers can leverage, and the second part, depending on time, will focus on a couple of use cases showing how those building blocks come together. The first one, and one that's really dear to my heart, is Vertex AI. Vertex AI is our end-to-end machine learning platform that our customers can leverage to build and train large language models, or any machine learning model.
It gives end users a great way to train and serve models. It's an end-to-end platform with all the capabilities any developer needs, and it allows customers to deploy at large scale depending on their use cases. It's grounded in customer data — grounded in truth. And finally, Vertex AI has been built enterprise-ready, from both a compliance and a security perspective, so enterprise customers can feel good about deploying it. Let's take a slightly deeper look at how it's built. Think about Vertex AI as a platform.
It has multiple layers, depending on your expertise and your journey into AI. At the foundation of Vertex AI you have the infrastructure — the latest GPUs and TPUs, at your fingertips. Any machine learning developer can take advantage of that infrastructure to build their models. The next level up is what we call the Vertex AI Model Garden. With the rise of GenAI and the diversity of large language models, we curate models for different use cases for your consumption. So we give you choices at your fingertips: you can take a model, train it, test it, and deploy it to production.
The next layer is what we call the AI model builder. This is geared toward data scientists and machine learning developers, giving them a truly end-to-end platform for machine learning — training, serving, prediction, fine-tuning, you name it: the whole MLOps lifecycle at your fingertips. We also cater to developers: some customers have a small or rapid-prototype use case, or no data scientists in-house, so they can use out-of-the-box agents that perform specific tasks and are easy to deploy quickly and at scale. And finally there's what we call AI solutions. Say you want to address a specific use case like document processing or contact center
AI. Those are pre-built solutions we offer to our customers, based on Google's expertise in AI and ready to use. The Vertex AI ecosystem works hand in hand with the latest and greatest in machine learning. As you've no doubt heard, generative AI and large language models are the big focus right now, and Google's approach has been very unique in that we're creating an ecosystem of large language models. We call them Gemini. It's really Google coming together — our DeepMind expertise, Google Research — to produce these models. Think of it as a family of large language models geared toward specific use cases and specific tasks.
The way we look at it is that for each specific task and use case, we have different models. We recently announced Gemini Pro; we have Gemini Nano for small use cases like an edge solution or a mobile phone; we have Gemini 1.5 Flash, just announced, for simpler use cases; and we have Gemini 1.5 Pro and Gemini Ultra. Each LLM can be used for different use cases, from simple summarization to image generation to large-context-window processing. The whole Gemini ecosystem has been built multimodal from the ground up, which means you can ingest text, documents, video, and chat in one context window, and it's able to process
it all in one shot. So within the Gemini family, Gemini 1.5 Pro is our mid-size, really breakthrough model. The thing I want to mention about Gemini 1.5 Pro specifically is the context window: it can be extended up to 2 million tokens, which allows you as a customer to take on complex use cases — complex documents, complex formats — and process them rapidly. And between Vertex AI and Gemini, we also give you the ability to filter the context, to control what kind of prompting you're using and, similarly, what kind of responses you get, within the ethics and safety requirements of your customers. So let's bring it back together a little bit — Vertex AI, especially in the context of Pega.
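To make the multimodal idea concrete, here is a minimal sketch of assembling a text-plus-video request. The dictionary field names roughly mirror the public Gemini request format but should be treated as an assumption, and the bucket URI is invented; a production call would go through the Vertex AI SDK, as the comment indicates.

```python
import json

# Hypothetical request builder for a Gemini call. In production you would
# use the Vertex AI SDK, roughly:
#   from vertexai.generative_models import GenerativeModel
#   model = GenerativeModel("gemini-1.5-pro")
#   response = model.generate_content(parts)
def build_gemini_request(text_prompt, gcs_video_uri=None, max_output_tokens=1024):
    """Assemble a multimodal request body: text plus an optional video part."""
    parts = [{"text": text_prompt}]
    if gcs_video_uri:
        # Gemini is multimodal from the ground up: video, images, and
        # documents can share one context window with the text prompt.
        parts.append({"file_data": {"file_uri": gcs_video_uri,
                                    "mime_type": "video/mp4"}})
    return {
        "contents": [{"role": "user", "parts": parts}],
        "generation_config": {"max_output_tokens": max_output_tokens},
    }

req = build_gemini_request("Summarize the customer complaint in this clip.",
                           "gs://my-bucket/complaint.mp4")
print(json.dumps(req, indent=2))
```

The point of the sketch is simply that one request carries heterogeneous parts in a single context window, which is what the speaker means by processing everything "in one shot".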
You have the Pega Platform running multiple workflows to solve customers' business processes. If you then need a unified AI platform to complement what Pega is doing — to create, train, serve, and experiment with models at large scale — Vertex AI works great together with Pega to enable that, and it gives you the whole machine learning pipeline in one framework. We can go deeper on Vertex AI if you have more questions. The thing I want to mention is the beauty of the Model Garden: it depends on your use case and your approach, and we give you the choice.
As I said, we have the Google foundation models from the whole Gemini family — Gemini 1.5 Pro and so on. And depending on the task, we also give you out-of-the-box, domain-specific models, like Sec-PaLM for security and cybersecurity and Med-PaLM for medical and research. We have task-specific models for OCR, text-to-speech, and so on, so you really have a good choice depending on your use case. The Model Garden also includes open-source models such as Llama, if you're interested in leveraging those. And the beauty of the Model Garden is that it's a Colab-like experience: with one click you can deploy a model, test it, do some prompt engineering against it, and get responses. And if you like what you see, you can deploy it to production and start serving within a few clicks.
One more thing I want to mention around Vertex AI and large language models: what we're seeing with our customers is that sometimes these large language models don't meet their specific use cases, or they have proprietary data — internal data, customer data — that they want to take advantage of. On top of those foundation models, we give you the option to train them. The first step is basic prompt engineering: you can steer the model to respond to your specific use case. You can go a little deeper with what we call adapter tuning: you take a large language model and bring your own data.
You train against the large language model and end up with a set of weights derived from your data alongside the foundation model, and when you run prediction, we combine the two. We see a lot of customers leveraging this, because their data stays secure within their own Google Cloud project — it's their data, nobody else sees it, it sits in their Cloud Storage, and the training outputs are also exported to their own Cloud Storage. Everything stays secure within your own environment.
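To make the adapter-tuning workflow a little more concrete, here is a hedged sketch of preparing a supervised tuning dataset. Vertex AI tuning jobs read JSONL training examples from Cloud Storage, but the exact record schema depends on the model and API version, so treat the field names below as illustrative rather than authoritative.

```python
import json

# Hedged sketch: turning (prompt, expected response) pairs into JSONL
# for a supervised tuning job. The record schema here is an illustration;
# consult the tuning docs for the format your model version expects.
def to_tuning_jsonl(pairs):
    """Serialize (prompt, expected_response) pairs as one JSON record per line."""
    lines = []
    for prompt, response in pairs:
        record = {
            "contents": [
                {"role": "user", "parts": [{"text": prompt}]},
                {"role": "model", "parts": [{"text": response}]},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

examples = [
    ("What is my data cap?", "Your plan includes 50 GB of data per month."),
    ("How do I reset my router?", "Hold the reset button for 10 seconds."),
]
jsonl = to_tuning_jsonl(examples)
# Upload the result to your own Cloud Storage bucket and point the tuning
# job at it; as described above, the data and the resulting adapter
# weights stay inside your own project.
```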
The next step up is reinforcement learning, where you bring a human into the loop, and finally there's distillation. So in summary, you have options to customize a large language model to your specific use case and your data. One last quick thing: Vertex AI Agent Builder. If you want to build a quick search application or a conversational application — without knowing much about machine learning, or without the expertise in-house — you basically load your data and your documents, and within a few clicks you end up with a Google-like search over your own data. When you prompt it, you get not only a response but also the source of that response. It's a really quick way to create and orchestrate agents at scale. Speaking of data —
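As a toy illustration of the "response plus source" behavior just described — this is emphatically not the Agent Builder API; a naive keyword match stands in for the managed index, and the document names are invented:

```python
# Toy grounded search: return an answer together with its source document,
# the way Agent Builder cites sources over your indexed content.
DOCS = {
    "billing-faq.pdf": "Refunds are processed within 5 business days.",
    "network-guide.pdf": "Restart the router to clear most connection faults.",
}

def grounded_answer(query):
    """Return (snippet, source) for the best keyword-matching document."""
    query_words = set(query.lower().split())
    # Score each document by crude word overlap with the query.
    source, text = max(
        DOCS.items(),
        key=lambda kv: len(query_words & set(kv[1].lower().split())),
    )
    return text, source

answer, source = grounded_answer("How long do refunds take?")
```

Returning the source alongside the answer is what lets an end user (or a Pega workflow) verify the response against the underlying document.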
Vertex AI and machine learning are all great as tooling, but they're nothing without the data. What Google has, which is unique, is a large serverless data warehouse called BigQuery. It has been around for a while and has proven itself as the go-to data warehouse for any use case — data analytics, log analytics, anomaly detection, customer information, customer 360 — you name it. It's the data warehouse where you can store all your data and access it extremely fast. And why is this important?
Think about when you have a Pega workflow, a Pega business process: when you're a GCP customer, the place to store the data and access it from Pega is BigQuery. And it's not only Pega that can access it — all the Vertex AI machine learning tooling we just saw can access and create data within BigQuery. It's a really powerful tool that, if you're deploying Pega Cloud, you need to be aware of as a data store; a customer data platform is a good fit for BigQuery. Another thing I want to mention here — and we'll come back to it in the use case — is Vertex AI Feature Store. In any machine learning modeling there's the concept of features: you define your features, store them, serve them to your model, and keep track of them.
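Here is a hedged sketch of what pulling decisioning features out of BigQuery could look like. The table and column names are invented for illustration, and the actual query would run through the google-cloud-bigquery client, as noted in the comment.

```python
# Hypothetical feature query for the churn scenario discussed later.
# In production:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(sql).result()
def churn_feature_sql(project, dataset, lookback_days=30):
    """Build SQL aggregating per-customer signals over a lookback window."""
    return f"""
        SELECT customer_id,
               COUNT(*) AS support_contacts,
               SUM(dropped_sessions) AS dropped_sessions
        FROM `{project}.{dataset}.network_events`
        WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(),
                                        INTERVAL {lookback_days} DAY)
        GROUP BY customer_id
    """

sql = churn_feature_sql("my-project", "cdp")
```

The same aggregation could feed either a Vertex AI training job or, via the upcoming BigQuery connector mentioned later in the session, a Pega data set.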
Feature Store is also built on BigQuery, but it gives you response times on the order of 20 to 30 milliseconds, which is really critical when you're talking about Pega CDH. CDH does its decisioning within roughly 200 milliseconds — near real time — and for that kind of quick response you need a feature store that answers really fast. That's why we're seeing a lot of solutions built around Vertex AI Feature Store and BigQuery to give CDH the data access it needs in real time or near real time. Another important service — I'm moving quickly through the many services we have — is Document AI. Document AI sits at a higher level: services based on machine learning and on Gemini as the large language model, but which you don't need to deploy, manage, or develop yourself. It's geared toward processing documents. Think about an insurance claim use case: you have a workflow within Pega that handles it,
but at the same time it needs to process a large volume of documents. Document AI is a service that can do that: it can scan your documents, OCR them, extract specific entities, and store those entities for use in BigQuery or any other data store you need, to be leveraged from a Pega workflow. We're seeing a lot of customer use cases where the combination of Pega and Document AI makes a great solution. And the beauty of Document AI is that it integrates well with our whole ecosystem — Cloud Storage, Vertex AI Search, BigQuery, Looker — to give an end-to-end solution for almost any use case. Next slide. — All right, thank you very much. Thanks again, Hakeem, for that overview of the different Google services that are available.
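To sketch how Document AI output might feed a Pega claims workflow: the real call goes through the google-cloud-documentai client (shown in a comment), and here a plain-Python helper flattens a dict shaped like the returned Document's entity list into a record a workflow could consume. The entity types and values are invented for illustration.

```python
# Hedged sketch of post-processing Document AI results for a claims case.
# A real call looks roughly like:
#   from google.cloud import documentai_v1 as documentai
#   result = client.process_document(request=...)   # result.document.entities
def entities_to_record(document):
    """Map extracted entities to {type: value}, keeping the best-scored one."""
    best = {}
    for ent in document.get("entities", []):
        etype = ent["type"]
        score = ent.get("confidence", 0.0)
        if etype not in best or score > best[etype][1]:
            best[etype] = (ent["mention_text"], score)
    return {etype: value for etype, (value, _) in best.items()}

# Dict stand-in for a parsed claim document, with a low-confidence duplicate.
doc = {"entities": [
    {"type": "claim_number", "mention_text": "CLM-0042", "confidence": 0.98},
    {"type": "claim_amount", "mention_text": "$1,250.00", "confidence": 0.91},
    {"type": "claim_amount", "mention_text": "$1,250", "confidence": 0.55},
]}
record = entities_to_record(doc)
```

The resulting flat record is the kind of payload you could store in BigQuery or pass into a Pega case as properties.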
What I want to do now is talk about how you can use them in Pega. Looking back a couple of releases, in Pega Infinity 8.8 we provided a repository connector to Google Cloud Storage. This lets the services Hakeem was describing share their output in a Cloud Storage bucket that you can then bring into Pega. It's a reliable option for file transfer, but obviously we're looking to provide more options to use those services natively. As we've just announced, in Pega Infinity 24.1 there is now the ability to connect to Pub/Sub, meaning you can stream data and events in real time. This is a real-time data set in Pega, much the same way you might use Kafka, and it's great for bringing in events — perhaps for communications,
it might be network dropout events coming in; for banking, it might be a change in a risk profile. You stream that data into Pega and react in real time, whether that's feeding decisioning or changing the course of a case in flight. So there are lots and lots of uses for doing things in real time, and it's available already in Infinity 24.1. Hakeem spoke a lot about Vertex AI and the options available there. With Prediction Studio in Pega, there's now the ability to connect to Google Vertex AI and use the models Hakeem was talking about directly in Pega. This is only for predictive models, since we're using Prediction Studio — so AutoML, XGBoost, and scikit-learn models, as it says here.
These are models for things like a customer's churn risk, which you might use in decisioning or in case processing — really anything of a predictive nature you can think of. You can embed them wherever you need them in a process, which makes driving toward the autonomous enterprise easier. Looking a bit further into the future on the roadmap, there's going to be native connectivity for BigQuery as well: a database data set in Pega, giving you the ability to query data straight from BigQuery. I think this is a much-wanted feature with a wealth of use cases, not least in Customer Decision Hub. A little further down the line there are plans for Bigtable too, so if you need super-fast access to data, particularly for real-time decisioning use cases, there will be another connector for Bigtable.
So again, this really allows those of you who have already invested in your Google Cloud data platform to use it alongside Pega for decisioning. And as we heard in the keynote announcement yesterday, Gemini is going to be selectable as your large language model in Pega for your generative AI use cases. Wherever you see Buddy being used — any of the hundreds of GenAI use cases we've probably shown you recently — you'll be able to select which model you want, and of course one of those will be Gemini, along with the particular Gemini model you want to use. So that's all on the roadmap, all coming up soon. And just to reiterate: as of last year's announcement, Pega Cloud can run on GCP, so you can have all of these services running at super-low latency without any ingress or egress costs — a huge benefit to you. Oh, sorry — one more from me.
Those are the out-of-the-box connectors we're mentioning, but there's a ton of other Google services that you can still integrate with via REST APIs in the Pega Platform. The connectors allow a more native, faster connection, but you can always use REST APIs to leverage the other services. — Awesome, thanks. So we've talked about services on the Google Cloud side — there are a lot more we didn't even mention, like Cloud Functions
and other services customers can leverage. So how does it all come together — what are the patterns to connect Pega Cloud, as a service, to a customer's GCP infrastructure and to on-prem back-end systems? At Google we have a service called Private Service Connect (PSC). If you need private connectivity from one Google project to another — in the same organization or across organizations — PSC uses the concept of a producer and a consumer of a service. This allows you to integrate Pega with any endpoint securely and privately, without going out over the internet.
On the next slide you can see it coming together. Say that on the right-hand side you have Pega Cloud on GCP running all your applications; then you, as a customer, have your own GCP project, plus all your on-prem legacy systems. The pattern we're seeing to make all this connectivity seamless is to create a Private Service Connect link between Pega Cloud, as a SaaS offering on GCP, and your own GCP infrastructure, and then from your GCP infrastructure to on-prem you use either Cloud VPN or Cloud Interconnect. So, coming back to Vertex AI and all those applications we discussed:
the way to connect to them is either through basic REST APIs over SSL or through Private Service Connect. That's the pattern for connecting and integrating both systems at scale. If you have more questions, I'm happy to go into detail. So now that we've covered the building blocks you can leverage between Pega and Google Cloud — what kinds of use cases can they support, and which use cases are we seeing solved in the industry with this technology? Around GenAI, in the last year or so we've seen a lot of maturity in the use cases being solved.
To mention them quickly: most are around knowledge — knowledge and information within an enterprise context, or for the customer's own customers. It's knowledge management, what Pega calls Knowledge Buddy. We see a lot of use cases around that, and they can be solved with Vertex AI or, as I said, with Knowledge Buddy. Another area is content generation: we're seeing a lot of marketing-campaign use cases where you need to generate images, logos, or a mix of images, logos, and text — a use case we're seeing solved with GenAI. And there's one use case
we want to dive into a little more deeply: real-time customer engagement. This use case is based on last year's winning TM Forum Catalyst. Pega, Accenture, and Google created a Catalyst that really showcased the power of Google Cloud and Pega and how we can solve customer engagement. The use case is around telco — CSPs, communication service providers — who are seeing a lot of challenges around customer churn and legacy systems. A CSP's legacy systems might notify a customer multiple times a day with out-of-context, disjointed, one-off notifications: one message says there's an outage, and half an hour later another offers an upsell. These scattered notifications create resentment in the customer and drive churn. The Catalyst we built — which, as I said, won at TM Forum — set out to solve those problems: reduce customer churn by 20%, increase revenue, and increase customer lifetime value. You're about to see the solution,
and we'll walk through a use case built with all the building blocks we discussed earlier. To bring it home, let's take the example of a customer — in this case, Matt. Matt has been a customer of this CSP, but he's started showing some frustration with the performance of his service. You can see frustration building up, and he's really thinking of switching CSPs. The data collected from Matt's behavior — his web browsing history and so on — was imported into BigQuery, and machine learning models for churn and cross-sell were run over it. In near real time, Matt was identified as a high churn risk, and that triggers some action.
When the model detected a high risk of churn, an event was published to Pub/Sub. Then what happened, Rob? — The Pub/Sub event comes in, which Pega is monitoring, and that in turn triggers a decision — a next best action — to work out: okay, we've seen there's a high churn risk here, what are we proactively going to do? We've also received events saying there have been network dropouts, so we know there are technical issues with the service Matt is having.
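To make the trigger concrete: the churn event published to Pub/Sub might look like the following sketch. The event schema is invented for illustration — Pub/Sub itself only carries opaque bytes — and the actual publish call (shown in a comment) requires the google-cloud-pubsub client.

```python
import datetime
import json

# Hedged sketch: the kind of event a churn model might publish for Pega's
# Pub/Sub real-time data set to consume. Publishing in production:
#   from google.cloud import pubsub_v1
#   pubsub_v1.PublisherClient().publish(topic_path, data=payload)
def churn_event(customer_id, churn_score, reason):
    """Serialize a churn-risk event as UTF-8 JSON bytes for Pub/Sub."""
    body = {
        "event_type": "CHURN_RISK",
        "customer_id": customer_id,
        "score": round(churn_score, 3),
        "reason": reason,
        "emitted_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(body).encode("utf-8")  # Pub/Sub messages are bytes

payload = churn_event("matt-001", 0.8712, "repeated network dropouts")
```

On the Pega side, the fields of this payload would map onto properties of the real-time data set that feeds the next-best-action decision.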
So we make that decision in real time about the best thing we can do to try to prevent the churn. We look at the customer profile in Pega, populated from all the data sources we mentioned, such as BigQuery, drawing on all the enterprise's data. From his marketing profile we work out that Matt is a movie lover, so Pega determines that the best offer we can make is a three-month free trial of unlimited movie streaming. That message is sent directly to Matt, but it's also personalized using GenAI: we make sure we acknowledge what's already happened, we're empathetic to his exact situation, and we tailor the message that goes out.
So Matt gets the message, and he's happy with it: he's received acknowledgment from the provider that the issue exists and has been understood, and he stops browsing the terms-and-conditions page where he'd been checking how to exit his contract. In real time, we're constantly re-deciding what to do next. The model says there's still a churn risk even after this, so we send a follow-up message to Matt — a service action, a guide telling him how he can change some settings to improve the service he's getting. And we know Matt is quite tech-savvy as well,
from the customer data we have, so again we use GenAI to tailor that message, making sure we're not over-explaining to someone as tech-savvy as Matt exactly how to apply the changes. — Awesome. You can see that every message in the flow is sent in context and in a timely manner. As the next step, we reassess Matt's profile, and since we still think he's at risk of churning, we offer him a monitoring service for his connection. Finally Matt says: okay, perfect, I see all this help, really tailored to my needs — and he stays with the CSP. The CSP avoids Matt's churn, and he remains a loyal customer. Again, we leveraged the power of the customer-360 profile data in BigQuery, Vertex AI machine learning models — from upsell to churn detection — and, at the core of the solution, Pega CDH decisioning within 200 milliseconds, notifying the customer across multiple channels depending on the context.
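The arbitration narrated in this scenario can be caricatured in a few lines. This is a toy stand-in, not Pega CDH's actual decisioning logic; the thresholds, segment names, and offers are all invented for illustration.

```python
# Toy next-best-action arbitration, loosely inspired by the Matt scenario.
def next_best_action(profile):
    """Pick an action from churn risk, recent service issues, and segment."""
    churn_risk = profile.get("churn_score", 0.0)
    has_outage = profile.get("recent_network_dropouts", 0) > 0
    if churn_risk < 0.5:
        return "no_action"
    if has_outage and profile.get("segment") == "movie_lover":
        # Empathetic retention offer matched to the marketing profile.
        return "offer_free_movie_streaming_trial"
    if has_outage:
        return "send_service_apology"
    return "offer_retention_discount"

action = next_best_action({"churn_score": 0.87,
                           "recent_network_dropouts": 3,
                           "segment": "movie_lover"})
```

In the real solution these inputs arrive through Pub/Sub events and BigQuery/Feature Store lookups, and the chosen action is then personalized with GenAI before going out over the customer's preferred channel.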
A really powerful end-to-end solution. Anything to add? — Yes, just to emphasize the importance of having that central decisioning brain in the middle, connected across the Google estate — getting the value of all the data out there through the connectors we mentioned — and on the other side connected out to all the channels, both owned channels and third-party channels, so it's a consistent message going out all the time. — Perfect. Just to bring it home:
I know it's a heavy slide, but this is the architecture of the TM Forum Catalyst. Accenture built the data side — the customer data platform — with data from multiple sources; we curated the data, creating multiple layers in BigQuery to be used. Then they created machine learning models on Vertex AI, like churn detection and cross-sell propensity. And finally you can see CDH at the heart, making decisions in real time because it leverages Vertex AI Feature Store, and then omnichannel notification — really end to end. We don't have time to talk about the next step: this year we're building on top of this, adding more GenAI, solving more use cases leveraging GenAI and multimodal functionality. I think we should stop now to take some questions.
Yes, let's stop now. We've thrown a lot at you — if you have any questions about any product, service, or integration, we're here to answer them. — Perfect. I have a question about Document
AI. I know you showed how you can take all the different modalities and get them into the repository. Does Google provide a way to do the intake from the different modalities, like fax and email? At some point we need to handle things like paper mail, too. So how do we close that workflow? — Again, the answer is: it depends on the source of the data and the form it's in. For example, let's talk about fax — say you have it as a hard copy.
— So would Google provide a fax number, so that if somebody wants to fax...? — Okay, so here's what we do. Think of Document AI as a solution for processing the documentation itself. We have partners — for example Iron Mountain, whom we work with a lot — with a product called InSight built on top of Document AI. They take care of the whole ingestion part, from a physical document through OCR, from any source of data. So either a partner takes care of that process, or you build it yourself — it depends on your situation. But as I said, we have a lot of partners who handle ingestion and OCR of the document, and then you can continue the pipeline. — And is there intelligence to recognize the type of document?
Great question. We have what we call parsers. Take a W-2, for example: we have a trained model that recognizes W-2s and only W-2s. It identifies which region, which country, which language. That's the pre-built model, and we have parsers for most of the common documents you see out there.
But say you have a specific use case — a document unique to your organization that nobody else sees. You can train it yourself: scan it, train it, identify the entities that need to be extracted, process it, and it becomes one of your extractors. It's a really flexible platform for ingesting any new document type. And with Gemini there's another option: that was structured data in well-known document formats, but you can also leverage Gemini's large context window to extract from very large documents that don't fit any specific format. — Thank you.
— You're welcome. Any other questions? — Yes. It's a good architecture, though I don't fully understand it yet. Why are the triggering events outside of the customer data platform? One of the things CDH uses most is triggering events, so how do you connect those trigger events with Pega in the Google Cloud architecture? — A great question, because events come from everywhere.
I think one general theme between Pega and Google is that it depends on your use case — you have the flexibility. Say you want to do the ingestion from CDH: you can do that in this architecture, because what we're trying to build here is a broad solution for many use cases, and we settled on Pub/Sub as the common standard for streaming data — the entry point. And I don't know if you noticed,
but there are actually two entry points: Pub/Sub as one, and the ETL jobs into BigQuery as the other. So again, you have the flexibility: if Pega-side ingestion makes sense for you, you can do that — you don't have to stick to exactly this architecture. — Makes sense, thanks. — Any other questions? If not, I'll show you the next step.
These are the other use cases we're trying to build with the team. I want to quickly show the new architecture here: Contact Center AI, Vertex AI, Gemini, CDH, and Process AI all coming together to solve a bigger, more complex problem than the one we just described. — I think what's important here is the use of intelligent automation as well: it's the mixture of CDH and intelligent automation, together with all the other great Google capabilities we've already mentioned, that makes this a really interesting use case. — Awesome. If there are no more questions, let's go quickly through the other use cases. I mentioned claims review. Think about it:
when you have an insurance claim, you can leverage large language models and Document AI to process documents at scale, with Pega alongside as the workflow, and really solve problems like that. We also have content generation — the value there is being more creative while increasing quality and agility. You can create with Pega, and you can generate content in Vertex AI with large language models. The sky's the limit between Pega and Google: whatever makes sense for you as an architecture, you have building blocks on both sides to build solutions that solve business use cases. Any closing words?
— I think just this: with what's coming on the roadmap, we're now at a point where you can really benefit from the existing investments many of you probably already have in Google Cloud data platforms, alongside Pega, and actually start driving that vision of the autonomous enterprise we spoke about. All the building blocks are here — the data, and the automation and insights from Pega, working together. We're at a really great moment where all of this is actually a reality, so it's an exciting time. — Awesome. Thank you. — Thank you.