
PegaWorld 2025: Unleashing the Power of Pega Cloud: A Deep Dive into our Scalable Architecture built on Kubernetes

Discover why Pega Cloud stands out as the premier choice for running and operating Pega applications. In this deep dive session, explore the robust architecture of Pega Cloud, designed to scale seamlessly with any business need. Learn how Pega Cloud leverages global expertise to deliver unmatched performance and reliability, ensuring your business stays ahead in a competitive landscape. Join us to see how Pega Cloud can transform your enterprise operations with its cutting-edge capabilities.

Financial services industry, maybe you need PCI. Well, actually, there is no difference. And Dave is my boss, so I feel it's a tricky question for me, but our audience should know that the biggest change we made with Pega Cloud 3, and I like this change the most, is simplifying our security into a single solution that meets the needs of all of our client businesses. That actually allowed us to take the best security standards across all the industries and apply them as a single internal Pega Cloud standard, which is our secure-by-default architecture principle. Awesome. Thanks, Kamil.

So still within that broader infrastructure layer, I want to take us down a little bit deeper into some other crucial parts, starting with the database. As you can see, each environment gets its own dedicated database, and each one is encrypted with a different, unique key. Decisions like that help ensure that we have appropriate resource allocation and also isolation. Speaking of databases, we also treat resiliency and availability very seriously. The database is synchronously replicated to a standby, and we also have read replicas which can handle read-only traffic, offloading some of the requests from the primary and improving performance overall. And again, all of this is encrypted both at rest and in transit from the ground up. So wait a second, Dave. Let's assume I have an application, right?

It's very critical for my business. What happens with that application when this primary database you see there goes down? Great question. Resiliency, resiliency, resiliency. So each of the databases you see here actually has redundant disks as well. But let's say you lose an entire data center, one availability zone in AWS terms. We can actually proceed as if nothing happened; from your application's perspective, it is not impacted. In fact, we can lose any two availability zones in one of our regions and still maintain application availability.

There's no impact. Awesome. So taking this down one more level, let's also talk about file storage for each environment. This is used for things like attachments or other file-based data. And as Kamil mentioned earlier, there are several other key backing technologies that also power various Pega capabilities, like Kafka, OpenSearch, and Cassandra. All of this is fully managed by Pega Cloud; it's part of the service. You don't have to worry about any of it. You can focus on the value of your business application, and we ensure data for all of these services is encrypted, again both at rest and in transit, spanning availability zones and ensuring high availability of all backing services.
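To make the per-environment pattern concrete, here is a minimal sketch, using plain AWS primitives via boto3, of what "a dedicated key per environment, a synchronously replicated Multi-AZ database, and a read replica" can look like. This is illustration only, not Pega Cloud's actual tooling (which, as discussed later, is built on Terraform and Helm); the engine, instance sizes, and identifiers below are assumptions.

```python
"""Illustrative sketch only: one dedicated KMS key per environment, a Multi-AZ
(synchronously replicated) database, and a read replica. Resource names and
sizes are hypothetical, not Pega Cloud's actual configuration."""
import boto3

kms = boto3.client("kms")
rds = boto3.client("rds")


def provision_environment_database(env_id: str) -> None:
    # 1. Dedicated encryption key used only for this environment.
    key = kms.create_key(Description=f"db-key-{env_id}")
    key_arn = key["KeyMetadata"]["Arn"]

    # 2. Primary database: MultiAZ=True gives a synchronous standby in another
    #    availability zone, encrypted at rest with the key created above.
    rds.create_db_instance(
        DBInstanceIdentifier=f"{env_id}-primary",
        Engine="postgres",
        DBInstanceClass="db.r6g.large",
        AllocatedStorage=100,
        MasterUsername="pega_admin",            # hypothetical
        MasterUserPassword="change-me-please",  # fetch from a secret store in practice
        MultiAZ=True,
        StorageEncrypted=True,
        KmsKeyId=key_arn,
    )

    # 3. Wait for the primary, then add a read replica to offload read-only traffic.
    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier=f"{env_id}-primary"
    )
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier=f"{env_id}-replica-1",
        SourceDBInstanceIdentifier=f"{env_id}-primary",
    )


if __name__ == "__main__":
    provision_environment_database("client-x-prod")
```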

Now, generative AI models. These are obviously very important, especially for autonomous enterprises, and we're partnering with leading AI providers to provide best-in-class large language models that your applications can use. From an architectural point of view, integrating with these powerful but external AI services securely, ensuring low latency, and managing evolving capabilities must present some really interesting challenges. So how do we ensure that what we have here is future-proofed? Great question, Dave. And you're right, this is certainly a dynamic space, so we have no other option.

We focus on abstraction layers, like, for example, the GenAI gateway, which you saw a couple of slides back. And also security; security is very crucial here. We focus a lot on secure connectivity to those models, as well as on very strict data handling policies. It's all about building for change, building for the flexibility to adopt and leverage everything our GenAI partners provide us. And I can tell you, it's changing every two or three weeks, which makes this project super exciting for me. That's awesome.
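As a rough illustration of the abstraction-layer idea, here is a minimal, self-contained sketch of a gateway that routes requests to interchangeable LLM providers and enforces which vendors a subscription has enabled. The class and provider names are hypothetical; this is not Pega's GenAI gateway implementation.

```python
"""Minimal sketch of a gateway abstraction over interchangeable LLM providers.
All names are hypothetical; this is not the actual Pega GenAI gateway."""
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface every backing provider must implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class OpenAIBackend(LLMProvider):
    def complete(self, prompt: str) -> str:
        # Real code would call the vendor over a private, encrypted channel.
        return f"[openai] response to: {prompt}"


class AnthropicBackend(LLMProvider):
    def complete(self, prompt: str) -> str:
        return f"[anthropic] response to: {prompt}"


class GenAIGateway:
    """Routes requests to whichever vendors a subscription has enabled,
    and applies data-handling policy in one place."""

    def __init__(self, providers: dict[str, LLMProvider], allowed: set[str]):
        self._providers = providers
        self._allowed = allowed  # e.g. limited to specific vendors per client choice

    def complete(self, vendor: str, prompt: str) -> str:
        if vendor not in self._allowed:
            raise PermissionError(f"vendor '{vendor}' is not enabled for this subscription")
        # Policy hooks (redaction, audit logging, quotas) would go here.
        return self._providers[vendor].complete(prompt)


if __name__ == "__main__":
    gw = GenAIGateway(
        providers={"openai": OpenAIBackend(), "anthropic": AnthropicBackend()},
        allowed={"openai"},  # client chose to enable a single vendor
    )
    print(gw.complete("openai", "Summarize this case"))
```

Swapping or adding a provider only touches the backend classes, which is the point of building for change in a space that shifts every few weeks.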

And there are so many new GenAI capabilities that we're hearing about this week. So would you like to take us back to the infrastructure layer? Absolutely, Dave. Just to quickly recap the infrastructure components we went over: we covered the client VPC, we covered the Kubernetes clusters and all the services there, a highly available database, file storage, the core backing technologies, and, most importantly, our GenAI partner integration. And just to give you a sense of the scale, we operate hundreds of stacks like that across 36 different regions, on both AWS and GCP. That forces us, as cloud architects, to think about the scale of our business and to be very careful with every single decision we make.

Everything must be automated from the very beginning. That's a fundamental architectural rule for us; otherwise we wouldn't be able to handle such a large scale. Okay, so let's wrap up the infrastructure and move up a layer to the control plane. So what is the control plane? Internally in the team, we call it the nerve center of Pega Cloud. It's what automates and manages all the underlying infrastructure. It ensures reliability and consistency, but also allows Pega Cloud to operate at that huge scale.

And to be honest, I wasn't sure what to pick for this presentation, because we have dozens of control plane services. So I focused on the four core ones, which I believe are the most important from the control plane perspective: the provisioning service, the internal data plane, the service catalog, and the orchestration service. So, first up, the provisioning service. This is the doer. When a new environment, for example, needs to be spun up, or some component needs to be updated, this service takes the request, takes the instruction, and simply makes it happen. I think it's no surprise to any of you that we use industry-standard infrastructure-as-code tools like the ones you see on the screen, Terraform and Helm charts. Also worth mentioning here: at this level of abstraction, we also gather provisioning statistics.

So we constantly track all the operations and improve on them over time. Next, the internal data plane. This is the master inventory, or system of record, for Pega Cloud. This is where we keep the instances, the deployments of our services across the fleet: a Kubernetes cluster, a database, an environment. What is important is that we keep not only the instances but all the parameters and all the configurations. Why am I saying this? It is vital for our operations team, not only because of the automated recovery processes, but also simply so an operations engineer knows what is deployed, where, and with what configuration. Everything we store there actually reflects our infrastructure. Then we have the service catalog, which is like an artifact repository for our internal artifacts, for all the components that are deployable on Pega Cloud.

It natively integrates with a security scanner, so we constantly scan all the artifacts we keep there and check for vulnerabilities. What is also important, and I think unusual in the industry, is that we keep a dependency analyzer. So we know each component and all of its dependencies, and when we update it, we know exactly which components might be affected by the change. And finally, the orchestration service, which ties all those three services together. It's truly the conductor of the orchestra. It takes a request, for example, provision a new Pega environment for client X, or update Kafka cluster Y to version Z, consults the service catalog for the right components and their dependencies,

checks the internal data plane for the current status and current configuration of that component, and then instructs the provisioning service what to deploy and in which order. It manages the entire provisioning order and controls the infrastructure versioning across our fleet. So that's all about the control plane itself. We just talked about four key services, which you see here: the provisioning service, the internal data plane, the service catalog, and the orchestration service. All of them work together, cooperating like in a microservices approach, and they provide the powerful automation and intelligence that allows Pega Cloud to be not only scalable, but consistently managed.
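To make the interplay of the four services more concrete, here is a minimal sketch of an orchestration flow along the lines described above: look up a component and its dependencies in a catalog, compare the desired version with what an inventory (the "internal data plane") says is currently deployed, and hand ordered work to a provisioner. All names, versions, and data structures are made up for illustration.

```python
"""Sketch of an orchestration flow across a service catalog, an inventory
('internal data plane'), and a provisioner. Names and data are hypothetical."""
from dataclasses import dataclass, field


@dataclass
class CatalogEntry:          # service catalog: deployable components + dependencies
    name: str
    version: str
    depends_on: list[str] = field(default_factory=list)


CATALOG = {
    "kafka": CatalogEntry("kafka", "3.7"),
    "pega-infinity": CatalogEntry("pega-infinity", "24.2", depends_on=["kafka"]),
}

DATA_PLANE = {               # internal data plane: what is deployed today, and how
    "client-x-prod": {"kafka": "3.5", "pega-infinity": "24.1"},
}


def provision(component: str, version: str, env: str) -> None:
    # Stand-in for the provisioning service (Terraform/Helm runs in reality).
    print(f"[provisioner] {env}: {component} -> {version}")
    DATA_PLANE[env][component] = version   # record the new state


def orchestrate_update(env: str, component: str) -> None:
    """Resolve dependencies first, skip anything already at the desired version."""
    entry = CATALOG[component]
    for dep in entry.depends_on:           # dependencies deploy before the component
        orchestrate_update(env, dep)
    if DATA_PLANE[env].get(entry.name) != entry.version:
        provision(entry.name, entry.version, env)


if __name__ == "__main__":
    orchestrate_update("client-x-prod", "pega-infinity")
    # kafka is brought to 3.7 first, then pega-infinity to 24.2.
```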

So, Dave, that's a lot of sophisticated automation, isn't it? Maybe you can take us up to the management plane and show how our clients, but also our operations teams, interact with it? Will do, Kamil. Fantastic job explaining some of the intricacies of the control plane. That level of automation is really what makes Pega Cloud the best place to run an application. Now that we are moving up to the top layer, we're going to start covering the management plane. This is where we manage the Pega Cloud service, whether from a client self-service perspective or from our operations teams. The management plane actually consists of several different key components and tools. For our clients, you might be familiar with My Pega Cloud, which is your primary self-service portal. You also have My Support Portal, where you can enter questions or incidents.

We also have the Pega Diagnostics Center for monitoring the health of your application, and Deployment Manager for managing the CI/CD pipelines of your app. And then we also have some internal applications, like the Cloud Commercial Service, which helps us handle the sizing of environments to meet business requirements, as well as the Global Operations Center, or the GOC, as we call it internally. I'm going to talk a bit more about that one in a little bit, but let's focus on My Pega Cloud for a moment. This is all about providing a seamless self-service experience. It's a one-stop solution where you can restart environments, download log files, view scheduled maintenance, manage IP allow lists, and much more. It empowers you to manage your environments effectively, and as you can see in the screenshot, we're even integrating GenAI directly into the experience with a Pega Cloud buddy. With that, you can ask questions in natural language, like "How do I enable Pega GenAI in my subscription?", and get an instant, context-aware answer. So maybe, Dave, let's pause here, because MPC, as you know, is built on Infinity, right? So a question for you: is this Pega Cloud buddy using the GenAI gateway and the other components we saw earlier in the infrastructure layer?

Yeah, spot on, Kamil. This is a perfect example of how some of those foundational AI capabilities in our infrastructure are surfaced up through the management plane to provide direct value to you. In this case, it's helping get questions answered and issues resolved fast. And a lot of our management plane is made up of Pega applications. It's a common thread you'll see.

It all just connects. So, you know, another thing you might do from My Pega Cloud is actually initiate an update to Infinity, and you can do that yourself right from this portal. So we can actually see where that request goes. I can pull back the cover, and from there you can actually see it in the Global Operations Center, or GOC. This is our cloud management center. It is the intelligent, business-aware engine of Pega Cloud, and it is a Pega application.

This screenshot gives you a glimpse of the way we manage our fleet and, in this case, perform an update. You'll see that every single task that is part of that update process is called out. We have things like dry runs, catalog entry creation, conflict checks, syncs, and updates. Every single one of these is fully automated, every single one is fully auditable, and every single one is fully traceable. This is how we ensure smooth operations: once the request to perform an update is made, the GOC can take it from there. And the GOC, coupled with the automation in the control plane, is what actually allows us to deliver on another one of the promises of Pega Cloud: continuous infrastructure updates. We are constantly rolling out enhancements, new features, performance improvements, and security patches across our entire fleet. It is a requirement in modern business.
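As a rough sketch of "fully automated, fully auditable, fully traceable," the snippet below runs an ordered pipeline of update steps, named after the ones mentioned above, and records each step in an audit trail. The structure is hypothetical, not the actual GOC implementation.

```python
"""Sketch of an auditable update pipeline: ordered steps, each one logged.
Step names echo the talk; the implementation is hypothetical."""
from datetime import datetime, timezone


def dry_run(env):              print(f"  dry run for {env}")
def create_catalog_entry(env): print(f"  catalog entry created for {env}")
def conflict_check(env):       print(f"  no conflicting changes for {env}")
def sync(env):                 print(f"  configuration synced for {env}")
def update(env):               print(f"  update applied to {env}")


PIPELINE = [dry_run, create_catalog_entry, conflict_check, sync, update]
AUDIT_LOG: list[dict] = []     # every action stays traceable after the fact


def run_update(env: str) -> None:
    for step in PIPELINE:
        step(env)
        AUDIT_LOG.append({
            "environment": env,
            "step": step.__name__,
            "at": datetime.now(timezone.utc).isoformat(),
        })


if __name__ == "__main__":
    run_update("client-x-prod")
    print(f"{len(AUDIT_LOG)} audited steps recorded")
```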

And remember that stat from the Pega Cloud résumé: one update every 20 minutes, 24/7/365. This is how we do it, through carefully managed, automated rollouts that are meticulously tracked in the GOC. That is the power of the architecture of Pega Cloud. So there we have it. We've journeyed from the foundational infrastructure layer, powered by Kubernetes and our cloud providers, up through the intelligent control plane that automates and controls all of it, and finally to the management plane that provides an interface for our users and operations teams. And as always, security and observability are core tenets; you've seen how they impact all the layers. Hopefully this deep dive gives you a much clearer picture of how all these pieces fit together and how we deliver a robust Pega Cloud service offering.

Earlier, I mentioned the core foundational principles that helped get us to Cloud 3, and I just wanted to share some of them here with you. First, we keep it simple. We aim for predictable solutions, we break down complexity into manageable microservices, and we automate relentlessly for scale. Second, it's business driven. Our pragmatic architecture must be highly available, durable, and able to meet fluctuating demands. And critically, it has to be aligned to the products and capabilities that we offer and that our clients need. And third, it needs to be well-architected.

We embrace Pega's famous Build for Change philosophy even with our cloud services, so we ensure we have end-to-end agency and responsibility within our teams, and we ensure we're organizationally aligned to deliver and maintain this robust architecture. And that brings us to the end of the deep dive. Thank you all so much for your time and attention today. We hope you found this look under the hood of Pega Cloud to be informative, and that you're as excited about its power, scalability, and flexibility as we are. Kamil, any final thoughts? Just a big thank you to everyone for joining us in today's session.

Enjoy the rest of PegaWorld. And I think we have a couple more minutes for Q&A, right? Absolutely. So yeah, we're going to transition to Q&A. There are mics up at the aisles, so if you have any questions, please feel free. Thank you all. Thank you.

Hi there. Hey, quick clarifying question. You mentioned that Pega Cloud provides three GenAI capabilities, or cloud options. When do you use one versus the other? I'm sorry, can you repeat that? The three GenAI capabilities that Pega Cloud provides, is it always all three, or does it depend on which cloud deployment we go with? Oh, you mean from the backing services picture. That doesn't necessarily reflect the independent capabilities that we service and offer.

Pega offers a number of really unique and awesome GenAI capabilities. Those are three of the services that we have that facilitate them, so those services are actually used for all of the GenAI capabilities that we offer as part of the Pega Cloud service. I think I'll probably twist that question a little bit. Sure. So since you have OpenAI, Anthropic, all those models, if as a customer I want to use only OpenAI, or vice versa, can we limit it to that, since it's all back end to you guys?

Great question, yeah, that's a very good question. So you're right, it's not like you get everything; we can limit it to very specific vendors of your choice. It's up to your subscription type on Pega Cloud what you get enabled. And you need to think about our architecture like pluggable components: if you, for example, have GenAI enabled, then you have the backing services, and then you enable them with the vendor of your choice. Can you go... yeah.

Please go to the mic, this one or the other. What if you're using Pega Cloud on Amazon and your database goes down, as you say, right? How does it work? It goes down and right away there's no impact, the customers are getting data right away, or is there downtime? Great question. So we deploy synchronous replication across availability zones within a region. So with the loss of any two availability zones, there will always be at least one more that has a synchronous replica of the database, and it will automatically and instantly fail over to that working replica. That's built in, out of the box. It's zero RPO, zero RTO.

Okay, second one. You talk about data-in-transit encryption and data-at-rest encryption. In transit it's probably TLS 1.2 or 1.3; at rest, do you have the key in KMS, or how do you encrypt it? Yeah, good question and good guess. Every database for a single environment gets encrypted with a separate encryption key, which depends on the cloud provider you are running on. If you are running on AWS, yes, it's a separate KMS key for every environment. Thank you.

Yeah. Do you have any plans in your product roadmap to move Blueprint into a more secure environment? Right now, you have it on the web; I could go to pega.com, Blueprint, and create stuff. In this morning's keynote, you know, they showed how you could take a video of a COBOL application and put it in there, which looked great. But in the real world, I don't think any client would want to take an application or a video like that, that has their data, and put it in the publicly available Blueprint, right? If the same thing was available in a secure environment that is provisioned, like how you showed for each customer, I'd be a lot more comfortable doing that.

Well, I'm very glad you asked this question, because I'm personally also involved in this project. Blueprint is a SaaS application, so it's an as-a-service offering. And we are working on this, actually it's pretty much done: the plan for Blueprint is to integrate with Pega Cloud environments by default. It actually follows all the multi-tenant system requirements, so, for example, if you upload a document, it will be stored on your Pega Cloud environment. Okay. What about any plans for having a PCFG-specific Blueprint product? There is a roadmap for that.

We can chat afterwards about that. It is something we've been paying a lot of attention to. Absolutely. All right, a couple more questions. Yeah, one question. So who is Blueprint's cloud provider? Pega has Pega Cloud, right?

If we pick AWS, where is your Blueprint running? The core Blueprint code, where does it run? So as I said, it's an as-a-service experience, right? And we run it on Pega Cloud, obviously. So like any other application, the backbone infrastructure is Pega Cloud, and we run it on the cloud provider of our choice. At the moment we leverage AWS the most. Okay.

So Blueprint is tied to AWS? So we actually go with our choice of cloud provider. When you generate a Blueprint, the Blueprint application itself is running on Pega Cloud on AWS. From there we actually utilize our same out-of-the-box GenAI capabilities, and we've actually tried pointing it to different cloud providers, because that's something we try to make sure you could do with your applications as well, right? It's a flexible architecture. When you generate a Blueprint, though, and you want to deploy it as an application, it's actually your choice whether you want to deploy it on Pega Cloud on AWS or Pega Cloud on GCP. That ends up being within your account, much like the architecture pictures you saw earlier; it would be in your own dedicated VPC, and you'd have a route to live for that application. Understood.

Thank you. Yep. I know those two topics, security and observability, we did not cover, but my specific question on observability is: if I want to have my own management tools to look into the logs and whatnot for the various back-end processes we're doing, is that capability available through an API or something? Yes, very good question; I like the level of technical detail you're asking about. In Pega Cloud you can configure, by the way also through the MPC portal, the way you want it; for example, you asked the question about the logs.

Right. So yes, Pega Cloud is capable of pushing the logs to the security solution of your choice. We have a lot of integrations for low-level or sysops admin tasks that allow you to also ingest the logs and other metadata in real time. On the security side, I have two questions. One is, so by default, I'm assuming corporate identity and access management is available, especially for your application? Yeah, you can configure your own IdPs.

So yes, you can configure that in accordance with your own business requirements. Okay. So if I want to bring in back-end data, especially data from on-prem or from my other cloud providers, to integrate with Pega, how do I securely do the tunneling? Since it's a VPC, I'm assuming the tenant is owned by Pega. How do I get to my tenant in AWS? How do I do the back-end tunneling?

Great question. So we have a team, our global services organization, that handles the migrations onto Pega Cloud, and they can walk through that with you. It'll actually be a more detailed engagement, because they'll want to understand where your data currently resides, what security policies you have in place, and what's the best way to migrate it. They do this a lot, so they're very experienced at it. Maybe we can catch up after this session; I'm happy to connect you to our GSA team. Okay. All right.

Any other questions? Yeah, go for it. If you could please say it into the microphone, so it's recorded. I can repeat it for you if you want. In one of the earlier slides, you referenced Enhanced DR. Can you give a little more detail on that?

Yeah, do you want me to cover that? So Enhanced DR is a multi-region disaster recovery solution. It's something that we've recently launched, and it's available. It is basically a solution where we will deploy a second infrastructure in a second region. We have paired regions around the world, and in the event of a disaster we can fail over to that secondary region if the primary, for example, is just not going to be coming back up. Is that available in AWS and GCP? It's available on AWS today.

For GCP, we should catch up if that's something you are interested in. Okay. And one more question: is there a listing of the paired regions for AWS? Absolutely, it's on our website, and I'm sure if you search for enhanced disaster recovery, you'll find it. If not, feel free to reach out to me and I can point you in the right direction. My question is around logs.

So if you have multiple pods, what options do we have available to view all the logs? To view all the logs, do you want to cover that? So the simplest and most straightforward one is to log into your MPC instance, which is the self-service portal, and you can actually download the logs from that particular place. If you are interested in a log streaming capability, that's also something that can be enabled for your infrastructure. It requires more partnership with you, but we can stream the logs to your destination.
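For readers who want logs in their own tooling, here is a minimal sketch of the consuming side of such an integration: reading newline-delimited JSON log events and forwarding them to a SIEM webhook. The endpoint, file name, and event format are purely hypothetical; the actual streaming integration is set up in partnership with Pega, as described above.

```python
"""Hypothetical consumer for a log feed: read newline-delimited JSON events
and forward them to your own SIEM. Endpoint and format are illustrative."""
import json
import sys

import requests

SIEM_WEBHOOK = "https://siem.example.com/ingest"   # hypothetical destination


def forward_events(stream) -> int:
    """Read one JSON event per line from `stream` and push it to the SIEM."""
    count = 0
    for line in stream:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        requests.post(SIEM_WEBHOOK, json=event, timeout=5)
        count += 1
    return count


if __name__ == "__main__":
    # Example: pipe a downloaded log file through this script.
    #   python forward_logs.py < downloaded-environment-logs.jsonl
    print(f"forwarded {forward_events(sys.stdin)} events")
```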

Okay. Any other questions? One more, it's a pretty simple one. Do you have plans to offer Pega Cloud on Azure, in addition to AWS and GCP? Not at this time. Plans can always change down the road, but at this point we're sticking with AWS and GCP. Got it, thank you. Yep. I just want to understand, is Pega Cloud PCI compliant, and what other compliance standards does it support?

Yes. So feel free to Google for the Pega Cloud Trust Center. We have all the certifications and compliance listed there, and we have plenty of them, 16 plus. Yeah, pega.com trust. Exactly.

And they're across industries; you can go and take a look and see what we offer. As Kamil mentioned earlier, because of the way that we've architected our cloud, while an individual contract might specifically call for something like HIPAA or PCI or ISO 27001, whatever the business requirements are, the architecture is actually the same from our end. That's why we focus on end-to-end encryption, everything encrypted at rest and in transit, and the security standards are common across the architecture. For most of the applications, the data resides on premise, right?

Like the customer data resides on premise. So during the transition, apart from the security, TLS 1.2 or something, does anything else have to be taken care of from a compliance perspective? So from a compliance perspective, we handle everything within the Pega Cloud boundary; that's covered based on your business needs and business requirements. We also offer things like private connectivity and, effectively, access control lists for access into your systems. You could deny access from the public internet and only allow it from on premise. Again, it varies significantly by customer and by business need, by what you want your application to really serve. But yes, that is absolutely handled.

And for things like private connectivity, that's something we work with you on to figure out the right solution for you. And Pega Cloud's secure connectivity is actually validated; we have clients running across many different industries, including financial services, which I think was the one you were asking about. Thank you. Sure. All right, cool. Well, thank you all very much.

Thank you. Very happy to be here for PegaWorld. We'll also both be down in the expo hall in the Pega as a Service area, so if you have any other questions, feel free to come on down. You can find either of us or any of the other people we work with, and we'd be happy to answer any more questions. Thank you very much. Thank you.
