PegaWorld | 42:00
PegaWorld 2025: Unleashing the Power of Pega Cloud: A Deep Dive into our Scalable Architecture built on Kubernetes
Discover why Pega Cloud stands out as the premier choice for running and operating Pega applications. In this deep dive session, explore the robust architecture of Pega Cloud, designed to seamlessly scale with any business needs. Learn how Pega Cloud leverages global expertise to deliver unmatched performance and reliability, ensuring your business stays ahead in a competitive landscape. Join us to see how Pega Cloud can transform your enterprise operations with its cutting-edge capabilities.
Welcome to PegaWorld. My name is Dave Casavant, and over the next 30 minutes or so we're going to take a deep dive into Pega Cloud, its architecture, and specifically how we've architected it to be incredibly scalable and resilient using tools like Kubernetes. To kick things off, let's start by taking a look at the Pega Cloud résumé. You might have seen this before; it gives you a snapshot of what Pega Cloud delivers. Today we're talking 36 global regions, over 16 compliance certifications, and more than 30,000 compute instances in our fleet. We also roll out updates to our fleet, averaging more than one service update every 20 minutes, which is pretty impressive, right? It is for sure. And Dave, a quick question: looking at these stats, things like layered security, multi-zone high availability, or running on both AWS and GCP,
what stands out to you as a key enabler, from an architecture perspective, for achieving this? Definitely, Kamil. Security, availability, choice: those enable us to meet the business needs of our clients, and that is huge. As part of this, we're going to show some of the architectural principles that helped establish the core foundation that got us here. That's what we're going to do over the next 30 minutes. We're going to talk about how we've kept our solutions simple, how we've focused on the success of our clients through the value of their business, and how we've executed architectural standards and pragmatic architecture that allowed us to, among other things, build for change.
So before I dive much further into this, I wanted to properly introduce us. My name is Dave Casavant. I'm senior director of cloud architecture, and I'm also responsible for our quality and performance organizations within the cloud. With me is Kamil Dudek, a fellow cloud architect focusing on Gen AI and backing services. Kamil, it's great to be here with you. Thanks, Dave. I'm excited to be here and talk about what we've been building so far.
Awesome. So before we get into the current state of Cloud 3, I also want to look back a little and tell you how we got there. Pega has been on this as-a-service journey for a while now. Back in 2011, we launched Pega Cloud version one, and that was really focused on improving ROI over client-managed software: nailing down security, reliability, integrations, things like that. If you were at PegaWorld back then, you might have heard that Pega Cloud is the best and fastest way to get your applications to market and start realizing ROI. And while our presentation templates have changed and our architecture has evolved over time, that North Star still remains; that's the goal of Pega Cloud. Then in 2015, Cloud 2 came along, and that's where we started containerizing our Infinity deployments and really honing in on operational consistency and automation across our fleet. Docker came out in 2013, and we were really eager to jump on it as a way of accelerating our cloud using that standardized container tooling.
And that brings us to 2023 and the launch of Cloud version 3, which is built on container orchestration and microservices. That enables us to further improve on the execution of our cloud service offering. We like to joke that while some people learned to make sourdough during the pandemic, this was our pandemic project. Good one. So Cloud 3 is really a significant leap forward for us. We're talking massively improved scalability thanks to technologies like autoscaling and full automation, and we've enhanced fault tolerance so services can autonomously recover in the event of failure. It's all automatic and fast. We can even do independent service updates, which means faster delivery of functional improvements, fixes, and security patches.
And it's all built to be future-proof: ready for things like multi-cloud, Gen AI, enhanced disaster recovery, or whatever else comes next. We built this architecture to evolve over time, and it's all delivered with state-of-the-art automation. Now, we've built an amazing modern microservices architecture, and that's what has allowed us to deliver on these capabilities. Kamil, do you want to start pulling back the layers and show us how it's structured? Absolutely, Dave. And to come back to what you said, this microservices approach is truly the backbone of Cloud 3; I've witnessed it myself. So let's take a look at the Pega Cloud architecture, bigger picture.
At a high level, you will notice the architecture is divided into multiple layers. What is important to realize is that security and observability, which you see on the right side of the screen, span all of these layers and are fundamentally baked into everything we do. At the bottom we have infrastructure; this is where your Pega Cloud environments actually run, leveraging the best of what our infrastructure partners provide us. Next, the control plane, the mastermind behind it all, automates and manages all that infrastructure. And last but not least, the management plane: this is the place where both our clients, which is you, and our own operations teams interact with everything. So, we only have 30 minutes.
We won't cover all the details in all those boxes you can see there, so I want us to focus only on the highlighted ones. That being said, let's move on to the infrastructure part. Our infrastructure provides secure isolation between clients on Pega Cloud. What you see right now is a single deployment of the infrastructure piece for a single client. Every client on Pega Cloud gets its own dedicated and fully isolated VPC. Within that VPC you have your client Kubernetes cluster running, but also other components we host for you. And you get your instance running in the region and cloud provider of your choice.
So I've been thinking of building a new application utilizing Blueprint and a lot of the new Gen AI capabilities we've been hearing about this week, something that maybe handles sizing of microservices across our fleet. How do I pick the right region to run that in? Good question, Dave. Generally our clients pick the region based on data residency restrictions or simply proximity to their client base. And speaking of applications and the route-to-live: every client project we run on Pega Cloud gets, by default and from the very beginning, its own dedicated Infinity route-to-live, at minimum, because there may be more of them. It consists of three environments: development, staging, and production. As I said, this is what you get out of the box from the very beginning of your journey with Pega Cloud. It allows for safe testing, but also for a smooth promotion process toward go-live deployment of your business-critical solution. So if we peek inside any of these environments, let's say production, you will find it's composed of different tiers, right?
For example, we have at minimum a web tier for user-facing interactions and a background tier for processing tasks like job schedulers or queue processors. This is at least the minimum of what you get; there may be more of those tiers, depending solely on your Pega Cloud subscription model. What is also important to remember is that each of these tiers is scaled independently. This provides your application with the best performance, but it also helps you meet your SLAs. Now let's switch to the backing services. And that's a totally fair question: what are backing services?
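Before moving on: the independent per-tier scaling just described follows, in spirit, the proportional formula Kubernetes horizontal pod autoscalers use. Here is a minimal, purely illustrative sketch; the tier names, bounds, and utilization targets are hypothetical, not Pega Cloud's actual settings.

```python
# Illustrative only: real scaling in Pega Cloud is handled by Kubernetes.
from dataclasses import dataclass
import math

@dataclass
class Tier:
    name: str
    current_replicas: int
    min_replicas: int
    max_replicas: int
    target_utilization: float  # e.g. 0.5 = aim for 50% CPU

def desired_replicas(tier: Tier, observed_utilization: float) -> int:
    """HPA-style formula: scale replicas in proportion to observed load,
    clamped to the tier's own bounds. Each tier decides independently."""
    raw = tier.current_replicas * (observed_utilization / tier.target_utilization)
    return max(tier.min_replicas, min(tier.max_replicas, math.ceil(raw)))

web = Tier("web", current_replicas=4, min_replicas=2, max_replicas=20,
           target_utilization=0.5)
background = Tier("background", current_replicas=2, min_replicas=1,
                  max_replicas=10, target_utilization=0.8)

print(desired_replicas(web, observed_utilization=0.75))        # 6
print(desired_replicas(background, observed_utilization=0.4))  # 1
```

Because each tier computes its own replica count from its own metric and bounds, the web tier can grow under user load while the background tier shrinks, which is the independence the talk refers to.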
We use a lot of architectural nomenclature. A backing service, in our eyes, is an independent deployment of a service that supports Infinity directly at runtime. Search and reporting is a very good example of a backing service, but there are more, especially the ones you can see currently. Those are also considered backing services because they support Infinity at runtime with Gen AI capabilities. Now let's move on to the operational services. I like to think of operational services as a highly automated pit crew for your application, for your Infinity environment, because they handle and automate all the ongoing operational tasks on your instance. Take, for example, Infinity upgrades: we have a set of dedicated microservices that are responsible solely for performing zero-downtime upgrades.
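The zero-downtime idea behind those upgrade services can be caricatured as a rolling replacement that never drops below a minimum number of serving replicas. This is a toy sketch under that assumption, not Pega Cloud's actual upgrade mechanism.

```python
# Toy model of a rolling, zero-downtime upgrade across a set of replicas.
def rolling_upgrade(replicas, new_version, min_available, batch_size=1):
    """Upgrade instances one batch at a time, yielding the fleet state after
    each batch and asserting that enough replicas stay in service."""
    upgraded = list(replicas)
    for i in range(0, len(upgraded), batch_size):
        batch = range(i, min(i + batch_size, len(upgraded)))
        # Instances in the batch are briefly out of service while upgrading.
        serving = len(upgraded) - len(batch)
        assert serving >= min_available, "would violate availability target"
        for j in batch:
            upgraded[j] = new_version
        yield list(upgraded)

fleet = ["v1", "v1", "v1", "v1"]
for state in rolling_upgrade(fleet, "v2", min_available=3):
    print(state)
# The fleet ends fully on "v2" while at least 3 replicas served at all times.
```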
And I can tell you that the automation level you see now is absolutely critical for Pega Cloud to efficiently manage thousands of environments like that and deliver a top-notch Pega service experience for all of them. And last but not least, the infrastructure services. It should be no surprise by now: we build everything on top of Kubernetes, and the infrastructure services are something we build very deeply into the Kubernetes level. For example, things like traffic encryption, enforcing specific security policies, or even providing out-of-the-box monitoring that is consistent across the whole fleet for our operations teams: those are things we enforce very deeply at the Kubernetes level. So can you explain how these might be different based on industry? For example, if you're in healthcare, maybe you need HIPAA, or if you're in the financial services industry, maybe you need PCI.
Well, actually, there is no difference. And Dave is my boss, so I feel it's a tricky question for me, but our audience should know that the biggest change we made with Pega Cloud 3, and the change I like the most, is simplifying our security into a single solution that meets the needs of all of our clients' businesses. That allowed us to take the best security standards across all the industries and apply them as a single internal Pega Cloud standard, which is our secure-by-default architecture principle. Awesome. Thanks, Kamil. So, still within that broader infrastructure layer, I want to take us a little deeper into some other crucial parts, starting with the database. As you can see, each environment gets its own dedicated database, and each one is encrypted with a different, unique key. Decisions like that help ensure we have appropriate resource allocation and also isolation.
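The one-unique-key-per-environment point can be sketched as a registry that mints a fresh key identifier the first time each environment is seen; in reality this would be a cloud KMS key, as comes up later in the Q&A, and everything in this sketch is hypothetical.

```python
# Illustrative sketch of per-environment key isolation (not a real KMS client).
import secrets

class EnvironmentKeyRegistry:
    def __init__(self):
        self._keys = {}

    def key_for(self, environment: str) -> str:
        # Mint a fresh key id the first time an environment is seen;
        # afterwards, always return the same key for that environment.
        if environment not in self._keys:
            self._keys[environment] = f"key-{secrets.token_hex(8)}"
        return self._keys[environment]

registry = EnvironmentKeyRegistry()
dev_key = registry.key_for("client-a/dev")
prod_key = registry.key_for("client-a/prod")

assert dev_key != prod_key                          # no key sharing across environments
assert registry.key_for("client-a/dev") == dev_key  # stable per environment
```

The isolation benefit is that compromising or revoking one environment's key affects only that environment's data.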
Speaking of databases, we also take resiliency and availability very seriously. The database is synchronously replicated to a standby, and we also have read replicas which can handle read-only traffic, offloading some of the requests from the primary and improving performance overall. And again, all of this is encrypted both at rest and in transit, from the ground up. So wait a second, Dave. Let's assume I have an application that is very critical for my business. What happens to that application when the primary database you see there goes down? Great question.
Resiliency, resiliency, resiliency. Each of the databases you see here actually has redundant disks as well. But let's say you lose an entire data center, one availability zone in AWS terms. We can proceed as if nothing happened; from your application's perspective, it is not impacted. In fact, we can lose any two availability zones in one of our regions and still maintain application availability, with no impact. Awesome. So taking this down one more level, let's also talk about file storage for each environment.
This is used for things like attachments or other file-based data. And as mentioned earlier, there are several other key backing technologies that also power various capabilities, like Kafka, OpenSearch, and Cassandra. All of this is fully managed by Pega Cloud; it's part of the service. You don't have to worry about any of it, and you can focus on the value of your business application. We ensure data for all of these services is encrypted, again both at rest and in transit, spanning availability zones and ensuring high availability of all backing services. Now, generative AI models. These are obviously very important, especially for autonomous enterprises, and we're partnering with leading AI providers to provide best-in-class large language models that your applications can use. From an architectural point of view,
integrating with these powerful but external AI services securely, ensuring low latency, and managing evolving capabilities must present some really interesting challenges. So how do we ensure that what we have is future-proofed? Great question, Dave, and you are right: this is certainly a dynamic space, so we have no other option. We focus on abstraction layers, like, for example, the generic gateway you saw a couple of slides before. And also, I'd say, security.
Security is very crucial here. We focus a lot on secure connectivity to those models, as well as on very strict data-handling policies. And it's all about building for change, building for the flexibility to adopt and leverage everything our partners provide us. And I can tell you, it's changing every two or three weeks, which makes this project super exciting for me. That's awesome. And there are so many new Gen AI capabilities that we're hearing about this week. So would you like to take us back to the infrastructure layer? Absolutely, Dave.
So just to quickly recap the infrastructure components we went over: we covered the client VPC, we covered Kubernetes clusters and all the services there, a highly available database, file storage, core backing technologies, and, most importantly, our Gen AI partner integration. And just to give you a sense of the scale, we operate hundreds of stacks like that across 36 different regions, on both AWS and GCP. That makes us, as cloud architects, think about every single decision we make, think about the scale of our business, and be very careful with all the decisions we are making. Everything must be automated from the very beginning; that's a fundamental architectural rule for us. Otherwise we wouldn't be able to handle such a large scale. Okay, so let's wrap up on the infrastructure and move up a layer to the control plane.
So what is the control plane? Internally, on the team, we call it the nerve center for Pega Cloud. It's what automates and manages all the underlying infrastructure. It ensures reliability and consistency, but it also allows Pega Cloud to operate at that huge scale. And to be honest, I wasn't sure what to pick for this presentation; we have dozens of control plane services, so I focused on the four core ones, which I believe are most important from the control plane perspective: the provisioning service, the internal data plane, the service catalog, and the orchestration service. So maybe first up, the provisioning service. This is the doer.
When a new environment, for example, needs to be spun up or some component needs to be updated, this service takes the request, takes the instructions, and simply makes them happen. I think it's no surprise to any of you that we use industry-standard infrastructure-as-code tools like the ones you see on the screen, Terraform and Helm charts. Also worth mentioning at this level of abstraction: we gather provisioning statistics, so we constantly track all the operations and improve on them over time. Next, the internal data plane. This is like the master inventory, or system of record, for Pega Cloud. This is where we keep the instances, the deployments of our services across the fleet: say, a Kubernetes cluster, some database, some environment.
What is important is that it keeps not only the instances but all the parameters and all the configurations. Why am I saying this? Because it is vital for our operations teams, not only for the automated recovery processes, but also simply for an operational engineer to know what is deployed, where, and with what configuration. Everything we store actually reflects our infrastructure. Then we have the service catalog, which is like an artifact repository for our internal artifacts, for all the components that are deployable on Pega Cloud. It natively integrates with a security scanner, so we constantly scan all the artifacts we keep there and check for vulnerabilities. And, what is also important, and I think unusual in the industry, we keep a dependency analyzer: we know each component and we know all its dependencies, so when we update it, we know exactly which components might be affected by the change.
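That dependency-analyzer idea, knowing which components a given update might affect, amounts to a transitive reverse-dependency lookup. A small sketch with made-up component names:

```python
# Given each component's declared dependencies, find everything that
# (transitively) depends on an updated component, i.e. the blast radius.
from collections import defaultdict

def affected_by(dependencies, updated):
    """dependencies: {component: [components it depends on]}.
    Returns the set of components that transitively depend on `updated`."""
    reverse = defaultdict(set)
    for component, deps in dependencies.items():
        for dep in deps:
            reverse[dep].add(component)
    affected, stack = set(), [updated]
    while stack:
        for dependant in reverse[stack.pop()]:
            if dependant not in affected:
                affected.add(dependant)
                stack.append(dependant)
    return affected

deps = {
    "infinity": ["search-service", "kafka-client"],
    "search-service": ["opensearch"],
    "kafka-client": [],
    "opensearch": [],
}
print(sorted(affected_by(deps, "opensearch")))  # ['infinity', 'search-service']
```

Updating `opensearch` flags both the search service and, transitively, the Infinity deployment that uses it, which is exactly the "we know which component might be affected" property described above.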
And finally, the orchestration service, which ties those three services together. It's truly the conductor of the orchestra. It takes a request, for example "provision a new Pega environment for client X" or "update Kafka cluster Y to version Z," consults the service catalog for the right components and their dependencies, checks the internal data plane for the current status and configuration of the component, and then instructs the provisioning service what to deploy and in which order. It manages the entire provisioning order and controls the infrastructure versioning across our fleet. So that's all about the control plane itself. We just talked about four key services: the provisioning service, the internal data plane, the service catalog, and the orchestration service. All of them work together, in cooperation, in a true microservices approach, and they provide the powerful automation and intelligence that allows Pega Cloud to be not only scalable but consistently managed.
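Deriving a safe provisioning order from declared dependencies, as the orchestration service does, is essentially a topological sort. Python's standard library can express the idea directly; the component names here are illustrative, not Pega Cloud's actual component list.

```python
# Compute an order in which every component is provisioned only after
# everything it depends on (Python 3.9+ for graphlib).
from graphlib import TopologicalSorter

# component -> components it depends on (which must be provisioned first)
dependencies = {
    "infinity": {"database", "search-service"},
    "search-service": {"opensearch"},
    "database": set(),
    "opensearch": set(),
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)  # dependencies always precede their dependants

assert order.index("database") < order.index("infinity")
assert order.index("opensearch") < order.index("search-service")
```

A real orchestrator layers much more on top (status checks, versioning, rollback), but the ordering constraint it must honor is this one.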
So, Dave, that's a lot of sophisticated automation, isn't it? Maybe you can take us up to the management plane and show how our clients, but also our operations teams, interact with it. Will do, Kamil, and fantastic job explaining some of the intricacies of the control plane. That level of automation is really what makes Pega Cloud the best place to run an application. Now that we're moving up to the top layer, we're going to cover the management plane. This is where we manage the Pega Cloud service, whether from a client self-service perspective or from our operations teams. The management plane actually consists of several different key components and tools. For our clients, you might be familiar with My Pega Cloud, which is your primary self-service portal. You also have My Support Portal, where you can enter questions or incidents.
We also have the Pega Diagnostic Center for monitoring the health of your application, and Deployment Manager for managing the CI/CD pipelines of your app. Then we also have some internal applications, like the cloud commercial service, which helps us handle the sizing of environments to meet business requirements, as well as the Global Operations Center, or the GOC, as we call it internally. I'll talk a bit more about that one in a little bit, but let's focus on My Pega Cloud for a moment. This is all about providing a seamless self-service experience. It's a one-stop solution where you can restart environments, download log files, view scheduled maintenance, manage IP allow lists, and much more. It empowers you to manage your environments effectively, and as you can see in the screenshot, we're even integrating Gen AI directly into the experience with Pega Cloud Buddy. With that, you can ask questions in natural language, like "How do I enable Pega AI in my subscription?" and get an instant, context-aware answer. So maybe, Dave, let's pause here, because I think MPC, as you know, is built on Infinity, right? So, question to you: this Pega Cloud Buddy, is it using the Gen AI gateway and the other components we saw earlier in the infrastructure layer?
Spot on, Kamil. This is a perfect example of how some of those foundational AI capabilities in our infrastructure are surfaced up through the management plane to provide direct value to you. In this case, it's helping get answers to questions and issues resolved fast. And a lot of our management plane is made up of Pega applications; it's a common thread you'll see. It all just connects. So, you know, another thing you might do from My Pega Cloud is actually initiate an update to Infinity.
And you can do that yourself, right from this portal. So let's see where that request goes; let me pull back the cover. From there, you can actually see it in the Global Operations Center, or GOC. This is our cloud management center. It is the intelligent, business-aware engine of Pega Cloud, and it is a Pega application. This screenshot gives you a glimpse of the way we manage our fleet, and in this case perform an update.
You'll see that every single task that is part of that update process is called out. We have things like dry run, create catalog entry, conflict checks, syncs, updates. Every single one of these is fully automated. Every single one is fully auditable. And every single one is fully traceable.
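That "fully automated, fully auditable" pattern can be sketched as a step runner that executes each named task in order and appends an audit record for every one. The runner and its actions are hypothetical; only the step names come from the talk.

```python
# Toy auditable pipeline: run each step and record who/what/when for audit.
import datetime

def run_update(steps):
    """steps: list of (name, zero-arg action). Returns the audit trail."""
    audit_log = []
    for name, action in steps:
        started = datetime.datetime.now(datetime.timezone.utc)
        result = action()  # in reality: a call into the control plane
        audit_log.append({
            "step": name,
            "started": started.isoformat(),
            "result": result,
        })
    return audit_log

steps = [
    ("dry run", lambda: "ok"),
    ("create catalog entry", lambda: "ok"),
    ("conflict checks", lambda: "no conflicts"),
    ("sync", lambda: "ok"),
    ("update", lambda: "ok"),
]

log = run_update(steps)
print([entry["step"] for entry in log])
# ['dry run', 'create catalog entry', 'conflict checks', 'sync', 'update']
```

Because every step, timestamp, and result is recorded, the whole update is traceable after the fact, which is the property the GOC view is showing.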
And this is how we actually ensure smooth operations: once the request to perform an update is made, the GOC can take it from there. The GOC, coupled with the automation of the control plane, is what allows us to deliver on another one of the promises of Pega Cloud: continuous infrastructure updates. We are constantly rolling out enhancements, new features, performance improvements, and security patches across our entire fleet. It's a requirement in modern business. And remember that stat from the Pega Cloud résumé: one update every 20 minutes, 24/7/365. This is how we do it: through carefully managed, automated rollouts that are meticulously tracked in the GOC. That is the power of the architecture of Pega Cloud.
So there we have it. We've journeyed from the foundational infrastructure layer, powered by Kubernetes and our cloud providers, up through the intelligent control plane that automates and controls all of it, and finally to the management plane that provides an interface for our users and operations teams. And as always, with security and observability as core tenets, you've seen how they impact all the layers. Hopefully this deep dive gives you a much clearer picture of how all these pieces fit together and how we deliver a robust Pega Cloud service offering. Earlier, I mentioned the core foundational principles that helped get us to Cloud 3, and I just wanted to share some of them here with you. First, we keep it simple. We aim for predictable solutions, we break down complexity into manageable microservices, and we automate relentlessly for scale.
Second, it's business-driven. Our pragmatic architecture must be highly available and durable, and it must be aligned to meet fluctuating demands. Critically, it has to be aligned to the products and capabilities that we offer and that our clients need. And third, it needs to be well-architected. We embrace Pega's famous Build for Change philosophy even with our cloud services, so we ensure we have end-to-end agency and responsibility within our teams, and we ensure we're organizationally aligned to deliver and maintain this robust architecture. And that brings us to the end of the deep dive. Thank you all so much for your time and attention today. We hope you found this look under the hood of Pega Cloud informative, and that you're as excited about its power, scalability, and flexibility as we are.
Kamil, any final thoughts? Just a big thank-you to everyone for joining us in today's session. Enjoy the rest of the event, and I think we have a couple more minutes for Q&A, right? Absolutely. So yeah, we're going to transition to Q&A. There are mics up at the aisles.
So if you have any questions, please feel free. Thank you all. Thank you. A clarifying question: you mentioned that Pega Cloud provides three AI capabilities, or cloud options. When do you use one versus the other? I'm sorry, can you repeat that?
The three cloud capabilities that Pega Cloud provides: is it always all three, or does it depend on which cloud deployment we go with? Oh, you mean from the backing services picture. So that doesn't necessarily reflect the independent capabilities that we service and offer. Pega offers a number of really unique and awesome AI capabilities; those are three of the services that facilitate them. Those services are actually used for all of the AI capabilities that we offer as part of the Pega Cloud service. Maybe I'll twist that question a little bit.
Sure. So since you have OpenAI, Anthropic, all those models: if, as a customer, I want to use only OpenAI, or vice versa, can we limit it to that, since it's all back-ended to you guys? Yes. Great question; that's a very good question. And you're right, it's not like you get everything.
We can limit it to the very specific vendors of your choice. It's up to your subscription type on Pega Cloud what you get enabled. And you need to think of our architecture as pluggable components: if you have it enabled, then you have the backing services, and you enable them with the vendor of your choice. Can you go ahead? Yeah.
Please go to the mic, this one or the other. What if you're using Pega Cloud on Amazon and your database goes down, as you said, right? How does that work? It goes down, and right away there's no impact and customers are getting data right away, or is there downtime? Great question. So we deploy synchronous replication across availability zones within a region. With the loss of any two availability zones, there will always be at least one more that has a synchronous replica of the database, and it will automatically and instantly fail over to that working replica. That's built in, out of the box; it's zero-RPO and automatic.
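The failover behavior described in that answer, promoting a synchronous standby in a surviving availability zone, can be sketched like this; the zone names and replica layout are made up for illustration.

```python
# Toy multi-AZ failover: keep the primary if its zone survives,
# otherwise promote a synchronous standby in a surviving zone.
def surviving_primary(replicas, failed_zones):
    """replicas: list of (name, zone, role). Returns which replica should
    serve as primary once the failed zones are removed."""
    alive = [r for r in replicas if r[1] not in failed_zones]
    if not alive:
        raise RuntimeError("all zones lost")
    for name, zone, role in alive:
        if role == "primary":
            return name  # current primary survived; nothing to do
    return alive[0][0]   # promote the first surviving synchronous standby

replicas = [
    ("db-1", "us-east-1a", "primary"),
    ("db-2", "us-east-1b", "standby"),
    ("db-3", "us-east-1c", "standby"),
]

print(surviving_primary(replicas, failed_zones={"us-east-1a"}))                # db-2
print(surviving_primary(replicas, failed_zones={"us-east-1a", "us-east-1c"}))  # db-2
```

With three zones, losing any two still leaves one synchronous replica to promote, which is why the answer can claim zero RPO across a double-zone failure.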
Okay, second one. You talked about data-in-transit encryption and data-at-rest encryption. In transit is probably TLS 1.2 or 1.3; at rest, do you have the key in KMS, or how do you encrypt it? Yeah, good question and good guess. Every database for a single environment gets encrypted with a separate encryption key, which depends on the cloud provider you are running on. If you are running on AWS, yes, it's a separate KMS key for every environment.
Thank you. Yeah. Do you have any plans in your product roadmap to move Blueprint into a more secure environment? Right now you have it on the web; I could go pick a blueprint and create stuff, right? In this morning's keynote, they showed how you could take a video of a COBOL application and put it in there, which looked great. But in the real world, I don't think any client would want to take an application or a video like that, one that has their data, and put it into the publicly available Blueprint. But if the same thing were available in the secure environment that is provisioned, like you showed for each customer, I'd be a lot more comfortable doing that.
Well, I'm very glad you asked this question, because I'm personally involved in this project as well. Blueprint is a SaaS application, so it's an as-a-service offering, and we are working on this; actually, it's pretty much done. The plan for Blueprint is to integrate with Pega Cloud environments by default. It actually follows all the multi-tenant system requirements, so, for example, if you upload a document, it will be stored on your cloud environment.
Okay. What about any plans for having a PCFG-specific Blueprint product? There is a roadmap; we can chat afterwards about that. It is something we've been paying a lot of attention to. Absolutely. All right, a couple more questions. Yeah. So who is Blueprint's
cloud provider? So Pega has Pega Cloud, right; if we pick AWS, where is your Blueprint running? Where does the core Blueprint code run? So, as I said, it's an as-a-service experience, and we run it on Pega Cloud, obviously. So, like anything else on that backbone, the infrastructure is the cloud.
We run it on the cloud provider of our choice; at the moment we leverage AWS the most. Okay, so Blueprint is tied to AWS? So, we actually go with our choice of cloud provider. When you generate a blueprint, the Blueprint application itself is running on Pega Cloud on AWS. From there, we utilize the same out-of-the-box Gen AI capabilities, and we've actually tried pointing it at different cloud providers, because that's something we try to make sure you could do with your applications as well, right?
It's a flexible architecture. When you generate a blueprint, though, and you want to deploy it as an application, it's actually your choice whether you deploy it on Pega Cloud on AWS or GCP. That ends up being within your account; much like the architecture pictures you saw earlier, it would be in your own dedicated VPC, and you'd have a route-to-live for that application. Thank you. Yep. I know those two topics, security and observability, we did not cover, but my specific question on observability is: if I want to use my own management tools to look into the logs and whatnot for the various back-end processes we're doing, is that capability available through an API or something? Yes.
Very good question; I like the level of technical detail you're asking for. Yes, in Pega Cloud you can configure this, by the way also through the portal, the way you want to. You asked about the logs, right? So yes, Pega Cloud is capable of pushing the logs to the security solution of your choice. We have a lot of integrations for low-level or sysops admin tasks that allow you to also ingest the logs and other metadata in real time. On the security side, I have two questions. One is: by default, is it available for corporate identity and access management, especially for your application?
Yeah, you can configure your own IdP. So yes, you can configure it in accordance with your own business requirements. Okay. So if I want to bring in the back-end data, especially data from on-prem or from other cloud providers, and integrate it with Pega, how do I securely do the tunneling? Since it's a VPC, I'm assuming the tenant is owned by Pega. How do I get to my tenant in AWS; how do I do the back-end tunneling?
Great question. So we have a team, the global services organization, that handles migrations onto Pega Cloud, and they can walk through that with you. It will actually be a more detailed engagement, because they'll want to understand where your data currently resides, what security policies you have in place, and what the best way to migrate it is. They do this a lot, so they're very experienced at it. Maybe we can catch up after this session; I'm happy to connect you to our global services team. Okay.
All right, any other questions? Yeah, go for it. If you could please say it into the microphone so it's recorded; or I can repeat it for you if you want. In one of the earlier slides, you referenced enhanced DR.
Can you give a little more detail on that? Yeah, do you want me to cover that? So, enhanced DR is a multi-region disaster recovery solution that we've recently launched, and it's available now. It's basically a solution where we deploy a second infrastructure in a second region. We have paired regions around the world, and in the event of a disaster we can fail over to that secondary region.
If the primary, for example, is just not going to come back up. Is that available in AWS and GCP? It's available on AWS today; for GCP, we should catch up if that's something you're interested in. Okay. And one more question: is there a listing of the paired regions for AWS? Absolutely, it's on our website, and I'm sure if you search for enhanced disaster recovery you'll find it.
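The paired-region failover described above might be sketched like this. The region names, the `REGION_PAIRS` table, and the probe-failure threshold are all hypothetical; this is not Pega's actual implementation, only an illustration of the decision.

```python
# Hypothetical pairing table; real paired regions are listed on Pega's site.
REGION_PAIRS = {"us-east-1": "us-west-2", "eu-west-1": "eu-central-1"}

def choose_active_region(primary, healthy, consecutive_failures,
                         failure_threshold=3):
    """Return the region traffic should target. Fail over to the paired
    secondary only after sustained probe failures, so a transient blip
    doesn't trigger a full regional switch."""
    if healthy or consecutive_failures < failure_threshold:
        return primary
    return REGION_PAIRS[primary]

print(choose_active_region("us-east-1", healthy=False, consecutive_failures=5))
# → us-west-2
```

The threshold keeps the failover deliberate: a disaster-recovery switch is expensive, so it should fire only when the primary is genuinely not coming back.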
If not, feel free to reach out to me and I can point you in the right direction. My question is around logs. If you have multiple pods, what options are available to view all the logs? To view all the logs? Do you want to cover logs? The simplest and most straightforward option is to log into your MPC instance, which is a self-service portal, and you can actually download logs from there. If you're interested in log-streaming capability, that's also something that can be enabled for your infrastructure. It requires more partnership with you, but we can stream the logs to your destination.
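Viewing logs across multiple pods usually means merging per-pod streams into one timeline. A minimal sketch, with made-up pod names and timestamps (not tied to any Pega tooling):

```python
import heapq

def merge_pod_logs(pod_streams):
    """Merge per-pod log lines -- each stream a list of (timestamp, line)
    tuples already sorted within its pod -- into one chronological
    stream tagged with the pod name."""
    tagged = [[(ts, pod, line) for ts, line in stream]
              for pod, stream in pod_streams.items()]
    # heapq.merge interleaves the pre-sorted streams by timestamp.
    return [f"{ts} [{pod}] {line}" for ts, pod, line in heapq.merge(*tagged)]

merged = merge_pod_logs({
    "web-0": [("10:00:01", "request received"), ("10:00:03", "response sent")],
    "web-1": [("10:00:02", "request received")],
})
for entry in merged:
    print(entry)
```

This is essentially what log-aggregation tooling does for you at scale; the point of the sketch is only that per-pod order is preserved while the pods are interleaved chronologically.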
Okay, any other questions? One more; it's a pretty simple one. Do you have plans to offer Pega Cloud on Azure, in addition to AWS and GCP? Not at this time.
Plans can always change down the road, but at this point we're sticking with AWS and GCP. Got it, thank you. Yep. I just want to understand whether Pega Cloud is PCI compliant, and what it supports from other compliance perspectives. Yes, yes. Feel free to search for the Pega Cloud Trust Center; we have all the certifications and compliance listed there.
So we have plenty of them, 16-plus. Yeah, in the Trust Center, exactly. And they're across industries; you can go take a look and see what we offer. As Camille mentioned earlier, because of the way we've architected our cloud, while an individual contract might specifically call for something like HIPAA, PCI, or ISO 27001, whatever the business requirements are, the architecture is actually the same on our end.
That's why we focus on end-to-end encryption: everything is encrypted at rest and in transit, and the security standards are common across the architecture. Okay. For most applications, the customer data resides on premise, right? So during the transition, apart from the security side like TLS 1.2, is there anything else that has to be taken care of from a compliance perspective?
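On the in-transit encryption point, a client can enforce TLS 1.2 as a floor when connecting out. A minimal sketch using Python's standard `ssl` module; this is illustrative only, since Pega Cloud manages the server side of this for you.

```python
import ssl

def strict_client_context():
    """Build a client-side TLS context with certificate verification on
    (the default) and TLS 1.2 as the minimum negotiable protocol."""
    ctx = ssl.create_default_context()             # verifies server certs
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older
    return ctx

ctx = strict_client_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

A context like this would then be passed to whatever HTTP or socket client makes the connection, so any endpoint still speaking TLS 1.1 or older is rejected at the handshake.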
From a compliance perspective, we handle everything within the Pega Cloud boundary; that's covered. Based on your business needs and requirements, we also offer things like private connectivity and, effectively, access control lists for access into your systems: you could deny access from the public internet and only allow it from on premise. Again, it varies significantly by customer and by business need, and by what you want your application to serve. But yes, that is absolutely handled, and for things like private connectivity, that's something we work with you on to figure out the right solution for you. In Pega Cloud, secure connectivity is validated; we have clients running across many different industries, including financial services.
I think that was the one you were asking about. Thank you. Sure. All right, cool. Well, thank you all very much; very happy to be here for PegaWorld. We'll also both be down in the Expo hall in the Pega as a Service area.
So if you have any other questions, feel free to come on down. You can find either of us, or any of the other people we work with, and we'd be happy to answer any more questions. Thank you very much. Thank you.