PegaWorld 2025: Pega Platform Modernisation
As organizations embrace digital transformation, modernizing enterprise applications while ensuring business continuity is crucial. Our Pega upgrade journey transitions from Pega 8.x to Pega Infinity 24.1.1, leveraging a containerized architecture on OpenShift. This transformation enhances scalability, performance, and compliance, aligning with industry standards and future-readiness. Key advancements include:
- Containerization & OpenShift Migration: Adopting a Kubernetes-based deployment model for agility and cloud compatibility.
- Externalizing Kafka & Elasticsearch: Decoupling messaging and search services for resilience and scalability.
- JVM-Based Platform Optimization: Ensuring seamless execution within an enterprise-grade on-premises cloud environment.
- Zero Business Disruption: Implementing a structured migration strategy for uninterrupted operations.
By adopting a loosely coupled, cloud-ready architecture, we unlock AI-driven automation, improved performance, and long-term sustainability.
My name is Sumit. I work for Cisco, and I'm here to share the journey we went through migrating our Pega platform, which was on a VM-based platform.
We have successfully migrated from that VM-based platform to a container-based platform running on OpenShift. To take you one year back: we were in the same place you are now, searching for people who had already done this. One year later, we are here to share how we migrated our Pega platform from a legacy VM-based platform to a container platform. Let me take you through the journey, all the challenges and all the processes we went through to achieve this, and hopefully our experience will help you achieve the same.
As we start: we have gone through many changes. The first decision was what our platform would be once we moved off Pega 8.8, which was supported on the VM platform. We had limited time to move all of our applications from Pega 8.8 to Pega Infinity, which requires a container platform: either Pega Cloud or one of the Kubernetes platforms that Pega recommends.
That is what we went through, and the next slide shows our footprint: how big our application platform was at Cisco and what it took to complete the migration journey. These are our business domains at Cisco (sales, logistics, WebEx, finance, and others), and across these business domains we have around 30-plus business applications running.
Those applications are very critical to Cisco, with direct dollar impact. They were complex as well: a lot of customization had been done on them, and they had been running on a VM-based platform for almost ten-plus years.
So we had to come up with a very strong strategic plan to move all these business applications to a container-based Pega platform, considering the limitations, compliance, and data-related constraints we have under Cisco policies. Taking all of that into account, we decided to move these applications onto an OpenShift container platform. The next thing was how we planned the journey: what it would cost, what we wanted to achieve, and by when. We planned all of this together with my teammates. What was driving the journey was that all of our applications were on Pega 8.8.
Pega 8 reaches end of support in October this year, so we had the challenge of moving all 30-plus applications within a one-year time frame. It was a tedious job, but one we achieved, obviously with my teammates. The major change was transforming the legacy systems and shedding all the platform maintenance overhead we carried, through proper planning and guidance. Our target, as I said, was to finish within one year, before the end of support for Pega 8.8. And we also wanted to solve some of the problems we had been facing with our legacy system.
We took this opportunity to bring our legacy system onto a modernized platform, which also helps us improve our maintenance capabilities and the cost side, manage security, and simplify any future Pega upgrades; previously, every upgrade meant moving these complicated applications within a limited time frame. By doing this we were able to solve a lot of pain points, which you can already see here. On the VM-based platform we had around 150 VMs to manage and patch, along with the various related software needed to run Pega, while constantly maintaining security.
Those were the pain points we used to face on VMs, and the containerized platform let us solve them. When we planned to move our applications to a container platform, as I already said, due to the limitations and the compliance and data-security-related policies at Cisco, we decided to move to the OpenShift orchestration platform. And since our database was already on Oracle, we did not want to touch anything on that side.
That was another thing that helped us complete the migration journey: no database changes to make or worry about. The container platform also makes real-time scaling possible, and the platform we maintain is now easy to maintain without much hassle, with no more worrying about VMs and VM maintenance. This is our timeline: last year, when we were here, we decided and planned what our timeline would be.
Today I can gladly say we have achieved all of it: we have moved 15 of our critical applications to production without a single major issue. That was a great achievement by our team. This is the timeline, and we have around ten more applications that will complete their migration by May or June.
This slide is just a highlight of the technologies we are using in our container platform. And this is the important screen: a very high-level architecture diagram of what we had on the on-prem system. The major piece is the middle layer, where Pega ran with its embedded components (Elasticsearch, Kafka, and Hazelcast) on on-prem virtual machines across active-active data centers.
That was basically for high availability. Our applications ran on Tomcat servers, and managing the Tomcat JVMs for all those servers, along with the Java versions, was a pain point from the technical point of view, on top of the many VMs, VM patching, host upgrades, and OS upgrades. The third section is Oracle, our database, which is not changing in the container platform. The major change we worked on is the middle part. The first layer is just the web proxy layer, where user requests come in and get redirected to the appropriate applications. The next screen shows the actual transformation: what migrated and how things moved. In the container platform we are on Pega Infinity versions 23 and 24; mostly the latest, version 24, is available now.
So what has changed in the container platform? The web proxy layer you saw at the front now lives within OpenShift. Pega Infinity now keeps only Hazelcast as an embedded component. Beyond that, we had to externalize two very important Pega components: the Kafka service and the Elasticsearch service. As you know, Pega uses Kafka for its queue processing, which makes it very important in a container platform that Kafka
be a stable cluster. The same goes for Elasticsearch: all Pega search and application search is enabled through Elasticsearch, via one of Pega's very important components, the Search and Reporting Service (SRS).
One thing I would like to mention: the bottom part is where, at Cisco, we handle user access management for the Pega operators, through proper Cisco-based access management. We also have an integration with Splunk to manage our logs and enable log observability.
And we were using AppDynamics for monitoring the performance of our applications. All of those remain the same; there is no change on that part. The major change, as I said, was moving Elasticsearch and Kafka out as external services and managing them.
That is how the platform has changed, at a very high level. This slide talks about our transition highlights and which components changed. From the platform point of view,
we moved from on-prem VMs to Cisco-managed OpenShift. The OpenShift version we used was already readily available within Cisco and in use by multiple other teams, which helped us achieve the migration within the one-year time frame, because we did not need to worry about how the OpenShift platform was set up for us. The next thing on the platform side was understanding Pega's out-of-the-box Helm charts, which were very helpful for deploying Pega and its related components easily on our OpenShift container platform.
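To make the Helm step concrete, here is a minimal sketch, not our actual pipeline: the chart repo URL, release and namespace names, and the values file are assumptions for illustration.

```python
"""Sketch: deploying Pega via its out-of-the-box Helm charts.

Assumptions (not from the talk): the public pegasystems chart repo URL,
the release/namespace names, and the pega-values.yaml overrides file.
"""
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Register the Pega Helm chart repository (assumed public URL).
run(["helm", "repo", "add", "pega", "https://pegasystems.github.io/pega-helm-charts"])
run(["helm", "repo", "update"])

# Install or upgrade a Pega Infinity release into its own namespace.
# Site-specific overrides (externalized Kafka/Elasticsearch endpoints,
# JDBC settings for the existing Oracle DB, etc.) live in pega-values.yaml.
run([
    "helm", "upgrade", "--install", "pega", "pega/pega",
    "--namespace", "pega-prod", "--create-namespace",
    "--values", "pega-values.yaml",
])
```

In practice the same values file carries the customizations discussed later, so the effort of understanding the charts pays off once and is reused for every application.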
The external part is where we had to put in some of our effort: making sure everything Pega requires to connect to Kafka was in place, meaning which ACLs it needs, which topics, the rules for those topics, and the naming conventions for the Kafka topic names. Cisco has internal teams that already provide Kafka as an internal service, but due to Pega's specific requirements we could not use those; we had to spin up our own Kafka cluster and handle the ACLs based on what Pega needs.
That is one part we had to create and manage as an external service. We did the same thing for the Elasticsearch service, where we had to make sure SRS could connect to the Elasticsearch version we were using, the one recommended and available at Cisco. We also had to account for some limitations that come with SRS. As you might have noticed, on our older platform architecture we were running our applications across active-active data centers
for high availability. Due to the limitations we had with SRS (and I would definitely say that when we contacted Pega Support, they helped us confirm that what we were deploying was correct), we had to work out what kind of high availability we could still provide, so application teams did not have to ask whether moving from active-active to active-passive data centers would hurt availability or application performance. With SRS and Elasticsearch properly integrated, we made sure we could still achieve that.
Similarly, we were also using Splunk and AppDynamics on the older platform, but the container move changed how we implement them. We had to make some critical changes to keep managing all of our applications the same way, with all the performance-related metrics captured as before, and the same goes for the logging side.
As you know, Pega writes multiple kinds of logs (Pega rules logs, Pega alert logs, system logs, and several other varieties), so it becomes very important to have a tool like Splunk. You might know that Splunk is now one of Cisco's products, which is one of the reasons we use Splunk for our observability requirements.
We also had existing Splunk dashboards on the older platform, which gave application teams the opportunity to avoid any changes on their part. One critical goal for us was that application teams should have very few changes to make during this platform migration. A sketch of the log-forwarding idea follows.
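To illustrate the general pattern only (the HEC endpoint, token, index name, and sourcetype mapping below are assumptions, not our actual Cisco setup), forwarding a Pega container log line to Splunk's HTTP Event Collector could look like this:

```python
"""Sketch: routing Pega container logs to Splunk HTTP Event Collector (HEC).

Assumptions (not from the talk): the HEC endpoint and token, the index name,
and the mapping of Pega log types to Splunk sourcetypes.
"""
import json
import urllib.request

HEC_URL = "https://splunk.example.internal:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

# Hypothetical mapping from Pega log stream to a Splunk sourcetype.
SOURCETYPES = {
    "rules": "pega:rules",
    "alert": "pega:alert",
    "system": "pega:system",
}

def send_to_splunk(log_type: str, line: str) -> None:
    event = {
        "index": "pega",  # assumed index name
        "sourcetype": SOURCETYPES.get(log_type, "pega:unknown"),
        "event": line,
    }
    req = urllib.request.Request(
        HEC_URL,
        data=json.dumps(event).encode(),
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # HEC returns 200 with a JSON body on success

send_to_splunk("alert", "illustrative Pega alert log line ...")
```

Keeping the sourcetype split per log variety is what lets existing dashboards keep working unchanged.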
Now for a very important part: the key challenges we faced, at a very high level. These are the few challenges we had to put our heads together on and brainstorm through, to make sure everything was taken care of.
The first is the Pega out-of-the-box Helm chart deployments. Pega as a whole is a very complex system from the outside, but because Pega provides these charts out of the box, it made it easy to deploy our applications along with the Pega containers.
It helped us do that, but the key challenge was understanding each of the Helm charts Pega provides, because we had some customizations to make to ensure the proper configuration was deployed through the Pega Helm charts.
It took some learning and brainstorming to understand Pega there, but once we had done that, it was very smooth to deploy any application on the container platform; if we had had to do it ourselves by writing Kubernetes configs, it would have taken much longer. That was one thing. The other challenge: on the older legacy platform, everything was under one URL.
When I say one URL, I mean that from the outside world there was a single domain name, and under it the 30-plus applications I showed you earlier were all reachable. We used a web proxy layer to manage the routing between the different applications, but it was still a pain point:
if that one DNS entry went down, every application became unreachable. So we used this migration opportunity to make sure each of our applications now runs under its own domain-specific URL. To manage that, we again did some customization on the Pega side so that every application is still accessible the same way it was before, giving application teams a good experience without worrying about changes on their end. A simple check of the idea is sketched below.
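As a trivial illustration of that per-application URL isolation (the hostnames below are made-up placeholders, not our real domains), a post-cutover smoke check might look like:

```python
"""Sketch: smoke-check that each application's domain-specific URL responds.

The hostnames are made-up placeholders; /prweb is the usual Pega web
context root, assumed here for illustration.
"""
import urllib.request

APP_URLS = [
    "https://sales-app.example.com/prweb",
    "https://logistics-app.example.com/prweb",
    "https://finance-app.example.com/prweb",
]

for url in APP_URLS:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"{url} -> HTTP {resp.status}")
    except Exception as exc:  # DNS failure, TLS error, 5xx, ...
        print(f"{url} -> FAILED: {exc}")
```

The point is that a single DNS or proxy failure now affects one application, not all thirty.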
The third part is the one that really made us stop and think about the limitations we had: Hazelcast and SRS cannot work simultaneously across two data centers. So we had to make sure we were deploying the right strategy there, so that there was no loss on the application-availability side. That was another pain point,
another challenge we had to solve. Then there is the externalization of services, which I have already explained: how we achieved it and what considerations we had to take. On the Kafka cluster side, at the time we were setting up the cluster, Pega required around 50-plus topics.
Those are the out-of-the-box topics you have to create just to run Pega's standard queue processors. So we had to make sure we followed the right topic-naming conventions and created those topics up front, before even starting the Pega applications. Then there were the ACL-related questions: what kind of group access, how offsets would work and be managed through specific consumer groups, and how, within the same cluster, multiple applications could use different Kafka topics. A sketch of the topic pre-creation step follows.
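For illustration only (the broker addresses, naming convention, partition and replication counts, and service principal below are all assumptions; the real topic list comes from Pega), pre-creating topics and a prefix ACL with the confluent-kafka admin API could look like this:

```python
"""Sketch: pre-create Pega's Kafka topics and an ACL before first startup.

Assumptions (not from the talk): broker addresses, the naming convention,
partition/replication counts, and the Pega service principal.
"""
from confluent_kafka.admin import (
    AclBinding, AclOperation, AclPermissionType, AdminClient,
    NewTopic, ResourcePatternType, ResourceType,
)

admin = AdminClient({"bootstrap.servers": "kafka-1:9092,kafka-2:9092"})

# Hypothetical naming convention: one shared prefix per Pega environment.
PREFIX = "pega-prod-"
topics = [NewTopic(PREFIX + name, num_partitions=6, replication_factor=3)
          for name in ("qp-default", "qp-broadcast", "stream-events")]

for topic, future in admin.create_topics(topics).items():
    future.result()  # raises if creation failed
    print("created", topic)

# Grant the Pega service account access to everything under the prefix,
# so multiple applications can share one cluster with scoped topic sets.
acl = AclBinding(
    ResourceType.TOPIC, PREFIX, ResourcePatternType.PREFIXED,
    "User:pega-svc", "*", AclOperation.ALL, AclPermissionType.ALLOW,
)
for binding, future in admin.create_acls([acl]).items():
    future.result()
    print("ACL applied for", binding.name)
```

A prefixed resource pattern is one way to keep per-application topics separated inside a shared cluster; the exact grants Pega needs come from its documentation.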
Those are the challenges we had to face, and we made sure they were handled properly. The same goes for Elasticsearch.
We had to make sure SRS was configured properly across the two data centers. Even though we designed them as active-passive, we observed and faced a challenge: if you have SRS running in one data center and, by chance, you start the other SRS in the other data center, they conflict, and you see a duplicate set of application data on the Elasticsearch side. We had to put our heads together on that side too, and revisit our structure there.
The way we handled it was to give the SRS on both data centers the same cluster name. SRS creates the Elasticsearch indexes for you, and since the name was the same for both data centers, both resolved to the same index set, and it was able to handle that. That was the challenge we faced and how we solved it on the Elasticsearch side; the kind of index check involved is sketched below.
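As a rough sketch of the kind of verification involved (the endpoint, credentials, and the duplicate-detection heuristic are assumptions; SRS controls the real index naming), listing the indexes to spot a conflicting second SRS could look like:

```python
"""Sketch: look for duplicated SRS index sets across data centers.

Assumptions (not from the talk): the Elasticsearch endpoint and credentials,
and the heuristic that a conflicting SRS shows up as extra look-alike indexes.
"""
from collections import Counter
from elasticsearch import Elasticsearch

es = Elasticsearch("https://es.example.internal:9200",
                   basic_auth=("svc-user", "change-me"))

# cat.indices with format=json returns one dict per index.
names = [row["index"] for row in es.cat.indices(format="json")]

# With identical SRS cluster names in both DCs there should be exactly one
# index per logical search index; duplicates suggest a conflicting SRS.
counts = Counter(name.rstrip("-0123456789") for name in names)
for base, n in sorted(counts.items()):
    flag = "  <-- possible duplicate" if n > 1 else ""
    print(f"{base}: {n}{flag}")
```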
Then the data center architecture side: as I said, we had to migrate from active-active to active-passive data centers, and the reason behind that is SRS and Elasticsearch,
that is, the limitations we had with SRS and Hazelcast. We made sure everything was properly configured, and that is how we handled the data-center-specific challenges as well. These are some of the success and business outcomes.
With all these setups, what we achieved is that the applications are now easily scalable, and it reduced our infrastructure footprint as well: earlier, including our non-prod and prod environments, we were managing around 200 VMs, plus some of the software we were licensing on top of those VMs.
We were able to reduce that, and we achieved savings on the cost side too. Moving to a container platform also helped from the infrastructure point of view: overall application performance improved, and bringing a server and application from down to available became very fast.
On the learning side: we were able to create a container-based platform with Pega Support's help, and we learned we could do it in very quick time. Since the legacy platform was almost ten-plus years old, we took the chance to apply the learnings we had from it: we made sure all the security aspects were reviewed again based on the latest security policies we have, and for the migrations we revisited all the testing (QA testing, integration testing, and so on) for every application, making sure it reflected the current application setup and application code.
We made sure we did all of that.
This is where we achieved a significant reduction in our footprint, which ultimately lets us concentrate on the application side of the work rather than spending time managing the platform.
It also reduced our footprint on the resources side. That is what we were able to achieve, and on the performance side, this is what we achieved, again at a very high level. And here you see the future roadmap that this container platform availability opens up for us.
These are the future things it gives us the opportunity to implement, and hopefully we will achieve them with all these modernization changes we have made. With that, as I mentioned earlier, we have moved 15 of our applications to production already within this migration journey, and this is some of the appreciation we got from one of our very critical application teams; within that one team alone there are almost seven applications, and this is the feedback from some of the people on their team.
That motivates us, helps us do the same for our other clients, and gives us the confidence to support this platform. So this ends the presentation: I have shown you what our journey was, and now we can go to questions and answers. I can answer any questions you have. Thank you.
Audience: Just very quickly, maybe a multi-part question.
These sorts of programs can be very complex and very costly, and the sequence of events as you execute can drive down your costs and decrease risk for the business. So, a few questions regarding the implementation schedule and timeline, just to clarify for people here how they might plan their migration successfully. First, what was the sequence for the upgrade? For example, did you first migrate 8.8 to containers? Did you set up a secondary development environment so you could maintain and continue CI/CD operations on 8.8 while you were migrating to Pega 24, and how did you manage that process, if you did? Did you set up your Kafka services and split those out before you cut over to Pega 24, doing the migration to Kafka on 8.8
first, to break the migration down step by step? From a containerization perspective, using 8.8 and going into containers first as a first step might actually decrease your overall costs for the stepwise changes. Can you shed a little light on the sequence you followed and how it impacted your timeline and your cost?

Sumit: The first three or four months were mostly for us to do a POC and see how we could replicate a similar environment, at least from a lower-environment perspective. We tested it there, along with all the exploration we had to do to make sure of everything.
Then, to answer your question: some of the applications we moved already had three different varieties of development environment and three different varieties of stage environment, simply because they have different code-deployment life cycles.
So how did we make sure they were also confident? Suppose we migrated one of their dev environments and created it on the container-based platform. They tested it.
They made sure all of their code worked there, and gave us the feedback that whatever they wanted to try was working well. Alongside that, from our Pega side (one of my colleagues, Vinod, is here),
I was mostly working on designing and driving things from the platform side, and my colleagues on the Pega COE side were helping application teams understand what changes they had to make from the Pega point of view and the application point of view, to make sure the journey was smooth. That is how we worked.
So that is the first thing we did. The next part was handling the Kafka piece.
On our older platform we were not using external Kafka or external Elasticsearch; we were still using the Pega-embedded Kafka and Elasticsearch. So that was not something we needed to worry about just to move from there to here. But we still needed to worry about the Elasticsearch indexes that were already there: even though it was embedded,
it was creating indexes. So we had to analyze the impact on that side. Later we got to know that on the Elasticsearch side, even though we could move the data, the indexes still needed to be created again; and this conclusion came out of multiple interactions with Pega Support.
That is how we approached the Elasticsearch move. And Kafka was never the problem; as I said, the only thing application teams needed to do was make sure that, before they migrated to the new container platform, all the Kafka queues were fully consumed. That was the next thing we took care of. A sketch of that kind of pre-cutover check follows.
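Purely as an illustration of what "queues fully consumed" means operationally (the brokers, consumer group, and topic name below are assumptions), a pre-cutover consumer-lag check could look like this:

```python
"""Sketch: verify a consumer group has zero lag before cutting over.

Assumptions (not from the talk): broker addresses, group id, and topic name.
"""
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "kafka-1:9092",
    "group.id": "pega-prod-qp",  # the group whose backlog must drain
    "enable.auto.commit": False,
})

topic = "pega-prod-qp-default"
metadata = consumer.list_topics(topic, timeout=10)
partitions = [TopicPartition(topic, p)
              for p in metadata.topics[topic].partitions]

total_lag = 0
for tp in consumer.committed(partitions, timeout=10):
    _, high = consumer.get_watermark_offsets(tp, timeout=10)
    committed = tp.offset if tp.offset >= 0 else 0  # -1001 = no commit yet
    lag = high - committed
    total_lag += lag
    print(f"partition {tp.partition}: lag={lag}")

print("safe to cut over" if total_lag == 0
      else f"still {total_lag} messages queued")
consumer.close()
```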
Sumit: So that is the step-by-step process we followed, and we made sure the teams did not need to do much on that side or worry about it.

Audience: Roughly how many deployments did it take to get from 8.8 to 24 for the average application?

Sumit: On our side, if you ask me, there are different kinds of applications.
Some are not very business critical; some are very business critical. For one of the critical ones, as I was saying, they have two or three different environments, dev and stage, and then, very importantly in this case, a load-testing environment,
because we had to make sure performance and everything was taken care of as well. Since it is a new platform, one of the application teams' key questions was whether the new platform would perform the same way, with the same experience. So, to answer your question: for a critical application,
it was at least two runs on the dev side and two on the stage side before they were sure everything was good for production. That is how we did it. We also had some separate POC boxes (playground boxes, you could say) for teams to experiment with, especially if they wanted to bring in new Pega Infinity versions or changes.
So we made sure that was there. That is for the critical applications; for an average, normal user-base application it was mostly dev to stage, stage to production. That is the pipeline we used.
Audience: Hi, I have two questions. The first is whether you have been running into vulnerabilities on the external services like Kafka and Elasticsearch, because at our organization the security team does a scan and says this version has vulnerabilities,
and if you move up to a higher version, you run into a version that Pega doesn't formally support. So my first question is whether you have run into that kind of thing.

Sumit: Yes. As I said, on the older platform we did not need to worry about that, because we were using the embedded versions of Kafka and Elasticsearch.
For the newer platform: every company is strict, but Cisco is very strict on the security side. We had to go through a lot of security reviews for the software we were going to use and how we were going to manage it, including the versions and all the upgrades on that side, as well as SSL and the ACLs; multiple reviews happened. On the vulnerability side, just to let you know, at the moment we use the open-source version of Confluent Kafka, not the licensed one, though there is a process going on to get the licensed version.
But we still made sure we did all the security scanning, including on the VMs actually hosting the Kafka cluster and the software we used there. We have a variety of tools at Cisco that scan through those; we went through that, fixed all the reported vulnerabilities, and only then moved ahead.
Audience: Even third-party software? Basically?

Sumit: Yes. Correct.

Audience: Even for that? Wow. Okay. The other question: you said you did a POC for three months.
Did that include the externalization?

Sumit: No. The time I was referring to came after the POC. May 2024 is when we started implementing these services: deploying Pega on our sandboxes and starting to give application teams one environment each, so they could get familiar, make sure their code was good,
and implement new changes considering Pega 24 and so on, if they wanted to.

Audience: My question, then, is how long it took you to run the platform with all the externalizations, and with what size of team, because we are looking at the same effort.

Sumit: Okay.
On the team side, from the Pega COE side we had around two to three Pega LSAs, and from the platform side, the DevOps side, we were around two to three people.
So around five to six people mostly working on the technical front, plus a management team as well.

Audience: And for how long was that?

Sumit: You can say around six months, maybe,
just to be safe.

Audience: Thank you.

Sumit: And after that, the following six months were mostly deploying all the applications, repeating the same work, and, where there was complexity or customization within an application, helping the teams resolve it.

Audience: Excellent. Thank you.
Audience: We have a similar upgrade plan, from 8.8 to Pega Infinity. You talked about components like Hazelcast and Elasticsearch.
Were there any challenges with, say, Elasticsearch not being available for a long time, maybe network issues or something similar, and the same thing with Kafka? If those are not available, were there any challenges or special design considerations?

Sumit: When we were creating the Kafka cluster and the Elasticsearch cluster, we built them to be multi-data-center, so that availability-wise they stay available in any scenario where one side cannot work. We also had to make sure the replication of data was properly maintained.
That is how we made sure they are always available. And so far, if you ask me, since the day we started our Elasticsearch cluster and our Kafka cluster, we have not faced any issue, because we followed this approach.
Audience: Thanks for that. You didn't have any issues, but did you also consider: imagine this reporting service goes down for several hours. What happens to the indexing that was supposed to happen?

Sumit: If the service is down, things stay in the state they were in at the last point,
and Pega is intelligent enough to manage that. Pega uses Elasticsearch and Kafka for what is effectively temporary data management; ultimately, the state lives on the database side. Pega picks up from wherever it last was, and whenever the service comes back up, it automatically syncs everything and makes sure everything runs smoothly.

Audience: Okay. Thank you.
Sumit: Thank you.

Audience: Since you moved from Pega 8 to 24, any thoughts on moving to Constellation?

Sumit: Yes,
our teams are already working on that. We have set up the Constellation services, meaning the servers. Again, it is an application-specific, use-case-based scenario, where an application team wants to migrate and use Constellation.
We have given application teams a Constellation server on the container platform to use, and that side is still a work in progress. As I said, it is use-case based. Vinod, can you add anything about teams working on the Constellation side?

Vinod: We have just created one Constellation app-static server, and the application teams are playing around with it; we just started with that. From the COE side we have also started building some use cases. Our aim is to take all these legacy applications, which are already built, onto Constellation and migrate them.
Eventually we will update all the application teams so they can use it. So we are still at the start.

Audience: One more question. I believe this upgrade has been within on-premises infrastructure, right?
Any thoughts on why you didn't go with Pega Cloud? You must have done some analysis.

Sumit: Yes. As I said, it is more about Cisco and its data policies. One other thing: we have 30-plus applications, complex ones, that have been running on a VM-based platform for almost ten-plus years, and we were not sure how directly they would move if we chose to go with Pega Cloud,
or what kind of complex changes we would have to make. And based on Cisco policy, and because we are on an Oracle database, if we had to move to Pega Cloud we would have had to change the database as well, because of the database Pega Cloud uses.
So that is one of the reasons, and that is why we thought it better to go one step closer first, in case we decide to go to Pega Cloud later. And again, given the limitations (within the one year we had to decide all this and, at the same time, move the applications to a version still supported by Pega) and given Cisco's internal policies, we did give it thought, but we wanted to make sure first that our applications were modernized; what we do next is maybe the next step.

Audience: So I believe the major intent was to continue Pega support.

Sumit: Yes, yep. And data is the reason.

Audience: Thank you so much.

Sumit: Thank you.