

PegaWorld 2025: Break Free from Legacy: AI-led Mainframe Transformation with AWS & Pega

It’s time to put an end to technical debt. This session will explore how Pega GenAI™ Blueprint redefines process discovery and legacy transformation to design a cloud-ready version of your application. Learn how to reimagine your systems, not just re-platform them. See how Pega GenAI Blueprint helps retire legacy systems by generating cloud-native data definitions, streamlining both customer and employee experiences while reducing maintenance costs.

PegaWorld 2025: Break Free from Legacy – AI-Led Mainframe Transformation with AWS & Pega

All right. What's going on, everybody? How's your PegaWorld been so far? All right. Cool, cool. Thank you for joining us. So I'm Matt Healy. I lead go-to-market for the Pega platform, so I get to think about all things Blueprint, AI, workflow automation, and how that helps the enterprises we work with. And I'm Daniel Okrent, I'm a partner solutions architect at AWS, and I specifically work with Pega. Love it. And we love him for it.

Um, so as you heard Kerim talk about, very briefly, on the main stage, we're doing some really cool things across the board to accelerate legacy transformation, with Blueprint being a key enabler of that. We are also building out some really strong partnerships with the ecosystem, led by our partnership with AWS and what we're doing to really accelerate mainframe modernization, getting off of those nasty old mainframe systems once and for all.

So while Kerim showed you 15 seconds of what we can do, in this session we're going to dive in really deep into what we're talking about there. What does AWS bring to the table in terms of helping you understand your mainframe applications? How does that integrate into Blueprint to help you bring those forward into new cloud-native applications? And how can we help you really jumpstart your transformation within your enterprise? So before we get too deep:

Who here has mainframe systems in your enterprise? Okay, good. I'm glad, because there would be better things to do in Vegas if you didn't. Although this is going to be fun. Um, who here has worked on mainframe systems? Yeah. Rudolph, I'm looking at you. And who here is actively working on mainframe systems? Who is writing COBOL today? Awesome. Cool. Godspeed. Um, so, you know, job security? Yeah. No surprise. You're not alone, right?

Mainframe applications still proliferate across enterprises globally. There are millions of them out there, and we don't have to go too deep into it; I think we all understand why that's suboptimal. We spend so much investing in mainframe systems: making sure that they operate as they need to, that they can operate with the new systems we're adding around them, that nothing breaks.

And that usually involves bringing in consultants who should be retired or on the beach somewhere, investing in, you know, keeping up with integrations, making sure everything still works. So there's a big IT investment involved in keeping these mainframe systems up and running and operational. But there's also a big opportunity cost involved with mainframe systems and legacy systems across the board.

You know, when you have your data, your processes, your experiences shrouded in these systems, which can't really adapt to the needs of customers, employees, and processes today, you're missing out on efficiency. You're missing out on potential, on better ways to do customer experience. You're missing out on potential automations that you could be bringing to market. So there's a big, obvious cost with mainframe systems, and then the hidden costs as well.

But, you know, nobody here is a dummy, right? You've tried to get off mainframe systems. So give me a little bit of the history of the mainframe. A quick journey through time, if you will. All right, well, let's start back in the 1950s. So, late 1950s: COBOL is born, right? The specification is released, and its rapid adoption cemented it as the backbone for mission-critical systems. I think we all know, right? COBOL is still in production today. Let's play a quick game on how much COBOL. Raise your hand if you think that there is more than a billion lines of COBOL in production today. Okay, keep your hand raised if you think it's more than 100 billion. Okay. Keep that up if you think it's more than 300 billion. 400 billion. 500 billion. 700 billion. Okay. Most. Don't tell me it's a trillion. No, it's not quite, but almost. Micro Focus released a study in 2022 estimating that there are anywhere between 775 and 850 billion lines of COBOL in daily use today.

So, yes, heavily used even today for something whose specification was released in the late 1950s. Moving right along into the 1960s, IBM comes along and brings us System/360, which becomes System/370 and System/390, and ultimately what we know now as IBM Z. We also get some other important applications around this time. NASA introduces the Information Management System (IMS); at this point they use it on the Apollo program. Um, and that's still heavily used in information systems today.

A couple of which I've gotten to work with them on. We also get CICS, the Customer Information Control System, which is used for online high-volume transaction processing. Moving right along, in the 70s, I think people started realizing, hey, maybe this IBM business is pretty good money. And so we get competition, we get new architecture, the RISC architecture. But we also get a group of companies called the BUNCH at this time.

And that includes companies like Honeywell, who come to play and look to penetrate this market. But the heavy investment in those early COBOL mainframe applications largely means that all of those workloads were still running on IBM mainframes, or some clone of IBM. Whereas if you move into the 1980s, moving right along, things start to change just a little bit.

You get Unix, you get Unix servers, and you start to see some of these non-core workloads, like batch processing or reporting, start to be hosted on these distributed servers, which was revolutionary at the time. And if we move on again to the 90s, we start to actually see some movement from these old systems from the 50s onto other platforms.

So in the 90s, Burlington Coat Factory notably does basically a rehosting, or replatforming, of their system onto a Unix system, using what's now an NTT DATA product, which I think was known as UniKix at the time. Notable tech pioneer Burlington Coat Factory. Yes, exactly. Well, that's the story of the mainframe, right? It's not sexy, but it is important. So once we get to the 2000s, however, we start to see more.

I think in the 90s, you get lift and shift, right? Micro Focus even trademarks the term "lift and shift" at that point. Well, they dropped that trademark around 2000, because we started to see much more robust movement, where you've got the Air Force rewriting from COBOL to Java. And so lift and shift is no longer the standard; you start to see some actual refactoring going on. I was about to ask you about that.

So, you know, in your role with AWS, and I know AWS at large, there are tons of programs to help enterprises get off of the mainframe. So what are some of the approaches that are out there right now? Yeah. So, well, before you decide on any approach, you have to do some evaluation, right? And that's where it all starts. And we'll circle back to what all is included in this evaluation. But you need to understand your application landscape.

From there you can choose a number of approaches, and these are not mutually exclusive. You've got things such as rehosting, where you're simply moving that environment. You've got refactoring, where you're actually rewriting this to a modern language, like the US Air Force did, or The New York Times did on AWS later. Or you've got some replatforming going on, like Coca-Cola did around that same time period in the 2010s.

Then you've got retirement, which I like to call "turn it off and see who complains." No, I'm kidding. But ultimately, when you do these analyses, anywhere between 10 and 30% of your batch workloads haven't run for months or years. So you can use things like S3 Glacier storage to provide some immutable storage of that data for compliance reasons, and generally just turn those things off.
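(As a rough illustration of that retirement pattern, here is a minimal Python sketch, using boto3, of archiving a retired batch dataset into an S3 Glacier storage class with a compliance-mode retention period. The bucket, file names, and retention window are hypothetical, and it assumes the bucket was created with S3 Object Lock enabled; your own compliance requirements would dictate the actual storage class and retention settings.)

```python
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")

# Hypothetical names for a retired batch dataset; the bucket is assumed to
# have been created with S3 Object Lock enabled (required for retention).
BUCKET = "example-mainframe-archive"
KEY = "retired-batch/monthly-report-extract.dat"

# Upload the extract straight into a Glacier storage class for low-cost archival.
s3.upload_file(
    Filename="monthly-report-extract.dat",
    Bucket=BUCKET,
    Key=KEY,
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},  # or "GLACIER" / "GLACIER_IR" per retrieval needs
)

# Apply a compliance-mode retention period so the archived object stays immutable
# until the retention date (illustrative seven-year window).
s3.put_object_retention(
    Bucket=BUCKET,
    Key=KEY,
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=7 * 365),
    },
)
```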

Um, so yeah, those are some of the various pieces, but obviously I skipped over the hardest part, which is the analysis. Yeah. Yeah. No. And obviously tons of approaches, tons of reasons. You can put "re" in front of anything, and it seems like that's an approach to get off the mainframe. So I'm here to talk about a new approach and a new "re," which is around reimagining. So despite the wealth of approaches and tooling and the history of helping enterprises get off the mainframe,

if you look at some of the research and the actual results that are out there, only about a quarter of transformations are actually successful and realize a return on investment at the end. And a lot of that is because a lot of these approaches traditionally have been long. They've been costly. They've required bringing in consultants upon consultants upon consultants who come in.

They read through all of your application code. They document all of your legacy mainframe COBOL code: what it does, what the integrations are, what business logic is in there, how the application works. That results in thousands of pages of documents that you then have to sift through and be like, what are we keeping? What are we reimagining? What are we replatforming? Whatever it may be.

I've had to bring multiple people out of retirement just to get help on this stuff. Yeah, those people should be golfing. And then you go forward, and traditionally, you know, it is good. Like these approaches, you're talking about going from COBOL to Java, right? You get off the mainframe. That's cool. But traditionally, what these approaches are predicated on is really rebuilding the same functionality, just in a more modern language.

So you really end up leaving a lot of opportunity cost on the table and potential business benefit, because there's no opportunity in these approaches to actually, you know, infuse new automations, rationalize some of the componentry that's in your application, you know, rethink the customer journey.

So what we are bringing to market together, which I'm really excited about, is a rethought approach to mainframe modernization predicated on leveraging AI throughout: first, to very rapidly understand your mainframe applications. And this is where I'm excited to talk about a new tool which AWS has just brought to market a week or two ago, called AWS Transform.

Then, through Blueprint and the native integration we have to pull in the analysis from AWS Transform, you're able to quickly visualize the processes which were previously trapped in your mainframe systems, and begin to rethink those and reimagine how they should operate for the future.

And then, going from Blueprint to cloud, you're able to rationalize multiple mainframe applications onto a single cloud-native platform that is fully model-driven. So it's not going to leave you in the same situation you were in before this modernization, with a lot of code that you have to understand, that you have to maintain, and that you're going to have to modernize again in 15 or 20 years.

So with that, why don't you tell me a little bit about one of the key pieces of this, which is AWS Transform? Absolutely. So AWS Transform is the first agentic AI service intended to transform and modernize legacy applications such as mainframe. It also works with VMware and .NET. There are a number of steps involved. If you go to the next slide, there are a couple of key components here; that second one is the one we glossed over earlier.

But the way AWS Transform works is it's built on 19 years of experience at AWS performing these migrations and modernizations. It deploys multiple agents in parallel that handle complex tasks such as decomposition of your code, looking at dependencies, looking at copybook files, your data objects. It's doing validation. It's helping you understand whether you've provided all the information, based on whether it's seeing things that might be missing.

And then it's providing documentation that you get some control over. So in the code analysis part, it's doing that complexity analysis. It provides you, within the application itself, the ability to point it to your source files, and then it is able to gain access to that information after you grant it, and you're able to understand high-level things such as lines of code, etc. But that's not incredibly useful for a modernization effort on its own.

And so from that point we go to document generation, where you can generate high-level documentation that's more like a functional specification, or you can get down to the level where you've got hundreds of COBOL files and you're generating detailed information. It's all persona-based. There is flow representation. You know, I was reading through some of these key points here earlier, and it started to remind me of another kind of application that Pega has been heavily talking about.

What's that? Blueprint, here. Oh, yeah. But we've got customers such as Toyota North America who are using this and reducing the time it takes them to do the analysis and documentation by 75%. They said what used to take them months now takes them days. So it's really revolutionary, the output that you can get from a documentation perspective. Yeah.

And you know, I know the full set of capabilities which AWS Transform provides, and some of that is actually starting to take this and build out the Java application based on the functionality of the mainframe. But this is where we have that off-ramp to then go directly into Blueprint, to start to pave the way to a more platform-based approach.

So you guys saw Kerim take that document, along with a video, and import that into Blueprint in his demo there to create a new application. So I thought I'd pause on that screen he showed for a second, because I think it's a pretty important screen, and really important to what we're doing with AWS. We have been working with AWS and other partners on these legacy transformation projects now for maybe up to a year, a couple of months at least.

And what we found out is, you know, the source code analysis stuff that comes out of AWS Transform or similar is really critical. It's gold for transformation, but it's really detailed, and it can be sort of built around the architecture of the legacy application. So if you've got a million and a half lines of COBOL, that's going to result in a lot of documentation that you would have to sort through.

So we did a project in the public sector trying out this approach, and we ran AWS Transform on an application for managing something. It had about a million lines of code, so it produced a couple thousand pages of documentation, which was awesome. And we'll take a look at that documentation in a second. But what it maybe didn't pick up on was some of the actual process that's being driven. It gave me a really good insight into, okay, what is each file doing?

What is the business logic captured in each file? What are the integrations within each file? But it didn't really pick up on, okay, what's the customer experience? When is each of these files called? What's the typical flow throughout the process? So that's when we realized this was awesome stuff to really feed in and build out the meat of our application,

but we also needed to supplement it with some of the more high-level business constructs, which needed to come into play to actually build out the workflows and almost the framework of the application. And some of those were feature requests from you guys, working with us early on. You know, now we're able to produce some of that high-level information, but absolutely bring in external data. Yeah, yeah. So really, all of these inputs come together to create a comprehensive application.

And on this screen that you saw Kerim pull up in Blueprint, for each of these capabilities, each of these types of inputs which you can add into your Blueprint, there's a dedicated prompt under the hood which is pulling out different sets of information from different file types. So if you add in a video, that's going to really inform the personas that get created, and also the data that gets created for each workflow and some of the high-level processes.

But if you add a source code analysis file, that's going to do more to build out your integrations, your data model, and some of your deep business logic, which is captured in the Blueprint. So I think we're really starting to see us be able to take a lot of things and get our arms around an entire application all in one go. And we'll take a look at these files in a second.

But, you know, you guys know the rest of the story based on Kerim's analysis, or the demo that he gave this morning. It gives you a complete analysis before it goes off and generates a Blueprint informed by best practices. So that's another really great opportunity with this approach compared to going from COBOL to Java. We're going to go out, we're going to ask Pega's best practices, we're going to ask our partners' best practices.

We're going to ask the internet and say, what is the latest and greatest way to do this process that I'm moving off of the mainframe, and infuse the Blueprint with some of those thoughts. And then this is the really key one: rather than going from code to code, we take a pause and we get business and IT in the room to actually rethink and reimagine the processes before they get moved out of the mainframe and onto the cloud. So let's actually get into it and let's see it in action.

And, um, you know, I'll just remind everyone of that amazing, compelling mainframe video you saw this morning. We're going to use the same use case here. So we are in a card management application. Someone can come in here and they can, you know, update transactions, they can update an account, like a cardholder's account, they can check in on billing, stuff like that. So he had to learn COBOL just to make updates; he was making tweaks to the system.

I was like, okay, go you. Um, so we ran that through AWS Transform. And Daniel, can you take me through some of the output? Yeah, let's look at it at a high level. Oh, boy. I could take them all through a thousand pages. Please have Adobe installed. Let's go. Nice. All right. So yeah, let's go into it. You'll see that there's an application summary that's provided.

And it's intended just to give a high-level, at-a-glance view of the mainframe applications and all the data that was provided to it. So this picks up on key workflows, key almost user-story-level stuff. And then it's going to go into each one of those files that you provided and give a summary of them. So there could be hundreds, thousands of files, right? And this is for both COBOL files and JCL. Yes. Awesome. And then what else does it produce?

Because that is just the summary. That's the summary. So what else is going on here? Are we sure that we want to do it? No, I'm joking. Yeah. So you get a document for each file, right? And what's in there? So now we're getting to the program logic and functionality level. Again, there's the high-level overview at the program level.

But you're getting a lot more detail about the specific files that are provided for analysis. So this will go through, for each COBOL file, and tell me: what does this component do? How does it link into other components that I have? What is the business logic? Which is really, really good stuff for moving off of the mainframe, because that is one of the main considerations enterprises have:

how am I going to make sure that my rules run the same way they do in my mainframe system? And so it's looking at those copybook files and giving you data objects, information, etc. So it has some really good stuff, but there's also stuff,

I'll be frank, that from a Pega perspective we can kind of ignore. Like, there's some stuff in here around error handling and error logging and more application-mechanical considerations, which, if you're moving off of code and onto a platform, a lot of that functionality is handled inherently as part of that. So, nonfunctional requirements. Yeah, nonfunctional requirements. So there's some excellent stuff in here,

and then some stuff which I think we will actually be able to skim by really quickly. So, you know, when we ran this on that public sector client's application, as I mentioned, they had hundreds of PDFs, one for each of the files which we uploaded. So you saw Kerim this morning: he came in and he added, if I may, the top-level summary page from AWS Transform. So that is, you know, a couple of pages.

It's going to give you a quick analysis of each file, along with a top-level statement of what it does. And he added the video, and that's great. But if we're actually going in and starting to do this for real, we're also going to want to take a lot of the COBOL-level files, which actually house the business logic, and house the integrations and some of the considerations which we're going to need to take into account.

So that's the beauty of this sort of multiple-file analysis in Blueprint. I can take a bunch of those COBOL files. I don't have to read through them. I probably should, but you know, I won't for now, and I'll just pass it off to AI. And it's going to go query all of those files and say, hey, what are the workflow components in here? What are the data components in here? What's the logic that's in here?

It's going to get answers from all those files and then synthesize it together. So really like rationalize what the application is doing into a new coherent model which I can then go take forward. And you know, from there you guys saw the rest of this this morning, but I'll just take a pause on the live preview. So this is going to go off. It's going to build my application. It's going to start, you know, potentially infusing some best practices into here.

So you can see here, you know, I have my new case types, my new workflows, all built out based on both the video and the file-level analysis output from AWS Transform. There is immense value in just getting this. It's not even on Pega at this point, and there's so much value you can get from all these personas, all of the workflows, just having a visualized view compared to a lot of documentation. Yeah, exactly. Engaging a business user on, hey, we need to get off the mainframe:

which would you rather engage them with, this or the code? Something which they can actually relate to, the process which is being driven through. Something they could go in and change. Yep. Exactly. So, getting in there and having a workshop, rethinking some of those processes, is core to the approach here. And then you can immediately see that here. My video of course stopped. This is my account update screen in the mainframe.

So I'm able to update credit limit, cash limit, you know, current balance, first name, middle name, last name, whatever it may be. And I can see the same thing in here in my profile maintenance: I have credit limit, current balance. So you're able to show a business user immediately, this is what you could have if we continued on with this project. You can see all the same data elements were picked up, all the same workflows were picked up. Some of it was rationalized, some of it was optimized.

You went through, you introduced some new automations, and you're ready to move it forward to the cloud. So what do you call this? Do you call this a replatforming, or refactoring, or what? How do you describe this? It's a reimagining. No, I don't know. No, I think it's an apt description. Awesome. So that is what we're doing, AWS and AWS Transform and Pega together. I think some of the business benefits are pretty apparent. It gets you off of the mainframe.

You can stop maintaining those systems. You're probably paying a lot in server costs and the like to actually operate those systems as well, so this helps you. It's probably worth mentioning that there's no added cost to use AWS Transform either; it's not like you're getting charged for the use of the actual service. Oh boy. What did I do? This is why I'm not in IT. Oh, boy. Use presenter view. No, this is not what it does. Resume slideshow. Oh, boy. Pull the fire alarm. All right.

Well, yeah. So also, building off this approach doesn't just cut your cost, but sort of sets you up to be more agile in the future. Um, I think as we talked about, this is going to set you up to capture some business benefits through your modernization, not just, uh, you know, go from A to B in the cloud with that lift and shift approach. And also, it's going to set your developer ecosystem up to be more effective in the future. So you're not going to have to be as reliant on.

Ah, man, we need COBOL consultants. We need, you know, to pull in some people off the bench to help us with this project. And that's it. That's okay, that was the last slide. Okay, actually, we broke it. Too much innovation happening. Um, so with that, I think we definitely have some time for some questions, so we're happy to take them. Yeah. And please come up to the mic if you would. Thanks. Hi. Thanks for the demo.

How much time did Pega and AWS have to work together to customize those uploaded documents and create the life cycle and the cases within months? Like, how much time have we been working together? Like, what do you mean, customized?

So after you upload the documentation from the mainframe systems, the program flow, the business rules, data dictionaries, etc., the PDF files, how much time does it actually take to customize within the Pega Blueprint environment to make it appropriate to the actual workflow? Because you can't just drag and drop and put that stuff in there; you have to make some changes to the Blueprint to say, this is a decisioning workflow, this is some type of a flow within Blueprint.

How much time does that take? For this cards demo application, I actually found it to be pretty quick, and representative of the mainframe system. And we've done a couple of these. Like, we helped an insurance agency get off of a Java application recently using Blueprint, and they were like, hey, we actually don't want to do any reimagination. We want to do a lift-and-shift type approach, but we want to get it onto Pega.

And Blueprint did a really good job just picking up on what they were trying to do using the source code analysis, which had painted that picture for them, and they didn't have to make any changes, or barely any changes, to the Blueprint. It was more of a validation before moving it forward. And the other good news is you don't have to take our word for it. You can use AWS Transform, you can use Pega Blueprint; it's free.

And then you can take the CardDemo sample, go and upload the same files, see what result you get, and compare it to what those mainframe screens look like. Yeah. Okay. Thanks. You're welcome. I have two simple questions. In terms of integration, how is it going to decide which type of integrations it has to choose and build? First question. Second one: we have a lot of decision rules.

So does it create any decision rules, and how does it decide which decision rule it has to create? I didn't get that second one, sorry. The second one: declaratives and decision rules. Does it create anything with the analysis? Like, does it create the actual business logic? Yeah. So that is the next step. Right now, when you take the source code analysis, or really if you create a Blueprint in general, this is where Blueprint is at at the moment:

there's no construct for actually defining the detailed decision tables or decision trees or whatever it may be. So that's not quite in there yet. When you import something like an AWS Transform output, you will get, on each decision step, a natural-language explanation of the decision, which gets carried forward into user stories for developers when they're actually going to build the application.

But of course, business logic in Blueprint is one of our next frontiers, so we're hoping to get to that over the second half of this year. And then the integrations question. I think the question was: all right, I'm getting off the mainframe, it has integrations to other systems, how do I replicate those? That's where some of the other inputs into Blueprint become really, really helpful as well.

So if you have something like an integration document, and maybe, I doubt it with the mainframe, but maybe you have an OpenAPI specification or something like that, you can import those into Blueprint and it will generate the data objects and the data models for your integrations automatically. So if you had something outside of the copybook files for an external database? Yeah. A key part of this too is what you can take in.

So, you know, enterprises always ask in these transformations, all right, where's my data going to go? Right. It's probably in DB2 or something like that. So part of this as well is you're able to take SQL files, SQL extracts that detail your data model from a legacy database, into Blueprint. It will analyze those and build you a data object model with fields and everything to replicate what you have on your mainframe system. And then you're able to make a decision:

do I want to store that data in the future in Pega? If you do, when you deploy your application, we will stand up PostgreSQL databases on AWS on your behalf automatically. Or do you want to store it in a new cloud-native database? Maybe you want your own RDS or whatever it may be, right? So we can produce DDLs for you to then go provision your future-state cloud-native database. So there are a couple of different options. Data itself should probably be its own breakout, but definitely, I think we're doing some cool things there.
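(To make the DDL idea concrete, here is a small, purely illustrative Python sketch of applying a future-state PostgreSQL table definition to a newly provisioned database. The table, columns, and connection details are hypothetical stand-ins, not actual Blueprint output.)

```python
import psycopg2

# Hypothetical future-state PostgreSQL DDL for a cardholder account table,
# standing in for whatever definitions the analysis would actually generate
# from the legacy DB2 schema.
CARD_ACCOUNT_DDL = """
CREATE TABLE IF NOT EXISTS card_account (
    account_id      BIGINT PRIMARY KEY,
    first_name      VARCHAR(50),
    middle_name     VARCHAR(50),
    last_name       VARCHAR(50),
    credit_limit    NUMERIC(12, 2),
    current_balance NUMERIC(12, 2),
    updated_at      TIMESTAMPTZ DEFAULT now()
);
"""

# Connection details are placeholders for your own RDS/Aurora PostgreSQL instance.
conn = psycopg2.connect(
    host="example-cards-db.example.us-east-1.rds.amazonaws.com",
    dbname="cards",
    user="app_user",
    password="example-only",
)
try:
    with conn, conn.cursor() as cur:  # the connection context commits on success
        cur.execute(CARD_ACCOUNT_DDL)
finally:
    conn.close()
```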

So, mainframes have a lot of batch jobs. How does it handle or convert those in Pega? And also, since the code migration is done using AWS Transform, is there anything for data migration as well, for the modern applications? When you say batch jobs, are you referring to scheduled jobs? Batch jobs. Oh, you said batch, sorry. And they run for hours. So how do those get moved into Pega? Got it. Yeah.

So those get captured just the same. In terms of, you know, if it's just a standalone "run this report," it's probably going to result in some standalone workflow within Blueprint, whereas if it's part of an overall piece, then it would get integrated within that workflow. Yeah. And I think with batch jobs, replicating those one for one is a little tricky.

So if we were to get into the actual conversation, I think what we would do is probably analyze what batch jobs are going on right now. What is their functionality? Can they be infused into more real-time processing, real-time transactions? Or are they almost up for retirement, because it's stuff like reports and the like, which you have new approaches to enabling? So I think we would look at the functionality.

A batch job might be doing aggregations on a field, and then some other program is using that. Ideally, whenever you move to a modern language or modern platform, a lot of that work that you had to run those batch jobs for, you don't necessarily have to do anymore. Okay. And what about the data side? How do you handle data migrations from mainframe onto cloud, or is there an easier way to do that? For code migration you are using AWS Transform and then Pega Blueprint.

So how about the data side of it? It's a great question. AWS has a number of tools to help with this. One of them is our Database Migration Service. Pega is a heavy user of the Database Migration Service as they bring folks onto Pega as a Service and Pega Cloud. So that would be one kind of tool that you would use to accomplish that. Thank you. Thanks.
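(As a hedged sketch of what driving the Database Migration Service programmatically can look like, the boto3 call below creates a replication task from previously defined source and target endpoints. The ARNs, identifiers, and table mappings are placeholders, and in practice much of this is typically set up through the DMS console.)

```python
import json
import boto3

dms = boto3.client("dms")

# Placeholder ARNs for a replication instance and source/target endpoints
# that would have been created beforehand (via the console or API).
response = dms.create_replication_task(
    ReplicationTaskIdentifier="cards-db2-to-postgres",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE-EXAMPLE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET-EXAMPLE",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE-EXAMPLE",
    MigrationType="full-load-and-cdc",  # initial bulk load plus ongoing change data capture
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-card-schema",
            "object-locator": {"schema-name": "CARDS", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
print(response["ReplicationTask"]["Status"])
```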

Hi, from Aflac. I have one question regarding performance. So the application, the user interface, is sitting in Pega, and the database is sitting in AWS. How is the performance going to be? Is it identical to how it is on premise, or is it better? Compared to the mainframe environment? Mainframes, yeah. Even if you use AWS Direct Connect, it cannot handle too much traffic, right, if a lot of traffic is coming through? There's only so much it can handle.

Not that much, as far as what I have been told so far. So, assuming we're moving from a monolithic mainframe application, we're going to a distributed system, and maybe you're using Pega Cloud for the business logic, and you have some kind of database within your own environment that you've connected it to. Yeah. Potentially you're using PrivateLink, right? And so, I mean, the limitations are the ones that would come with AWS.

But I wouldn't necessarily be concerned about the limitations from a performance perspective. That's part of why you don't flip the switch and go live right the next day; you do things like load testing, integration testing. But it's not necessarily an immediate concern that we've seen come to fruition. No. And as part of this legacy transformation approach I was just describing, where you can spin up new databases in the cloud,

we've also built more native integrations to AWS RDS and Google AlloyDB and whatever it may be, which leverage the more direct-connect approaches, so it's not always going over the internet to access the database. Yeah, it stays, especially if you're using PrivateLink, right, staying on the AWS backbone. That's a great question. Thank you. Of course. We talked a lot about COBOL, but is there support for PL/I as well?

The question was, sorry, I've got to repeat it for the camera: we talked a lot about COBOL; do we support PL/I? Not yet. Not yet. Yeah, but let's talk after if you've got a use case. To use AWS Transform in our current environment, is it similar to your Blueprint? Is it easily accessible? So what you would need is an AWS account, and to log into that account, and you can launch AWS Transform from the console. So we need an AWS account?

We can't use the Pega account? Because you're already collaborating. That's correct. Yeah. And AWS Transform is secure in that you can upload the COBOL code, or whatever it may be, to a private S3 bucket, which is where the analysis actually gets driven through. So it's a little bit different than Blueprint; I wouldn't expect you to upload source code directly to Blueprint.
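(For illustration only, here is a minimal Python sketch, using boto3, of staging source files in a private, encrypted S3 bucket of the kind described above. The bucket name, prefix, and file names are hypothetical, and the exact setup AWS Transform expects may differ.)

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-transform-source-staging"  # hypothetical private staging bucket

# Create the bucket (us-east-1 shown; other regions need a LocationConstraint),
# block all public access, and enforce default server-side encryption.
s3.create_bucket(Bucket=BUCKET)
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Upload the COBOL, copybook, and JCL sources under a prefix (file names are made up).
for local_path, key in [
    ("src/ACCTUPDT.cbl", "cards-app/cobol/ACCTUPDT.cbl"),
    ("src/ACCTREC.cpy", "cards-app/copybooks/ACCTREC.cpy"),
    ("src/DAILYJOB.jcl", "cards-app/jcl/DAILYJOB.jcl"),
]:
    s3.upload_file(Filename=local_path, Bucket=BUCKET, Key=key)
```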

But will it be available soon, or is it? Oh, we're good to go. Good to go. Oh yeah. Yeah, I guess that's a good point, something I should get out there: if you are interested in trying this out, we are interested in trying this out with you, so we would love to work with you. We could do something quick. What we've done with other enterprises is a two-week engagement, where we come in, we pick a small mainframe application, whatever it may be, and run it through AWS Transform. That takes a couple of hours. Run it through Blueprint, take a look at the output, see how it did.

So that's something we could do very quickly, very light to sort of get a taste of the methodology. And with that, have a great rest of your PegaWorld. Thanks for joining us.

