
How have enterprise applications changed over the past 40 years?

Matt Healy

40 years of building for change
Founded in Cambridge, MA by Alan Trefler in 1983, Pega is coming up on its 40th birthday. I’m just 31 – and feel ancient. My knees are starting to creak, I yell at the TV during Celtics games, and I rarely stay up past 10 PM. And yet, before I was born, the world’s largest organizations were powering mission-critical processes with Pega.

In fact, Pega’s first two clients, dating back to 1983 – Bank of America and Citibank – continue to partner with Pega to streamline customer service, transform business processes, and personalize customer experiences.

Lots has changed. In the 80s and 90s, large banks were using Pega to automate microform management for check processing and mainframes were the application infrastructure of choice. Over the years, Pega’s architecture has continually evolved – to provide enterprises a foundation to adapt these types of mission-critical workflows without disruption.

Along the way, the software and hardware that comprise a “modern” system have continually shifted. But organizations’ expectations for the platform technology that powers their enterprise-grade applications have stayed consistent:

  • Flexibility: Easily plug into a broad ecosystem of software and hardware
  • Agility: Provide a foundation that enables rapid innovation
  • Scale & performance: Seamlessly serve millions of customers

I wanted to learn more about how we got here: How does a technology organization like Pega approach architectural evolution? What’s changed over the years? And more importantly, what’s stayed the same?

There’s no better person to explore those questions with than Mike Pyle, Pega’s Chief Technology Strategist.

Flexibility from the beginning

Matt:
Thanks for doing this, Mike. So starting off, when did you join Pega?

Mike:
Oh wow, we’re starting there, hah. August 1985.

Matt:
And, I know that’s a few years ago, but what did Pega look like from an architectural standpoint at the time?

Mike:
What was unique about the architecture from the start was that Pega’s platform was infrastructure agnostic. It’s not dissimilar to today, where Pega enables enterprises with Cloud Choice: the ability to run where they want, both through cloud managed services (which now provide options for AWS and Google Cloud Platform) and through client-managed environments.

In the 80s, there were two major enterprise computing hardware vendors: IBM and Digital Equipment Corporation (DEC). Before Pega, both Alan and I had worked at software vendors that only supported DEC mainframes, even though IBM had a larger share of the market. And what we learned was that, really, enterprises need flexible platforms that can run on the infrastructure they chose, very purposefully, to optimize for overhead, cost, compliance, data, and more.

“…enterprises need flexible platforms that can run on the infrastructure they chose, very purposefully, to optimize for overhead, cost, compliance, data, and more.”

Mike:
So from the beginning, Pega’s platform was able to run on both IBM and DEC mainframes. And this taught us a lot about building an infrastructure-agnostic platform.

The architecture of an IBM mainframe was centered around batch processing. You’d have lots of users interacting with the system, but what was actually going on was that each of them ran a short blast of processing, got completely removed from the system, and then somebody else ran their own short pass, and so on. Users went back and forth, back and forth, back and forth.

The architecture of the DEC mainframe, by contrast, was very different: very stateful. You had terminals connected, and those terminals would interact while staying resident in memory the whole time, so context was maintained.

Trying to marry those two architectures was actually quite challenging, and it taught us a lot about the discipline of separating yourself from the platform.

The specific facilities of the hardware were way too much for business users and bogged down IT engineers, so we abstracted the complexity of platform support away and provided a layer that was largely agnostic to it.
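
To make that idea a bit more concrete, here is a minimal sketch of the kind of platform-abstraction layer Mike describes. It is my own illustration with hypothetical names, not Pega’s actual code: application logic programs against one interface, and adapters hide whether the underlying platform is batch-oriented or stateful.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not Pega code): application logic talks to one interface,
// while adapters hide whether the underlying platform is batch-oriented or stateful.
interface SessionPlatform {
    Map<String, String> loadContext(String userId);                 // fetch working context
    void saveContext(String userId, Map<String, String> context);   // persist it
}

// Batch-style adapter: nothing stays resident; every interaction is a short
// blast of processing against persisted state, as on the IBM mainframes.
class BatchStylePlatform implements SessionPlatform {
    private final Map<String, Map<String, String>> store = new HashMap<>(); // stands in for external storage

    public Map<String, String> loadContext(String userId) {
        return new HashMap<>(store.getOrDefault(userId, new HashMap<>()));
    }
    public void saveContext(String userId, Map<String, String> context) {
        store.put(userId, new HashMap<>(context));
    }
}

// Stateful adapter: context remains resident in memory between interactions,
// closer to the DEC model where connected terminals maintained context.
class StatefulPlatform implements SessionPlatform {
    private final Map<String, Map<String, String>> resident = new HashMap<>();

    public Map<String, String> loadContext(String userId) {
        return resident.computeIfAbsent(userId, id -> new HashMap<>());
    }
    public void saveContext(String userId, Map<String, String> context) {
        resident.put(userId, context);
    }
}
```

The discipline is the point: the application code never needs to know which adapter it is running on.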

Providing a foundation for innovation

Matt:
What did Pega even offer to clients back then? Was it a platform like today or something different?

Mike:
From the start, Pega has been a low-code platform.

In the very early days, our platform was mostly used by banks to automate payment investigation workflows. We were selling into large banks, which had established workflows and systems. And, justifiably, they wanted things done their way.

And we had experience with what it was like to engineer extremely custom solutions – if you’re coding each and every requested customization into your solution, at some point you won’t be able to upgrade or support it anymore. You end up losing control of the versions.

So we wanted to make it so that you could personalize the system a lot without having to change the code at all.
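
As a toy illustration of that idea (my own framing with hypothetical names, not how Pega implements it): customizations live as declarative rules held as data, and an engine interprets them at runtime, so the shipped code never has to change.

```java
import java.util.List;
import java.util.Map;

// Toy illustration (hypothetical names, not Pega internals): customizations are
// declarative rule records interpreted at runtime, so behavior changes without code changes.
public class RuleDrivenRouting {

    // A rule a business user could maintain: "if this field has this value,
    // route the work item to this queue."
    record RoutingRule(String field, String expectedValue, String targetQueue) {}

    private final List<RoutingRule> rules;

    public RuleDrivenRouting(List<RoutingRule> rules) { this.rules = rules; }

    // Evaluate the configured rules against a work item; adding a customer-specific
    // behavior means adding a rule record, not editing and re-releasing code.
    public String route(Map<String, String> workItem) {
        for (RoutingRule rule : rules) {
            if (rule.expectedValue().equals(workItem.get(rule.field()))) {
                return rule.targetQueue();
            }
        }
        return "default-queue";
    }

    public static void main(String[] args) {
        RuleDrivenRouting router = new RuleDrivenRouting(List.of(
                new RoutingRule("type", "payment-investigation", "payments-team"),
                new RoutingRule("amount-band", "high", "senior-review")));
        System.out.println(router.route(Map.of("type", "payment-investigation"))); // payments-team
    }
}
```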

Matt:
How did Pega go from automating payment investigations to a more open low-code platform for AI-powered decisioning and workflow automation?

Mike:
Eventually we looked and said, well, you know, payment investigations aren't that far away from check exceptions, right? Or credit card disputes? Or any workflow for that matter.

There are common concepts, steps, and attributes to almost all work – from the way it is received, routed, researched, responded to, resolved, and reported on.

So we decided to take core workflow concepts from our applications and turn them into a common foundation. And so, after about a six-month engineering project, the Pega Platform™ was born. And since that time our mission has been to provide a powerful low-code platform for AI-powered decisioning and workflow automation – to free the world's leading organizations to innovate and adapt to change.
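
To sketch what “common concepts to almost all work” can look like in code (my own simplified framing, not the actual Pega Platform model), a generic work item might carry the same lifecycle whether it represents a payment investigation or a credit card dispute:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified sketch (not the actual Pega Platform model): one generic work item
// shares a single lifecycle regardless of business domain.
public class WorkItem {

    // The common stages Mike lists: received, routed, researched, responded to,
    // resolved; reporting would read the recorded transitions after the fact.
    enum Stage { RECEIVED, ROUTED, RESEARCHED, RESPONDED, RESOLVED }

    private final String caseType;                              // e.g. "payment-investigation", "card-dispute"
    private Stage stage = Stage.RECEIVED;
    private final Deque<String> history = new ArrayDeque<>();   // audit trail for reporting

    public WorkItem(String caseType) {
        this.caseType = caseType;
        history.add(Stage.RECEIVED.name());
    }

    // Advance through the shared lifecycle; domain-specific behavior hangs off
    // these transitions rather than redefining the lifecycle itself.
    public void advanceTo(Stage next) {
        if (next.ordinal() != stage.ordinal() + 1) {
            throw new IllegalStateException("Cannot move from " + stage + " to " + next);
        }
        stage = next;
        history.add(next.name());
    }

    public String summary() {
        return caseType + ": " + String.join(" -> ", history);
    }

    public static void main(String[] args) {
        WorkItem dispute = new WorkItem("card-dispute");
        dispute.advanceTo(Stage.ROUTED);
        dispute.advanceTo(Stage.RESEARCHED);
        System.out.println(dispute.summary()); // card-dispute: RECEIVED -> ROUTED -> RESEARCHED
    }
}
```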

“…we wanted to make it so that you could personalize the system a lot without having to change the code at all.”

Continuous platform evolution

Matt:
So by 1987, Pega had shifted from business applications to a low-code platform for workflow automation that could run on DEC and IBM mainframes… Where were the frontiers of architectural evolution from there?

Mike:
From there, our focus has been twofold: expand and enhance the capabilities of the platform to deliver transformational value to business processes, while continually adapting to shifts in infrastructure and technology so that we can support the advanced scale and performance needs of the world’s largest enterprises.

The IBM and DEC mainframes and their languages were starting to get long in the tooth. Emerging at the same time were client-server approaches that combined new hardware and operating systems (Sun Microsystems, new Linux-based IBM machines, Unix-based HP machines, Windows servers, etc.) with software advancements like more performant, hardware-independent relational databases and application servers.

So we took the opportunity to completely re-architect the system. However, our goal was to make it completely backward compatible, so applications built on the Pega Platform when it ran on old mainframe systems (backed by technologies like VAX and CICS) could be seamlessly migrated to the Pega Platform running in a (then) more modern client-server architecture on new hardware and operating systems (backed by C++).

So we not only rearchitected the Pega Platform, but did so in a way that it could convert applications from old to new seamlessly – so enterprises could effortlessly bring forward their workflows, processes, and customer experiences to new technology… which from an engineering perspective was not a small task – and required us to get down to the byte level.

But at that point we moved clients from mainframes to running applications in a client-server, three-tier architecture… which opened up a lot more flexibility in hardware, databases, and operating systems.

From there, the next iteration was all about acknowledging the internet and moving to web apps.

From there, we transitioned the core engine to C++ and then to Java, and transitioned infrastructure to the cloud, while maintaining the scale, performance, flexibility, and backward compatibility our clients expected.

“So we not only rearchitected the Pega Platform, but did so in a way that it could convert applications from old to new seamlessly – so enterprises could effortlessly bring forward their workflows, processes, and customer experiences to new technology.”

Which brings us to… cloud-native apps with Project fnx

Matt:
Yeah, I appreciate how these core tenets for operating enterprise-grade applications have run through each of Pega’s evolutions. That brings us to the latest one, Project fnx, which has enabled clients to run on a modern, powerful cloud architecture for enterprise-grade applications. Can you tell me what the goals were and how we approached it?

Mike:
So long before Project fnx we had been operating in the cloud via Pega Cloud, and supporting clients running their own environments essentially anywhere… but as cloud computing evolved, we knew there was a lot more value we could be passing on to our clients by adopting more cloud-native approaches and technology.

So at a high level, Project fnx was really all about adopting new cloud-native architecture in Pega Infinity™ – enabling greater performance, scalability, agility, and flexibility for our clients’ mission-critical enterprise applications.

Matt:
How do you get going with that sort of project across a 1,000+ person engineering team?

Mike:
The answer is you don’t start with 1,000 people. We had a small group of architects and senior engineers come together to lay out the principles of the project and a high-level idea of the target architecture, and then we worked iteratively with small groups of teams to evolve different aspects of the architecture.

We knew we wanted to incorporate microservices backed by leading-edge cloud technologies for increased agility, innovation, and resiliency.

My personal goal was to get new features out to our clients more frequently, and the way you achieve that is through small releases of independently developed, tested, and deployed components and services.

This also gave us an opportunity to take another look at the technology we use in our platform. A few years ago, NoSQL databases were kinda Mickey Mouse: storing large, dynamic objects at scale just wasn’t very performant at the time. So we engineered our own approach to large, dynamic object storage using encoded data in relational databases (we call it the BLOB)… But nowadays NoSQL databases have come a long way, so we have the opportunity to make them more core to the architecture for real-time search, reporting, lookup, and more.
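
For readers unfamiliar with that pattern, here is an illustrative sketch of the “encoded object in a relational column” approach Mike mentions; the table and column names are hypothetical, not Pega’s schema. A document-oriented database would instead index the dynamic fields natively, which is what makes NoSQL attractive for real-time search and reporting.

```java
import java.io.ByteArrayOutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.zip.Deflater;

// Illustrative sketch (hypothetical table/column names, not Pega's schema): the wide,
// dynamic object is serialized and compressed into a single BLOB column, while a few
// frequently queried fields are exposed as ordinary indexed columns.
public class BlobCaseStore {

    public void save(Connection db, String caseId, String status, byte[] serializedCase)
            throws SQLException {
        byte[] encoded = compress(serializedCase); // the dynamic shape stays opaque to the RDBMS

        String sql = "INSERT INTO case_data (case_id, status, case_blob) VALUES (?, ?, ?)";
        try (PreparedStatement stmt = db.prepareStatement(sql)) {
            stmt.setString(1, caseId);  // indexed column for direct lookup
            stmt.setString(2, status);  // exposed column for simple reporting
            stmt.setBytes(3, encoded);  // everything else lives inside the BLOB
            stmt.executeUpdate();
        }
    }

    private static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(input.length);
        byte[] buffer = new byte[4096];
        while (!deflater.finished()) {
            out.write(buffer, 0, deflater.deflate(buffer));
        }
        deflater.end();
        return out.toByteArray();
    }
}
```

The trade-off is that anything locked inside the encoded column is invisible to SQL, which is why search, reporting, and lookup benefit once a document store can index those fields directly.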

We knew we wanted to incorporate cloud-native technologies like Kubernetes and Docker for deployment repeatability and elastic auto-scaling. Along with these, we evolved our cloud operations capabilities to take advantage of these approaches and deliver cloud management self-service and automation to clients.

And finally, we knew we wanted more stateless applications and a new, modern front-end architecture that incorporates technologies like React and web components for flexible, consistent, responsive end-user experiences.
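
On the stateless piece, here is a generic back-end sketch (my assumptions, not Pega’s implementation) of the property that matters: the handler keeps no per-user state in process memory, so any replica that the platform scales out can serve any request.

```java
import java.util.HashMap;
import java.util.Map;

// Generic sketch (not Pega's implementation) of a stateless request handler: no
// per-user state lives in this object, so any replica behind a load balancer
// (or scaled out by Kubernetes) can serve any request.
public class StatelessWorkHandler {

    // Stands in for an external store (database, cache, etc.) shared by all replicas.
    public interface ContextStore {
        Map<String, String> load(String caseId);
        void save(String caseId, Map<String, String> context);
    }

    private final ContextStore store;

    public StatelessWorkHandler(ContextStore store) { this.store = store; }

    // Each request rebuilds what it needs from the shared store and writes the
    // result back; nothing survives in this handler between calls.
    public String handle(String caseId, String action) {
        Map<String, String> context = new HashMap<>(store.load(caseId));
        context.put("lastAction", action);
        store.save(caseId, context);
        return "case " + caseId + " updated by action '" + action + "'";
    }
}
```

That property is what lets elastic auto-scaling and rolling releases pay off without disrupting in-flight work.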

That let us get those goals and principles out across the engineering team and work iteratively to put them into action.

Matt:
I was on the product engineering team at the time, and I remember how much this energized the culture. Everyone was stoked about the opportunity to shape a cutting-edge platform and work with the latest and greatest technologies.

Mike:
Yes, absolutely. And because we had adopted a scaled agile organizational approach and effective DevOps practices a decade prior, we were pretty seamlessly able to put this into motion as our teams were largely already structured around what would become microservices.

“Project fnx was really all about adopting new cloud-native architecture in Pega Infinity – enabling greater performance, scalability, agility, and flexibility for our clients’ mission-critical enterprise applications.”

Matt:
Where do we go from here Mike? What are the current frontiers of innovation?

Mike:
It’s funny. Our focus continues to be what it has always been: expand and enhance the capabilities of our low-code platform for AI-powered decisioning and workflow automation to deliver transformational value to business processes, while continually adapting to shifts in infrastructure and technology so that we can support the advanced scale and performance needs of the world’s largest enterprises.

So we’ll continue evaluating new technology, approaches, and architecture which will help along that journey.

Join us at PegaWorld iNspire this June 11–13 at the MGM Grand in Las Vegas to see the next evolutions in enterprise applications. And learn more about Project fnx and how it has enabled clients to run on a modern, powerful cloud architecture for enterprise-grade applications.

Tags

Topic: As-a-Service Topic: Cloud

About the author

As a Senior Product Marketing Manager for the Pega Platform™, Matt Healy helps the world’s biggest brands build, automate, and engage at scale with our best-in-class, unified, low-code platform.
