Generative AI technology + low-code architecture will unlock unprecedented value and innovation
Over the past few weeks, we’ve had a lot of fun showing our clients previews of the 20+ generative AI-powered boosters we’re introducing in Pega Infinity ’23. The business and IT leaders I’ve spoken with are excited about the ways in which we’ve integrated these powerful generative AI models into Pega’s low-code App Studio, Customer Decision Hub, Customer Service, and Sales Automation solutions, all while addressing their concerns about privacy and control.
How does generative AI work, you ask?
In our client conversations we hear a number of common questions:
They ask, “How do we get value from generative AI?” So we show them all of the generative AI tools we’re delivering with Pega Infinity ’23 to accelerate application development, help marketers build more effective engagement programs, and make it easier for employees to sell to and service their customers.
They ask, “Is this safe?” So we show how we’ve included rules, audits, and processes that keep humans in the loop and in control, and how we detect and replace personally identifiable information (PII) before it can be sent to external services.
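To make that idea concrete, here is a minimal sketch of the pattern in Python. This is purely illustrative, not Pega’s implementation – a production system would rely on a vetted PII-detection service rather than hand-rolled regular expressions:

```python
import re

# Illustrative patterns only -- real deployments use far more robust
# detection than these three regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before the prompt
    is sent to an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt
```

The key point is that the replacement happens before anything leaves your environment, so the external model only ever sees placeholder tokens.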
They ask, “Can I choose which generative AI models I use?” So we show them the API abstraction layer that lets clients bring their own license keys and swap in the models of their choice, either from public cloud or their own private services.
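The shape of such an abstraction layer can be sketched in a few lines of Python. The names here (`GenAIProvider`, `complete`) are hypothetical stand-ins, not Pega’s actual API – the point is only that application code depends on an interface, so providers and keys can be swapped freely:

```python
from typing import Protocol

class GenAIProvider(Protocol):
    """Minimal provider interface; swap implementations without touching callers."""
    def complete(self, prompt: str) -> str: ...

class PublicCloudProvider:
    def __init__(self, api_key: str, model: str):
        self.api_key = api_key  # the client brings their own license key
        self.model = model

    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor's API here using self.api_key.
        return f"[{self.model}] response to: {prompt}"

def ask(provider: GenAIProvider, prompt: str) -> str:
    # Application code sees only the abstraction, never a specific vendor.
    return provider.complete(prompt)
```

A private, self-hosted model would simply be another class implementing `complete`, and nothing upstream would change.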
And then they ask, “How did you do so much so quickly?”
A model-driven platform and our Build for Change heritage
For one thing, our AI and engineering labs have been studying GPT and large language models for several years, so we had a head start when ChatGPT unleashed a flood of executives googling “What is generative AI?”
But more than that, we were able to move quickly and effectively with generative AI technology because of how Pega’s low-code platform is designed. Our 40 years of experience automating workflows and customer engagement have led us to an architecture that is perfectly suited to tap into the disruptive power of generative AI, both today and well into the future.
Long before low code entered the buzzword hall of fame, Pega was built as a 100% model-driven platform. Pega captures everything you need to build and run an enterprise-scale workflow or decisioning application, not in code or a proprietary scripting language but in a model – in metadata – that holds the information necessary to execute a workflow application. Our commitment to this approach led us to build a complete model, capable of capturing all the information – ALL the information – needed to describe, run, and maintain a workflow application. And because our global clients run massive, mission-critical apps on Pega, our low-code architecture was built to ensure that any workflow defined in Pega can run at the massive scale, global reach, and trusted security that our clients demand.
The model reflects our Center-out™ business architecture. We know that a great application starts in the center, with the decisions, workflows, and cases that deliver customer outcomes, and then connects out to the channels where your customers and users interact and the systems where your data resides. Pega’s metadata model is designed to capture the information needed to fill in the key pieces of a Center-out business architecture:
- What are the workflows you need in your application?
- What are the stages and steps that your workflow must go through?
- What data do you need to collect throughout the workflow?
- What personas or people interact with or participate in the process?
- What other systems do you need to interact with to get the data you need?
- And countless more…
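If you squint, the answers to those questions form a data structure. A drastically simplified sketch in Python – hypothetical names, and nowhere near the richness of Pega’s actual metadata model – might look like this:

```python
from dataclasses import dataclass, field

# Hypothetical, heavily simplified stand-in for a metadata model.
@dataclass
class Step:
    name: str

@dataclass
class Stage:
    name: str
    steps: list[Step] = field(default_factory=list)

@dataclass
class Workflow:
    name: str
    stages: list[Stage] = field(default_factory=list)   # stages and their steps
    data_fields: list[str] = field(default_factory=list)  # data collected along the way
    personas: list[str] = field(default_factory=list)     # who participates
    integrations: list[str] = field(default_factory=list) # systems supplying data
```

Each question in the list above maps to a slot in the model, which is exactly what makes the model fillable – by a business expert or, as we’ll see, by a generative AI service.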
Idea to enterprise app without writing code
Having a well-structured and functionally complete metadata model allows Pega clients to build robust applications without writing code. It allows business experts – who know the answers to those questions we asked – to collaborate with IT directly in a shared design environment. And because our software writes our software, we’ve been able to keep evolving our technology architecture while bringing our clients’ existing applications forward, simply by generating different runtime components from the same metadata model.
Our model allows us to incorporate generative AI in ways other platforms can’t. Because Pega’s model is complete, well-structured, and tested over many years of capturing information from business experts, all we had to do was prompt generative AI to answer the same questions we allow business and IT experts to answer when they configure our system. We can ask a generative AI service, “What are the primary stages/milestones and multiple steps for a home loan application process?” And when it gives us the answer, we just map that data into our metadata model of the stages and steps for a process and – voila! – we just got generative AI to help us build a workflow, all in minutes or hours, not days or weeks.
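The mechanics of that mapping step can be sketched in Python. Everything here is an assumption for illustration – the JSON shape, the field names, and the `map_to_metadata` helper are invented, and the “response” is hard-coded where a real system would call a generative AI service:

```python
import json

# A hypothetical generative AI answer to "What are the primary stages and
# steps for a home loan application process?", requested as structured JSON.
llm_response = json.dumps({
    "stages": [
        {"name": "Application", "steps": ["Collect applicant info", "Verify income"]},
        {"name": "Underwriting", "steps": ["Assess risk", "Approve or decline"]},
        {"name": "Closing", "steps": ["Prepare documents", "Fund loan"]},
    ]
})

def map_to_metadata(response: str) -> dict:
    """Map the model's answer into a (simplified) stages-and-steps record."""
    answer = json.loads(response)
    return {
        "workflow": "Home loan application",
        "stages": [
            {"name": s["name"], "steps": list(s["steps"])}
            for s in answer["stages"]
        ],
    }
```

Once the answer lands in the metadata model, it is indistinguishable from stages and steps a low-code developer typed in by hand – which is why the result is immediately runnable and maintainable. The same move works for data fields, personas, integrations, and test data.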
We can do the same for the data fields, system integrations, personas, and test data. We can ask generative AI to structure a report or map data from a REST (representational state transfer) service or define a new 1:1 marketing treatment – and all we need to do is put the response into the appropriate metadata model and that information becomes a runnable component in our system.
"We get the speed of generative AI, the security of keeping a human hand firmly on the wheel, and the enterprise scale Pega has been proven to deliver."
Because we are mapping the response from generative AI into the same metadata model a Pega low-code developer is populating, the apps we have started using generative AI in can still be changed, configured, and maintained using our low-code tools. And because that metadata model was built to assemble all of the pieces needed to run a decisioning and workflow application at enterprise scale, the resulting app is more than just a toy – it’s a rich and robust enterprise application. We get the speed of generative AI, the security of keeping a human hand firmly on the wheel, and the enterprise scale Pega has been proven to deliver.
We are still at the start of this journey. There isn’t any part of building a low-code application that we couldn’t accelerate by asking generative AI a question and mapping the response back into the model. We’ll soon be able to automate far more complicated steps of the development process – all while staying within the low-code environment. And all of this is possible because of our architecture. Because we got the model right.
Learn more about Pega’s GenAI capabilities here or track me down at PegaWorld iNspire and ask me about how we use our own low-code tools to build our low-code platform. The model is literally defined as metadata within the model itself. That means we can easily expand the model to add new concepts – like “Connect generative AI” – simply by configuring metadata. But I’ll save that one for another time.