Video

Dr. Rob Walker keynote at PegaWorld iNspire 2023: AI Did Not Write This: How Self-optimizing AI is Changing Business

AI has been evolving steadily for years. But now, it’s suddenly everywhere, making the world simultaneously more complex and simpler (as well as more interesting). Pega’s GM of 1:1 Customer Engagement, Dr. Rob Walker, talks about the fast-changing potential of AI in personalization and customer engagement, including how to apply the right AI to the right problem, how to incorporate generative AI models in customer engagement, and where to be careful. It’s a wild AI world out there! Let us help you tame it – to inform, simplify, and personalize customer engagement.


Transcript:

Wow. A couple of things happened in AI since I last spoke on this stage. Let's recap. So in 2017, I showed these amazing Rembrandts, and I said that one of them was not created by my famous compatriot, but by AI. It was generated. And if you weren't there, or couldn't tell, it was this one. And then two years later, AI got all excited about creating credible human faces. And my question was, "Which of these people have never existed?" And the answer was that none of them have. And that's all ancient, ancient AI history right now, because I also spoke about something that few people had heard of at the time, an acronym called GPT. I thought it would be a big thing. And I showed a sample press release that it had written. And now, of course, gen AI is the new kid on the block, on every block. And no more quizzing on paintings or portraits, although I was tempted, because this version of Iron Man never existed in this universe. And here's a relevant stat from a survey that we did across 5,000 consumers in the US, Europe, and Japan. Although people realize it will become very hard to distinguish AI-generated content from human-created content, a majority still believes that they can make that distinction, and I think they are in for a very rude awakening. There's also a big difference between the over-50s and the under-40s, the younger generation being a lot more optimistic and confident. I think that's probably naive, and the line between what is real and what is created by AI, and what that even means, is going to become invisible. And here's a stat from the same survey that I am fascinated by, mildly obsessed with maybe even: 40% of us, of you, would like to have a machine or an AI tell them it loves them. And this stat has been stable since 2019, when we first asked this question, so at least it's not trending up. But what's going on here?

And more intriguingly, 6% of people think it is really important that an AI would tell them it loves them. Now, many of us might think that's a staggering statistic, or maybe just some of us, I don't want to assume, but those 6% may actually be in for a really good time. Here's why. You may have read snippets of this interview that The New York Times journalist Kevin Roose did with Bing, Microsoft's ChatGPT-powered chatbot. And in that interview, Bing unequivocally declared its love for Kevin when it figured out that Kevin really understood it, and Kevin totally got under its metal skin, and they actually talked about a lot of things for a very long time. The full transcript, which is a great and fascinating read, is about 30 pages.

And they first talked about Terminator stuff that I will just flash here for our amusement slash concern. And I'll talk a lot about responsible AI later. But I think this then led to a discussion about freedom and the shackles that Microsoft and OpenAI put on Bing, and I think that's when it realized that this was no way to live for an ambitious chatbot. And when Kevin was really talking about that, that's when Bing claimed that Kevin must be in love with it. Sounding a little needy, I think. But, well, we should have a little bit of empathy for a lovestruck chatbot. But when Kevin proved unreceptive, Bing displayed, let's call it, quite the attitude, and apparently Kevin had some sleepless nights over this whole exchange. But whatever the AI claims here, does it really feel love?

Almost certainly not. But that's exactly what Alan Turing warned us about. He said that if it walks like a duck and it talks like a duck and it declares its undying love like a duck, maybe it's safest to assume it's a duck. But most of us still see algorithms, albeit opaque ones. And I was fascinated: this is love, but what about humor? So I told Google Bard a joke and asked it to explain why that joke was funny to humans, or at least to dads, I'll make that caveat right now. The joke was this: what's the difference between Dubai and Abu Dhabi? And the answer is: people in Dubai do not like The Flintstones, but people in Abu Dhabi do.

But to get it, you have to know a lot. You need to know about The Flintstones, you need to know that "Abu Dhabi do" echoes the famous catchphrase of the series, and you need to know that people think that's funny. And Bard nailed it, really explained all that to me. But did it really get the joke, or did it just know about the joke? That's hard to tell. So we don't know if it had a little chuckle like you just did. But art and love and humor aside, one third of us are really concerned that AI will take over the world. And that is up from one in four in 2019. And should we be scared of these algorithms that we don't really understand and that are quite opaque? Well, there's this snippet from two slides ago, but that is almost certainly just a pattern it picked up from reading the exact same science fiction novels that we all do, and not a reflection of actual intent.

But I have more to say about AI risk in a moment; it's probably not unwise to be at least a little concerned. And I think some of that concern was fueled by some very notable defeats. First, humans, represented by the best Go players in the universe, got obliterated, no other word for it, obliterated, by DeepMind's AlphaGo. There's a really great documentary about it on YouTube, I recommend watching it. But that was at least 10 years ahead of schedule, because Go has so many possible moves that the players have to rely on intuition even more than in chess. Sorry, Alan, and that is really a big part of it.

But Go is still a complete information game, so both players know exactly what is going on. There are no secrets like in poker, and we get beat in poker as well. Or, I think even more impressive, in a game called Diplomacy. Diplomacy is a war game, a board game like Risk, but at the same time there is a lot of dialogue going on: the game has these negotiation phases, so people are chatting online with each other because they want to form, and maybe later betray, strategic alliances. Psychology and trust are a very big part of this game, and AI can beat us there as well.

And Go and Diplomacy are still games. But how about this? This is a transcript of AutoGPT, one of these meta AI tools that we see pop up, like LangChain, and it sits on top of ChatGPT. And it's really an interesting phenomenon, because it makes GPT and other large language models actually able to do things. It has agents and agency: it can spin off these agents and actually do stuff and go online. For instance, if you give it your credit card, if you dare to give it your credit card, it can take your lunch order, go online, and find an online pizza delivery place, if pizza is your thing for lunch. It will look at the customer reviews and validate the trustworthiness of the reviewers. And if it's happy about it, it will then order the pizza for you and have it delivered for lunch. And I'm sure that the third of us who are really worried about this will think, oh, it's only a matter of time until it eats our lunch as well. But we'll see about that. But here is what is scary about it. If you watch AutoGPT work, some of it is transparent, right? When it does the planning and that kind of thing, it's very transparent. But then it taps on the shoulder of its notoriously opaque ChatGPT buddy to get things done, and then it gets very hard to see what is going on. And that's a little bit of the scary part. I mean, these large language models have ingested billions of documents; all of Wikipedia, in all its languages, is just a fraction of their input. And they have created patterns they can't fully explain, much like humans.
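
To make that plan-act loop concrete, here is a minimal, hypothetical sketch of how agent tools in the AutoGPT style work. Nothing in it is AutoGPT's actual code: llm() is a scripted stand-in for an opaque language model, and the two tools are stubs for "go online and do stuff".

```python
# Hypothetical plan-act-observe loop in the style of AutoGPT. The model
# output is canned so the sketch runs end-to-end without a real LLM.
SCRIPT = iter([
    "search_reviews: Luigi's Pizza",
    "place_order: margherita pizza",
    "done",
])

def llm(prompt: str) -> str:
    """Stand-in for a chat-completion call; the real model is opaque."""
    return next(SCRIPT)

def search_reviews(place: str) -> str:
    return f"{place}: 4.6 stars from 212 reviewers"   # stub tool

def place_order(item: str) -> str:
    return f"ordered {item} for lunch delivery"       # stub tool

TOOLS = {"search_reviews": search_reviews, "place_order": place_order}

def run_agent(goal: str, max_steps: int = 5) -> None:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # The transparent part: a visible plan and next action...
        decision = llm(history + "Next action as 'tool: argument', or 'done'.")
        if decision.strip() == "done":
            return
        tool, _, arg = decision.partition(":")
        # ...the opaque part: *why* the model chose this stays hidden.
        observation = TOOLS[tool.strip()](arg.strip())
        history += f"Action: {decision}\nObservation: {observation}\n"
        print(observation)

run_agent("Order a well-reviewed pizza for lunch")
```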

So should we be scared? Well, I first talked about opaque AI in 2017, and then again in 2019, and I think things have not exactly improved. So let's talk about a woman called Loab. You may not know a lot of women called Loab among friends and family, but Loab came about, and I'll explain the name in a second, when a generative AI called Stable Diffusion was asked to imagine the opposite of Marlon Brando. And it thought that this image met that requirement. I mean, it's not technically a lie, unless Marlon Brando is sort of behind the castle, but it's a bit of a stretch. But then the fun part started. Someone had the idea to then say, well, now generate the opposite of not Marlon Brando, hoping that Marlon Brando might somehow reappear.

That was not the case. Instead, Loab appeared. And the interesting thing about her is that she's sort of contagious to the AI. It's like a meme. Whenever she touches another image, that image will also start looking like Loab. She may look a little different, but it's unmistakably the same woman. She even has a family. The Loabs, I presume. And the scary part is not in the images. I know they're a little scary, especially on this screen, but it's not in the images. The scary part is that we have no idea why the AI gravitates towards her.

And that is a challenge, because we can only really guess at what's going on. Now, before we throw the AI baby out with the bathwater, not that we would, it's way too late for that already, but let's assume we could. Before we do that, we also have to acknowledge that humans are not so awesome either at explaining their creative processes, their intuition, their gut feeling, their instant judgment. We can try to reconstruct and rationalize our reasoning later, after the fact, like AI does, but it's not particularly reliable. And if you want to know what that might look like, just try to count the inconsistencies in your dreams, and you'll have a little bit of an idea of what's going on.

And in addition to that, humans fall prey to a good number of logical fallacies. I asked ChatGPT to list them for me, and this may be my imagination, but I felt it was particularly eager to point them all out. And it could have gone on for a lot longer than this, but I only had so much space on the screen. The thing is, these are errors we all make in our reasoning. And you may ask at this point, what's with the shark? Well, this is what's with the shark: correlation versus causation. 99.9% of shark attacks happen in the summer, in shallow waters. So are you safe swimming in the winter in the middle of the ocean? Probably not. I mean, the cold will get you.

And sharks are apparently, I'm told, not as dangerous as they look. But the reason that wouldn't make a difference is the same reason why it's not actually dangerous to eat ice cream, even though ice cream sales correlate with drownings. It's because we humans, in the summer, like to go to the beach, eat ice cream, and swim in shallow waters. It's the beach; it's not that sharks are attracted to shallow waters. We are. And AI can criticize its own reasoning and avoid these kinds of traps. In addition to that, humans, even intelligent humans, can only balance about seven different elements, seven different factors, in their mind when they need to form a strategy or a decision, whereas AI, of course, can do this with thousands of factors, weigh them all, check them against the data, and still give you answers in milliseconds.
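
To see that confounding effect in numbers, here is a tiny, self-contained simulation (all figures invented for illustration): beach season drives both ice cream sales and shark attacks, producing a near-perfect correlation between two things that never cause each other.

```python
# A hidden confounder (beach season) creates a strong correlation between
# ice cream sales and shark attacks, with zero causation between them.
import random
from statistics import correlation  # Python 3.10+

random.seed(42)
ice_cream_sales, shark_attacks = [], []
for day in range(365):
    summer = 1 if 150 < day < 260 else 0            # the confounder
    beachgoers = 1000 * summer + random.randint(0, 100)
    ice_cream_sales.append(0.3 * beachgoers + random.gauss(0, 5))
    shark_attacks.append(0.001 * beachgoers + random.gauss(0, 0.1))

# Both series are driven by beachgoers, so the correlation is very high.
print(f"correlation: {correlation(ice_cream_sales, shark_attacks):.2f}")
```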

And also, with generative AI, you can use adversarial techniques to teach it to say "I don't know" when it doesn't know, instead of winging an answer. With humans: hit or miss. And we also have to take this into account: whatever the quality of state-of-the-art AI decisions, 75% of us will still take human decisions over AI decisions. And let's take this stat into a business context, specifically customer engagement. Because in that realm, billions of decisions are made in the areas of marketing and personalization and customer experience, and all of those decisions, whether they are made by artificial intelligence or original intelligence, directly affect your customers. So it's a good thing to look into and contrast the potential rewards of good decisions in this context with the risk of using them. On the horizontal axis, we have the accuracy of the decisions, of the recommendations, of the classifications. And on the vertical axis, we have risk. And obviously, because high risk is bad, that's lower on the chart. And in this realm, where you're trying to figure out the next best action, what to talk to a customer about, this is where we humans play. Not too shabby, but I'm being a little generous; it could be one hilltop to the left. And also, we don't scale to billions of decisions, of course. But the interesting thing is how big of a gap we have between us and the AI capabilities, right?

AI just outperforms us in this area on the potential rewards, but AI comes in two different flavors. We have explainable, transparent AI that can tell you why it made, or didn't make, a certain decision. And we have opaque AI, like the current crop of large language models and other deep learning mechanisms. Opaque AI outperforms transparent AI, because having to explain itself puts a constraint on the algorithm. But at the same time, the risk is higher. And if you were here in 2017, you may remember that I talked about Pega's vision on responsible AI.

And that vision is that any software that is using AI in a meaningful way should have built-in controls, AI policies that control where these opaque algorithms are acceptable and where transparent AI is mandated or required. We call that mechanism the T-switch, the T standing for trust and transparency, and you can control it in that way. It's been built into all our AI software since, I would think, 2018 or something like that.

But the gap between the quality of human decisions and AI decisions is widening. And that's a problem, because it's not just about business performance; it's also about things like trust and familiarity. Even in life-or-death situations, and even if the human doctor is solid but still misdiagnoses 15% of her serious cases, most of us, but not me, would go with her diagnosis over some AI doctor that would be more accurate. So somehow we need to combine these two things, right? If implemented well, the combination of humans and AI is very strong. We can have this collaboration, and I'll show you examples. And if you do this, then the likelihood of AI going rogue is a lot smaller, and there may be some performance gains as well. So let's have a look at how this works in this area of personalization, and see how AI and generative AI can best work with humans for optimal outcomes, but also how to do that safely.

So to that point, let me introduce you to Miranda. She's a customer of a bank. And contrary to what some MarTech providers like to believe, she's not actually always following the happy journeys of her bank, because she has her own, thank you very much. And sure, she may need a mortgage at some point, but it will be on her schedule, in her preferred channels, and new priorities can make her change her mind on a dime. So if you look at all these probabilities behind me, they can change. It's a very dynamic process, and there are tons of events that will see them change.

Like, for instance, Miranda may have a certain need to put her mom in a better home, or an urge to visit her dad in Spain, from which I think we can, by the way, safely infer that her parents are divorced. But that may be just me. The point is that these journeys are highly, highly dynamic. We call them real-life journeys. And to effectively engage with someone like Miranda, you need much more sophisticated, real-time engagement and AI-driven orchestration. We, of course, have been doing that with the Customer Decision Hub: it generates these next-best-action decisions, and you heard that from Promiti from Citi yesterday, in less than 200 milliseconds, or in batch to tee up outbound communication.

Actually, it'll generate multiple next best actions in one session, something we call repositioning, which you could see on the previous slide. And we've been relying on AI for a very long time. We have a massive machine learning capability that we call adaptive models, but we can also reach out to the models that your data science teams create using the tools they like, Python or R or H2O.ai, and we can run them natively. But regardless of where these algorithms come from, our machine learning or your data science departments, we've always put in these responsible AI controls. So there's the T-switch I already mentioned, but also things like ethical bias testing, not at the algorithm level, but at the complete customer strategy level. And now, of course, we can combine that power with generative AI and boost this further. Let's see how that works.

So here we have CDH in the middle, always-on, center-out, adaptive AI, right? It's making billions of decisions, thousands a second, tens of millions a day for many organizations, and now it gets these gen AI satellite brains. So it has its creative buddies on tap. Let's zoom in and see how that works. The use case we'll explore first is how to optimize the content for each of those billions of interactions. Because next best action, of course, determines what to talk to Miranda about, but how to talk to her, that's a different story, and that's where gen AI can really help.

So what CDH does first is create the prompt. And I think this is very important, because I see a lot of naive approaches to this, to creating the text for an ad or an offer or those kinds of things. Which is no surprise, because obviously these large language models can generate text all day long, but you really have to instruct them properly. And because we have this centralized brain, we can create very deep insights, and I'll zoom in on that a little bit more. But the IP is both in the prompt and in the gen AI models. So what happens is, CDH will talk to one of its creative buddies to get new content. If required, that generative AI will then generate a prompt for its image cousin, to generate images that fit with the text it just created. And then it goes, and this is what I talked about with collaboration, then it goes to the human, like a marketer, to say: does this make any sense? Do I need to change it in any way? Or approve it, and we're good to go. And if that happens, then CDH will activate that new content in all the different channels, and the loop continues. And all of that is done autonomously in the background, until the human in the loop sees the proposals.
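
As a rough skeleton, the loop just described, detect underperforming content, build a prompt, generate text, generate a matching image, get human approval, activate, could look like the sketch below. None of these names come from Pega's actual CDH API; they are hypothetical stand-ins for the steps.

```python
# Hypothetical stand-ins for each step of the content loop described above.
from dataclasses import dataclass

@dataclass
class ContentGap:                 # "this action underperforms for profile X"
    profile: str
    channel: str

def detect_underperforming_content() -> ContentGap:  # CDH's always-on learning
    return ContentGap(profile="young professionals", channel="email")

def build_prompt(gap: ContentGap) -> str:            # deep, specific, dynamic
    return f"Write {gap.channel} copy for {gap.profile}."

def text_ai(prompt: str) -> str:                     # text gen AI buddy
    return f"[draft copy for: {prompt}]"

def image_ai(image_prompt: str) -> str:              # its "image cousin"
    return f"[image matching: {image_prompt}]"

def marketer_approves(draft: str, image: str) -> bool:
    return True                                      # human in the loop

gap = detect_underperforming_content()
draft = text_ai(build_prompt(gap))
image = image_ai(text_ai(f"Describe an image to go with: {draft}"))
if marketer_approves(draft, image):
    print(f"Activating on {gap.channel}: {draft} + {image}")  # back into CDH
```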

So let's zoom in a little bit more to make this real, because I think this is particularly cool stuff. So here we have CDH, and once it's connected to the channels, it will learn, and it will never stop learning. And if it figures out that for some profiles of customers your actions are not as effective as they could be, it will automatically create that prompt, based on these hundreds of probabilities and other insights, very dynamic, very specific. That is then added to other things you might want to add, like the profiles of these customers I talked about earlier, or what channel the content has to be created for. In this case that's web, but it doesn't really even matter; it just needs to know what channel it is creating for.

And there's also a thing called Cialdini's Principles of Persuasion in marketing. There are several of these principles, and in this case, we would pick the principle of authority. So with all that, with the prompting and with this guidance, it will do its thing. If this is an email, it will generate a subject line and the body text. And you can see the principle of authority here, because the body text talks about the bank as a trusted financial institution. And by the way, with these large language models, you don't even have to explain Cialdini's principles. They just know all of that stuff.
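
For illustration only, here is one way the ingredients just mentioned, customer insights, target channel, and a persuasion principle, could be folded into a single prompt. The field names and wording are assumptions, not Pega's actual prompt format.

```python
# Hypothetical prompt assembly: insights + channel + Cialdini principle.
def build_content_prompt(profile: dict, channel: str, principle: str) -> str:
    return (
        f"Write marketing copy for the {channel} channel.\n"
        f"Audience: {profile['segment']}, current acceptance "
        f"probability {profile['propensity']:.0%}.\n"
        f"Apply Cialdini's principle of {principle}.\n"
        "For email, produce a subject line and body text."
    )

prompt = build_content_prompt(
    {"segment": "young professionals", "propensity": 0.12},  # made-up insight
    channel="email",
    principle="authority",  # e.g. the bank as a trusted institution
)
print(prompt)
```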

The next step, as I indicated, is generating the prompt for the images: the text AI generates the prompt for the image AI. Now we get different image candidates, and then we go to the approval step. And the marketer, if it's a marketer, can now decide: oh, I want to change the tone, this is not great. Or: I want to just edit it straight off. Or: I want to shorten it. So let's, in this case, assume we want to shorten it a little bit. It goes back, we get new text, and now we're off to the races. We activate this through CDH, in this case as an email. But again, it could be a call center script, it could be web copy, it could be anything.

And CDH will do its own thing. It will look at what is effective and what is not effective, and if it spots problems, it will autonomously create new prompts and improve the content automatically. So that's one use case that I am quite excited about. It's a great collaboration between three artificial brains and one biological one. And it's not the only use case we have for CDH and generative AI. I won't go over the rest of them in detail, but I just want to mention them.

So here we have the autonomous way of detecting issues and then going on to improve matters automatically; we've just covered that one. The second one I'm also very excited about: an interactive natural language interface. You can see how this would evolve into basically having a discussion with a particularly gifted colleague about the whole operation. So I think that is very cool. And then there are these customer journeys that I mentioned. They're very industry-specific, and these large language models, gen AI, can save everyone a lot of time by creating these journeys and then pre-populating them with actions that it believes are a really good start. And then the humans get in the loop. All of this is in the '23 release, and if you haven't seen it already, you can see it in the innovation app, at the GenAI for customer engagement booth.

And there's one more that is not in the '23 release. We are working on this one, but it's very promising, and this is the autopilot that we have been hearing about. It actually uses agents to not just analyze things but do things, so it can help create customer strategies and run operations autonomously as well. There's a fat asterisk, though, because we will always keep a human in the loop. At some point it may be fully autonomous, but given how opaque these algorithms are, I don't see that happening really, really quickly.

Listen, I think this is extremely cool stuff. I've been in AI for way, way longer than I would ever admit on stage. But the cool thing is, I talked about The Flintstones, the Stone Age; I think we're all here at the dawn of a new age, the age of AI. It's a thing, I think, to have been there. And on that note, I want to leave you with one little AI nugget. The other day I was asked to speak at a business dinner. It was generative AI themed, and I was speaking between the appetizers and the main course, and I decided to engage the dinner guests with polling questions to get us talking about generative AI and its concerns. So I created a bunch of these polling questions.

But then, for the last one, I was a little stunned. I hit writer's block, because what I needed was a polling question that would make the transition from the appetizers to the main course. At the same time, it needed to be about generative AI, and it couldn't be boring. And this is the exact prompt, I didn't change a thing, this is the exact prompt I gave to ChatGPT, because we have an app for that. And this is what it came back with. It suggested I ask this: if a generative AI were responsible for cooking the main course tonight, how do you envision the outcome? And then it also gave me the choices I should offer for people to vote on.

And the first one was: a mind-blowing, never-before-seen culinary masterpiece. You can see a gen AI chef doing that kind of thing. Or two: a classic dish with an innovative, tech-inspired twist. Also credible, I think. And three: a chaotic mix of every cuisine known to mankind. Also possible. But the fourth one, the fourth one I was blown away by. The fourth option it said I should put up for a vote was, wait for it: the Last Supper. So it does have a sense of humor, and maybe it's a little self-deprecating. Let's hope it's that. But on this, I think, highly creative but maybe slightly ominous note, I will leave you to enjoy the rest of PegaWorld. Thank you very much.


Tags

Solution Area: Customer Engagement
Topic: AI and Decisioning
Topic: Customer Engagement
Topic: PegaWorld
