GenAI is still hallucinating a lot. Here are a couple of quotes from the New York Times from just last month. Right here is one: a great West Coast race, in Philly. Hard to do, unless, of course, these language models anticipate the mother of all climate changes. Let's hope not. Or this one: very underwhelming dietary advice. So this still happens.
And the problem here is that these models are the very definition of opaque. Nobody knows exactly when they will hallucinate or how to prevent it. And contrary to what you might think, it's actually getting worse. I talked about the Turing test a while back, and I know most of you will be familiar with it. Alan Turing, of course, was the pioneer of computer intelligence. He built the first thinking machine, which cracked the Nazi codes in World War II. And he anticipated that at some point these computers, these thinking machines, would become so intelligent that it would be very hard to distinguish them from humans.
And he came up with this very pragmatic test. He said: let a human jury talk to an entity, either human or computer, so either original intelligence or artificial intelligence, without being able to see it. They sit in a separate room and communicate through a teleprinter. If the jury can't figure out who is who, which is which, then the computer passes the Turing test. So does AI pass the Turing test? Well, I contend that, say, eight years ago, nobody talking to a modern-day chatbot would even suspect that there was not a human on the other side. Interestingly, researchers just ran this test with GPT-4.5, and apparently it fooled humans 73% of the time, which means it was judged to be human more often than the actual humans were. It is outperforming real people in believability, which is amazing. So what did humanity do? Well, we moved the goalposts.
So now we're talking about artificial general intelligence, and that's a higher bar, because now the AI needs to be smarter than most humans, not just the average human. The consensus is that this will happen within the next ten years, but a lot of experts think it will happen in the next two years, maybe even next year. Next up is artificial superintelligence. Think of an AI with an IQ of 500 and up. And I want to throw one additional scenario into the mix. It's one thing to talk to an AI with an IQ of 1,000 or 10,000 and not understand a word it says, unless it's kind enough to talk down to you like to a child. It's another thing if you cannot, even in theory, understand how its mind works.
But it's a third thing if that AI were to develop sentience, right? Sentience is defined as having subjective experiences, feelings, emotions. Some would say that's not possible in silicon, but some said the same about the current level of AI we're seeing now and predicted it wouldn't happen for a thousand years. So we'll see what happens; we don't know. Elon made this distinction between statistical and generative AI. We often call statistical AI left-brain AI, and when I talk about decisioning and the next best action that Vivek mentioned earlier, that's really still all tractable, even though it uses massive machine learning.
We call these adaptive models, but any decision based on them will still be tractable and explainable. And now, of course, we can complement that, delightfully, I would think, with the creative, the right brain: left brain plus right brain. We saw how that works yesterday, when Nicola demoed the blueprint for customer engagement in the keynote. But I want to get a little bit under the hood of that. We have a strategy agent that puts out the high-level outline of what needs to happen: what milestones, what kind of actions or offers or messages or experiences we need to create. That is handed to a marketing analysis agent, which says: okay, if that's what we need, these are the actions and experiences we would need in the different channels, based, obviously, on all the insights it has. And then next would be the creative agent.
And the creative agent does the obvious thing: it creates the content, the messages, the text. Once that whole package is ready, and this is what you saw yesterday, it is sent to the agentic version of the customer decision app, which takes it all in. And this is where it runs. This is that harness, because this is an enterprise decision engine calculating next best actions for customers.
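To make those hand-offs concrete, here is a minimal sketch of that pipeline. All of the names and structures (Plan, strategy_agent, and so on) are hypothetical stand-ins invented purely for illustration; this is not the actual product API, just the shape of the agent chain described above.

```python
from dataclasses import dataclass, field

# Hypothetical structure, invented for illustration only.
@dataclass
class Plan:
    milestones: list[str]
    actions: list[str] = field(default_factory=list)       # filled in by the marketing analysis agent
    content: dict[str, str] = field(default_factory=dict)  # filled in by the creative agent

def strategy_agent(goal: str) -> Plan:
    # High-level outline: which milestones the engagement needs.
    return Plan(milestones=["awareness", "offer", "follow-up"])

def marketing_analysis_agent(plan: Plan) -> Plan:
    # Turns milestones into concrete actions and experiences per channel.
    plan.actions = [f"{m}:email" for m in plan.milestones] + \
                   [f"{m}:web" for m in plan.milestones]
    return plan

def creative_agent(plan: Plan) -> Plan:
    # Drafts the message text for every action (a GenAI call in practice).
    plan.content = {a: f"Draft copy for {a}" for a in plan.actions}
    return plan

def build_blueprint(goal: str) -> Plan:
    # The hand-off chain: strategy -> marketing analysis -> creative.
    # The finished plan is what gets sent to the customer decision app.
    return creative_agent(marketing_analysis_agent(strategy_agent(goal)))

print(build_blueprint("grow retention"))
```

The point of the chain is that the generative agents only prepare the plan and the content; the actual decisioning happens downstream, inside the decision engine.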
So we have all the creative, but now we can run it fast and we can run it safely. And let's put a finer point on this for a second, because I don't know how it is with you, but I sometimes get this question: if I have all this statistical and generative AI, why wouldn't I just ask a language model? They can reason, they can reflect. Why wouldn't I ask it what the next best action is? The reason for that, and Elon alluded to it, is that it's just not very safe, and it's also not very practical. First of all, if you did it this way, it would be very risky.
I mean, those language models are amazing, and I'm a big fan, of course. But they think by association, and that is their power, but it's also their weakness. So if you did it like this, it would not be repeatable, it would not be transparent, it would become a liability, and it would also be too slow. There's a better way: have all of that cleverness, all these agents, work the way a human team in your organization works, and operate the customer decision app.
So obviously the blueprint agents, third-party agents, human agents if you want to go traditional: they all operate the customer decision app. Done this way, you still have all the power, but it becomes predictable again. You know the risks, you know the trade-offs, and it computes in milliseconds. So we think this is the better way. And that's what I would like to leave you with: truly left brain and right brain, statistical and creative. I think it's absolutely the best of both worlds. Thank you very much.
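To illustrate the contrast being drawn, here is a minimal sketch of what a deterministic next-best-action calculation looks like: hard eligibility rules first, then a ranking by propensity times value. The fields, numbers, and rules are hypothetical, made up for this example; a real decision engine arbitrates on far more, but the essential property is the same: identical inputs always produce the same answer, in milliseconds, with no free-form LLM call in the loop.

```python
from dataclasses import dataclass

# Hypothetical fields and example values, purely for illustration.
@dataclass
class Action:
    name: str
    propensity: float  # adaptive-model estimate that the customer accepts
    value: float       # business value if accepted
    eligible: bool     # hard business / compliance rule

def next_best_action(actions: list[Action]) -> Action:
    # Deterministic arbitration: filter on hard rules, then rank by
    # expected value (propensity * value). Repeatable and explainable;
    # every decision can be traced back to these numbers.
    candidates = [a for a in actions if a.eligible]
    return max(candidates, key=lambda a: a.propensity * a.value)

offers = [
    Action("premium_upgrade", propensity=0.08, value=120.0, eligible=True),
    Action("retention_call",  propensity=0.30, value=45.0,  eligible=True),
    Action("new_credit_card", propensity=0.50, value=200.0, eligible=False),
]
print(next_best_action(offers).name)  # retention_call (0.30 * 45 beats 0.08 * 120)
```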