Sunday, 14 June 2020 22:23

In conversation with: Rob Walker, Pegasystems

By David Heath

Rob Walker, vice-president, Decisioning & Analytics at Pegasystems, brings a wealth of knowledge in the AI space to our conversation.

Note: this conversation took place just prior to the coronavirus shutdown in Australia, and some minor edits have been made for clarity.

iTWire: What brings you to Australia?

Walker: Originally, we had a customer engagement summit planned, where we'd invited a lot of our Australian and APAC customers. But like many others, we had to cancel that because of the virus trouble. So currently I'm visiting a lot of our customers.

iTWire: So, ethics in AI. Define it!

Walker: Yes. Well...

iTWire: I don't think anybody agrees!

Walker: No, well, we'll discuss how that works. But in essence, it's obviously the idea that with AI you're really dealing with a lot of customers and making 'customer decisions' about them. You have a moral obligation, as well as really sane business policy, to get the ethics in, because there are tonnes of opportunities for AI to go rogue. Right... these algorithms are self-learning - there are many examples, like Microsoft's Tay, for instance, if you recall: the chatbot that they launched, which was really a laudable effort, right? And it was instantly corrupted, became racist, and it was really bad.

But anyway, that's the thing that a lot of the brands, the big companies that we talk to - especially regulated industries, but really any large company - are really worried about. And the thing is, you can't just say, 'well, let's not do AI,' because it's incredibly effective. So you really need to get the ethics into AI; make sure that it behaves properly.

iTWire: Speaking of machine learning, I saw an excellent example of it not doing what people expect - I've shared this story with a few people. Somebody had trained an ML system to distinguish foxes from dogs, and it was getting about 85% correct. It was doing quite well. And somebody asked: what is it actually doing? Because they'd given it lots of test images and it had said, in effect, 'OK, I know what I'm doing.' They eventually worked out that if it saw snow in the background, it was a fox; if it didn't see snow, it was a dog.

Walker: There's even an example of that happening for the military, trying to spot tanks in satellite images. It turned out that all the exercises they trained it on took place in the morning, so it was foggy: if it was foggy, they were tanks; otherwise they were not! But that is exactly the kind of thing. That seems a silly mistake, but it could be racist, misogynist, ageist - it can be lots of different things - especially with the algorithms that you have now, which cannot explain themselves. So it's quite a feat to even figure out whether it was looking for the clouds or the fog or the snow. And that's the risk we really need to get off the table.

iTWire: So, does that mean that any bias in that environment is the fault of the training set?

Walker: No - although a lot of the training sets will definitely be biased. Any company that doesn't have a random, representative set of customers will be biased. A bank, for instance, usually has customers that pay their bills and their mortgages, and therefore that's not a completely statistically representative sample. So it's not just the training set; it's the responsibility of the data scientists. And this is one of the things around ethical AI, though it's just a part of it: the data scientists need to take responsibility. Currently there's a little bit of a tug of war between the data scientists and the AI, because people think, 'oh, the AI is going to do what we're doing.' In fact, the scientists (and humans in general) should really take control over AI, but also be responsible for the outcome. Why? Because detecting bias is not rocket science. We just need to do the work.
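
That kind of work can indeed be straightforward. What follows is a minimal, hypothetical Python sketch of an outcome audit - comparing approval rates across customer groups and flagging any group that falls below the common 'four-fifths' threshold. The group labels, data and threshold are invented for the example and are not Pega's method.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Per-group approval rates from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparate_impact(rates, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the
    best-served group's rate (the common 'four-fifths' rule)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical decision log: (customer group, approved?) pairs.
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = approval_rates(decisions)
print(rates)                          # approx. {'A': 0.67, 'B': 0.33}
print(flag_disparate_impact(rates))   # {'A': False, 'B': True} - group B flagged
```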

iTWire: Exactly. Because we saw the same thing with those endless discussions about facial recognition systems that are not good with people who aren't Caucasian... and aren't male.

Walker: Frankly, that's where bias detection comes in. I think there are a few basic things that you need to see. First of all, bias is much easier to detect if the algorithms can explain themselves.

iTWire: Yes. As long as you can realise that you're introducing bias.

Walker: Yes, but there is this big groundswell around these very opaque models, which are very powerful - that's what we have to keep in mind. These models can be really effective from a business perspective, but also from a customer perspective. One of the examples I always use is in lending. If the algorithm isn't biased, and you can't explain the decision, but it actually works, it's not just good for the bank - in that they don't give out loans to people who won't repay them - it's also really good for customers; otherwise you get them into debt, and we've seen what happens. So, in principle, really sophisticated algorithms are not evil. But if they cannot explain themselves, there is an issue, and that's where the ethics come in. We feel strongly that there needs to be an AI policy in place in these large organisations, where you say: okay, this kind of algorithm you can use for this kind of purpose, but not for another. And that policy needs to be enforced - it needs to be at the heart of the AI system. So, for instance, you may want to go all out with deep learning algorithms to determine the best background image for an ad, maybe. That's a very different thing from determining whether you're going to get a $25,000 loan. For that, you probably need to be able to explain your decision process - not just from an ethical perspective, but also for the regulator.
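
To make the policy idea concrete, here is a minimal, hypothetical sketch of what enforcing 'this kind of algorithm for this kind of purpose' might look like in code. The use cases, model classes and permissions below are illustrative assumptions, not Pega's actual policy engine.

```python
# Opaque models allowed for low-stakes choices; explainable models
# required where the decision must be justified to a regulator.
ALLOWED_MODELS = {
    "ad_image_selection": {"deep_learning", "gradient_boosting", "decision_tree"},
    "loan_approval": {"decision_tree", "scorecard"},  # must be explainable
}

def enforce_policy(use_case: str, model_type: str) -> None:
    allowed = ALLOWED_MODELS.get(use_case)
    if allowed is None:
        raise ValueError(f"No policy defined for use case {use_case!r}")
    if model_type not in allowed:
        raise PermissionError(f"{model_type!r} is not permitted for {use_case!r}")

enforce_policy("ad_image_selection", "deep_learning")  # fine: low stakes
try:
    enforce_policy("loan_approval", "deep_learning")   # blocked by policy
except PermissionError as err:
    print(err)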

iTWire: When you said 'background,' I was thinking that even when you're doing image training - coming back to my snow example - you can't just be conscious of the foreground. You need to be conscious of the environment in which the identified object or behaviour resides.

Walker: Yes, you need to know exactly what it's looking at. If that model had been a transparent model - one of the algorithms that can explain themselves - it would have just said, 'Oh, listen, the first thing I'm looking at is the background, or whether the colour is white,' and people would have said, 'Okay, why are you doing that?' So that would have been a clue. But if you don't know, then... The other thing about ethics and bias is that the gold standard should not be some sort of platonic ideal of a perfect decision - the gold standard is human judgment. So I always say: sure, we can tell foxes and dogs apart, but whether people are going to repay their loans or not, or whether they're going to respond favourably to something, maybe even a medical treatment - that's a whole different ballgame. And AI has to be better than us. It doesn't have to be perfect.
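
The snow example lends itself to a small demonstration. Below is a hypothetical Python sketch (using NumPy and scikit-learn purely for illustration, not anything from the interview): a shallow decision tree is trained on synthetic fox/dog data in which the 'snow in background' feature, not the animal's features, drives the label, and its feature importances immediately betray the shortcut - exactly the clue a transparent model gives.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1_000
snow = rng.integers(0, 2, n)       # background feature (spurious)
ear_shape = rng.integers(0, 2, n)  # genuine animal feature (uninformative here)

# Biased training set: the label agrees with the background 90% of the time.
label = np.where(rng.random(n) < 0.9, snow, 1 - snow)

X = np.column_stack([snow, ear_shape])
tree = DecisionTreeClassifier(max_depth=2).fit(X, label)

for name, imp in zip(["snow_in_background", "ear_shape"], tree.feature_importances_):
    print(f"{name}: {imp:.2f}")  # importance concentrates on the background
```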

iTWire: So where should the ethics come from? What is the basis of the ethics?

Walker: The ethics in this case... there are, superficially, two schools. One school says AI will develop ethics, maybe even morality, by itself: it will just look at things, it will read books, it will study Wikipedia and the internet, and it will figure out good and evil and those kinds of things.

I don't think we should be waiting for that. Because currently that's not the case.

iTWire: That's a little way off!

Walker: Yes, it's an interesting case - I'm sure there are a lot of researchers trying to figure that out. Because, let's not forget that (depending on your school of philosophy or your religion) humans themselves develop a sense of ethics. Most humans do.

We don't think you can wait for AI to develop a sense of ethics itself; it needs to come from the humans. So we strongly advocate a hybrid model, where the AI [does] all sorts of classification, and all the probabilities and recommendations, but it's all embedded in a very prescriptive ethical framework of what should actually be done.

An example: we have this thing that a lot of our customers are using, called 'next best action.' It asks: what do you need to do right now with this customer? What is the best action to take? It's AI-driven, but it's embedded in an ethical framework. For instance, the AI could say: hey, this person is totally in the market for this loan, would be receptive to it, would also be eligible, is over 21, whatever the rules are - but actually already has a lot of debt, and we really shouldn't be adding to it. That's a rule that you can just put on top of [things], but it's part of one system. So in the end the system would say: okay, propensity: high; eligibility: check; but suitability: no - we could sell it, but we shouldn't. A lot of the customers we work with have exclusion rules like this. Another example: here locally, CBA is using our system, and they are very empathetic to people who live close to the bushfires that were here. They would say: okay, if you're really close to that area, this is probably not the time to remind you that you're late with your mortgage payment. 'Listen, we understand - take care of your first priorities and we'll talk later... or how can we help?' That's ethical, but the rules come from the humans, and it's part of one system, where the AI may have said, 'Hey, this person is late; that's a risk.' But that's really the hybrid take - technology and humanity in one system.
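
That 'propensity: high; eligibility: check; suitability: no' arbitration can be sketched in a few lines. The following is a hypothetical illustration of the pattern Walker describes - an AI-scored propensity gated by human-authored rules - not Pega's next-best-action engine; the field names, thresholds and rules are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    age: int
    existing_debt: float
    annual_income: float
    near_disaster_area: bool

def next_best_action(c: Customer, loan_propensity: float) -> str:
    # Empathy rule: in adverse circumstances, help rather than sell.
    if c.near_disaster_area:
        return "offer assistance"
    eligible = c.age >= 21                               # eligibility: check
    suitable = c.existing_debt < 0.5 * c.annual_income   # suitability rule
    if loan_propensity > 0.7 and eligible and suitable:  # propensity: high
        return "offer loan"
    return "no action"

print(next_best_action(Customer(34, 20_000, 90_000, False), 0.85))  # offer loan
print(next_best_action(Customer(34, 80_000, 90_000, False), 0.85))  # no action: too much debt
print(next_best_action(Customer(34, 20_000, 90_000, True), 0.85))   # offer assistance
```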

iTWire: We always come back to the trolley car problem – a runaway tram in San Francisco, barrelling down the hill. You're standing at the bottom and you can either let it go into the water, or you can switch it to another track and have it hit some people.

Walker: So that's really the morality thing. This is where the gold standard comes in, right? Because we worry, 'oh my god, what is this AI going to do?' But first of all, what would humans do? Let's see if it's better than that. And honestly, even from a philosophical point of view: you can be Kantian - 'what is actually the right thing to do' - or you can take a utilitarian approach... So it's not even a given that we know what is the right thing to do.

iTWire: If [we did], there would only be need for one philosopher!

Walker: Yes, yes. That is exactly right!

You can say: well, it's really cool that that person sacrificed himself. But maybe that person has three young kids and really shouldn't have. Regardless, I don't think we currently need AI to try to solve that. It's a very interesting problem, but AI is already so pervasive - we [Pegasystems] do business with these large brands, and they make tens of millions, sometimes hundreds of millions, of decisions every day, so they couldn't possibly check them all. You need to be sure that the system making those decisions is making decisions that meet your ethical guidelines, and therefore you need to have the AI policy and the AI controls in place to make sure that really happens, because there's no way that somebody can check 100 million decisions.

iTWire: But we see even more than those hundreds of millions of decisions per day with high-frequency trading - they're making thousands of decisions a second.

Walker: Exactly, yes, they do - but that's not a customer decision. And apart from the ethics of the whole thing, it's not just a liability for the company; it's also part of the brand. So I get asked a lot of the time: 'Hey, we want to be sure that this AI thing you're telling us about, even if it's clinically really, really good at increasing our profitability or our revenues - we want to make sure that our brand values are reflected. We don't want to be really good for one year, then get the bad press, and people say, well, you know... and then it all crumbles down.' So it's also a really good business decision. And I would even insert empathy into the equation. We really try to make sure that empathetic considerations - like the bushfire consideration, as an example - are built in. If somebody is in really adverse circumstances, should you be selling to them? No - that's not what you would do to a friend. Now, you don't have to be friends with your customers, maybe, [but] you definitely should treat them equally.

iTWire: At some point in our future we're going to have some fairly pervasive societal AI, for want of a better term, and maybe at some point this AI decides to accuse me of a crime. So I arrive in court: who is my accuser?

Walker: Yes, it's a good point. I'm not a philosopher per se, but I would say in this case it's society itself, if that's the AI they want... Can I just double down on that example? What if you were innocent, and there was an AI judge, and nobody understands what it does, but it's been more effective than human judges - what would you want it to do? Or let's take this into medical treatments. This is an example I hear all the time, because people say: well, the FDA [requires that] a medical AI needs to explain itself; we can't have all these weird algorithms that we don't understand. But what if it's doing a medical diagnosis for cancer, and it's actually better than humans are?

iTWire: We're seeing that already.

Walker: Yes. But what do you want? Do you want it to explain itself and be only as good as a human doctor, or not explain itself and actually cure you? That's the slippery slope of AI: the promise of really good decisions that you may not be able to understand. It's like talking to a superior intelligence. Think of the guy who lost at Go - he thought he was playing a god. He could not even imagine that he would lose, because he's so good, and he was obliterated.

iTWire: And I heard his comment afterwards - he was saying the machine was making alien moves; they were not moves that a human would make.

Walker: I have seen that at a different scale. I've seen algorithms come up with solutions to problems - especially these evolutionary algorithms, which adapt based on evolutionary principles. So you're [for example] simulating air pollution for a hundred million years, but it's just ten minutes on the computer. And it comes up with a solution that really makes you say, as that Go player said: it's alien.

So that's why I felt, in my current role as the head of AI for Pega, that we need to make the controls really part of the software. You can't have them outside of it, because somebody will [work around them]. You need to make sure that when you take these next-best-action decisions, the AI that is gobbling up all the probabilities and the risks and the opportunities and the recommendations is really embedded in rules of ethics, and you can really see what it actually does. And that's also where the bias comes in: in all these combinations of rules and policies and AI, what is that combination actually going to do? Because I don't discriminate against AI - human decision making shouldn't be biased either. It really doesn't matter where the bias, or the decisions, come from; you just need to objectively judge the outcome. And that needs to be built into these engines that are touching customers 100 million times a day.

iTWire: The other interesting thing is situations where AI has to interact with people now but eventually won't. My main example is self-driving cars. Right now, a self-driving car has to be incredibly well built, because it has to deal with people. At some point we're going to ban people from driving their own cars, and the AI gets a lot simpler.

Walker: Yes...and they can also talk to other cars. Say "Hey, give me half a second here [I need to cut across you]."

So, to our earlier point: first of all, the cars will be a lot simpler and everything will be a lot easier. If we were only looking for transport - not also looking at driving a car as an experience (which a lot of us enjoy) - it would have been solved already, obviously, because the hard part is absolutely the combination of AI and humans. Not to mention that, the moment self-driving cars take off, we will have humans who will just cross the road, saying, 'let's see what happens' [if I jump in front of a self-driving car].

iTWire: They'd only do it once!

Walker: Yes, maybe!

But it is true that some of the friction we're seeing now is where AI and humans meet - not that there is that much of it, actually. I think it's going remarkably well for the scale of AI already, because people think, 'oh, AI is coming.' Well, yes, it's coming, in the sense that it will go a lot further than this. But it's already here: you can't talk to any major company that is not using AI, or soon that will be the case. But the friction is really where AI and humans work together.

As I said, it's too often not considered one system - you have the AI and you have the humans - but it's all about decision making in the end, and the same controls should apply to both.

iTWire: So effectively, AI is simply taking the human decision-making process and making it faster?

Walker: Yes, faster, but also better. Faster is one thing - that's where you get to the hundred million [decisions per day] - but it's also really much more effective. In many cases (not all), AI can take into account a thousand data points, versus a human who, especially for a fast decision, can handle maybe five - maybe seven if they're really smart and can juggle that much in their mind - and who makes that [decision] with all their biases and their childhood fears and their emotions and their hormones all colliding. And it's not all that awesome. Not that we're stupid, but it's very inconsistent. It's like asking 100 people for a decision and getting 50 different opinions.

iTWire: While you were speaking, two books came to mind. The first was Kahneman's "Thinking, Fast and Slow," about the heuristics we use - fast thinking is a heuristic we use to say that 'this' is the answer, without really thinking about it. And the other book, which you may not have heard of, is called "We the People" by Peter Temes and Florin Rotar. It's absolutely about ethics in AI - it's a really, really good read.

Walker: I think that's it, absolutely. Because it's not just "Thinking, Fast and Slow" - there's also "Blink," you know; you have these kinds of books. And it's the same principle - 'hey, you should trust your gut feeling' - and I'm not a big fan of that.

iTWire: I'm only mid-way through Kahneman's book, and the more I read it, the more I'm thinking how wrong you are [Kahneman] about so many things.

Walker: You know, obviously, gut feeling is really good when you live on the savannah and something moves and you just get it right - like a built-in evolutionary rulebook. In general, gut feelings are really a combination of a lot of biological processes which, amazingly, work most of the time. But the issue is especially the lack of data: people are really bad, in general, at knowing what they don't know, and AI, at least potentially, could have a sense of that, because it's more mathematical.

So at least it could say: well, this is what I think, but I haven't looked at these other thousand data points - I didn't have time, because I only had a millisecond - so I couldn't take those other items into account. That kind of thing, I think, is an important consideration.

iTWire: Well, we've reached our allotted time. I think we've covered the topic quite well, so thanks very much for your time.

Walker: Yes, thank you.

David Heath

David Heath has had a long and varied career in the IT industry having worked as a Pre-sales Network Engineer (remember Novell NetWare?), General Manager of IT&T for the TV Shopping Network, as a Technical manager in the Biometrics industry, and as a Technical Trainer and Instructional Designer in the industrial control sector. In all aspects, security has been a driving focus. Throughout his career, David has sought to inform and educate people and has done that through his writings and in more formal educational environments.
