Choosing between Opaque AI and Transparent AI

By Rob Walker

AI has reached the point in its evolution where it is highly useful without being prohibitively costly. Once a technology reaches that point, the industry starts calling for a managed approach to using it, and AI is no different. Now that it drives machine learning, decisioning, and personalization, developers need tools to control its use. Quite simply, if you're going to rely on AI to liberate your organization from slow, manual business processes, you need a control switch that lets you direct how your AI operates. The details require some nuance, but at its core developers need a capability that works like an on/off toggle, and building that capability in is a no-brainer.

Opaque AI vs Transparent AI

Many developers of AI technology are considering classifying it into two categories: opaque and transparent. While we at Pega haven’t standardized on this interpretation of AI, we see this classification happening in the industry.

Opaque AI is a "black box" system: the technology can't explain itself or why it's operating in a certain way. That doesn't mean opaque AI is ineffective; it means it's higher risk and, possibly, a liability. Transparent AI, by contrast, is required to explain its decisions and how it reached them: exactly how it's using data to make decisions or predictions.
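To make the distinction concrete, here is a minimal sketch using a toy credit-scoring scenario. The names (transparent_credit_model, opaque_credit_model, TransparentDecision) and the weights are purely illustrative assumptions, not part of any real product:

```python
# Hypothetical illustration of the opaque/transparent distinction.

from dataclasses import dataclass


@dataclass
class TransparentDecision:
    """A transparent model returns its decision plus the reasoning behind it."""
    decision: str
    contributions: dict  # how much each input pushed the score up or down


def transparent_credit_model(applicant: dict) -> TransparentDecision:
    # Simple additive scorecard: every factor's contribution is visible.
    weights = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}
    contributions = {k: weights[k] * applicant[k] for k in weights}
    score = sum(contributions.values())
    return TransparentDecision(
        decision="accept" if score > 0 else "reject",
        contributions=contributions,
    )


def opaque_credit_model(applicant: dict) -> str:
    # Stand-in for a black box: it answers, but it cannot say why.
    score = hash(frozenset(applicant.items())) % 2  # arbitrary internal logic
    return "accept" if score else "reject"


applicant = {"income": 3.2, "debt_ratio": 1.1, "years_employed": 4}
print(transparent_credit_model(applicant))  # decision plus per-factor contributions
print(opaque_credit_model(applicant))       # decision only
```

The opaque model may well be more accurate in practice; the point is only that it cannot account for its answer.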

It might seem obvious that transparent AI is the way to go. What CEO would want opaque AI running through their company's systems? However, there are situations where an opaque design is acceptable, or even preferred. In areas like marketing, for instance, you can see how some CMOs may not be concerned. They may find that the insights and decisioning provided by an opaque AI are boosting the return on their marketing budget, and that's good. If that opaque AI selects the locations for their billboards, the TV shows for their commercials, or the websites where their ads will play, they'll be happy as long as the results are positive.

Transparency doesn't come for free. Because the need to explain itself is a constraint on the AI, an opaque model may prove more effective, so there's a trade-off to be made. There's no reason why AI shouldn't be used in highly regulated areas like credit risk; used properly, it can improve the accuracy of these services and result in fewer errors. However, some businesses (such as banks) need to explain how they're achieving these operational improvements. In the EU, this becomes a legal requirement when the GDPR comes into effect in mid-2018, which makes opaque AI problematic. Whether the price of transparency is affordable depends on the industry and the types of decisions being made. If an opaque AI proves superior to human doctors in diagnosing a medical condition, the price of insisting on transparent AI may be paid in lives. Even in the banking example of credit risk, opaque AI will likely mean fewer borrowers in debt and less risk to the bank, but the logic behind accepting or rejecting a loan application will be less well understood. A trade-off.

Managing AI technology

As a rule, businesses must be able to control where their AI is allowed to be opaque and where it needs to be transparent. And what if the situation changes, or new legislation like the GDPR comes into effect, and transparent AI is suddenly required? How can you adapt without a costly, major teardown? How can you make sure you can trust your AI?

What we at Pega propose is something called the T-Switch, where the "T" stands not only for Transparent but also for Trust. If the T-Switch is set to opaque, anything goes. If it's set to transparent, the AI must provide information about its behavior, and anything opaque is actively blocked from execution. Because the real world is rarely an either/or proposition, the T-Switch is a slider ranging from 1 (very opaque) to 5 (completely transparent).
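Conceptually, the T-Switch behaves like a policy gate in front of each model. The sketch below is a hypothetical illustration, assuming every model carries a transparency score from 1 to 5 and every business function declares the minimum it will accept; names like TSwitchPolicy and Model are invented for the example and are not a Pega API:

```python
# Hypothetical sketch of a transparency policy gate (not a real Pega API).

from dataclasses import dataclass
from typing import Callable


@dataclass
class Model:
    name: str
    transparency: int  # 1 = very opaque ... 5 = completely transparent


@dataclass
class TSwitchPolicy:
    """Minimum transparency each business function will accept."""
    thresholds: dict

    def allows(self, business_function: str, model: Model) -> bool:
        # Unknown business functions default to the strictest setting.
        return model.transparency >= self.thresholds.get(business_function, 5)

    def run(self, business_function: str, model: Model, execute: Callable):
        if not self.allows(business_function, model):
            raise PermissionError(
                f"{model.name} (transparency {model.transparency}) "
                f"is blocked for {business_function}"
            )
        return execute()


# Marketing tolerates opaque models; credit risk insists on transparency.
policy = TSwitchPolicy(thresholds={"marketing_offers": 1, "credit_risk": 4})

neural_net = Model("deep-propensity-model", transparency=2)
scorecard = Model("credit-scorecard", transparency=5)

policy.run("marketing_offers", neural_net, lambda: "offer selected")  # allowed
policy.run("credit_risk", scorecard, lambda: "application scored")    # allowed
# policy.run("credit_risk", neural_net, ...) would raise PermissionError.
```

The useful property of a gate like this is that tightening the policy for one business function, for example when new legislation arrives, is a configuration change rather than a teardown of the models themselves.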

Our work with AI is not just about regulation—we’re taking an active part in the discussion about AI that easily veers into ethics and morality. That's why Pega is committed to implementing this switch in all our AI software. We give you the choice of where to allow opaque AI and where to insist on transparent AI.

Looking forward in AI

We’re already seeing AI that assists human judgment in call centers, optimizes the customer experience in unassisted channels, and makes suggestions that add value to the bottom line. At some point, AI will take on more of the burden of the work itself, but that will need supervision from people to make sure its suggestions are appropriate, especially if we allow it to learn continuously and adapt on its own.

I think there is huge potential for AI to augment human judgment. As it is, this is just extremely smart business software that will drive unprecedented outcomes. With the controls I've just outlined, those outcomes will not just be unprecedented, but also safe.


To learn more about transparent and opaque AI, watch the full video of my keynote presentation at PegaWorld, or read my Whitepaper “Artificial Intelligence in Business: Balancing Risk and Reward.”

Discover how Pega helps businesses mitigate risk with the industry’s first AI transparency switch. Read the press release here.

Tags

Product Area: Platform · Solution Area: Customer Engagement · Solution Area: Customer Service · Topic: AI and Decisioning

About the Author

Rob Walker
