Artificial Intelligence (AI) is notoriously hard to pin down, spawning as many doomsayers as rosy-eyed tech utopians. With biased datasets and skewed algorithms already compromising the objectivity and precision of many AI systems, recent legislative hearings on the use of Facebook and Twitter by Russian operatives to influence the 2016 US presidential election only compounded the noise and apprehension about artificial intelligence and automated systems.
How can business leaders ensure a responsive data governance regime and ethical use of AI for their companies and customers? On November 2, Pegasystems, Cognizant, and TOPBOTS hosted the #TransparentAI Twitter chat to have an open discussion with leading executives on AI adoption, governance, transparency, and ethics.
Here’s a recap of the discussion:
How are customer experiences improved by AI right now? What are the best examples of AI-enhanced customer experiences?
Most expert respondents mentioned hyper-personalization, faster issue resolution times, and contextualization of services through targeted content and product recommendations.
“If you do it right, AI lets you scale your relationships – not just the amount of noise you generate,” tweeted Matthew Nolan, Product Director at Pegasystems. He cited the case of the Royal Bank of Scotland (RBS) where AI-assisted customer service helped increase mortgage retention by 20% and drive web-to-branch customer transitions from zero to 40%.
For Pegasystems colleague Ying Chen, Head of Product Marketing for Platform, AI’s role in the Internet of Things (IoT) enables businesses to move beyond even a proactive approach to customer experience toward a pre-emptive strategy.
Benjamin Pring, author and director at Cognizant’s Center for the Future of Work, cited the case of HotelTonight where AI almost single-handedly assists customers in booking hotel accommodations.
What are the biggest challenges with implementing AI to improve customer experience?
“The biggest challenge isn’t the tech, like you’d think – it’s the people. There’s ALWAYS organizational inertia,” quipped Nolan. Vince Jeffs, Pegasystems’ Director of Product Marketing and Strategy, AI and Analytics, agreed, citing organizational/business alignment as one of three key challenges, alongside actionable data and agile, action-oriented technology. Meanwhile, digital strategist Hessie Jones gave a more corporate-sounding synonym for the inertia: siloed thinking.
“Also consider siloed thinking where customer service was ONLY the job of customer service reps. Now it's a data/marketing issue,” she said.
For Pring, the main challenge similarly lies in the human aspect of AI. “Mixing design thinking, app development, and AI is 100% non-trivial. When the talking is done, it's all about skills (as usual).” Chen was more specific: “Knowing the right questions to ask. Asking them in a way that allows insights incrementally.”
On AI’s inherent hunger for data, Pegasystems’ CTO Don Schuerman made an important distinction between the quantity and quality of information. “The other challenge is getting past the data hump. You don't need massive data sets to be effective. Just CLEAN data.”
How does GDPR make implementing AI more difficult?
The General Data Protection Regulation (GDPR) aims to reinforce and simplify data protection for individuals in the European Union, creating a regulatory regime with significant implications beyond the EU.
But many executives embrace the idea that “GDPR means transparent AI. [It is] even more important than ever and pushes blackbox AI further away into the sci-fi future,” said Pring. Jones agreed, tweeting that GDPR will be the ultimate test of privacy by design and can mean a “higher chance of bias detection, implications that are implicit or explicit within algorithms.”
Compliance with GDPR has its own set of challenges, though. As Nolan mentioned, GDPR forces you to show WHY you made certain decisions and be accountable for them. Adam Porter-Price, a Senior Manager in Emerging Technology Strategy at PayPal, cited research that states GDPR “lacks precise language and contains broad exceptions, rendering it largely unenforceable.” He was also concerned that the generic regulations might prohibit “a wide swath of algorithms currently in use.”
What concerns should customers have about AI in business?
“This may be controversial, but I don't think consumers should be afraid of AI. They should be afraid of ‘bad’ AI,” Schuerman said. “The question is – can brands use AI in a way that makes the customer experience better, while maintaining privacy and honest transparency?”
Nolan concurred, citing the difficulty of scaling business operations using human resources alone. “Humans can only scale so far... they can't flex to millions of customers and billions of interactions … I think consumers are only afraid of the AI they see in movies... but what's already in the market... I think they LIKE it.”
For consumers, identifying “bad AI” may not be that easy though. As Jones explained, “AI is still nascent, I would argue. It is NOT pervasive and no one system has gotten it 100% right. We are just starting.”
How do brands ensure their AI is transparent & behaving ethically?
Some brands don’t behave as they should, unfortunately. As Schuerman described, many businesses try to pass off AI as humans (i.e., when bots don’t ID themselves as machines), opting to conceal from consumers how they make decisions.
“They use AI to over-engage, rather than for the things that matter. They use AI simply to sell more, rather than improving customer experience,” he warned.
Understanding the data being fed to the AI system and maintaining transparency of the algorithms that process the data are the main steps to ensure that AI is unbiased, as explained by Jeffs. “A very good friend in AI told me -- only good data will define a good AI model.” The key, according to Chen, is to establish governance. “Think about the role of a COE (Center of Excellence) for AI,” Chen added.
For Jones, true customer experience means a deep understanding of customer intent or motivation. AI, however, has yet to fully mature in this area. “More governance -- but it has to be built within an AI framework so it polices itself – at the speed at which AI travels … The trick is eventually for humans to train for ethics… if AI is to truly mimic the brain,” she explained.
Nolan gave a simpler conclusion: “You need to hold people and systems ACCOUNTABLE.”
Which industries are faster or slower to adopt AI?
As expected, business sectors differ widely in the rate at which their organizations adopt AI. As Nolan noted, AI development thrives in industries with strong consumer-facing transactions and interactions, such as banking and communications. In contrast, he cited industries with less direct consumer contact, such as insurance and consumer packaged goods, as lagging behind in AI maturity.
“The slowest, I think, are large B2B firms with small amounts of customers. They see little incentive to adopt fast.”
Pring partially agreed. “Financial services represent the most aggressive/successful user of AI. Every other market [is] about equal but a long, long way behind,” he said.
How should businesses be staffing around AI and data management?
Thought leaders in AI predict a milestone where advanced AI can simulate human creativity – able to design, build, and manage other artificial intelligence systems. But we have a long way to go before reaching that turning point. In the meantime, human intelligence remains the key input in AI. Businesses planning to have a robust AI strategy will need top talent to get things done.
In particular, Jeffs enjoined businesses to hire people who understand machine learning and data preparation. “To win with AI in customer experience, you need lots of good data & chances are you have it,” he added.
Pring agreed. “Beg, steal, borrow (or source) AI talent anyway you can.” Meanwhile, Chen suggested finding people who possess not only data science skills but also a strong understanding of your business – professionals who ask the right questions and conduct the right tests.
On the other hand, Schuerman believes empathy will be a key talent differentiator in the tech department, calling on businesses to set it as an imperative for hiring. “Empathy will become a major job requirement. Need to hire people who know the biz, get the data, and have empathy … make sure they aren't just data geeks or business experts, but also bring ethics & empathy for the customer.”
Should government have a role in the oversight of AI and consumer data?
At a time when net neutrality serves as a unifying battle cry among advocates for a freer Web, the demand for a more active government role in the oversight of AI and consumer data provides a strange counterpoint. Jeffs described the phenomenon:
“Government has a role in consumer privacy, and striking a balance between oversight & overreach is always tricky.”
Nolan and Jones both concurred, with Nolan seeing the need for government to protect constituents and Jones calling for oversight while warning that government could inadvertently contribute to the problem. Nolan, however, emphasized that directly regulating AI is not the answer: “You need to legislate outcomes, not approaches.”
Pring accepted the rationale for government involvement, but with a sad twist: “Short answer, yes. But don't hold your breath. The US Congress just figured out what cookies are, 20 years after introduction.”
ABOUT THE AUTHOR: Mariya Yao is CTO and Head of R&D at TOPBOTS, a strategy and research firm for applied AI and machine learning. You can read more by Mariya on www.topbots.com/author/mariya, and follow her on Twitter @thinkmariya.