Businesses are adopting artificial intelligence (AI) at an accelerated rate. But where are they in their AI deployments? How are they using it? What types of results are they getting? And how are they ensuring AI applications and data sets are all fair, transparent, robust, and even empathetic? Pega’s recent webinar, “AI for the better,” examines these themes.
Hosted by Sam Charrington, founder of This Week in Machine Learning & AI (TWIML), the webinar features guest experts who share their insights on how AI, used responsibly, can create smarter, faster ways to get work done and improve outcomes for businesses, workers, consumers, and communities. Here are our takeaways from the conversations.
Generating value from responsible AI
Barbara (Barb) Wixom, Principal Research Scientist at the MIT Center for Information Systems Research (MIT CISR), has spent the last 30 years studying how organizations monetize data. In her most recent research, Barb studied 52 AI projects in businesses across a variety of industries to analyze the importance of algorithmic understanding and trust and how that translates to creating value from data in the context of AI.
Barb explains, “When we talk about AI as a pervasive phenomenon and about data monetization, we look at generating value from data in three ways.” These include:
- To improve: Use AI to optimize processes and products.
- To wrap: Use AI to create analytics features and experiences to develop higher-value propositions for products.
- To sell: Create whole new products and solutions using AI.
Forty of the 52 projects studied focused on improving. “That makes a lot of sense,” says Barb. “Improving is a phenomenon where you're creating value inside your organization for yourself, and it's a bit safer to focus AI internally, as opposed to using it in external ways. We’re in that first phase of value creation when it comes to AI.”
So how can businesses scale their AI from internal improvements to generate more value? Barb advises:
“Our research identified five capabilities that you need for value creation. You need data management, you need data science, you need data platforms, you need customer understanding so that you know what you should be working on, and you need what we call acceptable data use, which is governance, but it's beyond compliance. It's governance of both compliance as well as ethical types of oversight. So all five of those capabilities are required for value creation, and AI even pushes the boundaries for all five of those. And so every time you execute and deploy an AI project, you have the opportunity to build out those capabilities for future subsequent projects.”
Also key to value creation are recontextualizing AI projects for new purposes and building trust through AI explanation. In other words, scaling out within the organization to create new capabilities and being able to explain to employees, customers, and regulators why and how AI and data are being used in an AI-based solution.
For instance, the Australian Taxation Office (ATO) uses real-time AI-based analytics to make it easier for taxpayers to file taxes, for auditors to review tax claims, and to encourage accurate claims behavior. If the system indicates that an expense on a taxpayer’s form might not be right, it nudges the person filing the claim – in that moment – to double-check the entry. That nudging resulted in $113 million in readjusted claim amounts, helping to close the tax gap. The success of the solution and its value to the ATO and auditors are apparent, but the ATO also takes time to explain to citizens how the solution benefits them through time savings and proactive error correction, and that AI explanation has been important for building trust around the program.
How to ensure you’re using AI responsibly and effectively
Ethics and governance around the responsible use of AI are essential for building trust and using AI effectively. As global AI and ethics advisor Elizabeth Adams defines it:
“Responsible AI is a leadership practice between technical and non-technical leaders. It’s really built around governance of how AI is developed and designed. And it includes policies, frameworks, processes, and procedures that cascade across the organization so that everyone is aware of what are the protocols and the guardrails that should be in place when you're developing AI.”
That “everyone” is not just business leaders, employees, and customers. It should include a wide range of stakeholders, such as vendors and the greater community, as well – even partnerships with academia or non-profits that represent groups that might be harmed by an AI application. “Because if you don’t include a broader group of stakeholders,” explains Elizabeth, “you could be missing some perspectives that would be extremely valuable in how you design your technology.”
But what is the best way to start building out your AI governance, and where does that responsibility sit within a business? In Elizabeth’s view, responsible AI should be an organizational priority. Start with a team dedicated to responsible AI and explore how AI ethics and responsibility can fit within the culture of your particular organization. The priority needs to flow from the top down.
“The very first thing I always advise companies on,” says Elizabeth, “is to create AI ethics principles, or at least start somewhere to say, ‘We are attempting to, we are looking into, we seek to be transparent, we seek to be fair and equitable,’ because it does take time for an organization to unpack their entire AI lifecycle to figure out where they might need to prioritize efforts and budgets first.”
How responsible AI is impacting business and effecting change
For the organizations that are already going all-in on AI, how are they using it and doing it responsibly? Pega’s Director of Decisioning and AI Solutions, Peter van der Putten, described the sweet spots for AI application as: getting closer to customers, anticipating service issues, and improving the efficiency and effectiveness of operations.
“I think in getting closer to your customers, you see industries where companies have direct relationships and know their customers, like banks, insurance companies, and telcos. They have a lot of data about their customers’ behavior. On the side of running your business better, you have industries with a lot of processes that are similar and repeatable – lots of data around supply chain, logistics, or any of those types of process-heavy industries.”
More and more, organizations are looking at their top-level strategic objectives and thinking about how they can use AI to transform the core of their operations and stay competitive – to reduce churn, for example, or to use AI-based automation to work smarter and faster.
“Really look at your top-level company objectives,” advises Peter. To determine where to start with AI or how to best apply it, ask yourself, “What are your biggest problems for the company as a whole? How are you going to remain competitive?” Then look within the organization to find C-level leaders who are aligned with that goal.
Also consider the risks versus the rewards of AI. The issues of responsibility and ethical use cannot be decoupled from the application of AI. In fact, they are drivers of success.
Peter explains, “I think if we do a better job at ethics, the rewards will be higher. It’s the only way to have a long-term, sustainable approach.” That means incorporating measures to address concerns like bias avoidance and transparency, but also doing more.
“It’s not enough to build an AI that’s bias free or transparent,” says Peter. “The AI should work towards the goals of the customer… the only way people will trust AI is if it’s being used towards the benefits of the customer and the consumer.”
Want to find out more? Watch the webinar replay and get the full story!
Work smarter and drive better outcomes.
Learn how your business can transform with the help of AI.