

The State of AI in 2022: Trends, regulations, and ethics

Priyanka Raj

Are you fearful or fascinated by AI?

Artificial intelligence (AI) has been the next great digital frontier for quite some time. However, what was once seen as a fringe or niche technology is now fully integrated into our daily lives. Similarly, AI is being integrated across the enterprise rather than being reserved for special projects. As AI and machine learning continue to evolve – and they can evolve fast – enterprises must constantly assess how to stay competitive and deliver better business outcomes. Many organizations have already started down this path: according to Gartner, by 2025 one in five B2B companies will leverage AI and machine learning (ML) to connect with their buyers.

The growing development and use of AI technology in companies has also created demand for the democratization of data science. AI is no longer the sole responsibility of data scientists; it now affects everyone in a business across many functional areas. Marketing and customer management are currently the primary users of AI technology, but as other functions join, organizations can gain more control and planning around their use of AI systems.

The two F’s of AI

There are business advantages to using AI, but also a moral obligation to ensure AI technologies and their data sets are fair, transparent, robust, and even empathetic. We explored this notion at Pega’s AI Week event “AI for The Better,” where Peter van der Putten, Lead Scientist and Director of Pega’s AI Lab, discussed the state of AI, 2022 trends, regulations, and ethics. As head of the lab, Peter often observes that most of us fall into one of two camps when it comes to AI, which he calls “The Two F’s” of AI: fear of losing control to AI, and fascination with all we can learn and achieve from it.

In his talk, Peter spoke about the fascination around AI. It has a lot of practical uses that we all enjoy (recommendations, for example), and for those developing new applications and programs with it, the possibilities of what AI can do are seemingly endless. Even with its prevalence, there’s still an air of mystery associated with AI, from how it learns to where it will go in the future, which keeps us all intrigued. But all mystique aside, AI is now a must-have for organizations looking to scale operations and workflows, create better and more engaging customer experiences, and connect with their ever-growing customer base. By leveraging AI and AI-driven technologies, businesses can finally reach their customer centricity goals.

At the same time, there is an increased fear around the use of AI. Accenture’s Technology Vision 2022 describes a real concern around “the emergence of The Unreal—a trend where our environments are increasingly filled with machines that are passably human. ‘Unreal’ qualities are becoming intrinsic to the AI, and even the data, that enterprises are using. But bad actors are using it too—from deepfakes to bots and more. Like it or not, enterprises have been thrust into the forefront of a world questioning what’s real, what isn’t and if the line between those two really matters.”

To combat this fear, there is an increased need for ethics and regulation that allow AI to grow while keeping it under control. With this massive growth comes an increased need for what we call responsible AI.

Fighting the fear with responsible AI

Responsible AI or ethical AI is the practice of building ethical frameworks around how an organization uses AI as well as into the AI itself. If you think about it, nearly every company has a set of values and a code of conduct which its employees must follow. If companies hold their employees to ethical and legal business standards, why shouldn’t they hold their technology to a set of clearly defined standards? Whether a company is developing AI-based technologies or using existing AI technology as part of its business, the company has a moral obligation to apply AI responsibly.

Despite some of the fears around AI replacing humans, the most successful AI comes from a partnership between people and machines. AI requires human-derived ethical frameworks in order to engage appropriately and even empathetically with customers. It needs tools and diverse data to learn from to reduce bias. It also needs to be transparent so that its decisions can be viewed and understood by others, when required. The only way that AI can positively impact customer experiences and business outcomes is if humans and AI work together.

So, what does it look like when AI, and the organizations using it, don’t have ethical standards in place? It can be concerning for businesses and consumers alike. Take, for example, how AI usage can impact elections. Anyone who’s been online during election season and has the slightest bit of technical background has seen how bots can quickly spread misinformation across social media. Of course, those instances intentionally use AI in a negative way to influence elections. But even well-meaning companies can find themselves in hot water when AI goes rogue. The use of AI in the criminal justice system, for example, has been challenging: tools introduced, in some cases, to eliminate human biases actually perpetuated them.

In 2016, ProPublica examined Northpointe’s tool COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). The analysis found that black defendants were far more likely than white defendants to be incorrectly judged to be at higher risk of recidivism (the likelihood to reoffend), while white defendants were more likely than black defendants to be incorrectly flagged as low risk. Similar biases appear in facial recognition software. The fear of being misidentified by a machine, rather than by a person you could reason with, is justified.
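To make that disparity concrete, here is a minimal sketch of the kind of group-wise error-rate comparison behind such findings. The records and field names below are invented for illustration; ProPublica’s actual analysis used real court data.

    # Compare false positive rates across groups: the share of people who
    # did NOT reoffend but were still flagged as high risk by the model.
    # All records below are made up for illustration.

    def false_positive_rate(records, group):
        non_reoffenders = [r for r in records
                           if r["group"] == group and not r["reoffended"]]
        flagged = [r for r in non_reoffenders if r["predicted_high_risk"]]
        return len(flagged) / len(non_reoffenders)

    records = [
        {"group": "A", "predicted_high_risk": True,  "reoffended": False},
        {"group": "A", "predicted_high_risk": False, "reoffended": False},
        {"group": "B", "predicted_high_risk": False, "reoffended": False},
        {"group": "B", "predicted_high_risk": False, "reoffended": False},
    ]

    print(false_positive_rate(records, "A"))  # 0.5
    print(false_positive_rate(records, "B"))  # 0.0

A persistent gap between groups on errors like these – rather than overall accuracy alone – is what fairness audits look for.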

Another, less dire, example is the story of Microsoft’s doomed Twitter bot Tay. In the article “Considerations of Responsible AI,” Matthew Nolan, Pega’s Senior Director of Product Marketing for Decision Sciences, discusses this failed use case: “When it was first launched, Tay quickly gained 50K followers, and generated more than 100K tweets… but after using machine learning (ML) to study other Twitter users, within 24 hours Tay had turned angry and had to be taken offline.”

“In the case of Microsoft’s Tay, it wasn’t a matter of whether the AI was good or bad, it was simply reflecting the biases that it found. Instead, it was Tay’s lack of empathy that caused the problem – the AI didn’t conform to the standard expectations of society, and the individuals it was interacting with. The problem was, there were no guardrails in place to define the boundaries of what was ‘OK’ by Microsoft’s corporate standards. When you develop any type of customer-engagement AI, this is mission-critical; the AI has to understand not only what its audience needs (what is relevant), but what content is suitable for that audience, in that situation.”
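The guardrails Nolan describes can be as simple as a suitability check that every AI-generated message must pass before it is published. The sketch below is illustrative only: the names are invented, and the keyword check is a toy stand-in for a real moderation model. It is not Microsoft’s or Pega’s implementation.

    # A minimal guardrail: screen each AI-generated draft against a content
    # policy before publishing, and fall back to a safe reply if it fails.
    # BLOCKED_TOPICS and the keyword check are hypothetical placeholders
    # for a real content classifier or moderation API.

    BLOCKED_TOPICS = {"violence", "hate", "harassment"}

    def violates_policy(draft: str) -> bool:
        text = draft.lower()
        return any(topic in text for topic in BLOCKED_TOPICS)

    def guarded_reply(draft: str) -> str:
        if violates_policy(draft):
            return "Sorry, I can't respond to that."  # safe fallback
        return draft

    print(guarded_reply("Happy to help with your order!"))

The design point is that the boundary of what is “OK” lives outside the learned model, so it holds even when the model’s behavior drifts.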

But it’s not all bad. The good news is that many of the top organizations working with AI are committed to doing right and doing better with it. They are creating more diverse data sets and tools to help counter common challenges with AI, like bias. Some even have entire teams dedicated to responsible AI applications. The more organizations that take up this challenge and invest in responsible AI, the better AI will be in the future.


The future of responsible AI

Responsible AI requires both companies and individuals to think about the purpose of their AI. As Peter says:

“AI isn't something done to us. It's not magic – we have control to fix it. We don’t need to be overly optimistic, or overly dystopian – that isn't useful. We don't want to stop good applications, and at the same time, we don't want to turn our head and ignore bias.”

This highlights the need to treat data science as a science, applying the same scientific process that allows us to control, test, and correct AI. But first we need to start with the question: What is the purpose of the AI? What are we solving for? Peter states that there are two things to think about – the first is the purpose, and the second is the potential to do harm or manipulate. AI covers a broad spectrum of technologies – statistical models, business rules, and more – so it’s important to consider which ethical standards are needed.

At Pega, we’ve asked ourselves those questions and enabled our clients to do the same within our products, via tools that help them use AI responsibly. These include:

  • Next Best Action Designer: Helps drive more empathetic customer engagement by encouraging a suitability policy and using customer data and context to determine the right action to take in the moment: service, sales, retention, or even no action at all.
  • T-Switch: Allows organizations to set the appropriate thresholds, per business function and purpose, for AI transparency. Businesses predefine these levels for each AI model using a sliding scale from one (most opaque) to five (most transparent). The transparency scores help guide users to build responsible AI systems using models that both meet their organization’s transparency requirements and deliver exceptional customer experiences (the sketch after this list illustrates the idea).
  • Ethical Bias Check: Helps eliminate biases by simulating AI-driven customer engagement strategies before they go live and flagging the potential for discriminatory offers and messages.
  • A decision management environment that allows trackability and accountability for any number of algorithms, model versions, rules, and strategies.
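As a rough illustration of the transparency-threshold idea behind a control like T-Switch, consider the sketch below. The function names and threshold values are invented for this example and are not Pega’s API; they simply show how a per-function transparency floor can gate which models go live.

    # Toy transparency gate: each business function sets a minimum
    # transparency score (1 = most opaque, 5 = most transparent), and a
    # model may only be deployed for that function if it meets the floor.
    # Names and values are illustrative, not Pega's API.

    TRANSPARENCY_FLOOR = {"marketing_offers": 1, "credit_decisions": 4}

    def may_deploy(function: str, model_transparency: int) -> bool:
        return model_transparency >= TRANSPARENCY_FLOOR[function]

    print(may_deploy("marketing_offers", 2))   # True: opaque models tolerated here
    print(may_deploy("credit_decisions", 2))   # False: regulated use needs explainability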

It makes sense to be fearful and fascinated by AI, but with more and more companies understanding the importance of using AI responsibly and investing in it, the future seems a little less scary. AI will continue to help solve the most pressing business needs, empowering companies and customers alike.


Tags

Product area: Platform

About the author

As a Product Marketing Manager at Pega, Priyanka Raj is known for building go-to-market strategies for new industries and for successfully leading employees through organizational mergers.
