Prevent AI Discrimination Across All Customer Interactions
Despite your best intentions, bias related to factors like age, ethnicity, or gender can unintentionally creep into your analytics and skew the outcomes. The result? Regulatory violations, discriminatory customer engagements, and a loss of public trust.
But with Ethical Bias Check, you can avoid these pitfalls by proactively detecting bias in your next-best-action strategies, then adjusting the offending algorithm or business rule accordingly, ensuring a fairer, more balanced outcome for everyone.
[Narrator] Biases related to factors such as age, ethnicity, gender, and income can unintentionally creep into your next-best-action decision strategies and skew the outcomes. The result? Harmful or discriminatory practices, such as fewer loans, fewer insurance policies, potential regulatory violations, and a loss of public trust. With Pega's Ethical Bias Check, you can now act more ethically during every customer interaction.

You start by defining the fields you want tested for bias and what the detection threshold should be. The threshold may differ per business area. For example, in credit risk the threshold could be set to a lower value, detecting smaller amounts of bias, whereas in marketing the threshold could be higher, allowing more bias. In this example, the sales area allows a relatively large amount of bias, and we want to be warned if any action is above the threshold. Thresholds can only be set by an authorized user.

The ethical bias check can be run as part of simulation testing, a facility that already exists. Select your next-best-action strategy and a representative sample of customers, which will be available in the simulation environment. When the run is finished, an on-screen message indicates whether any bias was detected.

When we open the bias report, we can see for which action and which bias field a bias was detected. In this case, the platinum credit card was offered to more male than female customers, a significantly large shift from the original population distribution. Here, the cause of the bias was the use of the field gender in one of the AI models, but bias could also creep in through a correlated field or via a field used in the strategy.

We hope this feature will be useful for testing your next-best-action strategies in a simulation environment before they are deployed to production.
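The detection step the narrator describes, comparing the distribution of a bias field among the customers who received an action against the original population distribution, and flagging any shift above a configurable threshold, can be sketched in a few lines. The following is a hypothetical illustration of that general idea only, not Pega's actual algorithm; the field names, data shapes, and threshold value are assumptions for the example.

```python
# Hypothetical sketch of threshold-based bias detection (not Pega's
# implementation): compare each group's share among action recipients
# with its share in the overall population, and flag large shifts.

from collections import Counter

def detect_bias(population, recipients, field, threshold):
    """Flag bias when any group's share among action recipients differs
    from its population share by more than `threshold` (absolute)."""
    pop_counts = Counter(c[field] for c in population)
    rec_counts = Counter(c[field] for c in recipients)
    pop_total = sum(pop_counts.values())
    rec_total = sum(rec_counts.values())
    flagged = {}
    for group, n in pop_counts.items():
        pop_share = n / pop_total
        rec_share = rec_counts.get(group, 0) / rec_total if rec_total else 0.0
        shift = rec_share - pop_share
        if abs(shift) > threshold:           # above the configured threshold
            flagged[group] = round(shift, 3)
    return flagged

# Example: a 50/50 male/female population, but the "platinum card"
# action went to 8 male and only 2 female customers -- a 0.3 shift.
population = [{"gender": "M"}] * 50 + [{"gender": "F"}] * 50
recipients = [{"gender": "M"}] * 8 + [{"gender": "F"}] * 2
print(detect_bias(population, recipients, "gender", threshold=0.1))
# {'M': 0.3, 'F': -0.3}
```

A lower threshold, as the narrator suggests for credit risk, flags smaller shifts; a higher one, as in marketing, tolerates more. In practice a statistical test of the shift's significance would replace this simple absolute cutoff.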