Responsible AI requires a T-Switch
AI can deliver huge benefits. Without control, however, it can cause regulatory issues that could lead to unnecessary risk, public relations nightmares, and huge liability.
Pega believes in responsible AI, giving you controls that allow you to manage the deployment of your AI. We call it the Pega T-Switch™: it lets you switch your use of AI on or off based on transparency levels.
Engage confidently with Pega
No other vendor in the market today gives you this level of AI control. Pega empowers you to confidently employ AI models when it makes business sense. When it doesn’t, we help you flag, catch, and prevent the use of opaque AI so you avoid ethical and regulatory issues.
The Pega Customer Decision Hub application, with its AI Studio, enables users to responsibly and safely deploy AI algorithms based on transparency thresholds within their business. These T-Switch settings help companies:
- Mitigate potential risks
- Maintain regulatory compliance
- Responsibly provide differentiated experiences to their customers
Why transparency matters today
Businesses deploy AI to gain better insights into customer needs and provide more personalized marketing, sales, and service. However, not all AI models offer the level of transparency needed to fully understand their predictions and the actions that result. While some opaque AI algorithms may drive powerful performance, their complex logic can’t be fully explained – a tradeoff that becomes more problematic when the model’s application causes unintended actions.
Further raising the stakes, the General Data Protection Regulation (GDPR) mandates that businesses must be able to explain the logic behind AI models using European customer data to make decisions. Otherwise, you risk massive fines: up to 4 percent of global revenues for non-compliance.
With the Pega T-Switch™, you’re in control
As part of the AI-powered Pega Customer Decision Hub, the T-Switch allows organizations to set the appropriate thresholds for AI transparency or opaqueness. Businesses predefine these levels for each AI model using a sliding scale from one (most opaque) to five (most transparent). The transparency scores help guide users to build responsible AI systems using models that both meet their organization’s transparency requirements and deliver exceptional customer experiences.
As a business user, you control the transparency of your AI based on the models you deploy to drive a desired outcome. For example, it’s low-risk to use an opaque deep learning model that classifies marketing images. Conversely, banks under strict regulations for fair lending practices require highly transparent AI models to demonstrate a fair distribution of loan offers.
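The gating logic described above can be sketched in a few lines. This is an illustrative model only – the class, threshold values, and use-case names below are hypothetical, not Pega’s actual implementation – but it shows the idea of predefining a transparency threshold per business purpose and allowing deployment only when a model’s score (1 = most opaque, 5 = most transparent) meets it.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    transparency: int  # 1 = most opaque, 5 = most transparent

# Hypothetical thresholds a business might predefine per use case
THRESHOLDS = {
    "image_classification": 1,  # low risk: opaque models acceptable
    "credit_offers": 4,         # regulated: require high transparency
}

def may_deploy(model: Model, use_case: str) -> bool:
    """Allow deployment only if the model meets the use case's threshold."""
    return model.transparency >= THRESHOLDS[use_case]

deep_net = Model("deep_image_net", transparency=1)
scorecard = Model("loan_scorecard", transparency=5)

print(may_deploy(deep_net, "image_classification"))  # True
print(may_deploy(deep_net, "credit_offers"))         # False
print(may_deploy(scorecard, "credit_offers"))        # True
```

The same opaque model passes for image classification but is blocked for lending decisions, mirroring the two examples above.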
Consumers and regulators are increasingly demanding higher degrees of trust and transparency. The brands that will win are the ones that view responsible AI as a badge of honor and deploy the necessary solutions to earn that badge.
Frequently Asked Questions
What is a responsible approach to AI?
A responsible approach to AI embodies four critical elements: empathy, transparency, fairness, and accountability. Best practices for responsible use include ensuring AI-driven decisions are interpretable and transparent to those who are affected by them. These can only be achieved through a construct that requires a consistent approach to data, governance, and model usage across your organization.
What is responsible AI?
Responsible AI refers to the capabilities, policies, and methodologies that ensure AI systems and decisions are fair, transparent, explainable, robust, and aligned with human values.
Can Pega detect bias in AI models?
Yes. Pega’s Ethical Bias Check ensures there’s no unintentional bias hiding in your next-best-action strategies – whether in your models or your business logic.
Users simply define fields with potential for bias – like age, ethnicity, gender, or income – then simulate the strategies that use them to ensure they’re not skewed unfairly toward or away from specific groups. Pega Ethical Bias Check lets you screen your entire engagement strategy at once, across channels – reducing time, effort, and errors.
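The kind of screen described above can be sketched as follows. This is an assumption-laden illustration, not Pega’s actual algorithm: the data, the "gender" field, and the four-fifths-style tolerance rule are all hypothetical, chosen only to show how selection rates per group can be compared after simulating a strategy.

```python
from collections import defaultdict

def selection_rates(decisions, field):
    """decisions: list of (customer_attributes, was_selected) pairs.
    Returns the fraction of each group that the strategy selected."""
    totals, selected = defaultdict(int), defaultdict(int)
    for attrs, chosen in decisions:
        group = attrs[field]
        totals[group] += 1
        if chosen:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_skew(rates, tolerance=0.8):
    """Flag groups whose selection rate falls below `tolerance` times the
    top group's rate (a four-fifths-style heuristic, hypothetical here)."""
    top = max(rates.values())
    return [g for g, r in rates.items() if r < tolerance * top]

# Simulated decisions for a hypothetical strategy
decisions = [
    ({"gender": "F"}, True), ({"gender": "F"}, True), ({"gender": "F"}, False),
    ({"gender": "M"}, True), ({"gender": "M"}, False), ({"gender": "M"}, False),
]
rates = selection_rates(decisions, "gender")
print(flag_skew(rates))
```

Here one group is selected at twice the rate of the other, so the check flags the disadvantaged group for review.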
What is high-risk AI?
High-risk AI has been defined by a coalition of developed nations to include any artificial intelligence that:
- Provides real-time remote biometric identification in publicly accessible spaces by law enforcement.
- Exploits vulnerabilities of any group of people due to their age or physical or mental disability.
- Enables governments to use general-purpose social credit scoring.
- Uses subliminal techniques to manipulate a person’s behavior in a manner that may cause psychological or physical harm.