Use AI responsibly
Mitigate risks and promote trust
What is responsible AI?
Responsible AI means developing or using artificial intelligence in a way that is ethical, transparent, fair, and accountable – to ensure it’s both safe for society and consistent with human values.
How does responsible AI work?
Responsible AI systems are fair, transparent, empathetic, and robust. For AI to be considered responsible, its decision-making process must be explainable, it must be hardened through real-world exposure, and it must behave in ways that align with human norms.
What are the core principles of responsible AI?
Artificial intelligence must be unbiased and balanced for all groups.
AI-powered decisions must be explainable to a human audience.
Empathy means that the AI adheres to social norms and isn't used in a way that's unethical.
AI should be hardened for the real world by being exposed to a variety of training data, scenarios, inputs, and conditions.
Accountability in AI is driven by organizational culture. Everyone across departments and functional areas must hold themselves and their AI to a high standard.
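The fairness principle above can be made concrete with a simple statistical check. A minimal sketch, assuming a hypothetical log of decisions tagged with a group label: the "four-fifths" disparate-impact ratio compares approval rates across groups and flags possible bias when the lowest rate falls below 80% of the highest.

```python
# A minimal sketch of a demographic-parity check. The decision log, group
# labels, and threshold are hypothetical illustrations, not a real dataset.

def approval_rates(decisions):
    """Return the approval rate per group from (group, approved) pairs."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group approval rate."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical decision log: group label and whether the outcome was favorable.
log = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 50 + [("B", False)] * 50)

ratio = disparate_impact_ratio(log)
print(f"disparate impact ratio: {ratio:.2f}")  # ~0.62, below the 0.8 threshold
if ratio < 0.8:
    print("warning: possible disparate impact across groups")
```

A check like this is only a starting point: passing one statistical test does not make a system fair, but failing it is a clear signal to investigate.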
Frequently Asked Questions about responsible AI
While the terms "ethical AI" and "responsible AI" are related and often used interchangeably, they can have slightly different connotations. In general, both concepts aim to address the ethical considerations surrounding the development and deployment of artificial intelligence, but they focus on different aspects.
While ethical AI primarily concentrates on moral principles and values, responsible AI extends its focus to a broader set of considerations, emphasizing the need for a comprehensive and holistic approach to address the challenges and opportunities associated with AI technologies.
Identifying and reducing AI bias, especially when it's not obvious, requires a combination of careful design, continuous monitoring, and proactive measures. Pega Ethical Bias Check is a great tool that can help you identify fields with bias potential, simulate and test strategies, generate warnings, and validate and resolve biases.
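One way a field can carry non-obvious bias is as a proxy: even when the protected attribute is excluded from a model, another field whose values differ sharply between groups can leak group membership. The generic sketch below (an illustration, not the Pega product; all field names and records are hypothetical) scans fields for this by comparing each field's value distribution across two groups.

```python
# A generic sketch of scanning input fields for "proxy bias". A field whose
# value distribution differs sharply between groups can stand in for the
# group itself. All records and field names here are hypothetical.

from collections import Counter

def tv_distance(records, field, group_field):
    """Total-variation distance between a field's value distributions in
    two groups: 0 means identical, 1 means fully separable by that field."""
    groups = sorted({r[group_field] for r in records})
    assert len(groups) == 2, "this sketch assumes exactly two groups"
    dists = []
    for g in groups:
        vals = [r[field] for r in records if r[group_field] == g]
        counts = Counter(vals)
        n = len(vals)
        dists.append({v: c / n for v, c in counts.items()})
    keys = set(dists[0]) | set(dists[1])
    return 0.5 * sum(abs(dists[0].get(k, 0) - dists[1].get(k, 0)) for k in keys)

def flag_proxy_fields(records, fields, group_field, threshold=0.5):
    """Return the fields whose distribution is strongly group-dependent."""
    return [f for f in fields if tv_distance(records, f, group_field) >= threshold]

# Hypothetical applicant records: postcode correlates with group, plan does not.
records = (
    [{"group": "A", "postcode": "N1", "plan": "basic"}] * 40
    + [{"group": "A", "postcode": "N2", "plan": "plus"}] * 10
    + [{"group": "B", "postcode": "S1", "plan": "basic"}] * 35
    + [{"group": "B", "postcode": "N2", "plan": "plus"}] * 15
)
print(flag_proxy_fields(records, ["postcode", "plan"], "group"))  # ['postcode']
```

Flagging a field is only the detection step; resolving the bias still requires human judgment about whether the field is a legitimate predictor or a proxy that should be removed or mitigated.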