Tools  |  September 6, 2018

PwC’s Responsible AI Toolkit

Toolkit and Responsible AI Framework developed by PwC to help companies address five key dimensions when designing and deploying responsible AI applications:

  • Governance: Governance serves as an end-to-end foundation for all the other dimensions.
  • Ethics and regulation: The core goal is to help organisations develop AI that is not only compliant with applicable regulations, but is also ethical.
  • Interpretability and explainability: Provides an approach and utilities for making AI-driven decisions interpretable and easily explainable, both to those who operate the systems and to those affected by them.
  • Robustness and security: Helps organisations develop AI systems that deliver robust performance and are safe to use, minimising potential negative impacts.
  • Bias and fairness: Addresses the issues of bias and fairness. While no decision can be fair to all parties, organisations can design AI systems to mitigate unwanted bias and reach decisions that are fair under a specific and clearly communicated definition (see the illustrative sketch after this list).
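
As a minimal illustration of what "fair under a specific and clearly communicated definition" can mean in practice, the sketch below checks one such definition, demographic parity, on a set of model decisions. This is a hypothetical example for clarity only; it is not part of PwC's Toolkit, and the function name and data are invented for illustration.

```python
# Hypothetical illustration (not part of PwC's Toolkit): checking one
# specific, clearly communicated fairness definition -- demographic parity --
# on a batch of model decisions.

def demographic_parity_difference(decisions, groups):
    """Absolute difference in favourable-decision rates between two groups.

    decisions: list of 0/1 model outcomes (1 = favourable decision)
    groups:    list of group labels ("A" or "B"), aligned with decisions
    """
    rates = {}
    for label in ("A", "B"):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes) if outcomes else 0.0
    return abs(rates["A"] - rates["B"])


if __name__ == "__main__":
    # Toy data: five decisions for each of two groups.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap = demographic_parity_difference(decisions, groups)
    print(f"Demographic parity gap: {gap:.2f}")  # 0.00 would mean equal rates
```

Other definitions (for example, equalised odds) can be substituted; the point is that the chosen definition is explicit and can be communicated to stakeholders.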

PwC’s Responsible AI Toolkit consists of a flexible and scalable suite of capabilities, covering frameworks and leading practices, assessments, technology, and people. The Responsible AI Toolkit is designed to enable and support the assessment and development of AI across an organisation, tailored to its unique business requirements and level of AI maturity.

The Toolkit enables organisations to build high-quality, transparent, explainable and ethical AI applications that generate trust and inspire confidence among employees, customers, business partners, and other stakeholders.