Beyond the Blackbox

Addressing AI’s ethics problem with explainability

Who are the different stakeholders when building an AI Application?

  • Developers

    xplAInr is a developer’s toolkit for ethical AI. The framework provides guidance for developers to incorporate Transparency by Design into the AI systems they build and operate, with the aim of making AI systems safer and more trustworthy.

    Start here. Although there is no established consensus on the complete set of factors that makes an AI system explainable, this section guides developers to ask the right questions and identify gaps in their implementation.

  • Regulators and Policy Makers

    Technology tends to move faster than law. The xplAInr framework provides a toolkit for policy makers to develop legislation and regulatory mechanisms that keep pace with AI development.

    The right policies could make AI safer and fairer for all. This need is especially acute for high-risk systems, which have the potential to affect important areas such as health, liberty and justice. The xplAInr framework was designed with those systems in mind.

  • Users & Public

    (COMING SOON) The implications of AI affect us all. Transparency and explainability in AI systems tend to be obscure and inaccessible to non-technical people. But it is vital that any system with the potential to influence high-stakes aspects of our lives is readily interrogable by all those who could be affected.

    We plan to expand the xplAInr framework to incorporate all potential stakeholders. This way, we aim to give everyone the agency to control how AI affects their lives.

Why Standards Matter

Anyone creating, managing, assessing or even using an AI system can benefit from awareness of the many technical standards currently published or in development. We are mapping the world’s standards to the xplAInr life cycle framework to help you discover what’s relevant to you. Explore the list of relevant AI Standards.

Why standards?

  • Standards are designed to maintain quality, safety and consistency across technical systems, while supporting innovation and business.

  • Conforming to standards is good for business, society and the planet, while failure to conform can lead to reputational harm, loss of business or even legal action.

  • Many standards relating to AI have been published, with many more under development, produced by the major Standards Development Organizations (SDOs) at global, regional and sector-specific levels. These include a new wave of ethical standards, some of which the xplAInr team are working on.

We have indexed many but are always keen to hear of more. Your feedback is much appreciated.

Responsible creators of smart technology are overwhelmed by the challenge of ensuring their software is transparent, compliant, trustworthy and secure. There is a well-recognized lack of resources to help with this process, and existing content is hard to find.

We want to help, starting with explainability. Explore the xplAInr framework for more.