xplAInr is a new toolkit that aims to provide a single place where makers of AI can find the resources they need to evaluate and explain the ethics behind their products, and publish their work in a way that third parties can understand and assess. xplAInr is currently an early prototype. We are gathering resources, processes and considerations to add to a series of stages covering the full life cycle of an AI-based system, from pre-planning to deployment and on to decommissioning. We are seeking people with experience in this area (tech development, procurement, governance, security, etc.) to suggest additions and changes.

We are also seeking support to develop the concept further, turning the prototype into an interactive toolkit, and we would love to discuss how we can work with you to do this. The lack of explainability in AI systems is at the root of many challenges: traditional software development processes do not work for these systems because of their large state spaces and stochasticity.

Why xplAInr?

  • While there is growing focus on ‘XAI’ or ‘explainable AI’, our system operates on the principle that explainability is much more than a technological process. We provide the “how” rather than a prescriptive “what”: our framework gives stakeholders a common structure in which to define and quantify their own explainability benchmarks, instead of describing what those benchmarks should be.

What xplAInr.ai provides

  • xplAInr.ai seeks to create a consolidated resource that maps the common stages of technology builds. By focusing on 22 elements of ML-based products, services and tools, our System Life-Cycle ontology creates an interoperable framework for multiple stakeholders (see the illustrative sketch below).
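
As a rough illustration of the kind of structure we have in mind, the sketch below shows one hypothetical way a life-cycle ontology could be represented in code. The stage names follow the life cycle described above (pre-planning through decommission); the Stage class and the element names are illustrative assumptions only, not the actual 22 elements defined by xplAInr.ai.

    # Illustrative sketch only: a minimal representation of a system life-cycle
    # ontology. Element names are hypothetical placeholders.
    from dataclasses import dataclass, field

    @dataclass
    class Stage:
        name: str                                            # life-cycle stage, e.g. "pre-planning"
        elements: list[str] = field(default_factory=list)    # explainability elements to address

    # Hypothetical, partial life cycle for illustration.
    life_cycle = [
        Stage("pre-planning", ["problem framing", "stakeholder mapping"]),
        Stage("development", ["data provenance", "model documentation"]),
        Stage("deployment", ["monitoring plan", "user-facing explanations"]),
        Stage("decommission", ["data retention review", "model retirement notice"]),
    ]

    # A stakeholder could walk the stages and record their own explainability
    # benchmarks against each element.
    for stage in life_cycle:
        for element in stage.elements:
            print(f"{stage.name}: {element}")

In this hypothetical layout, each stage carries its own checklist of elements, so different stakeholders (developers, procurers, auditors) can attach their own benchmarks to the same shared structure.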

How to use xplAInr.ai?

  • xplAInr.ai can be seen as both an ontology and a taxonomy that maps various build phases across the life-cycle of systems development. 

  • As a Creative Commons-licensed explainability toolkit, xplAInr.ai gives SDOs a resource for accessing existing literature, tools, common methodologies, definitions, standards, common language, etc., in support of policy makers, standards-writing activity and broader standards development strategy. While not all the resources listed on xplAInr.ai are themselves free-access, open-source or non-commercially licensed, our content lists provide an overview of material that may be interesting and/or applicable to system designers.

Our Toolkit

Our Team

This System Life Cycle framework and Toolkit was led jointly by Mathana, Karen Bennet and Ben Bland, working with many IEEE working group members who provided feedback and direction. We would like to thank all those who have contributed and look forward to continued support.

xplAInr.ai has its roots in ‘affective-state computing’. Our three project founders have worked on issues related to emotion recognition and detection AI, and are co-authors of the soon-to-be-released IEEE P7014 standard (Emulated Empathy in Autonomous and Intelligent Systems Working Group), a standard for ethical considerations in emulated empathy in autonomous and intelligent systems.

Read more about how our work is informed by the IEEE’s technical standards development process:

  1. 5 Issues at the Heart of Empathic AI 

  2. IEEE BeyondStandards

  3. IEEE P7014™ is Creating a Standard for the Ethics of Systems that Interact with Your Emotions

  4. Linux Foundation SBOM for AI Applications

Funding

This project has been funded in part through an IEEE funding program (the IEEE TAB CoS Discretionary Fund) and a European Commission fund (EU StandICT). We would like to thank both groups for providing the seed funding to kick-start our work.

Contact Us

We can be reached at:

We welcome anyone to join us.