AI and Equality

Methodology

This methodology was developed by Women at the Table in collaboration with OHCHR and EPFL.

Our methodology uses a Human Rights-Based Approach to investigate the questions to be explored at each phase of the AI life cycle, focusing on fairness and on debiasing AI.

Human Rights-Based Approach

This section introduces basic human rights concepts and a human rights-based approach to machine learning, focused on three core principles.

The AI Lifecycle

Using an essential-questions framework, we investigate the questions to be explored at each phase of the AI life cycle.

It is important to note that ethics are both situational and crucially important.

There are more than 80 guidelines and recommendations on ethics and responsibility from universities, civil society organizations, research institutes, companies, governments, and others. Regardless of their individual strengths, the sheer number and variety of guidelines encourage an à la carte application of ethical and responsibility principles, diminishing the ethics conversation. At the same time, many aspects of these guidelines are highly abstract and have been criticized, in both academia and industry, for not being directly actionable.

Human rights standards are built on an agreed body of international and national law, providing a common starting point among different actors, and making the conversation and objectives more concrete. As a result, we have chosen to center this course around a Human Rights-Based Approach to AI development.

The ultimate focus of human rights, guiding impact on humans toward human flourishing, applies equally to AI systems and their development. In other words:

“At its best, the digital revolution will empower, inform, connect, and save lives. At its worst it will disempower, misinform, disconnect, and cost lives. Human Rights will make all the difference to that equation.”

Michelle Bachelet, Former UN High Commissioner for Human Rights, 2019.

Engaging in critical analysis throughout and after AI development helps creators consider best practices from the outset, including improved engagement with affected communities. A Human Rights-Based Approach is likely to result in more sustainable solutions that have a positive impact on the communities they are designed to serve.

This approach is anchored in reflective questions along the AI life cycle, using the development process to move from identifying a societal problem to building the technical solution that addresses it. In the Toolbox we distinguish six key stages of the life cycle:

  1. Objective + Team Composition
  2. Defining System Requirements
  3. Data Discovery
  4. Selecting and Developing a Model
  5. Testing and Interpreting Outcome
  6. Deployment & Post-Deployment Monitoring


This life cycle is not strictly linear, but rather interwoven and cyclical, encouraging a return to previous steps as the model evolves. We use the metaphor of a thread looping back repeatedly to emphasize the importance of reflecting, revisiting, and refining as we learn more about a system’s socio-technical context.

The stages of the AI life cycle offer useful scaffolding for asking reflective questions at each step. It is essential to integrate human rights considerations throughout, rather than as an add-on after the system has been developed.

This systematic approach to the AI life cycle, grounded in a Human Rights-Based Approach, keeps the development process focused on creating equitable and beneficial outcomes for all stakeholders involved.

We are creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.