<AI & Equality>
A Human Rights Toolbox

How does a human rights-based approach fit with AI and its creation? How can we achieve fairer machine learning models?

Engage directly with data and human rights concepts to understand the linkages and impacts of algorithmic creation, so that it may better reflect human rights values.


A human rights-based approach to machine learning algorithms

Why and where can algorithms produce unequal outcomes? Why and where can algorithms be gender-biased?
How can a human rights-based approach be applied to computer algorithms that engage with, reason about, and make decisions on people?
Our methodology combines human rights concepts with a hands-on data science approach.


A real-life workshop taught jointly by human rights and legal experts with computer science and machine learning faculty, using the <AI & Equality> Human Rights Toolbox

Stand-alone Jupyter notebook

A Jupyter notebook with code and exercises for applying the concepts learned in practice


An interdisciplinary community in conversation across different sectors, disciplines, and universities


Additional curated material for teaching and learning about human rights and fairness in artificial intelligence.


Designed by OHCHR and EPFL, the workshop includes a Jupyter notebook and a Human Rights module, and shows how human rights interplay with decisions made at various points of the data and model lifecycle. Aimed at computer and data science students, it teaches participants to:


  • Explain a human rights-based approach to AI.
  • Identify the relevance of different biases, and the importance of intersectionality, gender equality, and bias, to computer science and engineering and to institutional objectives.
  • Apply tools and techniques to mitigate bias in AI, knowing how and when to use them.
  • Evaluate methods to integrate non-discrimination into design, planning and implementation of AI projects.

Workshop structure

The workshop consists of three parts: I. a Human Rights Module, II. an applied research conversation, and III. a practical coding toolbox.

Human Rights Module
Introduces basic human rights concepts and a human rights-based approach to machine learning.
Applied Research
Researchers (PhD students, postdocs, faculty) present their work on how human rights fit with AI.
Practical Toolbox
A step-by-step case study showing how to apply a human rights-based approach in practice (debiasing data and algorithms).

Stand-alone Jupyter notebook

Experiment with data to see how different mathematical and data concepts of fairness interrelate. Begin with a critical-analysis checklist of the data process, then apply concepts from the debiasing literature in hands-on exercises.

  • Introduction to fairness
    What is fairness? Fair to whom? Mathematical definitions of fairness and their limitations.
  • Build a baseline model
    Why was the dataset created? Who created it? Who is in the data and who isn't?
    Build a simple model to see how it performs under different fairness metrics (see the first sketch after this list).
  • Pre-processing (Data)
    Where can we find bias in the data? What types of data biases exist? How can we mitigate them? (See the reweighing sketch below.)
  • In-processing (Model)
    How can bias be introduced through design decisions made when creating the algorithm? (See the penalized-training sketch below.)
  • Post-processing (Predictions)
    When we use the predictions, what assumptions are we making? (See the thresholding sketch below.)
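To make the metrics concrete, here is a minimal sketch of a baseline model evaluated against two common fairness metrics, demographic parity and equalized odds. The synthetic data, the scikit-learn model, and helper names like group_rate are illustrative assumptions, not the notebook's actual code.

```python
# Minimal sketch (illustrative, not the notebook's code): train a baseline
# classifier and measure two group-fairness metrics.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: three features, a binary protected attribute g, a label y.
n = 2000
g = rng.integers(0, 2, n)                       # protected group (0 or 1)
X = rng.normal(size=(n, 3)) + 0.5 * g[:, None]  # features correlated with g
y = (X.sum(axis=1) + rng.normal(size=n) > 0.5).astype(int)

model = LogisticRegression().fit(np.c_[X, g], y)
pred = model.predict(np.c_[X, g])

# Demographic parity: gap in positive-prediction rates between groups.
dp_gap = abs(pred[g == 0].mean() - pred[g == 1].mean())

# Equalized odds: gaps in true-positive and false-positive rates.
def group_rate(group, label):
    mask = (g == group) & (y == label)
    return pred[mask].mean()

tpr_gap = abs(group_rate(0, 1) - group_rate(1, 1))
fpr_gap = abs(group_rate(0, 0) - group_rate(1, 0))
print(f"demographic parity gap: {dp_gap:.3f}")
print(f"equalized odds gaps: TPR {tpr_gap:.3f}, FPR {fpr_gap:.3f}")
```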
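At the pre-processing stage, one standard technique from the debiasing literature is reweighing (Kamiran & Calders), which weights each (group, label) combination so that the protected attribute and the label look statistically independent. A sketch, assuming arrays g and y like those above:

```python
# Sketch of reweighing: weight each (group, label) cell by
# (expected count under independence) / (observed count).
import numpy as np

def reweighing_weights(g, y):
    g, y = np.asarray(g), np.asarray(y)
    w = np.empty(len(y))
    for gv in np.unique(g):
        for yv in np.unique(y):
            mask = (g == gv) & (y == yv)
            if mask.any():
                expected = (g == gv).mean() * (y == yv).mean()
                w[mask] = expected / mask.mean()
    return w

# Most scikit-learn estimators accept these weights via sample_weight, e.g.
# LogisticRegression().fit(X, y, sample_weight=reweighing_weights(g, y))
```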
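In-processing methods change the training objective itself. As one illustration (an assumed approach, not necessarily the one used in the notebook), a plain-numpy logistic regression can be trained with a penalty on the gap between the groups' average predicted scores:

```python
# Sketch of in-processing: logistic regression whose loss adds
# lam * (mean score in group 1 - mean score in group 0)^2.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_logreg(X, y, g, lam=1.0, lr=0.1, steps=2000):
    X1 = np.c_[X, np.ones(len(X))]        # append an intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = sigmoid(X1 @ w)
        grad = X1.T @ (p - y) / len(y)    # gradient of the log-loss
        # gradient of the squared score gap between the two groups
        gap = p[g == 1].mean() - p[g == 0].mean()
        dp = p * (1 - p)                  # derivative of the sigmoid
        dgap = (X1[g == 1] * dp[g == 1, None]).mean(axis=0) \
             - (X1[g == 0] * dp[g == 0, None]).mean(axis=0)
        w -= lr * (grad + lam * 2 * gap * dgap)
    return w

# A larger lam trades accuracy for a smaller score gap between groups.
```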
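Post-processing leaves the model alone and adjusts its outputs. A minimal sketch of one assumed strategy: per-group decision thresholds chosen so that each group is selected at the same rate.

```python
# Sketch of post-processing: per-group thresholds equalizing selection rates.
import numpy as np

def group_thresholds(scores, g, target_rate=0.3):
    """One threshold per group so each group's selection rate ~= target_rate."""
    return {gv: np.quantile(scores[g == gv], 1 - target_rate)
            for gv in np.unique(g)}

def apply_thresholds(scores, g, thresholds):
    return np.array([s >= thresholds[gv] for s, gv in zip(scores, g)], dtype=int)

# e.g. with the baseline above:
# scores = model.predict_proba(np.c_[X, g])[:, 1]
# fair_pred = apply_thresholds(scores, g, group_thresholds(scores, g))
```

Equalizing selection rates is itself an assumption about what fairness requires, which is exactly the kind of choice the notebook asks students to interrogate.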

Get the Jupyter notebook


Human Rights Module

Human rights and their principles. Equality and non-discrimination. A human rights-based approach. Legal resources.

Jupyter notebook

A Jupyter notebook with code and exercises for applying the concepts learned in practice

The social impact of development choices: Tinder use case

Explore the impact of inequalities produced by development choices through a practical case study: the dating app Tinder.


Terminology

A dictionary of the terms used.

Contact Us

Please leave us a message using the form below and we’ll get back to you as soon as possible.
