Our Methodology

Our methodology combines human rights concepts with a hands-on data science approach.


Designed by Women at The Table and EPFL and delivered in collaboration with OHCHR, the workshop includes a Human Rights module and a Jupyter notebook filled with code that shows how human rights interplay with decisions made at various points of the data and model lifecycle. The workshop is aimed at computer and data science students.


Learning objectives:

  • Explain a human rights-based approach to AI.
  • Identify the relevance of different biases, and the importance of intersectionality and gender equality, to computer science and engineering / institutional objectives.
  • Apply tools and techniques to mitigate bias in AI, knowing how and when to use them.
  • Evaluate methods to integrate non-discrimination into the design, planning and implementation of AI projects.

Workshop structure

The workshop consists of two parts:
Ia. Human Rights Module and Ib. Applied Research conversation,
II. <AI & Equality> Coding Toolbox.

Ia. Human Rights Module
Introducing basic human rights concepts and a human rights-based approach to machine learning.
Ib. Applied Research conversation
Research Representatives (PhD students, postdocs, faculty) present their work on how human rights intersect with AI.
II. <AI & Equality> Coding Toolbox
A step-by-step case study showing how to apply a human rights-based approach in practice (debiasing data and algorithms).

Human Rights Module

Understand key concepts of international human rights law that are relevant to the design of algorithms, with a focus on the principles of equality and non-discrimination.

Human Rights Principles

Human rights are universal, indivisible, interdependent and interrelated. They should be guaranteed to everyone, everywhere in the world, and are all related to each other. Human rights are codified in international and national law, and further elaborated by other standards. This allows a normative approach, stronger accountability, and the provision of remedies in case of human rights violations.

Human Rights Principles:

  • Equality and Non-discrimination All individuals are equal as human beings and by virtue of the inherent dignity of each human person.
  • Participation and Inclusion Every person and all peoples are entitled to active, free and meaningful participation in, contribution to, and enjoyment of civil, economic, social, cultural and political development in which human rights and fundamental freedoms can be realized.
  • Accountability and Rule of Law States and other duty-bearers are answerable for the observance of human rights.

Human Rights-Based Approach

Ask key questions relevant to human rights principles at each stage of production!

  • Comply with human rights law.
  • Ensure goals contribute to the realisation of human rights.
  • Ensure processes are guided by human rights principles.
  • Empower both rights-holders and duty-bearers.

Practical Session

Experiment with data to see how different mathematical and data concepts of fairness interrelate. Begin a critical analysis checklist of the data process and apply some of the concepts and debiasing literature to a hands-on exercise.

  • Introduction to fairness
    What is fairness? Fair to whom? Mathematical definitions of fairness and their limitations.
  • Build a Baseline model
    Why was the dataset created? Who created it? Who is in the data and who isn't?
    Build a simple model to see how it performs with different fairness metrics.
  • Pre-processing (Data)
    Where can we find bias in the data? What types of data biases exist? How can we mitigate them?
  • In-processing (Model)
    How can bias be introduced by design decisions made when creating the algorithm?
  • Post-processing (Predictions)
    When we use the predictions, what assumptions are we making?
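As a flavour of the baseline step above, the following sketch builds a trivial threshold "model" on synthetic data and scores it against two common fairness metrics: the demographic parity difference (gap in selection rates) and the equal opportunity difference (gap in true positive rates). The data, names and numbers are illustrative assumptions, not the workshop's actual notebook.

```python
import random

random.seed(0)

# Hypothetical synthetic population: each person has a score, a protected
# group label (0 or 1) and a true outcome. Group 1 scores are shifted down
# to simulate a biased feature.
def make_person(group):
    score = random.gauss(0.6 if group == 0 else 0.45, 0.15)
    outcome = 1 if random.random() < score else 0
    return {"group": group, "score": score, "outcome": outcome}

people = [make_person(g) for g in (0, 1) for _ in range(500)]

# Baseline "model": predict positive if the score exceeds a fixed threshold.
def predict(person, threshold=0.5):
    return 1 if person["score"] >= threshold else 0

def selection_rate(people, group):
    members = [p for p in people if p["group"] == group]
    return sum(predict(p) for p in members) / len(members)

def true_positive_rate(people, group):
    positives = [p for p in people if p["group"] == group and p["outcome"] == 1]
    return sum(predict(p) for p in positives) / len(positives)

# Demographic parity difference: gap in selection rates between groups.
dp_gap = selection_rate(people, 0) - selection_rate(people, 1)
# Equal opportunity difference: gap in true positive rates between groups.
eo_gap = true_positive_rate(people, 0) - true_positive_rate(people, 1)

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
```

Because the biased feature shifts one group's scores down, the fixed threshold selects that group less often, which the demographic parity gap makes visible even before any debiasing is attempted.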

Introduction to fairness

Fairness is not a technical or statistical concept, and there can never be a tool or piece of software that fully ‘de-biases’ your data or makes your model ‘fair’.

Fairness is complex:

  • It is not a technical but an ethical concept.
  • It is contextual; there is no one-size-fits-all approach.
  • There are no set answers, and cost-benefit decisions often have to be made.
  • It is a process and there is no single fairness checkpoint.
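The contextual, no-set-answers point can be made concrete with a small arithmetic sketch (the numbers are assumed for illustration, not from the workshop). When two groups have different base rates of true positives, a classifier that satisfies demographic parity (equal selection rates) cannot also achieve equal true positive rates, even with a perfect ranking:

```python
# Assumed toy numbers: group A has a 60% base rate of true positives,
# group B only 30%. The classifier selects 50% of each group, so it
# satisfies demographic parity by construction.
base_rate_a, base_rate_b = 0.6, 0.3
selection_rate = 0.5

# Best-achievable true positive rate under that selection budget:
# at most `selection_rate` of the group can be selected, so at most
# selection_rate / base_rate of the true positives can be caught.
best_tpr_a = min(selection_rate / base_rate_a, 1.0)  # slots < positives
best_tpr_b = min(selection_rate / base_rate_b, 1.0)  # slots cover all positives

print(f"best TPR, group A: {best_tpr_a:.2f}")
print(f"best TPR, group B: {best_tpr_b:.2f}")
```

Group A's best achievable true positive rate is capped at about 0.83 while group B's can reach 1.0, so equalising one metric forces a gap in the other: choosing which definition matters is a contextual, ethical decision, not a software setting.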