<AI & Equality>
A Human Rights Toolbox

How does a human rights-based approach fit with AI and its creation? How can we build fairer machine learning models?

Engage directly with data and human rights concepts to understand the linkages between them and the impacts of algorithmic design, so that what we build better reflects human rights values.

A human rights-based approach to machine learning algorithms

Why and where can algorithms produce unequal outcomes? Why and where can algorithms be gender-biased?
How can a human rights-based approach be applied to computer algorithms that engage with, reason about, and make decisions about people?
Our methodology combines human rights concepts with a hands-on data science approach.

Workshop

Designed in collaboration with OHCHR and EPFL, the workshop includes a Human Rights module and a Jupyter notebook, filled with code, that connects human rights to decisions made at various points of the data and model lifecycle. The workshop is aimed at computer and data science students.

Objectives:

  • Explain a human rights-based approach to AI.
  • Identify the relevance of different biases, and the importance of intersectionality, gender equality, and bias, to computer science, engineering, and institutional objectives.
  • Apply tools and techniques to mitigate bias in AI, knowing how and when to use them.
  • Evaluate methods to integrate non-discrimination into design, planning and implementation of AI projects.

Workshop structure

The workshop consists of two parts:
I. a Human Rights module and an applied research conversation,
II. an applied coding toolbox.

Human Rights Module
Introducing basic human rights concepts and a human rights-based approach to machine learning.
Applied Research
Research representatives (PhD students, postdocs, faculty) present their research and current work on how human rights fit with AI.
Practical Toolbox
A step-by-step case study showing how to apply a human rights-based approach in practice (debiasing data and algorithms).

Stand-alone Jupyter notebook

Experiment with data to see how different mathematical and data-related concepts of fairness interrelate. Begin with a critical analysis checklist of the data process, then apply some of the concepts and debiasing literature to a hands-on exercise.

  • Introduction to fairness
    What is fairness? Fair to whom? Mathematical definitions of fairness and their limitations.
  • Build a Baseline model
    Why was the dataset created? Who created it? Who is in the data and who isn't?
    Build a simple model to see how it performs against different fairness metrics (see the first sketch after this list).
  • Pre-processing (Data)
    Where can we find bias in the data? What types of data biases exist? How can we mitigate them?
  • In-processing (Model)
    How can bias be introduced through the design decisions made when creating the algorithm?
  • Post-processing (Predictions)
    When we use the predictions, what assumptions are we making? (See the second sketch after this list.)
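
To make the fairness metrics concrete, here is a minimal sketch in Python. It is not the notebook's actual code: it trains a simple logistic regression baseline on synthetic data with a hypothetical binary attribute ("sex") and compares the selection rate (demographic parity) and true positive rate (equal opportunity) across the two groups.

    # Minimal, illustrative sketch (not the toolbox's actual notebook code):
    # train a baseline classifier on synthetic data and compare two common
    # group fairness metrics across a hypothetical binary attribute "sex".
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    sex = rng.integers(0, 2, n)                       # hypothetical groups 0 / 1
    x = rng.normal(size=(n, 3)) + sex[:, None] * 0.5  # features correlated with group
    y = (x.sum(axis=1) + rng.normal(scale=1.0, size=n) > 0.5).astype(int)

    x_tr, x_te, y_tr, y_te, s_tr, s_te = train_test_split(
        x, y, sex, test_size=0.3, random_state=0
    )

    model = LogisticRegression().fit(x_tr, y_tr)
    pred = model.predict(x_te)

    for group in (0, 1):
        mask = s_te == group
        selection_rate = pred[mask].mean()              # P(ŷ=1 | group)
        tpr = pred[mask & (y_te == 1)].mean()           # P(ŷ=1 | y=1, group)
        print(f"group={group}: selection rate={selection_rate:.2f}, TPR={tpr:.2f}")

Equal selection rates and equal true positive rates are two different, and sometimes incompatible, notions of fairness; the first module of the notebook discusses these definitions and their limitations.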
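And a minimal post-processing sketch, again illustrative rather than the toolbox's actual implementation: instead of applying one global 0.5 threshold to every predicted score, it picks a per-group cutoff so that the two hypothetical groups end up with roughly equal selection rates.

    # Illustrative post-processing sketch on synthetic scores: choose a
    # per-group cutoff so the selection rates of two hypothetical groups
    # roughly match the overall rate, instead of one global 0.5 threshold.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4000
    group = rng.integers(0, 2, n)                      # hypothetical binary attribute
    # scores skewed so group 1 receives systematically higher predicted scores
    scores = np.clip(rng.normal(0.45 + 0.1 * group, 0.15), 0, 1)

    target_rate = (scores >= 0.5).mean()               # overall selection rate to match

    cutoffs = {}
    for g in (0, 1):
        g_scores = scores[group == g]
        # the (1 - target_rate) quantile keeps ~target_rate of this group above it
        cutoffs[g] = np.quantile(g_scores, 1 - target_rate)

    thresholds = np.where(group == 1, cutoffs[1], cutoffs[0])
    adjusted = scores >= thresholds

    for g in (0, 1):
        before = (scores[group == g] >= 0.5).mean()
        after = adjusted[group == g].mean()
        print(f"group={g}: selection rate {before:.2f} -> {after:.2f}")

Shifting thresholds per group equalizes one metric while potentially changing others (calibration, error rates within each group); examining exactly this kind of assumption is what the post-processing step asks of you.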

Access and play with the Jupyter notebook

Resources

Human Rights module

Human rights and their principles. Equality and non-discrimination. A human rights-based approach. Legal resources.

Jupyter notebook

A Jupyter notebook with code and exercises for applying the concepts learned in practice.

The social impact of development choices: Tinder use case

Explore the inequalities produced by development choices through a practical case study: the dating app Tinder.

Terminology

A dictionary of the terms used.

Interdisciplinary Community

An interdisciplinary community in conversation across different sectors, disciplines, and universities.

  • Open to all disciplines
    From computer and data scientists to humanities scholars, social scientists, and law students
  • Open to all regions
    Engage and participate in discussion with students from different regions and universities
  • Open to all levels

Contact Us

Contact us as we build our interdisciplinary community!
Please leave us a message using the form below and we’ll get back to you as soon as possible.

Email

sofia.kypraiou[at]epfl.ch
caitlin[at]womenatthetable.net
