Workshops

The <AI & Equality> Workshop was designed by Women at The Table and EPFL, in consultation with OHCHR. It includes a Human Rights module as well as a Jupyter Notebook filled with code that illustrates how human rights interplay with decisions made at different points of the data and model lifecycle.

Objectives:

→ Explain a human rights-based approach to Artificial Intelligence.

→ Identify the relevance of different types of bias, and the importance of intersectionality and gender equality, for computer scientists, engineers and policy makers.

→ Apply tools and techniques to mitigate bias in AI, understanding how and when to use them.

→ Evaluate methods for integrating non-discrimination into the design, planning and implementation of AI projects by applying a critical analysis framework.

What you'll learn

The workshop consists of two parts and is given in 2.5–3 hour sessions.

Part 1. Human Rights-Based Approach

1A. Human Rights Module

The Human Rights Module introduces basic human rights concepts and a human rights-based approach to machine learning, focused on three core principles.

Using a human rights-based approach, we investigate questions to be explored at each phase of the data lifecycle.

1B. Applied Research Conversation at the university or organization hosting the Workshop

Research Representatives (faculty, PhD students, postdocs, researchers) present work on how their applied research intersects with human rights and AI.

Part 2. The < AI & Equality > Coding Toolbox

A case study with a focus on debiasing data and algorithms is used to apply a human rights-based approach in practice.

We introduce the idea that fairness is not a purely technical or statistical concept, and that no tool or piece of software can fully ‘de-bias’ your data or make your model ‘fair’.

In addition to the workshop, we offer post-workshop follow-up sessions, with #Coding Office Hours and #Jupyter Notebook Q&A support via our Circle community.

Walking through the data science lifecycle, we investigate fairness at every step:

Introduction to fairness: What is fairness? What are mathematical and social definitions of fairness, and what are their limitations?

Building a baseline model: Why was the dataset created? Who created it? Who is in the data and who isn’t? Build a simple model to see how it performs with different fairness metrics (a short code sketch follows this list).

Pre-processing (Data): Where can we find bias in the data? What types of data biases exist? What can we do to mitigate them?

In-processing (Model): How can bias be introduced inadvertently in design decisions when creating the algorithm?

Post-processing (Predictions): When we use the predictions, what assumptions are we making?
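
To give a flavour of the baseline-model step, here is a minimal sketch, assuming a synthetic dataset, a binary protected attribute called `group`, and scikit-learn: it trains a plain logistic regression and compares two common group fairness metrics (the selection rate behind demographic parity, and the true positive rate behind equal opportunity) across the two groups. It is an illustration only, not the workshop notebook itself; the data, variable names and helper functions are assumptions made for this example.

```python
# A minimal, self-contained sketch (not the workshop notebook itself).
# The synthetic data, the binary `group` attribute and the metric helpers
# below are illustrative assumptions made for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000

# Synthetic data: a binary protected attribute and two ordinary features,
# one of which is correlated with the group.
group = rng.integers(0, 2, size=n)
x1 = rng.normal(size=n)
x2 = rng.normal(size=n) + 0.5 * group
y = (x1 + x2 + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([x1, x2])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

# Baseline model: a plain logistic regression, as in the step above.
model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

def selection_rate(pred, mask):
    """Share of positive predictions within a group (demographic parity)."""
    return pred[mask].mean()

def true_positive_rate(y_true, pred, mask):
    """Recall within a group (one component of equalized odds)."""
    pos = mask & (y_true == 1)
    return pred[pos].mean() if pos.any() else float("nan")

for g in (0, 1):
    mask = g_te == g
    print(f"group {g}: selection rate = {selection_rate(pred, mask):.3f}, "
          f"TPR = {true_positive_rate(y_te, pred, mask):.3f}")

# Gaps between groups under two common (and, in general, mutually
# incompatible) fairness metrics.
dp_gap = abs(selection_rate(pred, g_te == 0) - selection_rate(pred, g_te == 1))
tpr_gap = abs(true_positive_rate(y_te, pred, g_te == 0)
              - true_positive_rate(y_te, pred, g_te == 1))
print(f"demographic parity gap: {dp_gap:.3f}")
print(f"equal opportunity (TPR) gap: {tpr_gap:.3f}")
```

Comparing the two gaps side by side illustrates a key point of the workshop: different metrics encode different, and sometimes mutually incompatible, notions of fairness, so what counts as ‘fair’ always depends on which definition you choose.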

Join our community

We are committed to advancing human rights-based approaches in AI, and we encourage anyone interested in learning more from a global perspective to explore and contribute to our community!