<AI & Equality>
A Human Rights Toolbox

How does a human rights-based approach fit with AI and its creation? How can we build fairer machine learning models?

<AI & Equality> provides a basic human rights workshop blended with tools to see, directly and immediately, how human rights principles can be applied and thought about analytically in code.

Sign up

Problem

Real world inequalities are reproduced within algorithms and flow back into the real world.

Machine learning algorithms are being widely used in many applications, and they have a direct impact on our lives. When designed and engineered to serve humans, they contribute to significant advances in the economy and the public good. However, these algorithms, as they are created and designed by humans, are prone to bias.
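The workshop pairs these ideas with hands-on code. As one illustration (a minimal sketch with purely hypothetical data, not material from the workshop itself), reproduced inequality can be made measurable with a simple fairness metric such as the demographic parity difference: the gap in positive-outcome rates between two groups.

```python
# Minimal sketch: quantifying bias in model decisions with the
# demographic parity difference. All data here is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions (1 = approved, 0 = denied) per group.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 3 of 8 approved

# A gap of 0 would mean both groups receive positive outcomes
# at the same rate; larger gaps indicate disparate treatment.
gap = selection_rate(group_a) - selection_rate(group_b)
print(f"Demographic parity difference: {gap:.3f}")
```

Demographic parity is only one of several competing fairness criteria; which one is appropriate depends on the context and the harms at stake, which is exactly where a human rights framing helps.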

The new generation of researchers tasked with creating algorithmic systems has a solid technical background but often lacks the human rights knowledge or frameworks needed to apply that technical expertise as AI for social good. At university, new machine learning engineers and data scientists are taught that data is true and objective.

Universities have a critical role in meeting these ethical concerns. Integrating human values into computing education can prepare the next generation of scientists and engineers to work for the public good, creating technology in line with human values rather than in conflict with them.

Why Human Rights?

Human rights are rights we have because we exist as human beings. These universal rights are inherent to us all, regardless of nationality, sex, national or ethnic origin or any other status.

Fairness is a sociocultural concept, derived from ethics and political philosophy, that refers to plural conceptions of justice between individuals.
Human rights on the other hand:

  • are often better defined and measurable
  • are mostly defined under international or national law
  • provide an ethical lens that transcends national and cultural borders
  • convert voluntary promises of ethical behaviour into compulsory requirements under established legislation
  • put people at the centre of decision-making, so that unintentional harm can be assessed and addressed

<AI & Equality> methodology

Our methodology includes a workshop consisting of a Human Rights module and a coding module, together with an outreach and community plan that incorporates human rights concepts into data science.

  • Goal
    Help an international generation of university students understand the scientist’s unique potential for social impact in the real world, bridging science and human rights policy to foster systemic resilience and more equal, just, robust democracies.
  • Team
    A joint project between EPFL (École Polytechnique Fédérale de Lausanne) and Women at the table, in collaboration with the Office of the United Nations High Commissioner for Human Rights
  • Audience
    Computer/data science students and early career data scientists

More information about the Workshop

Contact Us

Contact us as we build our interdisciplinary community!
Please leave us a message using the form below and we’ll get back to you as soon as possible.

Email

sofia[at]womenatthetable.net
