AI and Equality


USAWA AI: Building Ethical AI for Historical Justice with Marie Rodet | AI & Equality Open Studio

USAWA AI is an interactive, educational experience built around an AI avatar that draws on carefully mediated testimony from West African survivors of domestic servitude. Rather than recreating historical scenes or claiming comprehensive explanations, the AI is designed to speak partially and cautiously, reflecting the ethical limits of testimony and the sensitivity of histories of slavery.

What People in Rural Villages in Togo Can Teach Us About ML/AI and Privacy with Zoe Kahn | AI & Equality Pub-Talk

How do people living in rural villages in Togo feel about the use of emerging technologies in humanitarian aid? This work reports on the privacy concerns of rural Togolese residents regarding machine learning models trained on phone data to allocate cash assistance to people living in poverty. It highlights an innovative method, sociotechnical visuals, for explaining complex technical concepts so that people with limited literacy, formal education, and familiarity with digital technology could provide meaningful input.

Beyond Automation: Protecting Student Voice in the Age of AI with Daire Maria Ni Uanachain | AI & Equality Open Studio

This Open Studio shares an in-progress Feminist AI–informed learning design framework that examines how power, authorship, and agency shift when Generative AI is introduced into secondary education. Drawing on classroom pilots, design research, and assessment tools such as a Student-Author Voice rubric, the work explores how LLMs can be integrated through co-creation and gradual immersion, rather than extraction or automation. The session foregrounds methodological tensions, ethical trade-offs, and policy-relevant questions emerging from real educational contexts.

Public Interest AI: Procurement as Democratic Infrastructure with Emma Kallina | AI & Equality Open Studio

Public sector AI has the potential to harm citizens, with risks increasing as its use expands. Recent work positions public procurement as a way to shape public sector AI in line with public interests, using the state’s purchasing power to influence which AI systems are procured and under what conditions. In this presentation, I explore how this potential could be realised in practice by drawing on semi-structured interviews with UK/EU buyers, providers, and procurement experts.

Funding AI for Good: A Call for Meaningful Engagement with Hongjin Lin | AI & Equality Pub-Talk

Artificial Intelligence for Social Good (AI4SG) is a growing area that explores AI's potential to address social issues, such as public health. Yet prior work has shown limited evidence of its tangible benefits for intended communities, and projects frequently face inadequate community engagement and sustainability challenges. While existing literature on AI4SG initiatives primarily focuses on the mechanisms of funded projects and their outcomes, much less attention has been given to the funding agendas and rhetoric that influence downstream approaches.

Hidden Inequality: Why We Need to Talk About AI in the Global South with Renata Frade | AI & Equality Open Studio

This 30-minute talk is grounded in two complementary research trajectories:
(1) extensive digital ethnography with 247 women-led technology communities, focused on platforms, communication, participation and power (not AI-specific), and (2) parallel academic studies on AI, inequality and the Global South, examining how automated systems interact with structural asymmetries, governance gaps and cultural contexts.

Testing AI Safety: Why Current Guardrails Fail to Stop Social Bias with Anna-Maria Gueorgiueva | AI & Equality Pub-Talk

How do large language models understand the lived experiences of stigmatized groups, and when does this understanding differ from the human perspective? Can this lead to bias, and if so, do our existing safety tools help mitigate such bias? This work investigated open-source language models for bias against 93 stigmatized groups, finding that certain identities (especially those humans perceive as 'threatening', such as having HIV or a criminal record) attract significantly more bias than other stigmatized identities.