AI and Equality

Testing AI Safety: Why Current Guardrails Fail to Stop Social Bias with Anna-Maria Gueorgiueva | AI & Equality Pub-Talk

How do large language models understand the lived experiences of stigmatized groups, and when does this understanding diverge from human perspectives? Can this divergence lead to bias, and if so, do our existing safety tools help mitigate it? This work investigated open-source language models for bias against 93 stigmatized groups, finding that certain stigmatized identities (especially those humans deem 'threatening', such as having HIV or a criminal record) experience significantly more bias than others.

Moving Beyond Big Tech: Blueprints for Community-Led Language AI with Claudia Pozo | AI & Equality Pub-Talk

The digital world suffers from a profound linguistic disparity, particularly in Africa, where local-language content is scarce and traditional, Global North-led language technology models fail to meet community needs, often resulting in data extraction and inequitable solutions. An 18-month research project, conducted in collaboration with the Distributed AI Research Institute (DAIR), highlights a powerful alternative: a growing grassroots movement of community-based language technology initiatives across Africa that adopt a bottom-up approach, prioritizing local needs and incorporating indigenous philosophies.