The African AI & Equality Toolbox Webinar 5: Model Interpretation
This stage offers an opportunity to reflect on how power operates in AI: Who gets to say if it works? Who can question it? Who can stop it?
We have partnered with the Chilean Centro Nacional de Inteligencia Artificial, CENIA, to co-construct a Latin American Spanish language version of the validated Toolbox, with use cases relevant to the regional experience. The partnership builds on the learnings from the workshop structure and outreach from the African Toolbox to do so.
In this final stage and webinar, we look at what true accountability means: planning for ongoing monitoring, shared governance, and the possibility of "no." We will explore what it means for systems to be responsive not just to data, but to dignity.
Led by Savannah Thais, the AI & Equality Human Rights LLMs Benchmark is part of our core commitment: to build AI systems that respect dignity, uphold equality, and serve everyone. In this Pub-Talk, Savannah will present the research outcomes of the project.
Launch event for the free, online, asynchronous course "Inteligencia Artificial y Derechos Humanos" (Artificial Intelligence and Human Rights), an initiative of Æquitas – Women at the Table and CENIA. The event will take place on the 15th of […]
USAWA AI is an interactive, educational experience built around an AI avatar that draws on carefully mediated testimony from West African survivors of domestic servitude. Rather than recreating historical scenes or offering total explanations, the AI is designed to speak partially and cautiously, reflecting the ethical limits of testimony and the sensitivity of the subject of slavery.
SafeHer: A Reporting Tool for Technology-Facilitated Gender-Based Violence in Kenya. SafeHer is a reporting tool developed by SafeOnline Women Kenya (SOW-Kenya) that enables women and girls in Kenya to report incidents of technology-facilitated gender-based violence.
Adopting a “calling in” rather than “calling out” stance toward AI practitioners, the paper offers actionable guidance on how intersectionality can be substantively incorporated into technical work, thereby recentring social science theory within the field of algorithmic fairness.
How do people living in rural villages in Togo feel about the use of emerging technologies in humanitarian aid? This work reports on the privacy concerns of people in rural Togo regarding machine learning models trained on phone data to allocate cash assistance to people living in poverty. It highlights an innovative method, sociotechnical visuals, for explaining complex technical concepts so that people living in rural villages with limited literacy, formal education, and familiarity with digital technology could provide meaningful input.
This Open Studio shares an in-progress Feminist AI–informed learning design framework that examines how power, authorship, and agency shift when Generative AI is introduced into secondary education. Drawing on classroom pilots, design research, and assessment tools such as a Student-Author Voice rubric, the work explores how LLMs can be integrated through co-creation and gradual immersion, rather than extraction or automation. The session foregrounds methodological tensions, ethical trade-offs, and policy-relevant questions emerging from real educational contexts.