The African AI & Equality Toolbox Webinar 5: Model Interpretation
In this stage we examine the opportunity to reflect on how power operates in AI: Who gets to say if it works? Who can question it? Who can stop it?
We have partnered with the Chilean Centro Nacional de Inteligencia Artificial (CENIA) to co-construct a Latin American Spanish-language version of the validated Toolbox, with use cases relevant to the regional experience. The partnership builds on the workshop structure and outreach learnings from the African Toolbox.
In this final stage and webinar, we look at what true accountability means: planning for ongoing monitoring, shared governance, and the possibility of "no." We will explore what it means for systems to be responsive not just to data, but to dignity.
USAWA AI is an interactive, educational experience built around an AI avatar that draws on carefully mediated testimony from West African survivors of domestic servitude. Rather than recreating historical scenes or offering total explanations, the AI is designed to speak partially and cautiously, reflecting the ethical limits of testimony and the sensitivity of slavery.
How do people living in rural villages in Togo feel about the use of emerging technologies in humanitarian aid? This work reports on the privacy concerns of people in rural Togo about machine learning models trained on phone data to allocate cash assistance to people living in poverty. It highlights an innovative method, sociotechnical visuals, for explaining complex technical concepts so that people with limited literacy, formal education, and familiarity with digital technology could provide meaningful input.
The digital world suffers from a profound linguistic disparity, particularly in Africa, where a lack of local language content and traditional, Global North-led language technology models fail to meet community needs, often resulting in data extraction and inequitable solutions. In an 18-month research project, in collaboration with the Distributed AI Research Institute (DAIR), we highlight a powerful alternative: a growing grassroots movement of community-based language technology initiatives across Africa that adopt a bottom-up approach, prioritizing local needs and incorporating indigenous philosophies.