BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//AI and Equality - ECPv6.13.2.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:AI and Equality
X-ORIGINAL-URL:https://aiequalitytoolbox.com
X-WR-CALDESC:Events for AI and Equality
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Zurich
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T030000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Santiago
BEGIN:STANDARD
TZOFFSETFROM:-0300
TZOFFSETTO:-0400
TZNAME:-04
DTSTART:20250406T000000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0400
TZOFFSETTO:-0300
TZNAME:-03
DTSTART:20250907T000000
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20261025T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Zurich:20251029T140000
DTEND;TZID=Europe/Zurich:20251029T150000
DTSTAMP:20260404T110757Z
CREATED:20250813T075518Z
LAST-MODIFIED:20250911T091102Z
UID:10000006-1761746400-1761750000@aiequalitytoolbox.com
SUMMARY:The African AI & Equality Toolbox Webinar 4: Model Selection
DESCRIPTION:Imported or generalized models often underperform—especially when they are trained on data that does not reflect local language\, environment\, or lived experience. For AI to be trustworthy\, accuracy alone is not enough. In this webinar we dive into what inclusion and efficiency mean: building systems that don’t require technical expertise to interpret\, so that trust\, oversight\, and agency are accessible to all users. Whether they are a rural health worker\, a student\, or a community organizer\, each person should be able to understand what a system is doing and why.\n\nThe African AI & Equality Toolbox is a strategic initiative designed to empower African stakeholders—policymakers\, technologists\, civil society actors\, and communities—to shape Artificial Intelligence (AI) systems that are contextually relevant\, inclusive\, and grounded in human rights. \nDeveloped by Women at the Table and the African Centre for Technology Studies (ACTS)\, and adapted from the global AI & Equality Human Rights Toolbox Initiative in collaboration with the UN Office of the High Commissioner for Human Rights (OHCHR)\, this African iteration provides practical tools and methodologies to guide equitable AI development across the continent. \nThe Toolbox applies a Human Rights-based AI Lifecycle Framework\, integrating reflective questions and the Human Rights Impact Assessment (HRIA) developed with the Alan Turing Institute. It emphasizes participatory\, multidisciplinary approaches\; is rooted in feminist\, decolonial\, and Justice\, Equity\, Diversity\, and Inclusion (JEDI) principles\; and incorporates lessons from emerging digital rights challenges\, ensuring AI systems are designed with safety and dignity at their core. \n1PM GMT | 3PM SAST | 4PM EAT \nRegister here
URL:https://aiequalitytoolbox.com/event/the-african-ai-equality-toolbox-webinar-4-model-selection/
ATTACH;FMTTYPE=image/jpeg:https://aiequalitytoolbox.com/wp-content/uploads/2025/08/AI-EQ-Toolbox-black-w-gradient-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Zurich:20251029T170000
DTEND;TZID=Europe/Zurich:20251029T180000
DTSTAMP:20260404T110757Z
CREATED:20250816T104755Z
LAST-MODIFIED:20250816T104755Z
UID:10000019-1761757200-1761760800@aiequalitytoolbox.com
SUMMARY:AI & Equality Community Monthly Meetups
DESCRIPTION:As our AI & Equality community keeps growing\, monthly meetups are a space to stay connected\, share our work\, and explore the latest in AI\, especially at the intersection of social justice and human rights\, while welcoming broader conversations. \nWhether you’re working on something you’d like to share\, exploring new ideas\, or simply want to listen in and connect\, you’re warmly welcome. \nCome meet fellow community members\, and help co-create this space. \nRegister to join our community on Circle. \nExplore the AI & Equality Community events calendar
URL:https://aiequalitytoolbox.com/event/ai-equality-community-monthly-meetups-2/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2025/08/AI-Equality.-Monthly-Community-Meetups.-Circle.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Zurich:20251112T140000
DTEND;TZID=Europe/Zurich:20251112T150000
DTSTAMP:20260404T110757Z
CREATED:20250813T075642Z
LAST-MODIFIED:20250828T090603Z
UID:10000008-1762956000-1762959600@aiequalitytoolbox.com
SUMMARY:The African AI & Equality Toolbox Webinar 5: Model Interpretation
DESCRIPTION:In African deployments\, there is often pressure to launch rapidly\, without thorough contextual testing. But skipping this step is where trust breaks down—and harm begins. Testing must happen with communities\, not just on them. In this stage we examine the opportunity to reflect on how power operates in AI: Who gets to say if it works? Who can question it? Who can stop it? \nThe African AI & Equality Toolbox is a strategic initiative designed to empower African stakeholders—policymakers\, technologists\, civil society actors\, and communities—to shape Artificial Intelligence (AI) systems that are contextually relevant\, inclusive\, and grounded in human rights. \nDeveloped by Women at the Table and the African Centre for Technology Studies (ACTS)\, and adapted from the global AI & Equality Human Rights Toolbox Initiative in collaboration with the UN Office of the High Commissioner for Human Rights (OHCHR)\, this African iteration provides practical tools and methodologies to guide equitable AI development across the continent. \nThe Toolbox applies a Human Rights-based AI Lifecycle Framework\, integrating reflective questions and the Human Rights Impact Assessment (HRIA) developed with the Alan Turing Institute. It emphasizes participatory\, multidisciplinary approaches\; is rooted in feminist\, decolonial\, and Justice\, Equity\, Diversity\, and Inclusion (JEDI) principles\; and incorporates lessons from emerging digital rights challenges\, ensuring AI systems are designed with safety and dignity at their core. \n1PM GMT | 3PM SAST | 4PM EAT \nRegister here
URL:https://aiequalitytoolbox.com/event/the-african-ai-equality-toolbox-webinar-5-model-interpretation/
ATTACH;FMTTYPE=image/jpeg:https://aiequalitytoolbox.com/wp-content/uploads/2025/08/AI-EQ-Toolbox-black-w-gradient-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Zurich:20251119T150000
DTEND;TZID=Europe/Zurich:20251119T160000
DTSTAMP:20260404T110757Z
CREATED:20250816T100802Z
LAST-MODIFIED:20251113T111911Z
UID:10000014-1763564400-1763568000@aiequalitytoolbox.com
SUMMARY:The Latin American AI & Equality Toolbox Launch
DESCRIPTION:We have partnered with the Chilean Centro Nacional de Inteligencia Artificial\, CENIA\, to co-construct a Latin American Spanish-language version of the validated <AI & Equality> Toolbox\, with use cases relevant to the regional experience. The partnership builds on the learnings from the workshop structure and outreach of the African <AI & Equality> Toolbox.\n\nTranslating and adapting tools like the <AI & Equality> Human Rights Toolbox can facilitate the co-creation of a common vocabulary and a common understanding of the specific needs and challenges of each region of the world\, unlocking informed debates about regional visions for data with purpose and collaboratively finding examples of regional AI use cases. \n\nRegister here
URL:https://aiequalitytoolbox.com/event/the-latin-american-ai-equality-toolbox-launch/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2025/08/AI-EQ-LAC-logo.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Zurich:20251126T140000
DTEND;TZID=Europe/Zurich:20251126T150000
DTSTAMP:20260404T110757Z
CREATED:20250813T075838Z
LAST-MODIFIED:20251112T132139Z
UID:10000009-1764165600-1764169200@aiequalitytoolbox.com
SUMMARY:The African AI & Equality Toolbox Webinar 6: Deployment
DESCRIPTION:In African contexts\, post-deployment oversight is often underfunded or overlooked. Once a system is launched—especially by international actors—it can become invisible\, even as its consequences grow. In this final stage and webinar\, we look at what true accountability means: planning for ongoing monitoring\, shared governance\, and the possibility of “no.” We will explore what it means for systems to be responsive—not just to data—but to dignity. \nThe African AI & Equality Toolbox is a strategic initiative designed to empower African stakeholders—policymakers\, technologists\, civil society actors\, and communities—to shape Artificial Intelligence (AI) systems that are contextually relevant\, inclusive\, and grounded in human rights. \nDeveloped by Women at the Table and the African Centre for Technology Studies (ACTS)\, and adapted from the global AI & Equality Human Rights Toolbox Initiative in collaboration with the UN Office of the High Commissioner for Human Rights (OHCHR)\, this African iteration provides practical tools and methodologies to guide equitable AI development across the continent. \nThe Toolbox applies a Human Rights-based AI Lifecycle Framework\, integrating reflective questions and the Human Rights Impact Assessment (HRIA) developed with the Alan Turing Institute. It emphasizes participatory\, multidisciplinary approaches\; is rooted in feminist\, decolonial\, and Justice\, Equity\, Diversity\, and Inclusion (JEDI) principles\; and incorporates lessons from emerging digital rights challenges\, ensuring AI systems are designed with safety and dignity at their core. \n1PM GMT | 3PM SAST | 4PM EAT \nRegister here
URL:https://aiequalitytoolbox.com/event/the-african-ai-equality-toolbox-webinar-6-deployment/
ATTACH;FMTTYPE=image/jpeg:https://aiequalitytoolbox.com/wp-content/uploads/2025/08/AI-EQ-Toolbox-black-w-gradient-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20251204T170000
DTEND;TZID=Europe/Paris:20251204T180000
DTSTAMP:20260404T110757Z
CREATED:20251109T212928Z
LAST-MODIFIED:20251109T213035Z
UID:10000021-1764867600-1764871200@aiequalitytoolbox.com
SUMMARY:AI & Equality Pub-Talk | Human Rights Benchmark for LLMs: Research Outcomes | Savannah Thais
DESCRIPTION:We are advancing the Human Rights Benchmark for Large Language Models (LLMs)—a research initiative that examines how these systems align with core human rights principles. In this Pub-Talk\, Savannah Thais will present the outcomes of this work\, sharing insights from the benchmarking process and highlighting what they reveal about the human rights implications of LLMs. This project reflects our commitment to building AI that respects dignity\, advances equality\, and serves all communities. \n\nWhy now? As LLMs are increasingly embedded in decision-making\, chatbots\, and public services\, it is vital to move beyond accuracy toward accountability. Our research explores whether these models treat identities fairly\, respond consistently to rights-based questions\, and avoid harmful omissions or bias.\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/ai-equality-pub-talk-human-rights-benchmark-for-llms-research-outcomes-savannah-thais/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2025/08/Savannah-Thais.-LLMS-Benchmark-.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Santiago:20251215T120000
DTEND;TZID=America/Santiago:20251215T140000
DTSTAMP:20260404T110757Z
CREATED:20251203T092853Z
LAST-MODIFIED:20251203T092853Z
UID:10000022-1765800000-1765807200@aiequalitytoolbox.com
SUMMARY:Launch of the Course “Inteligencia Artificial y Derechos Humanos” (Artificial Intelligence and Human Rights)
DESCRIPTION:Launch event for the free\, online\, asynchronous course “Inteligencia Artificial y Derechos Humanos” (Artificial Intelligence and Human Rights)\, an initiative of Æquitas – Women at the Table and CENIA. \n\nThe event will take place on: \n\n15 December 2025\n12:00 to 14:00 hrs\nCENIA\, Av. Vicuña Mackenna 4860\, Macul\, Edificio de Innovación\, 2nd floor\n\nDuring the event we will present the course contents and their relevance in the current context. We will also welcome you with a coffee break\, with time to share and talk. \nFor more information and course registration\, visit: uabierta.uchile.cl
URL:https://aiequalitytoolbox.com/event/lanzamiento-curso-inteligencia-artificial-y-derechos-humanos/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2025/12/15dic-evento-images.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260219T150000
DTEND;TZID=Europe/London:20260219T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T101917Z
LAST-MODIFIED:20260218T105012Z
UID:10000023-1771513200-1771516800@aiequalitytoolbox.com
SUMMARY:USAWA AI: Building Ethical AI for Historical Justice with Marie Rodet | AI & Equality Open Studio
DESCRIPTION:Open Studio | USAWA AI is an interactive\, educational experience built around an AI avatar that draws on carefully mediated testimony from West African survivors of domestic servitude. Rather than recreating historical scenes or offering total explanations\, the AI is designed to speak partially and cautiously\, reflecting the ethical limits of testimony and the sensitivity of the history of slavery. USAWA AI demonstrates how AI can be designed to protect vulnerable voices rather than extract from them\, embedding care and restraint into the technology itself. By foregrounding partial testimony and ethical limits\, it challenges dominant AI models that strive for ‘total knowledge’ and ‘neutrality’\, showing instead how AI can support social justice and equitable representation. This work offers a concrete example of AI as an infrastructure for dignity and inclusion.\nExplore it: https://jiwegamestorage.blob.core.windows.net/game-files/Usawa%20AI/V4/index.html \nAbout the speaker: \nMarie Rodet is Reader in the History of Africa at SOAS University of London. Her work explores public history\, gamification and digital methods as ways of translating historical research into interactive formats that enable ethical engagement with complex and difficult histories\, and support equitable\, collaborative forms of innovation. Her work always seeks to foreground community voices. She is the narrative and research lead on three educational mobile games—Usawa\, Usawa AI and Umoja—all developed in partnership with Jiwe Studios. These projects transform historical research into interactive experiences that foreground equity and ethical engagement\, and exemplify her broader commitment to research-led innovation and inclusive research culture. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/usawa-ai-building-ethical-ai-for-historical-justice-with-marie-rodet/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/2.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260305T150000
DTEND;TZID=Europe/London:20260305T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T102325Z
LAST-MODIFIED:20260218T105000Z
UID:10000024-1772722800-1772726400@aiequalitytoolbox.com
SUMMARY:SafeHer: A Reporting Tool for Technology-Facilitated Gender-Based Violence in Kenya with Lilian Olivia Orero | AI & Equality Open Studio
DESCRIPTION:Open Studio | SafeHer: A Reporting Tool for Technology-Facilitated Gender-Based Violence in Kenya. SafeHer is a reporting tool developed by SafeOnline Women Kenya (SOW-Kenya) that enables women and girls in Kenya to report incidents of technology-facilitated gender-based violence. The tool completed user testing with 36 participants and recently won the National Models for Women’s Safety Online (NMWSO) Safety by Design Award from IREX and the Gates Foundation. With the Google Play Store listing in progress and national scaling planned for 2026\, this is an ideal moment to receive community feedback on the reporting design\, methodology and implementation framework. Explore: https://safeherkenya.org/ \nAbout the speaker:\nLilian Olivia Orero is the Founder of SafeOnline Women Kenya (SOW-Kenya)\, where she leads the development of SafeHer\, a mobile app enabling women and girls in Kenya to report technology-facilitated gender-based violence. She holds an LLM with Distinction in Law\, Innovation & Technology from the University of Bristol\, where her dissertation examined how dark patterns on social media platforms enable gendered cyberbullying under EU Digital Services Act regulation. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/safeher-a-reporting-tool-for-technology-facilitated-gender-based-violence-in-kenya-with-lilian-olivia-orero/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260311T163000
DTEND;TZID=America/New_York:20260311T180000
DTSTAMP:20260404T110757Z
CREATED:20260303T133922Z
LAST-MODIFIED:20260303T134024Z
UID:10000034-1773246600-1773252000@aiequalitytoolbox.com
SUMMARY:CSW70 | When Algorithms Discriminate: Gender Bias in Justice Systems
DESCRIPTION:Wednesday\, 11 March 2026 \n4:30 – 6:00 PM ET \nNGO CSW \n10th Floor\, Church Center of the United Nations \n777 United Nations Plaza\, New York \n\nAn In-Depth Discussion: What happens when courts replace judges with computer algorithms? We are told these systems are “objective” and “fair”\, but the evidence tells a different story. From bail decisions to sentencing\, algorithms are making life-changing choices about women based on biased data and male-centered assumptions. A woman seeking justice after assault may find her credibility automatically questioned. A mother may be flagged as “high risk” simply because of where she lives or her employment history. Meanwhile\, these same systems treat men’s violence as more predictable and less dangerous. \nThis is not science fiction\; it is happening right now in courts worldwide. Join us to uncover how technology is creating new barriers to justice for women and girls\, and what policy solutions can effectively address them. \n\nLaura Nyirinkindi | UN Special Procedures Member\, Working Group on discrimination against women and girls\nAfrica Regional Vice President of the International Federation of Women Lawyers (Federación Internacional de Abogadas) \nFernanda K. Martins | Fundación Multitudes\, Director of Strategy and Advocacy \nCaitlin Kraft-Buchman | Women at the Table\, CEO
URL:https://aiequalitytoolbox.com/event/csw70-when-algorithms-discriminate-gender-bias-in-justice-systems/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/03/Designing-AI-for-Human-Agency-29.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260326T150000
DTEND;TZID=Europe/London:20260326T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T102603Z
LAST-MODIFIED:20260218T104946Z
UID:10000025-1774537200-1774540800@aiequalitytoolbox.com
SUMMARY:Beyond the Math: Why AI Fairness Needs a Feminist Lens with Marie Mirsch | AI & Equality Pub-Talk
DESCRIPTION:🔗 Access paper: https://link.springer.com/article/10.1007/s43681-025-00926-y \nAlthough research on algorithmic fairness is inherently interdisciplinary\, many proposed fairness approaches remain predominantly technical in their treatment of societal concepts such as fairness and justice. While these approaches often claim to operationalize insights from the social sciences\, they frequently do so in ways that appropriate rather than meaningfully engage with the underlying theories. This paper critiques this practice through the lens of intersectionality. Adopting a “calling in” rather than “calling out” stance toward AI practitioners\, it offers actionable guidance on how intersectionality can be substantively incorporated into technical work\, thereby recentring social science theory within the field of algorithmic fairness. \nAbout the speaker:\nMarie Mirsch\, M.Sc.\, is a research assistant and doctoral candidate at RWTH Aachen University. She conducts research at the intersection of mathematics\, ethics\, and social sciences\, aiming to anchor diversity perspectives in technology. Her interdisciplinary research focuses on intersectionality in the context of algorithmic fairness – a central topic of current AI research. She also considers aspects of procedural fairness and participatory approaches\, such as including citizens’ perspectives. As part of her work at the bridging professorship “Gender and Diversity in Engineering”\, she also deals with ethical issues relating to the use of artificial intelligence in engineering. A Research Fellowship from the BMBF-funded AI Campus supports her research. \nAs project manager of the Responsible Research and Innovation (RRI) Hub at RWTH Aachen University\, she coordinates and implements national and international projects to strengthen social responsibility in technology development as part of the “ENHANCE – European Universities of Technology Alliance”. \nHer teaching activities include the seminar “Responsible AI for Engineers” at RWTH\, the online course “Responsible Innovators for Tomorrow” as part of the ENHANCE alliance\, and the workshop “Responsible AI” at the University of Koblenz. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/beyond-the-math-why-ai-fairness-needs-a-feminist-lens-with-marie-mirsch/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260409T150000
DTEND;TZID=Europe/London:20260409T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T102957Z
LAST-MODIFIED:20260218T104930Z
UID:10000026-1775746800-1775750400@aiequalitytoolbox.com
SUMMARY:What People in Rural Villages in Togo Can Teach Us About ML/AI and Privacy with Zoe Kahn | AI & Equality Pub-Talk
DESCRIPTION:🔗 Access paper: https://dl.acm.org/doi/abs/10.1145/3710968 \nHow do people living in rural villages in Togo feel about the use of emerging technologies in humanitarian aid? This work reports on the privacy concerns of people living in rural Togo related to the use of machine learning models trained on phone data to allocate cash assistance to people living in poverty\, highlighting an innovative method — sociotechnical visuals — to explain complex technical concepts so that people living in rural villages with limited literacy\, formal education\, and familiarity with digital tech could provide meaningful input. \nAbout the speaker:\nZoe Kahn is a postdoctoral researcher at the Research Centre for Trustworthy Data Science and Security. She received her PhD in Information Science from UC Berkeley and her B.A. in Sociology from New York University. Her research lies at the intersection of computer science\, law\, and society. She explores how people can meaningfully participate in the design of sociotechnical systems — especially those shaping public life\, governance\, and digital rights. Dr. Kahn has collaborated with diverse communities across Africa and the United States\, including extended fieldwork in rural Togo. \nHer work combines qualitative research with creative methods such as storytelling and sociotechnical visuals to make complex technical systems more understandable\, opening up dialogue with people with varying levels of literacy\, formal education\, and familiarity with digital tech. Beyond academia\, she has worked at a civil rights law firm\, a tech startup\, and Microsoft\, where she contributed to responsible AI tooling and governance frameworks. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/ai-equality-pub-talk-what-people-in-rural-villages-in-togo-can-teach-us-about-ml-ai-and-privacy-with-zoe-kahn/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/10.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260416T150000
DTEND;TZID=Europe/London:20260416T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T103240Z
LAST-MODIFIED:20260218T104912Z
UID:10000027-1776351600-1776355200@aiequalitytoolbox.com
SUMMARY:Beyond Automation: Protecting Student Voice in the Age of AI with Daire Maria Ni Uanachain | AI & Equality Open Studio
DESCRIPTION:This Open Studio shares an in-progress Feminist AI–informed learning design framework that examines how power\, authorship\, and agency shift when Generative AI is introduced into secondary education. Drawing on classroom pilots\, design research\, and assessment tools such as a Student-Author Voice rubric\, the work explores how LLMs can be integrated through co-creation and gradual immersion\, rather than extraction or automation. The session foregrounds methodological tensions\, ethical trade-offs\, and policy-relevant questions emerging from real educational contexts. \n🔗 More: Finding student-author voice: a rubric for assessment in the AI era. \nEducational AI systems increasingly shape how knowledge is produced\, evaluated\, and legitimized—often without addressing whose voices are amplified or erased. This work matters because it applies Feminist AI principles to learning design\, offering concrete strategies to safeguard student authorship\, agency\, and equity while aligning with emerging AI governance frameworks. It contributes practice-based evidence to policy discussions on responsible AI use in education beyond compliance checklists. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/ai-equality-open-studio-beyond-automation-protecting-student-voice-in-the-age-of-ai-with-daire-maria-ni-uanachain/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260423T150000
DTEND;TZID=Europe/London:20260423T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T103636Z
LAST-MODIFIED:20260218T103716Z
UID:10000028-1776956400-1776960000@aiequalitytoolbox.com
SUMMARY:Public Interest AI: Procurement as Democratic Infrastructure with Emma Kallina | AI & Equality Open Studio
DESCRIPTION:Public sector AI has the potential to harm citizens\, with risks increasing as its use expands. Recent work positions public procurement as a way to shape public sector AI in line with public interests\, using the state’s purchasing power to influence which AI systems are procured and under what conditions. \nIn this presentation\, I explore how this potential could be realised in practice by drawing on semi-structured interviews with UK/EU buyers\, providers\, and procurement experts. More specifically\, I highlight six promising procurement practices that enable the public sector to shape AI in line with public interests\, alongside concrete mechanisms to support their uptake. These practices\, as well as the derived interventions\, provide directions for both research and practice on how public procurement can be used as a governance mechanism for better aligning AI with public interests. \nThere will be time for questions and discussions – a lot of ongoing research to talk about 🙂 \nEmma Kallina is a postdoctoral researcher at the Compliant & Accountable Systems Group\, operating across the University of Cambridge and the Research Center Trustworthy Data Science and Security. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/public-interest-ai-procurement-as-democratic-infrastructure-with-emma-kallina-ai-equality-open-studio/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/9.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260522
DTEND;VALUE=DATE:20260523
DTSTAMP:20260404T110757Z
CREATED:20260218T110002Z
LAST-MODIFIED:20260326T130923Z
UID:10000033-1779408000-1779494399@aiequalitytoolbox.com
SUMMARY:AI & Equality Festival of Ideas
DESCRIPTION:Where the frontlines meet the code.\n\nAlgorithms are deciding who gets bail\, who gets a loan\, whose language gets spoken by machines\, whose body gets flagged at the border\, and whose labor gets automated.\n\nThe AI & Equality Festival of Ideas will convene the people working with algorithms: linguists building language models for African languages\, feminist scholars rewriting the benchmarks\, digital rights lawyers fighting surveillance states\, health researchers exposing algorithmic bias in clinical care\, and organizers connecting the dots between AI\, labor\, climate\, and indigenous land rights. \nOn May 22\, leading organizations will share\, out loud and across disciplines\, what they are finding\, what they are fighting to create\, and what it will actually take to get there. \n\nStay tuned for more updates! Visit the Festival Page! \n\nOn the Agenda: A full day of 90-minute sessions hosted by leading organizations working at the frontlines of AI and society from around the world.\n\nSessions run across all time zones\, from South Asia to the Pacific Coast. Wherever you are\, there’s a session for you covering the questions that matter most: \n\nWhat are visions of a society that works for all\, and how do we build them?\nWhose knowledge gets encoded — and whose gets erased?\nWhat would a rights-based\, feminist\, decolonial AI actually look like?\nHow do we get from research to real change?\n\nFull programme drops April 15. \n\nRegister on our Circle community\, where the festival will be happening!
URL:https://aiequalitytoolbox.com/event/ai-equality-festival-of-ideas/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/AIEQ-Festival-of-ideas-02-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260604T150000
DTEND;TZID=Europe/London:20260604T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T103914Z
LAST-MODIFIED:20260218T104550Z
UID:10000029-1780585200-1780588800@aiequalitytoolbox.com
SUMMARY:Funding AI for Good: A Call for Meaningful Engagement with Hongjin Lin | AI & Equality Pub-Talk
DESCRIPTION:🔗 Access paper: https://arxiv.org/abs/2509.12455 \nArtificial Intelligence for Social Good (AI4SG) is a growing area that explores AI’s potential to address social issues\, such as public health. Yet prior work has shown limited evidence of its tangible benefits for intended communities\, and projects frequently face inadequate community engagement and sustainability challenges. While existing literature on AI4SG initiatives primarily focuses on the mechanisms of funded projects and their outcomes\, much less attention has been given to the funding agenda and rhetoric that influences downstream approaches. \nThrough a thematic analysis of 35 funding documents\, representing about $410 million USD in total investments\, we reveal dissonances between AI4SG’s stated intentions for positive social impact and the techno-centric approaches that some funding agendas promoted\, while also identifying funding documents that scaffolded community-collaborative approaches for applicants. Drawing on our findings\, we offer recommendations for funders to embed approaches that balance both contextual understanding and technical capacities in future funding call designs. We further discuss how the HCI community can positively shape AI4SG funding design processes. \nSpeaker: Hongjin Lin \nHongjin is a Ph.D. Candidate in Computer Science at Harvard University\, advised by Professor Krzysztof Gajos. Her research lies at the intersection of AI and social impact\, through both qualitative evaluation and technology development. Her work reveals power imbalances in AI for social good partnerships\, where community organizations’ goals are often sidelined\, and funding agendas play a prevalent role in determining project priorities and approaches. Drawing on community-collaborative design approaches\, she is currently working on projects that support participation in local collective climate actions. \n\nHongjin was born and raised in Guangzhou\, China\, and moved to the US for her undergraduate degree in Mathematics and Computer Science at Occidental College. She received her master’s degree in Data Science from the London School of Economics. Before Harvard\, she worked as a research fellow at Stanford Law School\, developing and evaluating Machine Learning systems for environmental policy enforcement in partnership with the EPA. Outside of research\, she has completed projects with nonprofits in China\, the US\, and Malawi\, and worked as a data for development intern at UNDP in New York and Costa Rica. She is a dedicated yogi\, dancer\, community-living member\, and a happy camper whenever she is away from her computer.\nMore: https://sites.google.com/g.harvard.edu/hongjinlin\n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/funding-ai-for-good-a-call-for-meaningful-engagement-with-hongjin-lin-ai-equality-pub-talk/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/7.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260618T150000
DTEND;TZID=Europe/London:20260618T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T104221Z
LAST-MODIFIED:20260218T104221Z
UID:10000030-1781794800-1781798400@aiequalitytoolbox.com
SUMMARY:Hidden Inequality: Why We Need to Talk About AI in the Global South with Renata Frade | AI & Equality Open Studio
DESCRIPTION:Artificial Intelligence systems increasingly shape social participation\, labor\, visibility and access to rights. However\, the social impacts of AI are uneven\, often reinforcing existing inequalities — particularly for women and communities in the Global South. \nThis 30-minute talk is grounded in two complementary research trajectories:\n(1) extensive digital ethnography with 247 women-led technology communities\, focused on platforms\, communication\, participation and power (not AI-specific)\, and (2) parallel academic studies on AI\, inequality and the Global South\, examining how automated systems interact with structural asymmetries\, governance gaps and cultural contexts. \nRather than presenting a technical or proprietary AI framework\, the talk offers critical insights and reflective lenses on how AI becomes socially invisible\, how inequality is reproduced through discourse and design\, and why communication is central to a human-rights-based approach to AI. \nKey Themes:\n\nWhat Platform Research Reveals About Power and Visibility\nInsights from digital ethnography with 247 women-led tech communities\nPatterns of participation\, exclusion and symbolic inclusion in digital platforms\nWhy these dynamics matter when AI is introduced into social and institutional systems\nAI and Inequality from a Global South Perspective\nFindings from research on AI\, gender and structural inequality in the Global South\nAsymmetries in data\, labor\, governance and technological dependency\nThe risks of applying “universal” AI solutions without contextual grounding\nCommunication as a Human Rights Issue in AI Systems\nHow language\, interfaces and narratives shape accountability\nWhy transparency alone is insufficient without accessibility and participation\nThe role of critical literacy in rights-based AI approaches\n\n🔗 Explore publication: Frade\, R.\, Wajcman\, J. (2023). “Feminism and Technology: an interview with Dr. Judy Wajcman by Renata Frade”. In Technofeminism: multi and transdisciplinary contemporary views of women in technology. https://doi.org/10.48528/0wyd-p294 \nAbout the speaker:\nI am an interdisciplinary feminist researcher with deep experience in advancing feminist\, decolonial\, and participatory ethics in technology\, particularly across Latin American and Lusophone contexts. My doctoral research mapped and analyzed 247 communities of women in technology in Brazil and Portugal\, applying participatory and justice-oriented methods designed to center marginalized voices—often those of adolescents and youth in vulnerable contexts. Through projects such as Fiocruz Hack Girls and the LitGirlsBr platform\, I have engaged directly with adolescent participants\, developing empowering digital literacy and inclusion programs.\n\nI have acted as co-editor and lead organizer for several transnational research outputs\, including the collective volume “Technofeminism” and global solidarity events like the WeColloquium at ISEG\, Lisbon. My work always seeks to foreground community voices\, relational ethics\, and participatory decision-making\, including direct experience with reviewing and designing ethical frameworks for HCI and responsible AI\, both within and outside formal IRB structures. \nNoted for mapping and empowering women-in-tech communities\, producing influential feminist research\, and engaging in ethics\, justice\, and participatory action.\n\nExperienced in editorial leadership\, transmedia projects\, and collective knowledge-making in both academic and NGO settings (such as Girls in Tech Brazil).\n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/hidden-inequality-why-we-need-to-talk-about-ai-in-the-global-south-with-renata-frade-ai-equality-open-studio/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/8.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260625T150000
DTEND;TZID=Europe/London:20260625T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T104436Z
LAST-MODIFIED:20260218T104436Z
UID:10000031-1782399600-1782403200@aiequalitytoolbox.com
SUMMARY:Testing AI Safety: Why Current Guardrails Fail to Stop Social Bias with Anna-Maria Gueorguieva | AI & Equality Pub-Talk
DESCRIPTION:Access paper: https://arxiv.org/abs/2512.19238 \nHow do large language models understand the lived experiences of stigmatized groups\, and when does this understanding differ from the human perspective? Can this lead to bias\, and if so\, do our existing safety tools help mitigate such bias? This work investigated open-source language models for bias against 93 stigmatized groups\, finding that specific stigmatized identities (especially those deemed by humans to be ‘threatening’\, such as having HIV or a criminal record) attract significantly more bias than others. To attempt to remedy this\, we test guardrail models\, models from leading technology companies that are meant to identify discriminatory or bias-eliciting inputs and mitigate harmful outputs. This talk will report on our findings\, identifying where existing guardrail models fail and discussing technical and legal solutions. \nAbout the speaker:\nAnna-Maria Gueorguieva is a PhD student at the University of Washington Information School and holds a B.A. in Data Science and Legal Studies from UC Berkeley. Her research focuses on AI evaluations for social and political impacts and on AI regulation. Her work lies at the intersection of empirical methods to investigate AI usage and behavior and the AI regulations needed to limit and remedy harm.\n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/testing-ai-safety-why-current-guardrails-fail-to-stop-social-bias-with-anna-maria-gueorgiueva-ai-equality-pub-talk/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/6.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260630T150000
DTEND;TZID=Europe/London:20260630T160000
DTSTAMP:20260404T110757Z
CREATED:20260218T104809Z
LAST-MODIFIED:20260218T104836Z
UID:10000032-1782831600-1782835200@aiequalitytoolbox.com
SUMMARY:Moving Beyond Big Tech: Blueprints for Community-Led Language AI with Claudia Pozo | AI & Equality Pub-Talk
DESCRIPTION:Paper: TBD (available in June) \nThe digital world suffers from a profound linguistic disparity\, particularly in Africa\, where a lack of local language content and traditional\, Global North-led language technology models fail to meet community needs\, often resulting in data extraction and inequitable solutions. In an 18-month research project\, in collaboration with the Distributed AI Research Institute (DAIR)\, we highlight a powerful alternative: a growing grassroots movement of community-based language technology initiatives across Africa that adopt a bottom-up approach\, prioritizing local needs and incorporating indigenous philosophies. This approach centers technology as an act of collective creation and community survival\, yet it faces significant challenges\, including a heavy reliance on Global North funding that can conflict with goals for self-determination and critical concerns around data governance and ownership in regions with underdeveloped legal frameworks. Ultimately\, the research advocates for a fundamental shift in technological practice to support these community-centered development models\, providing blueprints for the Global Majority to decolonize AI. \nAbout the speaker:\nClaudia Pozo is Language Justice Co-Lead at Whose Knowledge? She’s a South American brown feminist\, multifaceted activist\, researcher\, social scientist and strategist\, whose work is grounded in knowledge and language justice. She holds an MPhil in Development Studies and a BA in Communications. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/moving-beyond-big-tech-blueprints-for-community-led-language-ai-with-claudia-pozo-ai-equality-pub-talk/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/5.png
END:VEVENT
END:VCALENDAR