BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//AI and Equality - ECPv6.13.2.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:AI and Equality
X-ORIGINAL-URL:https://aiequalitytoolbox.com
X-WR-CALDESC:Events for AI and Equality
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:20261025T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260416T150000
DTEND;TZID=Europe/London:20260416T160000
DTSTAMP:20260411T154756Z
CREATED:20260218T103240Z
LAST-MODIFIED:20260218T104912Z
UID:10000027-1776351600-1776355200@aiequalitytoolbox.com
SUMMARY:Beyond Automation: Protecting Student Voice in the Age of AI with Daire Maria Ni Uanachain | AI & Equality Open Studio
DESCRIPTION:This Open Studio shares an in-progress Feminist AI–informed learning design framework that examines how power\, authorship\, and agency shift when Generative AI is introduced into secondary education. Drawing on classroom pilots\, design research\, and assessment tools such as a Student-Author Voice rubric\, the work explores how LLMs can be integrated through co-creation and gradual immersion\, rather than extraction or automation. The session foregrounds methodological tensions\, ethical trade-offs\, and policy-relevant questions emerging from real educational contexts. \n🔗 More: Finding student-author voice: a rubric for assessment in the AI era. \nEducational AI systems increasingly shape how knowledge is produced\, evaluated\, and legitimized—often without addressing whose voices are amplified or erased. This work matters because it applies Feminist AI principles to learning design\, offering concrete strategies to safeguard student authorship\, agency\, and equity while aligning with emerging AI governance frameworks. It contributes practice-based evidence to policy discussions on responsible AI use in education beyond compliance checklists. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/ai-equality-open-studio-beyond-automation-protecting-student-voice-in-the-age-of-ai-with-daire-maria-ni-uanachain/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260423T150000
DTEND;TZID=Europe/London:20260423T160000
DTSTAMP:20260411T154756Z
CREATED:20260218T103636Z
LAST-MODIFIED:20260218T103716Z
UID:10000028-1776956400-1776960000@aiequalitytoolbox.com
SUMMARY:Public Interest AI: Procurement as Democratic Infrastructure with Emma Kallina | AI & Equality Open Studio
DESCRIPTION:Public sector AI has the potential to harm citizens\, with risks increasing as its use expands. Recent work positions public procurement as a way to shape public sector AI in line with public interests\, using the state’s purchasing power to influence which AI systems are procured and under what conditions. \nIn this presentation\, I explore how this potential could be realised in practice by drawing on semi-structured interviews with UK/EU buyers\, providers\, and procurement experts. More specifically\, I highlight six promising procurement practices that enable the public sector to shape AI in line with public interests\, alongside concrete mechanisms to support their uptake. These practices\, together with the interventions derived from them\, offer directions for both research and practice on using public procurement as a governance mechanism to better align AI with public interests. \nThere will be time for questions and discussions – a lot of ongoing research to talk about 🙂 \nEmma Kallina is a PostDoc Researcher at the Compliant & Accountable Systems Group\, operating across the University of Cambridge and the Research Center Trustworthy Data Science and Security. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/public-interest-ai-procurement-as-democratic-infrastructure-with-emma-kallina-ai-equality-open-studio/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/9.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260522
DTEND;VALUE=DATE:20260523
DTSTAMP:20260411T154756Z
CREATED:20260218T110002Z
LAST-MODIFIED:20260326T130923Z
UID:10000033-1779408000-1779494399@aiequalitytoolbox.com
SUMMARY:AI & Equality Festival of Ideas
DESCRIPTION:Where the frontlines meet the code.\n\nAlgorithms are deciding who gets bail\, who gets a loan\, whose language gets spoken by machines\, whose body gets flagged at the border\, and whose labor gets automated.\n\nThe AI & Equality Festival of Ideas will convene the people working with algorithms: linguists building language models for African languages\, feminist scholars rewriting the benchmarks\, digital rights lawyers fighting surveillance states\, health researchers exposing algorithmic bias in clinical care\, and organizers connecting the dots between AI\, labor\, climate\, and indigenous land rights. \nOn May 22\, leading organizations will share\, out loud and across disciplines\, what they are finding\, what they are fighting to create\, and what it will actually take to get there. \n\nStay tuned for more updates! Visit the Festival Page! \n\nOn the Agenda: A full day of 90-minute sessions hosted by leading organizations working at the frontlines of AI and society from around the world.\n\nSessions running across all time zones\, from South Asia to the Pacific Coast. Wherever you are\, there’s a session for you covering the questions that matter most: \n\nWhat are visions of society that work for all\, and how do we build them?\nWhose knowledge gets encoded — and whose gets erased?\nWhat would a rights-based\, feminist\, decolonial AI actually look like?\nHow do we get from research to real change?\n\nFull programme drops April 15. \n\nRegister on our Circle community >> where the festival will be happening!
URL:https://aiequalitytoolbox.com/event/ai-equality-festival-of-ideas/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/AIEQ-Festival-of-ideas-02-1-scaled.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260604T150000
DTEND;TZID=Europe/London:20260604T160000
DTSTAMP:20260411T154756Z
CREATED:20260218T103914Z
LAST-MODIFIED:20260218T104550Z
UID:10000029-1780585200-1780588800@aiequalitytoolbox.com
SUMMARY:Funding AI for Good: A Call for Meaningful Engagement with Hongjin Lin | AI & Equality Pub-Talk
DESCRIPTION:🔗 Access paper: https://arxiv.org/abs/2509.12455 \nArtificial Intelligence for Social Good (AI4SG) is a growing area that explores AI’s potential to address social issues\, such as public health. Yet prior work has shown limited evidence of its tangible benefits for intended communities\, and projects frequently face inadequate community engagement and sustainability challenges. While existing literature on AI4SG initiatives primarily focuses on the mechanisms of funded projects and their outcomes\, much less attention has been given to the funding agendas and rhetoric that influence downstream approaches. \nThrough a thematic analysis of 35 funding documents\, representing about $410 million USD in total investments\, we reveal dissonances between AI4SG’s stated intentions for positive social impact and the techno-centric approaches that some funding agendas promoted\, while also identifying funding documents that scaffolded community-collaborative approaches for applicants. Drawing on our findings\, we offer recommendations for funders to embed approaches that balance both contextual understanding and technical capacities in future funding call designs. We further discuss how the HCI community can positively shape AI4SG funding design processes. \nSpeaker: Hongjin Lin \nHongjin is a Ph.D. Candidate in Computer Science at Harvard University\, advised by Professor Krzysztof Gajos. Her research lies at the intersection of AI and social impact\, through both qualitative evaluation and technology development. Her work reveals power imbalances in AI for social good partnerships\, where community organizations’ goals are often sidelined\, and funding agendas play a prevalent role in determining project priorities and approaches. Drawing on community-collaborative design approaches\, she is currently working on projects that support participation in local collective climate actions. 
\n\nHongjin was born and raised in Guangzhou\, China\, and moved to the US for her undergraduate degree in Mathematics and Computer Science at Occidental College. She received her master’s degree in Data Science from the London School of Economics. Before Harvard\, she worked as a research fellow at Stanford Law School\, developing and evaluating Machine Learning systems for environmental policy enforcement in partnership with the EPA. Outside of research\, she has completed projects with nonprofits in China\, the US\, and Malawi\, and worked as a data for development intern at UNDP in New York and Costa Rica. She is a dedicated yogi\, dancer\, community-living member\, and a happy camper whenever she is away from her computer.\nMore: https://sites.google.com/g.harvard.edu/hongjinlin\n\n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/funding-ai-for-good-a-call-for-meaningful-engagement-with-hongjin-lin-ai-equality-pub-talk/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/7.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260618T150000
DTEND;TZID=Europe/London:20260618T160000
DTSTAMP:20260411T154756Z
CREATED:20260218T104221Z
LAST-MODIFIED:20260218T104221Z
UID:10000030-1781794800-1781798400@aiequalitytoolbox.com
SUMMARY:Hidden Inequality: Why We Need to Talk About AI in the Global South with Renata Frade | AI & Equality Open Studio
DESCRIPTION:Artificial Intelligence systems increasingly shape social participation\, labor\, visibility and access to rights. However\, the social impacts of AI are uneven\, often reinforcing existing inequalities — particularly for women and communities in the Global South. \nThis 30-minute talk is grounded in two complementary research trajectories:\n(1) extensive digital ethnography with 247 women-led technology communities\, focused on platforms\, communication\, participation and power (not AI-specific)\, and (2) parallel academic studies on AI\, inequality and the Global South\, examining how automated systems interact with structural asymmetries\, governance gaps and cultural contexts. \nRather than presenting a technical or proprietary AI framework\, the talk offers critical insights and reflective lenses on how AI becomes socially invisible\, how inequality is reproduced through discourse and design\, and why communication is central to a human-rights-based approach to AI. \nKey Themes:\n\nWhat Platform Research Reveals About Power and Visibility\nInsights from digital ethnography with 247 women-led tech communities\nPatterns of participation\, exclusion and symbolic inclusion in digital platforms\nWhy these dynamics matter when AI is introduced into social and institutional systems\nAI and Inequality from a Global South Perspective\nFindings from research on AI\, gender and structural inequality in the Global South\nAsymmetries in data\, labor\, governance and technological dependency\nThe risks of applying “universal” AI solutions without contextual grounding\nCommunication as a Human Rights Issue in AI Systems\nHow language\, interfaces and narratives shape accountability\nWhy transparency alone is insufficient without accessibility and participation\nThe role of critical literacy in rights-based AI approaches\n\n\n🔗 Explore publication: Frade\, R.\, Wajcman\, J. (2023). “Feminism and Technology: an interview with Dr. Judy Wajcman by Renata Frade”. 
In Technofeminism: multi and transdisciplinary contemporary views of women in technology.  https://doi.org/10.48528/0wyd-p294 \nAbout the speaker:\nI am an interdisciplinary feminist researcher with deep experience in advancing feminist\, decolonial\, and participatory ethics in technology\, particularly across Latin American and Lusophone contexts. My doctoral research mapped and analyzed 247 communities of women in technology in Brazil and Portugal\, applying participatory and justice-oriented methods designed to center marginalized voices—often those of adolescents and youth in vulnerable contexts. Through projects such as Fiocruz Hack Girls and the LitGirlsBr platform\, I have engaged directly with adolescent participants\, developing empowering digital literacy and inclusion programs.\n\nI have acted as co-editor and lead organizer for several transnational research outputs\, including the collective volume “Technofeminism” and global solidarity events like the WeColloquium at ISEG\, Lisbon. My work always seeks to foreground community voices\, relational ethics\, and participatory decision-making\, including direct experience with reviewing and designing ethical frameworks for HCI and responsible AI\, both within and outside formal IRB structures. Noted for mapping and empowering women-in-tech communities\, producing influential feminist research\, and engaging in ethics\, justice\, and participatory action.\n\nExperienced in editorial leadership\, transmedia projects\, and collective knowledge-making in both academic and NGO settings (such as Girls in Tech Brazil).\n\n\n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/hidden-inequality-why-we-need-to-talk-about-ai-in-the-global-south-with-renata-frade-ai-equality-open-studio/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/8.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260625T150000
DTEND;TZID=Europe/London:20260625T160000
DTSTAMP:20260411T154756Z
CREATED:20260218T104436Z
LAST-MODIFIED:20260218T104436Z
UID:10000031-1782399600-1782403200@aiequalitytoolbox.com
SUMMARY:Testing AI Safety: Why Current Guardrails Fail to Stop Social Bias with Anna-Maria Gueorguieva | AI & Equality Pub-Talk
DESCRIPTION:Access paper: https://arxiv.org/abs/2512.19238 \nHow do large language models understand the lived experiences of stigmatized groups\, and when does this understanding differ from the human perspective? Can this lead to bias\, and if so\, do our existing safety tools help mitigate such bias? This work investigated open-source language models for bias against 93 stigmatized groups\, finding that certain stigmatized identities (especially those deemed by humans to be ‘threatening’\, such as having HIV or a criminal record) elicit significantly more bias than others. To attempt to remedy this\, we test guardrail models: models from leading technology companies that are meant to identify discriminatory or bias-eliciting inputs and mitigate harmful outputs. This talk will report on our findings\, identifying where existing guardrail models fail and discussing technical and legal solutions. \nAbout the speaker:\nAnna-Maria Gueorguieva is a PhD student at the University of Washington Information School and holds a B.A. in Data Science and Legal Studies from UC Berkeley. Her research focuses on AI evaluations for social and political impacts and on AI regulation. Her work lies at the intersection of empirical methods for investigating AI usage and behavior and the AI regulations needed to limit and remedy harm.\n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/testing-ai-safety-why-current-guardrails-fail-to-stop-social-bias-with-anna-maria-gueorgiueva-ai-equality-pub-talk/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/6.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20260630T150000
DTEND;TZID=Europe/London:20260630T160000
DTSTAMP:20260411T154756Z
CREATED:20260218T104809Z
LAST-MODIFIED:20260218T104836Z
UID:10000032-1782831600-1782835200@aiequalitytoolbox.com
SUMMARY:Moving Beyond Big Tech: Blueprints for Community-Led Language AI with Claudia Pozo | AI & Equality Pub-Talk
DESCRIPTION:Paper: TBD (available in June) \nThe digital world suffers from a profound linguistic disparity\, particularly in Africa\, where a lack of local language content and traditional\, Global North-led language technology models fail to meet community needs\, often resulting in data extraction and inequitable solutions. In an 18-month research project\, in collaboration with the Distributed AI Research Institute (DAIR)\, we highlight a powerful alternative: a growing grassroots movement of community-based language technology initiatives across Africa that adopt a bottom-up approach\, prioritizing local needs and incorporating indigenous philosophies. This approach centers technology as an act of collective creation and community survival\, yet it faces significant challenges\, including a heavy reliance on Global North funding that can conflict with goals of self-determination\, and critical concerns around data governance and ownership in regions with underdeveloped legal frameworks. Ultimately\, the research advocates for a fundamental shift in technological practice to support these community-centered development models\, providing blueprints for the Global Majority to decolonize AI. \nAbout the speaker:\nClaudia Pozo is Language Justice Co-Lead at Whose Knowledge? She’s a South American brown feminist\, multifaceted activist\, researcher\, social scientist and strategist\, whose work is grounded in knowledge and language justice. She holds an MPhil in Development Studies and a BA in Communications. \n\nRegister here via our community on Circle
URL:https://aiequalitytoolbox.com/event/moving-beyond-big-tech-blueprints-for-community-led-language-ai-with-claudia-pozo-ai-equality-pub-talk/
ATTACH;FMTTYPE=image/png:https://aiequalitytoolbox.com/wp-content/uploads/2026/02/5.png
END:VEVENT
END:VCALENDAR