AI and Equality

Re-visions of Now and Future II

Community Publication #2

We continue our work to provide a space for authors from diverse disciplines and backgrounds to showcase new perspectives and advocate for what we want our future to look like.

Our #2 Community Publication features essays from the AI & Equality Global Community and participants of our AI & Equality Summer School 2024.

Executive Summary

Our publication opens with When AI Materializes the Fantasies of the Far Right, a critical examination of how extremist political factions manipulate AI to spread misinformation. This sets the tone for a broader discussion of AI governance, followed by The Effectiveness and Future of United States Executive Orders to Promote Human-Rights Based Artificial Intelligence, which evaluates the potential of U.S. executive orders to regulate AI and steer it towards applications that affirm human rights.

Supporting Trauma-Informed Digital Connections to Enable Victims of Trafficking to Access Legal Services in Thailand highlights a use case where AI is being developed to help marginalized communities, showcasing the potential of AI for social good. The next essays, Is Team Diversity Enough When Creating AI Systems that Do Not Violate Human Rights? and Are We Subjected to the Discriminatory Rulings of Machines?, both critically question how AI is implemented. The former advocates for equity at all levels of power, while the latter asks whether AI’s discriminatory outcomes can ever be fully addressed.

Evaluating Sof+IA in Light of Data Feminism Principles examines how ethical AI can be designed and audited, using the Sof+IA chatbot as a case study framed through the principles of Catherine D’Ignazio and Lauren Klein’s book Data Feminism. Meanwhile, Reproduction of Stereotypes in AI Systems: Problem or Symptom? analyzes AI’s tendency to perpetuate social inequities through the lens of fairness definitions, a theme deepened by Much Distress and Little Relief, which critiques the predictive optimization used to distribute welfare benefits in South Africa, a system that leaves vulnerable populations unsupported.

Our publication concludes with Can bias in LLMs be used for good?, which prompts us to reconsider the very framework through which we analyze AI. If we can leverage the existing biases in large language models to interrogate massive datasets more thoroughly, we may be closer than ever to mitigating some of AI’s harmful effects.

Read the chapters below

We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.