Beyond Metrics: Intersectionality & The Future of AI Fairness

Steven Vethman on why quantifying bias is insufficient for AI fairness.

Learn five actionable steps to integrate intersectionality, Black feminist theory, and social justice into ethical AI development.

In a recent AI & Equality Open Studio, Steven Vethman observed that the push for AI fairness has gained momentum over the past few years, shifting the focus from merely building powerful algorithms to a critical question: is this software working well for everyone?

Much of this crucial work, including that done by many in the AI & Equality community, has centered on fairness metrics: quantifying errors and biases to ensure non-discrimination. Yet Vethman, alongside critical scholars, argues that this approach, while necessary, is insufficient for achieving truly ethical AI.

Vethman points to the social welfare scandal in the Netherlands, where algorithms unfairly flagged 27,000 families for fraud, with devastating consequences. The burden was not equally distributed; Black single mothers were disproportionately affected. Vethman emphasizes that a metric on “Black people” and a metric on “women” cannot capture the complex, compounding discrimination faced by Black single mothers. The example exposes the limits of single-axis approaches to AI bias.
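
To see concretely why single-axis metrics can fall short, consider a minimal sketch in Python. The data below is entirely synthetic and invented for illustration; it is not from the Dutch case or from Vethman’s talk. Flag rates look perfectly balanced along race and along gender taken separately, while one intersectional subgroup bears all of the harm:

```python
# Minimal illustrative sketch with synthetic data: single-axis fairness
# metrics can look balanced while an intersectional subgroup bears all
# the harm.
import pandas as pd

# Hypothetical audit log: flagged = 1 means the model marked a case as fraud.
df = pd.DataFrame({
    "race":    ["Black"] * 4 + ["white"] * 4,
    "gender":  ["woman", "woman", "man", "man"] * 2,
    "flagged": [1, 1, 0, 0,  0, 0, 1, 1],
})

# Single-axis metrics: flag rates along one attribute at a time.
print(df.groupby("race")["flagged"].mean())    # Black 0.50, white 0.50
print(df.groupby("gender")["flagged"].mean())  # man 0.50, woman 0.50

# Intersectional view: the same cases, split on both axes at once.
print(df.groupby(["race", "gender"])["flagged"].mean())
# (Black, woman) is flagged 100% of the time and (Black, man) 0% of
# the time, yet both single-axis metrics above are perfectly balanced.
```

Even this quantitative view is only the start of Vethman’s critique: a perfectly intersectional metric would still not answer who decides whether the system runs at all.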

This reality, Vethman asserts, necessitates embracing intersectionality. Rooted in Black feminist theory, intersectionality is not simply about adding more metrics; it is fundamentally about social justice in AI. It requires moving beyond measuring error rates for specific subgroups and instead asking: Who gets to decide whether this AI is used? Can an affected person contest the decision? Will deploying this AI further marginalize Black women or render them more invisible?

When Vethman and colleagues brought this critique to focus groups with AI experts, the feedback was revealing. Recommendations such as “metrics are not the solution” were often perceived as an attack and deemed “impractical” or “outside of scope.” Vethman notes that this resistance stemmed from a foundational mismatch: the critiques often failed to provide actionable first steps that fit within the current AI development workflow.

Vethman and the team realized the goal was not to tear down existing practice but to bridge the gap with actionable, accessible steps for implementing AI fairness. Drawing on a robust body of literature and collaborative focus groups, Vethman developed five iterative themes for an intersectional approach that AI experts can implement immediately:

  1. Insist and Collaborate: Vethman stresses that practitioners hold the power to insist on a diverse, interdisciplinary team. They should not bear the full ethical responsibility alone. They must bring in social scientists, ethicists, and—crucially—those with lived experience. Vethman reminds the audience that quantitative methods are not the only, or even always the best, starting point for ethical AI.

  2. Position and Reflect: Vethman advises teams to discuss and document the different assumptions and worldviews within the group. They should write a positionality statement: who are we, who will this AI impact, and what is our relation to them? Vethman suggests keeping this documentation of pluriformity throughout the project lifecycle.

  3. Invite, Don’t Make it Happen (Participation): The focus, Vethman writes, should be on genuine co-ownership and influence. Practitioners must compensate people for their participation and ensure full transparency, not just of the code but of the why: the purpose and justification for the system’s existence. This is key for advancing social justice in AI.

  4. Power in Social Context: Vethman encourages looking beyond the data. Practitioners should use exercises like persona mapping to understand who benefits, who decides, and who is most negatively impacted by the system within existing power structures. This helps uncover systemic AI bias.

  5. Critique the Objective (AI?): Vethman calls for questioning the “zero question”: Should AI be used at all? Practitioners must compare the potential of using AI with the current process and other non-AI alternatives. For example, instead of asking “How can we make facial recognition fairer?” Vethman suggests asking, “How can we ensure safety in city hall without creating a problematic surveillance database?”


Vethman concludes that applying an intersectional approach means realizing that the technical task is nested within a complex social reality. It shifts the starting question from “How can we do it?” to “Do we want to do it?” or even “What are the underlying reasons people are being excluded?”


Vethman asserts that this critical reflection is a necessary first step towards building AI systems that serve justice, not merely efficiency, and encourages a collective shift towards truly ethical AI.

We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.