AI and Equality

News

Seeing What’s Missing: Bringing Responsible AI Research Gaps into Focus Through a Landscape Analysis

Anna Neumann presents the findings of a landscape analysis on Responsible AI conducted across major conferences like FAccT, AIES, and CHI. 

In a recent pub talk, Anna Neumann, a second-year PhD researcher at the RC Trust—a collaborative AI research initiative in Germany—presented a compelling talk titled “Seeing What’s Missing: Bringing Responsible AI Research Gaps Into Focus Through Landscape Analysis.” Though the title is a mouthful, the mission is clear: to map and make sense of the rapidly evolving world of Responsible AI.

Anna’s research group, Compliant and Accountable Systems, views AI as a sociotechnical system—intertwining technology with the complex realities of human interaction and societal impact. With a background in electrical engineering and a current focus on critical computing, she brings a multidisciplinary lens to this work.

Her project involved a systematic review of over 1,000 paper abstracts from major human-centered AI conferences like AIES, FAccT, and CHI. The aim? To understand what themes dominate Responsible AI research—and just as importantly, what’s missing. The analysis distilled the field into what Anna calls the “Three I’s” of Responsible AI: Interaction, Implication, and Intervention.

Interaction explores how humans perceive, understand, and engage—directly or indirectly—with AI systems.

Implication examines the societal consequences of AI, from fairness and harm to shifts in power structures.

Intervention focuses on how researchers attempt to improve AI systems—through audits, new design methods, or policy guidance.

One surprising insight was how many papers aimed to intervene, often proposing tools or frameworks to make AI systems more fair or transparent. However, Anna also flagged notable gaps: vague discussions of “harm” with few specifics, and a lack of research on upstream (developer-side) and downstream (affected community) impacts.

The analysis surfaced several key findings. Interventions are a major focus, with many researchers proposing solutions to improve AI. There is also a greater emphasis on the implications of AI than on interactions, even though CHI is a prominent conference dedicated to human-computer interaction. Dominant research areas include bias and fairness, system evaluations, and direct interactions with AI, while harms and risks are seldom identified in specific terms and upstream and downstream implications remain underexplored.

Anna’s research aims to help various groups—researchers, practitioners, policymakers, affected communities, and industry stakeholders—understand and navigate the complex landscape of Responsible AI. Looking ahead, Anna predicts increased use of foundation models, AI as a Service (AIaaS), human-AI teaming, and Generative AI content, along with growing over-reliance on AI. The research challenges that follow include reproducibility, data transparency, black-box access, unclear regulation, a fast-changing landscape, and limited access to compute.

About Anna Neumann:

Anna Neumann is a PhD researcher at the Compliant and Accountable Systems Group at RC TRUST, where she examines how AI systems influence societal power structures with a focus on responsible development. She studied Information and Communication Technology at Ruhr-University Bochum and now combines technical expertise with social science perspectives in her research.

About the author

Amina Soulimani

Programme Associate at Women At The Table and Doctoral Candidate in Anthropology at the University of Cape Town


We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.