In this Open Studio, Emma Kallina, a PhD researcher at the University of Cambridge and visiting fellow at the Alan Turing Institute, walks us through her multi-year journey investigating how to meaningfully involve stakeholders in AI development. Her motivation stems from a frustration many in the space share: responsible AI principles are everywhere, but their real-world implementation is vague, inconsistent, and ineffective.
Emma began her PhD as governments and companies were enthusiastically publishing responsible AI guidelines. But these high-level principles—transparency, fairness, accountability—were often too abstract, internally contradictory, and nearly impossible to measure, and they typically lacked accountability structures. Her insight: perhaps the people affected by AI systems should help define what these principles mean in context.
Drawing on her background in psychology, UX research, and HCI, Emma frames stakeholder involvement as a bridge between abstract ideals and grounded practice. She emphasizes the importance of external stakeholders—especially those affected by AI systems without ever using them—and notes their exclusion from most current development processes.
Her research question is straightforward: How can we achieve more meaningful stakeholder involvement in AI development?
To explore this question, Emma organizes her work into four layers:
- The Promise
By conducting a literature review across 26 global organizations involved in responsible AI, Emma identified five key benefits of stakeholder involvement: rebalancing decision power, understanding socio-technical context, anticipating risks, increasing trust and public understanding, and enabling public scrutiny. These benefits are central to responsible AI, but are they being realized in practice?
- The Reality
Through a survey of 130 AI practitioners and 10 follow-up interviews, Emma uncovered the real drivers behind current stakeholder engagement. The most frequently cited motivations were increasing customer value and ensuring legal compliance. Very few teams involved affected communities to understand broader societal dynamics or power imbalances. Notably, stakeholder influence was limited, involvement often came late (e.g., during usability testing), and many stakeholders, especially affected non-users, were barely involved at all.
- The Gap
Emma mapped these practices against the five benefits and found a stark mismatch. Most current approaches contribute very little toward the high-level goals organizations claim to support. In short, stakeholder involvement is happening, but not in a way that redistributes power, ensures inclusion, or shapes early-stage decision-making.
- The Path Forward
Emma developed a process framework, adapted from healthcare to AI development, that identifies stages such as organizational readiness, inclusive recruitment, knowledge sharing, internal communication, and decision conflict resolution. She plans to pilot it with AI teams to test existing tools and co-design new ones, focusing on real-world implementation.
Emma’s ongoing work aims to help practitioners move from checkbox exercises toward meaningful stakeholder collaboration. Involving stakeholders early, often, and with real decision-making power is not just a nice-to-have—it’s essential for truly responsible AI.
About Emma Kallina
Emma Kallina is a third-year PhD student at the University of Cambridge, where her research focuses on how involving affected communities during AI development can support more responsible systems in practice. Her background is in Psychology, Human-Computer Interaction, and UX Research. She is a visiting researcher at the Institute for Technology & Humanity and a visiting fellow at the Alan Turing Institute.