The rapid advancement of Artificial Intelligence (AI) brings incredible potential but also raises critical questions about fairness, bias, and human rights. How can we ensure AI systems truly serve humanity? According to Emma Kallina, a leading researcher in the AI Inequality Community, the answer lies in meaningful community engagement.
Kallina, whose PhD research focuses on involving affected communities in response efforts, recently shared invaluable insights on fostering human rights-respecting AI systems. Her core message? Empower communities throughout the entire AI development process.

## Beyond the Boardroom: Understanding True AI Impact
It’s easy to think of AI’s impact solely on its direct users or the developers behind it. However, Kallina emphasizes the need to identify all “affected communities” – a diverse group often overlooked.
Consider an AI tool designed to classify MRI scans. Its impact extends far beyond just clinicians. It affects:
- Patients and their families: Their health outcomes and peace of mind are directly tied to the AI’s accuracy and fairness.
- Hospitals and IT administrators: They manage the infrastructure and data.
- Nurses: They interact with patients and the technology.
- Regulators: They set the standards for safe and ethical AI.
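A stakeholder map like the one above can be captured in a very simple data structure. The following Python sketch is purely illustrative (the class, fields, and example entries are my assumptions, not a tool from the talk); its point is that flagging who is affected versus who holds influence makes the "not in the room" group fall out mechanically:

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    """One entry in a stakeholder map for an AI system."""
    name: str               # e.g. "patients", "regulators"
    directly_affected: bool # is this group impacted by the system's outputs?
    has_influence: bool     # does this group currently shape development?

def not_in_the_room(stakeholders):
    """Return groups that are deeply affected but lack influence --
    the people the mapping exercise is meant to surface."""
    return [s.name for s in stakeholders if s.directly_affected and not s.has_influence]

# Hypothetical map for the MRI classification tool described above.
mri_map = [
    Stakeholder("clinicians", directly_affected=True, has_influence=True),
    Stakeholder("patients and families", directly_affected=True, has_influence=False),
    Stakeholder("hospital IT administrators", directly_affected=False, has_influence=True),
    Stakeholder("nurses", directly_affected=True, has_influence=False),
    Stakeholder("regulators", directly_affected=False, has_influence=True),
]

print(not_in_the_room(mri_map))  # patients/families and nurses surface here
```

Real mappings would use richer dimensions (rights holders, duty bearers, power structures), but even a binary affected/influence grid makes the gaps visible.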
Kallina stresses a crucial point: we must actively seek out those “not already in the room” – individuals deeply affected by AI but currently lacking influence in its development. This involves careful stakeholder mapping, considering users and non-users, rights holders, duty bearers, and power structures.

## The Power of Participation: Why Engaging Communities Matters
Drawing on human rights principles, Kallina highlights participation and inclusion as fundamental. Research on responsible AI guidance identifies five key benefits of involving affected communities:
- Rebalancing Decision Power: Shifting influence to those most impacted.
- Improving Contextual Understanding: Gaining vital insights into diverse needs and environments.
- Enhancing Risk Anticipation: Identifying potential harms before they occur.
- Building Public Understanding and Trust: Fostering confidence in AI systems.
- Facilitating Post-Deployment Scrutiny: Ensuring ongoing accountability.
Ultimately, community engagement helps to rebalance power, priorities, and knowledge – addressing critical asymmetries that often plague AI development.

## From Consultation to Co-Creation: The Ladder of Engagement
True engagement isn’t a one-off event; it can (and should) occur at every stage of the AI lifecycle, from defining objectives to post-deployment evaluation. However, successful engagement requires a deliberate process: the right attitude, sufficient resources, and a clear scope of involvement.
Kallina referenced the “ladder of citizen participation,” adapted for AI, which illustrates different levels of engagement:
- Consult: Simply extracting information without shared objectives. (Sadly, industry often defaults to this level for commercial interests.)
- Include: Showing prototypes for feedback.
- Collaborate: Co-developing AI models with communities.
- Own: Where the community truly drives the system’s purpose.
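Because the rungs are ordered, the ladder can be modelled as an ordered enumeration. This is a hypothetical sketch, not code from the talk; the level names follow the list above, and the threshold chosen in `is_meaningful` is my own illustrative judgement call:

```python
from enum import IntEnum

class EngagementLevel(IntEnum):
    """Rungs of the participation ladder, lowest to highest."""
    CONSULT = 1      # extracting information without shared objectives
    INCLUDE = 2      # showing prototypes for feedback
    COLLABORATE = 3  # co-developing AI models with communities
    OWN = 4          # the community drives the system's purpose

def is_meaningful(level: EngagementLevel) -> bool:
    """Treat anything at or above COLLABORATE as genuine engagement,
    matching the goal of moving beyond superficial consultation."""
    return level >= EngagementLevel.COLLABORATE

print(is_meaningful(EngagementLevel.CONSULT))      # False
print(is_meaningful(EngagementLevel.COLLABORATE))  # True
```

The `IntEnum` ordering encodes the ladder's one real claim: the rungs are not interchangeable options but a progression.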
The goal is to move beyond superficial consultations towards genuine collaboration and empowerment.

## Practical Steps for Meaningful Engagement
So, how can organizations achieve this deeper engagement? Kallina outlined several practical strategies:
- Diverse Engagement Methods: Utilizing observation, surveys, interviews, focus groups, prototype testing, and public deliberation.
- Strategic Recruitment: Deliberately focusing on groups holding less power, considering protected characteristics and contextual vulnerabilities. “Hub people” (e.g., charity workers) can be invaluable for accessing vulnerable communities safely.
- Shared Knowledge Base: Developers demystifying AI for communities, and in turn, understanding community context and terminology.
- Effective Communication: Storing learned insights, prioritizing them, and crucially, informing stakeholders how their input was used (or why it wasn’t).
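The communication loop in the last point – store insights, prioritise them, and report back on how input was used (or why it wasn't) – could be tracked with something as small as the following sketch. Everything here (class names, fields, example entries) is a hypothetical illustration, not a tool Kallina described:

```python
from dataclasses import dataclass

@dataclass
class Insight:
    source: str        # which community or stakeholder raised it
    summary: str
    priority: int      # higher = more urgent
    outcome: str = ""  # how the input was used, or why it wasn't

class InsightLog:
    """Stores community input and tracks whether the loop was closed."""
    def __init__(self):
        self.insights = []

    def record(self, source, summary, priority):
        self.insights.append(Insight(source, summary, priority))

    def prioritised(self):
        """Insights ordered most-urgent first."""
        return sorted(self.insights, key=lambda i: i.priority, reverse=True)

    def awaiting_response(self):
        """Insights whose contributors have not yet been told the outcome."""
        return [i for i in self.insights if not i.outcome]

log = InsightLog()
log.record("nurses", "alerts interrupt patient conversations", priority=3)
log.record("patients", "unclear how scan data is stored", priority=5)
print(log.prioritised()[0].source)   # "patients"
print(len(log.awaiting_response()))  # 2
```

The `awaiting_response` check is the key piece: it makes "did we ever tell them what happened to their input?" an answerable question rather than a good intention.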
Kallina also recommended valuable tools and resources, including IDEO’s playbook for stakeholder involvement and the Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems from the Alan Turing Institute.

## The Future of Ethical AI
This talk provides a comprehensive and thought-provoking look at the critical role of community engagement in building ethical and equitable AI. By prioritizing the voices of those most affected, we can move beyond simply developing powerful technologies to creating AI systems that truly serve and uplift all of humanity.