AI and Equality

African AI & Equality Toolbox: Introduction & Stage 1

News | Researchers Daisy Salifu and Joel Nwakaire present Uganda’s Cassava Project & Nigeria’s Yellow Pepper Project

The African AI and Equality Toolbox webinar series is a groundbreaking initiative demonstrating human rights-centered AI development across African contexts. The series emphasizes a crucial shift from importing Global North AI models to cultivating uniquely African approaches. This webinar delved into Stage One of AI development: objective setting and team composition, highlighting the profound impact of community involvement on the success and sustainability of AI projects.

The Foundation of AI: Objectives and Team Composition

The speakers, Prof. Daisy Salifu and Prof. Joel Nwakaire, underscored that Stage One is the foundation upon which all subsequent AI development rests. They argued that many AI harms stem from the exclusion of key voices at the decision-making table. When objectives are set by developers rather than with communities, the resulting systems often fail to address the true, priority needs of their users.

Key issues highlighted for Stage One include:

  • Purpose and Context: Who defines the problem? Are developers making assumptions, or are problems co-identified with communities?
  • Historical Context: Are power imbalances or past harms in a particular space being considered?
  • Stakeholder Mapping: Who will be affected, and who is missing from the decision-making table?
  • Team Composition: Intentional representation of marginalized communities is crucial. Teams that include diverse voices are more responsive and create more effective systems.
  • Decision-Making Power: Is there genuine participation in shaping objectives, not just perfunctory consultation?
  • Expertise Recognition: Technical knowledge and lived experience must be valued as equal contributions.
  • Safe Spaces: Ensuring all voices are heard authentically, addressing barriers such as language by using local vernaculars and inclusive dialogue.
  • Objective Setting: Prioritizing community-driven objectives over imposed ones.
  • Participatory Processes: Utilizing tools for authentic engagement from project inception.
  • Power Dynamics: Addressing how gender, age, disability, and economic status affect inclusion.
  • Success Metrics: Measuring both technical performance and community empowerment.


Red Flags to Watch Out For:
Innovators should be wary of:

  • Homogeneous Teams: A lack of diversity in thinking can lead to narrow solutions.
  • Pre-set Objectives: Making assumptions about problems before engaging with affected communities. Consultation should precede key decisions.

Case Studies in Action: Uganda’s Cassava Project & Nigeria’s Yellow Pepper Project

Prof. Salifu shared insights from an AI tool developed in Uganda for cassava disease detection. While the tool was technically sound, dialogue with women farmers revealed a critical misalignment: their priority need was soil nutrient and pathogen analysis, not disease detection on leaves. This highlighted that even highly accurate tools will fail to achieve sustainable adoption if they don’t address the users’ most pressing concerns.

Prof. Nwakaire presented the Nigeria Yellow Pepper Project, a new initiative born directly from community needs. This project intentionally centered women farmers, who constituted 90% of the yellow pepper growers. Through structured community dialogues, held in safe and transparent village settings, the team uncovered that farmers were knowledgeable about diseases but needed assistance with early detection when they weren’t in the field. This led to the co-design of a standalone, SMS-based system that monitors farms and alerts farmers to potential infestations, a solution perfectly suited to local network and device limitations. The project also expanded to include IoT for irrigation and soil monitoring, and an e-extension service, demonstrating a holistic approach driven by community input.

Key Takeaways for AI Teams: Both professors emphasized crucial lessons for future AI development:

  1. Start with Farmers’ Priority Setting: Involve gender experts and social scientists from the outset to facilitate genuine community dialogue and identify priority needs.
  2. Acknowledge Potential for Exclusion: Recognize that all innovations can exclude certain populations and take deliberate action to include marginalized voices.
  3. Community Dialogue is Essential: It may seem “slow,” but this initial investment ensures sustainable adoption and wider impact. It’s better to be slow and effective than fast and irrelevant.
  4. Build Trust and Recognize Indigenous Knowledge: Researchers must approach communities with humility, acknowledging their lived experiences and local knowledge as vital contributions to innovation.
  5. Train and Empower: Provide comprehensive training, co-developing modules based on the community’s specific needs and skill levels, especially for digital literacy.
  6. Scale the Process, Not Just the Product: Sustainable scaling involves extending the entire co-creation process with the community, ensuring their continued involvement and ownership.


AI development is about community building, not just technology deployment. These African case studies offer powerful evidence that by prioritizing inclusion, listening deeply to lived experiences, and fostering genuine partnerships, AI can truly serve the needs of all.

The official launch of the African AI & Equality Toolbox and its accompanying webinar series marks a pivotal moment for AI development across the continent. The initiative, a collaborative effort between Women at the Table, the African Center for Technology Studies (ACTS), and numerous partners, aims to foster more equitable and rights-centered AI innovation in Africa.


Register for the Webinar Series:
1PM GMT | 3PM SAST | 4PM EAT  

We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.