AI and Equality

News

"Come to us first": Centering Community Organizations in
Artificial Intelligence for Social Good Partnerships

Hongjin Lin explored the realities and tensions of AI for Social Good (AI4SG) initiatives, raising critical questions about who truly benefits from these efforts.

In a recent Pubtalk, Hongjin Lin examined the realities and tensions of AI for Social Good (AI4SG) initiatives, asking who truly benefits from them. Drawing on both personal experience and a research study she led, Lin offered a grounded, nuanced view of how AI technologies are being integrated into community-based projects, and of the complex power dynamics that underlie them. She presented the team's published paper, “‘Come to us first’: Centering Community Organizations in Artificial Intelligence for Social Good Partnerships.”

Lin began by noting the upward trend in AI for Social Good initiatives, particularly since 2020. These efforts often emphasize partnerships between AI developers and community organizations, assuming that the latter will benefit from access to advanced technologies. But, Lin asked, do they?

This question became central to her research. Lin recounted her time working with the U.S. Environmental Protection Agency (EPA), where a high-performing algorithm was ultimately never implemented due to concerns about distributive impacts and human-computer interaction challenges. This experience led her to investigate broader patterns of implementation failure and misalignment in AI4SG projects.

The study, based on 16 semi-structured interviews with staff from a diverse set of community organizations, reveals a more complicated picture than the success stories often featured in tech media and corporate reports. These organizations had partnered with technology teams on projects ranging from citizen complaint triaging to bee behavior modeling, with funding structures that varied widely, from short summer programs to year-long grants.

Despite their different contexts, most participants expressed similar frustrations: short timelines, top-down decision-making, and a lack of sustained support. Often, Lin noted, projects are shaped more by the interests of funders or research labs than by the actual needs of the communities they aim to support.

To analyze these findings, Lin applied the Data Feminism framework, which challenges dominant power structures in data science and centers marginalized voices. One key principle, examining power, helped her interrogate how decisions around AI tool development are made and whose knowledge counts. In many cases, community organizations contributed critical data, domain expertise, and contextual insight, yet their goals were often sidelined in favor of more technically driven outcomes.

Lin also highlighted the issue of epistemic injustice, in which technical expertise is privileged over the lived experience and knowledge of community partners. Participants reported being pressured to adopt AI solutions because that’s where the funding was, regardless of whether those tools were appropriate or impactful.

Still, Lin emphasized that many organizations found value in these partnerships, not necessarily through successful product deployments but through the opportunity to learn, explore, and build relationships. Only two of the 14 projects studied led to an actual system deployment, yet participants spoke of the importance of being included in the process and of building capacity along the way.

Lin concluded by calling for a “relationship-first” approach to AI for Social Good, one that emphasizes mutual respect, early engagement, and shared decision-making. As one participant poignantly put it: “Our dream is that before a research institute decides to do something, they come to us first and ask, ‘What do you need?’”

Her talk served as a timely reminder that for AI4SG to live up to its promise, it must prioritize the people it aims to serve—not just the technologies it seeks to develop.

About Hongjin Lin 

Hongjin Lin is a third-year Ph.D. candidate in Computer Science, advised by Professor Krzysztof Gajos. Her research sits at the intersection of AI and social justice, combining qualitative critical evaluation with technology development. She draws on feminist epistemologies such as Data Feminism and on thoughtful community-based research methods that center relationships with people and nature. Hongjin was born and raised in Guangzhou, China, and moved to the US for her undergraduate degree in Mathematics and Computer Science at Occidental College. She received her master’s degree in Data Science from the London School of Economics and Political Science.

Before Harvard, she worked as a research fellow at Stanford Law School, developing and evaluating machine learning systems for environmental policy enforcement in partnership with the EPA. Outside of research, she has completed projects with nonprofits in China, the US, and Malawi, and worked as a data-for-development intern at UNDP in New York and Costa Rica. She is a dedicated yogi, dancer, community-living member, plant mama, and a happy camper whenever she gets the chance.

About the author

Amina Soulimani

Programme Associate at Women At The Table and Doctoral Candidate in Anthropology at the University of Cape Town


We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.