AI and Equality

Library

LLM Bias Community Project

About the Talk

Data scientist Sofia Kypraiou presents the first AI & Equality data + social science community project, focused on the implications of algorithmic bias in natural language processing (NLP) systems. The first three research questions focus on gender bias in four large language models, both investigating and visualizing the bias.

The experiment is ongoing and open to all: anyone, from qualitative social scientists to quantitative computer scientists, can participate and run a few discrete experiments.
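To give a concrete flavor of the kind of experiment involved, here is a minimal sketch of a template-based gender-bias probe, written in Python with the Hugging Face transformers library. The model (bert-base-uncased), the template, and the occupation list are illustrative assumptions, not the project's actual protocol.

```python
# Illustrative sketch only: probe how strongly a masked language model
# prefers "he" vs. "she" in occupation templates. The model, template,
# and occupations are assumptions for demonstration, not the project's
# actual experimental protocol.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

occupations = ["doctor", "nurse", "engineer", "teacher"]

for job in occupations:
    prompt = f"The {job} said that [MASK] would be late."
    # Restrict scoring to the two pronouns of interest.
    results = fill_mask(prompt, targets=["he", "she"])
    scores = {r["token_str"]: r["score"] for r in results}
    print(f"{job:>8}: he={scores.get('he', 0.0):.3f}  she={scores.get('she', 0.0):.3f}")
```

A fuller study would vary the templates, average over many prompts, and repeat the probe across several models; this snippet only shows the basic mechanics of eliciting and comparing pronoun probabilities.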

About the Speaker

Sofia Kypraiou is a data scientist. She developed the AI & Equality Toolbox in collaboration with EPFL and the OHCHR in Geneva.

Recommended resources

→ Join the AI & Equality Community on Circle: https://community.aiequalitytoolbox.com

→ Framework: Radzi, N. S. M., & Musa, M. (2017). Beauty ideals, myths and sexisms: A feminist stylistic analysis of female representations in cosmetic names. GEMA Online Journal of Language Studies, 17(1).

→ Kumar, S. H., Sahay, S., Mazumder, S., Okur, E., Manuvinakurike, R., Beckage, N., & Nachman, L. (2024). Decoding biases: Automated methods and LLM judges for gender bias detection in language models. arXiv preprint arXiv:2408.03907.

→ Cai, Y., Cao, D., Guo, R., Wen, Y., Liu, G., & Chen, E. (2024, August). Locating and mitigating gender bias in large language models. In International Conference on Intelligent Computing (pp. 471-482). Singapore: Springer Nature Singapore.

→ Sant, A., Escolano, C., Mash, A., De Luca Fornaciari, F., & Melero, M. (2024). The power of prompts: Evaluating and mitigating gender bias in MT with LLMs. In A. Faleńska, C. Basta, M. Costa-jussà, S. Goldfarb-Tarrant, & D. Nozza (Eds.)

We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.