About the video
In this excellent case study, Francesca Lucchini of the National Center for Artificial Intelligence (CENIA) in Chile walks through how multidisciplinary teams, essential questions, interrogating stereotypes, and other practices can help researchers catalyze agency and a human rights perspective in their work, beginning with decisions about what is measurable.
About the author
Francesca Lucchini is part of the tech transfer team at the National Center for Artificial Intelligence (CENIA) in Chile. She studied computer science as an undergraduate at Pontificia Universidad Católica de Chile, then continued at the same university with a master’s degree in engineering, focusing on AI applied to city graphs.
Gender/Age Bias Resources
→ Brandao, Martim. “Age and gender bias in pedestrian detection algorithms.” arXiv preprint arXiv:1906.10490. Appeared at the Workshop on Fairness Accountability Transparency and Ethics in Computer Vision (FATE CV) at CVPR 2019 (https://arxiv.org/pdf/1906.10490)
→ Wu, Wenying, et al. “Gender classification and bias mitigation in facial images.” Proceedings of the 12th ACM Conference on Web Science. 2020 (https://arxiv.org/abs/2007.)
→ StrongSORT: Make DeepSORT Great Again (https://arxiv.org/pdf/2202.)
→ Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking (https://arxiv.org/pdf/2203.)
→ ByteTrack: Multi-Object Tracking by Associating Every Detection Box (https://arxiv.org/pdf/2110.)
→ BoT-SORT: Robust Associations Multi-Pedestrian Tracking (https://arxiv.org/pdf/2206.)
We are committed to advancing human rights-based approaches in AI, and we encourage anyone interested in learning more from a global perspective to explore and contribute to our community!