About the Talk
Federated learning (FL) and the travelling model (TM) enable privacy-preserving model training across sites without sharing sensitive patient data. While both approaches have shown success, each faces distinct challenges related to distribution shifts between sites. To address this, we propose FedTM, a hybrid framework combining the strengths of FL and TM. FedTM begins with FL warmup training at sites with larger datasets, followed by sequential refinement through TM across all sites. We evaluated FedTM for Parkinson’s disease classification using 1,817 brain scans from 83 international sites. Model performance, misclassification disparities, and communication costs were computed and compared to standard FL and TM approaches. Our results show that FedTM improves AUROC from 77±0.01% to 82±0.01%, reduces misclassification disparities from 34±0.01% to 26±0.01%, and decreases the training load for smaller sites from 22 to 12 cycles. These advancements mark an important step toward promoting global healthcare equity and advancing responsible AI development.
About the Speaker
Raissa Souza is a postdoctoral associate in the Medical Image Processing and Machine Learning Laboratory. She holds a PhD in Biomedical Engineering with a specialization in medical imaging (2024) from the University of Calgary and a BSc in Computer Science from São Paulo State University (2017), and studied abroad at the University of California, San Diego. Her interest in applying computer science and engineering methods to improve medical care began during research visits to Simon Fraser University (2014) and the University of California, Los Angeles (2015). She also has four years of industry experience as a software engineer.
Her research develops privacy-preserving AI for healthcare, enabling AI models to be trained across multiple healthcare centres without ever sharing patient data. Her work also tackles key challenges around data and model bias as well as sociotechnical considerations.