AI and Equality


FedTM: The Hybrid AI Model Boosting Performance and Digital Health Equity

Raissa Souza demonstrates that by recognizing and designing for the limitations of less-resourced sites, the FedTM model allows their unique data to contribute to AI development.

We recently had the pleasure of hosting Raissa Souza for a truly insightful AI and Equality talk, featuring a brilliant presentation on a new hybrid model called FedTM (Federated Traveling Model). This innovative approach offers a promising solution to one of the biggest challenges in AI development, particularly in healthcare: the data challenge and the health inequities that result from it.

Read the paper: Combining federated learning and travelling model boosts performance and opens opportunities for digital health equity

The core problem, as Souza eloquently explained, is that AI models need vast, representative datasets to be accurate and useful. In healthcare, this often means collecting data from many hospitals. The traditional method, centralized learning—where all data is pooled into one big repository—is costly, time-consuming, and runs into serious data privacy and sovereignty issues. Even worse, these large repositories are often created by wealthy nations, leading to models that perform poorly for populations in less-resourced regions.

Enter Distributed Learning, an approach where the data stays local, and only the model travels or is shared. Two main distributed methods exist:

  1. Federated Learning (FL): Sites train their own models in parallel, and a central server aggregates the knowledge. However, FL requires each participating site to have enough data and representation of all classes (e.g., healthy and sick) to train a meaningful model. This requirement often excludes smaller or remote clinics.
  2. The Traveling Model (TM): A single model travels sequentially from site to site, refining itself with each new local dataset. Crucially, TM has no minimum data requirement, making it far more inclusive for sites with very little or incomplete data.
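The contrast between the two distributed methods can be made concrete with a minimal sketch. Everything here is illustrative and not from the paper: the "model" is just a weight vector, and `local_train` uses a toy least-squares objective as a stand-in for real site training.

```python
import numpy as np

def local_train(weights, data, lr=0.1, steps=5):
    """Illustrative local update: a few gradient steps on a toy
    least-squares objective (stand-in for real site training)."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, site_data):
    """Federated Learning: sites train in parallel from the same
    starting weights; a server averages, weighted by site size."""
    updates = [local_train(global_w, d) for d in site_data]
    sizes = np.array([len(d[1]) for d in site_data], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

def traveling_model(w, site_data):
    """Traveling Model: one model visits sites sequentially,
    refining itself at each stop -- no minimum data size needed."""
    for d in site_data:
        w = local_train(w, d)
    return w

# Tiny demo with synthetic linear data (illustrative only).
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
def make_site(n):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w
sites = [make_site(n) for n in (50, 40, 1)]      # last site: one record
w_fl = federated_round(np.zeros(2), sites[:2])   # FL needs "enough" data
w_tm = traveling_model(np.zeros(2), sites)       # TM visits every site
```

Note the structural difference: `federated_round` requires each participating site to train a meaningful model on its own, while `traveling_model` happily accepts the single-record site at the end of the sequence.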

The genius of FedTM is that it smartly combines the strengths of both FL and TM to promote digital health equity.

The Two-Stage Approach to Equity

The FedTM approach is a two-stage process designed to get the best performance while minimizing the burden on less-resourced sites.

Phase 1: The Federated Warm-up. This initial phase is pure Federated Learning. Only the better-resourced sites (those with enough data to meet the FL requirements) participate. They collaboratively train a strong foundational model. The computational burden for this resource-intensive step is placed entirely on the sites best equipped to handle it.

Phase 2: The Traveling Model Refinement. Once the foundational model is ready, it enters the TM phase. Here, the model sequentially visits all sites, including the smaller, less-resourced ones that were excluded from Phase 1. Since the TM approach has no data size restrictions, these sites can now contribute their unique and vital data to refine the model, increasing its representativeness and robustness.

Why FedTM is a Game-Changer for Equity
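The two phases above can be sketched as a single pipeline. This is a toy, self-contained illustration, not the paper's implementation: the "model" is one number, the local update just nudges it toward a site's data mean, and the `min_fl_size` threshold is an assumed stand-in for the FL data requirement.

```python
def fedtm(sites, min_fl_size=20, fl_rounds=3):
    """Two-phase FedTM sketch (illustrative toy, 1-D 'model').
    Phase 1: federated warm-up on well-resourced sites only.
    Phase 2: traveling refinement across *all* sites."""
    def local_update(w, data, lr=0.5):
        # Toy stand-in for site training: nudge the weight
        # toward the site's data mean.
        return w + lr * (sum(data) / len(data) - w)

    big = [s for s in sites if len(s) >= min_fl_size]
    small = [s for s in sites if len(s) < min_fl_size]

    # Phase 1: federated warm-up -- parallel training plus a
    # size-weighted average, run only on sites that meet the
    # data requirement (the computational burden stays here).
    w = 0.0
    for _ in range(fl_rounds):
        updates = [local_update(w, s) for s in big]
        sizes = [len(s) for s in big]
        w = sum(u * n for u, n in zip(updates, sizes)) / sum(sizes)

    # Phase 2: traveling refinement -- the warmed-up model visits
    # every site sequentially, including the small ones excluded
    # from Phase 1.
    for s in big + small:
        w = local_update(w, s)
    return w

# Demo: two well-resourced sites plus one tiny site (illustrative).
w_final = fedtm([[1.0] * 30, [2.0] * 25, [5.0]])
```

The key design point survives even in this toy: the single-record site never has to train a model alone, yet its data still shapes the final weights.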

The study, using a Parkinson’s disease case study with sites of vastly different data sizes, showed three critical outcomes:

  1. Boosted Performance: FedTM consistently outperformed the Traveling Model alone (the only distributed method that could even work on the highly imbalanced dataset), achieving a 3-4% improvement across all clinical metrics.
  2. Fairer Outcomes: By smartly combining the methods, FedTM significantly reduced the performance disparity between larger and smaller sites, achieving a more balanced classification rate. This is key: the model isn’t just better; it’s fairer to the populations served by the smaller clinics.
  3. Alleviated Burden: By shifting the most resource-intensive training (the federated warm-up) to the better-resourced sites, FedTM effectively reduced the amount of training required from the smaller, less-resourced clinics. This is a crucial step in promoting a more equitable distribution of the computational load.

The takeaway: FedTM acts as a powerful democratization force. By recognizing and designing for the limitations of less-resourced sites, it allows their unique data to contribute to AI development. As Souza concluded, if we want technology to truly improve healthcare access for everyone, “we need to embrace the differences instead of silence them.” FedTM is a huge step in developing systems that consider the specific context of every population.

Raissa Souza is a postdoctoral associate in the Medical Image Processing and Machine Learning Laboratory, with a PhD in Biomedical Engineering with a specialization in medical imaging (2024) from the University of Calgary. She holds a BSc in Computer Science from São Paulo State University (2017) and studied abroad at the University of California, San Diego. Her interest in applying computer science and engineering methods to improve medical care began during research visits to Simon Fraser University (2014) and the University of California, Los Angeles (2015). She also has four years of industry experience as a software engineer.

Her research develops privacy-preserving AI for healthcare, enabling AI models to be trained across multiple healthcare centres without ever sharing patient data. Her work also tackles key challenges with respect to data and model biases and sociotechnical considerations.

We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.