Large-scale data have accelerated advances in AI. While it is well known that population differences arising from genetics, sex, race, diet, and various
environmental factors contribute significantly to disease, AI studies in
medicine have largely focused on locoregional patient cohorts with less diverse
data sources. Such limitation stems from barriers to large-scale data share in
medicine and ethical concerns over data privacy. Federated learning (FL) is one
potential pathway for AI development that enables learning across hospitals
without data share. In this study, we show the results of various FL strategies
on one of the largest and most diverse COVID-19 chest CT datasets: 21
participating hospitals across five continents that comprise >10,000 patients
with >1 million images. We present three techniques: Federated Averaging (FedAvg),
Incremental Institutional Learning (IIL), and Cyclical Incremental
Institutional Learning (CIIL). We also propose an FL strategy that leverages
synthetically generated data to overcome class imbalances and data size
disparities across centers. We show that FL can achieve performance comparable to Centralized Data Sharing (CDS) while maintaining high performance across
sites with small, underrepresented data. We investigate the strengths and weaknesses of all technical approaches on this heterogeneous dataset, including robustness to non-independent and identically distributed (non-IID) data. We also describe the sources of data heterogeneity, such as age, sex, and site location, in the context of FL, and show how, even among correctly labeled populations, disparities can arise due to these biases.
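For concreteness, the following is a minimal sketch of the FedAvg and CIIL aggregation schemes named above; it is an illustration under simplifying assumptions, not the study's implementation. Each site's model is assumed to be a dict of NumPy weight arrays, and `train_locally` is a hypothetical stand-in for one round of local training at a hospital.

```python
import numpy as np

def fedavg_round(site_weights, site_sizes):
    """Federated Averaging: a central server averages each site's
    parameters, weighted by that site's local dataset size."""
    total = sum(site_sizes)
    keys = site_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(site_weights, site_sizes))
        for k in keys
    }

def ciil_cycle(weights, sites, train_locally):
    """Cyclical Incremental Institutional Learning: pass a single model
    from site to site, fine-tuning at each stop; repeating this loop
    gives the cyclical variant, while a single pass corresponds to IIL."""
    for site in sites:  # one full cycle over participating hospitals
        weights = train_locally(weights, site)
    return weights
```

The key design difference the sketch highlights: FedAvg trains in parallel and aggregates, so no site's raw data or sequential position dominates, whereas IIL/CIIL transfer one model serially, which makes the result sensitive to site ordering and data size imbalance.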