Model initialization techniques are vital for improving the performance and
reliability of deep learning models in medical computer vision applications.
While much literature exists for non-medical images, the impact on medical
images, particularly chest X-rays (CXRs), is less understood. Addressing this
gap, our study explores three deep model initialization techniques: Cold-start,
Warm-start, and Shrink and Perturb start, focusing on adult and pediatric
populations. We specifically focus on scenarios where training data arrive
periodically, reflecting the real-world conditions of ongoing data influx
and the need for model updates. We evaluate these models for generalizability
against external adult and pediatric CXR datasets. We also propose novel
ensemble methods: F-score-weighted Sequential Least-Squares Quadratic
Programming (F-SLSQP) and Attention-Guided Ensembles with Learnable Fuzzy
Softmax to aggregate weight parameters from multiple models to capitalize on
their collective knowledge and complementary representations. We perform
statistical significance tests with 95% confidence intervals and p-values to
analyze model performance. Our evaluations indicate that models initialized with
ImageNet-pre-trained weights demonstrate superior generalizability over
randomly initialized counterparts, contradicting some findings for non-medical
images. Notably, ImageNet-pretrained models exhibit consistent performance
during internal and external testing across different training scenarios.
Weight-level ensembles of these models show significantly higher recall
(p<0.05) during testing compared to individual models. Thus, our study
accentuates the benefits of ImageNet-pretrained weight initialization,
especially when used with weight-level ensembles, for creating robust and
generalizable deep learning solutions.

Comment: 40 pages, 8 tables, 7 figures, 3 supplementary figures, and 4
supplementary tables
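The SLSQP-based weight aggregation proposed in the abstract can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration only, not the authors' implementation: the three synthetic "model weight vectors", the surrogate objective (a distance to a stand-in target, used here in place of a validation F-score), and the simplex constraint on the mixing coefficients.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: three "models" represented by flattened weight vectors.
rng = np.random.default_rng(0)
target = rng.normal(size=8)  # stand-in for an ideal weight vector
model_weights = [target + 0.5 * rng.normal(size=8) for _ in range(3)]


def blend(alphas):
    """Weighted average of the candidate model weight vectors."""
    return sum(a * w for a, w in zip(alphas, model_weights))


def objective(alphas):
    # Surrogate objective: in the paper this role is played by a validation
    # F-score; here we simply minimize distance to the synthetic target.
    return float(np.linalg.norm(blend(alphas) - target))


n = len(model_weights)
res = minimize(
    objective,
    x0=np.full(n, 1.0 / n),          # start from a uniform mix
    method="SLSQP",                  # SciPy's SLSQP solver
    bounds=[(0.0, 1.0)] * n,         # each coefficient in [0, 1]
    constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],  # convex combination
)
ensemble = blend(res.x)              # aggregated weight vector
print(np.round(res.x, 3))
```

The simplex constraint (non-negative coefficients summing to one) keeps the aggregate a convex combination of the candidate models, so the merged weights stay within the span of the originals; this is a common design choice for weight-level ensembling, though the paper's exact constraints may differ.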