A community-powered search of machine learning strategy space to find NMR property prediction models
The rise of machine learning (ML) has created an explosion in the potential
strategies for using data to make scientific predictions. For physical
scientists wishing to apply ML strategies to a particular domain, it can be
difficult to assess in advance what strategy to adopt within a vast space of
possibilities. Here we outline the results of an online community-powered
effort to swarm search the space of ML strategies and develop algorithms for
predicting atomic-pairwise nuclear magnetic resonance (NMR) properties in
molecules. Using an open-source dataset, we worked with Kaggle to design and
host a 3-month competition which received 47,800 ML model predictions from
2,700 teams in 84 countries. Within 3 weeks, the Kaggle community had produced
models with accuracy comparable to that of our best previously published "in-house"
efforts. A meta-ensemble model constructed as a linear combination of the top
predictions has a prediction accuracy that exceeds that of any individual
model and is 7-19x better than our previous state of the art. The results highlight
the potential of transformer architectures for predicting quantum mechanical
(QM) molecular properties.
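The meta-ensemble idea can be sketched in a few lines: fit linear-combination weights for the top models' predictions on held-out data, then blend. This is a minimal illustration with synthetic data, not the competition's actual pipeline; the model count, noise scales, and least-squares fitting procedure are all assumptions.

```python
import numpy as np

# Hypothetical held-out targets and predictions from 3 top models
# (n_samples x n_models); noise scales are illustrative only.
rng = np.random.default_rng(0)
y_true = rng.normal(size=100)
preds = np.stack(
    [y_true + rng.normal(scale=s, size=100) for s in (0.3, 0.4, 0.5)], axis=1
)

# Fit linear-combination weights by least squares on the held-out set
weights, *_ = np.linalg.lstsq(preds, y_true, rcond=None)

# Meta-ensemble prediction: weighted sum of the individual model predictions
ensemble = preds @ weights

# On the fitting data, the ensemble's squared error cannot exceed
# that of the best single model (each single model is in the span)
mse_single = ((preds - y_true[:, None]) ** 2).mean(axis=0).min()
mse_ensemble = ((ensemble - y_true) ** 2).mean()
```

In practice the weights would be fit on a validation split and evaluated on a separate test split to avoid overfitting the blend.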
Distributed deep learning networks among institutions for medical imaging
Objective
Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data is often limited by technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data.
Methods
We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet).
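Cyclical weight transfer can be sketched as a loop in which only the model weights travel between sites, while each institution's data stays local. This is a toy illustration with a logistic-regression model and synthetic data at 4 hypothetical institutions; the training routine, step counts, and cycle count are assumptions, not the study's configuration.

```python
import numpy as np

def local_train(w, X, y, steps=50, lr=0.1):
    """Run a few full-batch gradient steps of logistic regression locally."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)   # gradient step on local data
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Hypothetical local datasets at 4 institutions (never pooled centrally)
institutions = []
for _ in range(4):
    X = rng.normal(size=(200, 2))
    y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)
    institutions.append((X, y))

# Cyclical weight transfer: pass the weights around the ring of sites;
# a higher transfer frequency corresponds to more, shorter local phases.
w = np.zeros(2)
for cycle in range(10):
    for X, y in institutions:
        w = local_train(w, X, y)

# Average local accuracy of the final shared model
accuracy = np.mean(
    [(((X @ w) > 0).astype(float) == y).mean() for X, y in institutions]
)
```

The single-weight-transfer heuristic is the degenerate case of one pass around the ring; ensembling instead trains each site's model independently and averages predictions.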
Results
We found that cyclical weight transfer resulted in performance comparable to that of training on centrally hosted patient data. We also found that the performance of the cyclical weight transfer heuristic improved as the frequency of weight transfer increased.
Conclusions
We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.