FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices using a Computing Power Aware Scheduler
Cross-silo federated learning offers a promising solution to collaboratively
train robust and generalized AI models without compromising the privacy of
local datasets, e.g., in healthcare, finance, and scientific projects that
lack a centralized data facility. Nonetheless, because of the disparity of
computing resources among different clients (i.e., device heterogeneity),
synchronous federated learning algorithms suffer from degraded efficiency when
waiting for straggler clients. Similarly, asynchronous federated learning
algorithms experience degradation in the convergence rate and final model
accuracy on non-identically and independently distributed (non-IID)
heterogeneous datasets due to stale local models and client drift. To address
these limitations in cross-silo federated learning with heterogeneous clients
and data, we propose FedCompass, an innovative semi-asynchronous federated
learning algorithm with a computing power aware scheduler on the server side,
which adaptively assigns varying amounts of training tasks to different clients
using the knowledge of the computing power of individual clients. FedCompass
ensures that multiple locally trained models from clients are received almost
simultaneously as a group for aggregation, effectively reducing the staleness
of local models. At the same time, the overall training process remains
asynchronous, eliminating prolonged waiting periods from straggler clients.
Using diverse non-IID heterogeneous distributed datasets, we demonstrate that
FedCompass achieves faster convergence and higher accuracy than other
asynchronous algorithms while remaining more efficient than synchronous
algorithms when performing federated learning on heterogeneous clients.
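The scheduling idea described above, assigning work in proportion to each client's measured speed so that a group of updates arrives at roughly the same time, can be sketched as follows. This is an illustrative toy, not the actual FedCompass implementation; all function and parameter names here are hypothetical.

```python
# Illustrative sketch (not the real FedCompass code): give each client a
# number of local training steps proportional to its measured speed, so
# all clients in a group finish at roughly the same wall-clock time,
# reducing the staleness of the slowest client's model.

def assign_local_steps(client_speeds, target_time, min_steps=1, max_steps=100):
    """Return per-client step counts so estimated finish times align.

    client_speeds: dict mapping client id -> measured steps/second.
    target_time:   desired wall-clock duration of the round (seconds).
    """
    assignments = {}
    for cid, speed in client_speeds.items():
        # A client running `speed` steps/s for `target_time` seconds
        # completes roughly speed * target_time local steps.
        steps = int(speed * target_time)
        assignments[cid] = max(min_steps, min(max_steps, steps))
    return assignments

speeds = {"fast": 10.0, "medium": 5.0, "slow": 1.0}
plan = assign_local_steps(speeds, target_time=10.0)
# The fast client trains for ~100 steps and the slow one for ~10,
# so both are expected to report back near t = 10 s.
```

The clamping to `min_steps`/`max_steps` stands in for the bounds a real scheduler would impose so that very fast clients do not drift too far locally and very slow clients still contribute.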
FAIR principles for AI models, with a practical application for accelerated high energy diffraction microscopy
A concise and measurable set of FAIR (Findable, Accessible, Interoperable and
Reusable) principles for scientific data is transforming the state-of-practice
for data management and stewardship, supporting and enabling discovery and
innovation. Learning from this initiative, and acknowledging the impact of
artificial intelligence (AI) in the practice of science and engineering, we
introduce a set of practical, concise, and measurable FAIR principles for AI
models. We showcase how to create and share FAIR data and AI models within a
unified computational framework combining the following elements: the Advanced
Photon Source at Argonne National Laboratory, the Materials Data Facility, the
Data and Learning Hub for Science, funcX, and the Argonne Leadership
Computing Facility (ALCF), in particular the ThetaGPU supercomputer and the
SambaNova DataScale system at the ALCF AI Testbed. We describe how this
domain-agnostic computational framework may be harnessed to enable autonomous
AI-driven discovery.
APPFLx: Providing Privacy-Preserving Cross-Silo Federated Learning as a Service
Cross-silo privacy-preserving federated learning (PPFL) is a powerful tool to
collaboratively train robust and generalized machine learning (ML) models
without sharing sensitive (e.g., healthcare or financial) local data. To ease
and accelerate the adoption of PPFL, we introduce APPFLx, a ready-to-use
platform that provides privacy-preserving cross-silo federated learning as a
service. APPFLx employs Globus authentication to allow users to easily and
securely invite trustworthy collaborators for PPFL, implements several
synchronous and asynchronous FL algorithms, streamlines the FL experiment
launch process, and enables tracking and visualizing the life cycle of FL
experiments, allowing domain experts and ML practitioners to easily orchestrate
and evaluate cross-silo FL under one platform. APPFLx is available online at
https://appflx.lin
Book of Abstracts of the 2nd International Conference on Applied Mathematics and Computational Sciences (ICAMCS-2022)
It is a great privilege for us to present the abstract book of ICAMCS-2022 to the authors and the delegates of the event. We hope that you will find it useful, valuable, aspiring, and inspiring. This book is a record of abstracts of the keynote talks, invited talks, and papers presented by the participants, which indicates the progress and state of development in research at the time of writing the research article. It is an invaluable asset to all researchers. The book provides a permanent record of this asset.
Conference Title: 2nd International Conference on Applied Mathematics and Computational Sciences
Conference Acronym: ICAMCS-2022
Conference Date: 12-14 October 2022
Conference Organizers: DIT University, Dehradun, India
Conference Mode: Online (Virtual)