Privacy-preserving machine learning for healthcare: open challenges and future perspectives
Machine Learning (ML) has recently shown tremendous success in modeling
various healthcare prediction tasks, ranging from disease diagnosis and
prognosis to patient treatment. Due to the sensitive nature of medical data,
privacy must be considered along the entire ML pipeline, from model training to
inference. In this paper, we conduct a review of recent literature concerning
Privacy-Preserving Machine Learning (PPML) for healthcare. We primarily focus
on privacy-preserving training and inference-as-a-service, and perform a
comprehensive review of existing trends, identify challenges, and discuss
opportunities for future research directions. The aim of this review is to
guide the development of private and efficient ML models in healthcare, with
the prospects of translating research efforts into real-world settings.
Comment: ICLR 2023 Workshop on Trustworthy Machine Learning for Healthcare (TML4H)
Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge
We envision a mobile edge computing (MEC) framework for machine learning (ML)
technologies, which leverages distributed client data and computation resources
for training high-performance ML models while preserving client privacy. Toward
this future goal, this work aims to extend Federated Learning (FL), a
decentralized learning framework that enables privacy-preserving training of
models, to work with heterogeneous clients in a practical cellular network. The
FL protocol iteratively asks randomly selected clients to download a trainable
model from a server, update it with their own data, and upload the updated
model back, while asking the server to aggregate the multiple client updates to
further improve the model. While clients in this protocol never have to
disclose their private data, the overall training process can become
inefficient when some clients have limited computational resources (i.e.,
requiring longer update times) or poor wireless channel conditions (longer
upload times). Our new FL
protocol, which we refer to as FedCS, mitigates this problem and performs FL
efficiently while actively managing clients based on their resource conditions.
Specifically, FedCS solves a client selection problem with resource
constraints, which allows the server to aggregate as many client updates as
possible and to accelerate performance improvement in ML models. We conducted
an experimental evaluation using publicly available large-scale image datasets
to train deep neural networks in simulated MEC environments. The experimental
results show that FedCS completes its training process in a significantly
shorter time than the original FL protocol.
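The resource-aware selection the abstract describes can be sketched as a greedy procedure: given each client's estimated local-update time and upload time, pick as many clients as fit within a per-round deadline. This is an illustrative reading of the idea, not the paper's exact algorithm; the tuple layout, the sequential-upload assumption, and the sort-by-upload-time heuristic are all assumptions made for this sketch.

```python
def select_clients(clients, round_deadline):
    """Greedily pick as many clients as fit within the round deadline.

    `clients` is a list of (client_id, update_time, upload_time) tuples,
    with times in seconds. This sketch assumes local updates run in
    parallel while uploads happen sequentially, so a candidate set
    finishes within the deadline when the slowest selected update plus
    the sum of selected upload times does not exceed it.
    """
    # Heuristic (an assumption of this sketch): try fast uploaders first
    # so that more clients fit before the deadline is exhausted.
    candidates = sorted(clients, key=lambda c: c[2])
    selected = []
    total_upload = 0.0   # accumulated sequential upload time
    max_update = 0.0     # slowest local update among selected clients
    for cid, t_update, t_upload in candidates:
        new_upload = total_upload + t_upload
        new_max_update = max(max_update, t_update)
        if new_max_update + new_upload <= round_deadline:
            selected.append(cid)
            total_upload, max_update = new_upload, new_max_update
    return selected
```

For example, with clients `[("a", 5, 2), ("b", 20, 1), ("c", 3, 4)]` and a 15-second deadline, the slow-updating client `"b"` is skipped while `"a"` and `"c"` are aggregated, which is the behavior the paper attributes to resource-aware selection: straggler clients no longer stall the round.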