650 research outputs found
A federated learning framework for the next-generation machine learning systems
MSc dissertation in Industrial Electronics and Computers Engineering (specialization in Embedded Systems and Computers)
The end of Moore's Law, aligned with rising concerns about data privacy, is forcing machine learning
(ML) to shift from the cloud to the deep edge, near the data source. In next-generation ML systems,
the inference and part of the training process will be performed right on the edge, while the cloud will be
responsible for major ML model updates. This new computing paradigm, referred to by academic and
industry researchers as federated learning, alleviates the load on the cloud and network infrastructure while
increasing data privacy. Recent advances have made it possible to efficiently execute the inference pass
of quantized artificial neural networks on Arm Cortex-M and RISC-V (RV32IMCXpulp) microcontroller units
(MCUs). Nevertheless, training is still confined to the cloud, imposing the transfer of high volumes
of private data over the network.
To tackle this issue, this MSc thesis makes the first attempt to run decentralized training on Arm
Cortex-M MCUs. To port part of the training process to the deep edge, it proposes L-SGD, a lightweight
version of stochastic gradient descent optimized for maximum speed and minimal memory footprint
on Arm Cortex-M MCUs. L-SGD is 16.35x faster than the TensorFlow solution while reducing the
memory footprint by 13.72%, at the cost of a negligible accuracy drop of only 0.12%.
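At its core, the optimizer that L-SGD streamlines is the plain stochastic gradient descent update. A minimal NumPy sketch (illustrative only, not the thesis' Cortex-M implementation):

```python
import numpy as np

def sgd_step(weights, grads, lr=0.01):
    """One plain SGD update per tensor: w <- w - lr * dL/dw."""
    return [w - lr * g for w, g in zip(weights, grads)]

# toy example: a single 2x2 weight tensor and its gradient
w = [np.ones((2, 2))]
g = [np.full((2, 2), 0.5)]
w = sgd_step(w, g, lr=0.1)  # every entry becomes 1 - 0.1 * 0.5 = 0.95
```

An MCU port would replace the floating-point tensors with quantized buffers updated in place to keep the memory footprint small.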
To merge the local model updates returned by edge devices, this MSc thesis proposes R-FedAvg, an
implementation of the FedAvg algorithm that reduces the impact of faulty model updates returned by
malicious devices.
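A robust aggregation rule in the spirit of R-FedAvg can be sketched as follows; the median-distance rejection rule below is an assumption for illustration, not the thesis' actual criterion:

```python
import numpy as np

def robust_fedavg(updates, tol=3.0):
    """Average client updates, dropping outliers far from the median.

    updates: list of flattened model-update vectors, one per client.
    tol: allowed deviation in multiples of the median client distance.
    Note: this rejection rule is illustrative, not the R-FedAvg definition.
    """
    U = np.stack(updates)
    med = np.median(U, axis=0)                 # coordinate-wise median
    dists = np.linalg.norm(U - med, axis=1)    # each client's deviation
    scale = np.median(dists) + 1e-12           # robust distance scale
    keep = dists <= tol * scale                # reject far-away updates
    return U[keep].mean(axis=0)

honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
malicious = [np.array([100.0, -100.0])]
agg = robust_fedavg(honest + malicious)  # malicious update is filtered out
```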
MUD-PQFed: Towards Malicious User Detection in Privacy-Preserving Quantized Federated Learning
Federated Learning (FL), a distributed machine learning paradigm, has been
adapted to mitigate privacy concerns for customers. Despite its appeal, various
inference attacks can exploit shared plaintext model updates, which embed traces
of customers' private information, leading to serious privacy
concerns. To alleviate this privacy issue, cryptographic techniques such as
Secure Multi-Party Computation and Homomorphic Encryption have been used for
privacy-preserving FL. However, such security issues in privacy-preserving FL
are poorly elucidated and underexplored. This work is the first attempt to
elucidate the triviality of performing model corruption attacks on
privacy-preserving FL based on lightweight secret sharing. We consider
scenarios in which model updates are quantized to reduce communication overhead,
in which case an adversary can simply provide local parameters outside
the legal range to corrupt the model. We then propose the MUD-PQFed protocol,
which can precisely detect malicious clients performing attacks and enforce
fair penalties. By removing the contributions of detected malicious clients,
the global model utility is preserved to be comparable to the baseline global
model without the attack. Extensive experiments validate effectiveness in
maintaining baseline accuracy and detecting malicious clients in a fine-grained
manner.
Comment: 13 pages, 13 figures.
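The attack this work builds on, submitting quantized parameters outside the legal range, is easy to state in plaintext. A toy range check, assuming unsigned 8-bit quantization; detecting such violations under secret sharing, as MUD-PQFed does, is the hard part:

```python
import numpy as np

def find_out_of_range(update, n_bits=8):
    """Flag quantized parameters outside the legal range.

    For unsigned n-bit quantization, legal values are 0 .. 2**n_bits - 1.
    Returns the indices of parameters violating that range.
    """
    lo, hi = 0, 2**n_bits - 1
    update = np.asarray(update)
    return np.flatnonzero((update < lo) | (update > hi))

# an adversary slips in values outside the legal 8-bit range [0, 255]
bad = find_out_of_range([12, 255, 300, -4, 17])  # flags indices 2 and 3
```

With secure aggregation the server only observes aggregated shares, never individual plaintext updates, which is why a check this simple is not directly available.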
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning
Traditionally, federated learning (FL) aims to train a single global model
while collaboratively using multiple clients and a server. Two natural
challenges that FL algorithms face are heterogeneity in data across clients and
collaboration of clients with diverse resources. In this work, we
introduce a quantized and personalized FL algorithm, QuPeD, that facilitates
collective (personalized model compression) training via knowledge
distillation (KD) among clients who have access to heterogeneous data and
resources. For personalization, we allow clients to learn compressed
personalized models with different quantization
parameters and model dimensions/structures. Towards this, first we propose an
algorithm for learning quantized models through a relaxed optimization problem,
where quantization values are also optimized over. When each client
participating in the (federated) learning process has different requirements
for the compressed model (both in model dimension and precision), we formulate
a compressed personalization framework by introducing knowledge distillation
loss for local client objectives collaborating through a global model. We
develop an alternating proximal gradient update for solving this compressed
personalization problem, and analyze its convergence properties. Numerically,
we validate that QuPeD outperforms competing personalized FL methods, FedAvg,
and local training of clients in various heterogeneous settings.
Comment: Appeared in NeurIPS 2021.
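The distillation coupling between a personalized (student) model and the global (teacher) model can be illustrated with a standard temperature-softened KL loss; this is a generic KD loss sketch, not QuPeD's full objective, which also optimizes quantization values:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between softened output distributions."""
    p = softmax(teacher_logits, T)  # global-model (teacher) distribution
    q = softmax(student_logits, T)  # personalized (student) distribution
    return float(np.sum(p * (np.log(p) - np.log(q + 1e-12))))

loss_same = distillation_loss([2.0, 0.5], [2.0, 0.5])  # near zero
loss_diff = distillation_loss([0.5, 2.0], [2.0, 0.5])  # positive
```

A higher temperature T softens both distributions so the student also learns from the teacher's relative confidence across classes, not just its top prediction.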
FedVQCS: Federated Learning via Vector Quantized Compressed Sensing
In this paper, a new communication-efficient federated learning (FL)
framework is proposed, inspired by vector quantized compressed sensing. The
basic strategy of the proposed framework is to compress the local model update
at each device by applying dimensionality reduction followed by vector
quantization. Subsequently, the global model update is reconstructed at a
parameter server (PS) by applying a sparse signal recovery algorithm to the
aggregation of the compressed local model updates. By harnessing the benefits
of both dimensionality reduction and vector quantization, the proposed
framework effectively reduces the communication overhead of local update
transmissions. Both the design of the vector quantizer and the key parameters
for the compression are optimized so as to minimize the reconstruction error of
the global model update under the constraint of wireless link capacity. By
considering the reconstruction error, the convergence rate of the proposed
framework is also analyzed for a smooth loss function. Simulation results on
the MNIST and CIFAR-10 datasets demonstrate that the proposed framework
provides more than a 2.5% increase in classification accuracy compared to
state-of-the-art FL frameworks when the communication overhead of the local model
update transmission is less than 0.1 bit per local model entry.
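The compress-then-reconstruct pipeline can be sketched in toy form; the random projection plus uniform scalar rounding below is a crude stand-in for the paper's optimized vector quantizer, and least-squares recovery stands in for the sparse signal recovery step:

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(update, proj, step=0.1):
    """Dimensionality reduction (random projection) + uniform rounding."""
    low_dim = proj @ update  # m-dimensional sketch of the n-dim update
    return np.round(low_dim / step).astype(np.int32)

def reconstruct(codes, proj, step=0.1):
    """Least-squares recovery (stand-in for sparse signal recovery)."""
    low_dim = codes.astype(float) * step
    return np.linalg.pinv(proj) @ low_dim  # minimum-norm estimate

n, m = 16, 8
proj = rng.standard_normal((m, n)) / np.sqrt(m)
update = np.zeros(n)
update[[2, 7]] = [1.0, -0.5]        # sparse local model update
codes = compress(update, proj)      # 8 small integers sent uplink
approx = reconstruct(codes, proj)   # 16-dim estimate at the server
```

Because the update is sparse, a proper sparse recovery algorithm (as in the paper) would recover it from far fewer measurements than the least-squares estimate used here.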
- …