Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling
A new amortized variance-reduced gradient (AVRG) algorithm was developed in
\cite{ying2017convergence}; it has a constant storage requirement compared
with SAGA and balanced gradient computations compared with SVRG.
One key advantage of the AVRG strategy is its amenability to decentralized
implementations. In this work, we show how AVRG can be extended to the network
case where multiple learning agents are assumed to be connected by a graph
topology. In this scenario, each agent observes data that is spatially
distributed and all agents are only allowed to communicate with direct
neighbors. Moreover, the amount of data observed by the individual agents may
differ drastically. For such situations, the balanced gradient computation
property of AVRG becomes a real advantage in reducing idle time caused by
unbalanced local data storage requirements, which is characteristic of other
reduced-variance gradient algorithms. The resulting diffusion-AVRG algorithm is
shown to converge linearly to the exact solution, and is much more memory
efficient than alternative algorithms. In addition, we propose a
mini-batch strategy to balance the communication and computation efficiency for
diffusion-AVRG. When a proper batch size is employed, it is observed in
simulations that diffusion-AVRG is more computationally efficient than exact
diffusion or EXTRA while maintaining almost the same communication efficiency.
Comment: 23 pages, 12 figures, submitted for publication
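As a rough illustration of how an AVRG-style recursion avoids SVRG's periodic full-gradient pass, the following single-agent Python sketch accumulates the gradient average across epochs under random reshuffling instead of recomputing it. The update form is paraphrased from the description above; the step size and the least-squares test problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 5
A = rng.standard_normal((N, d))
b = rng.standard_normal(N)
w_star = np.linalg.lstsq(A, b, rcond=None)[0]   # exact least-squares minimizer

def grad(i, w):
    # gradient of f_i(w) = 0.5 * (a_i^T w - b_i)^2
    return A[i] * (A[i] @ w - b[i])

w = np.zeros(d)
g_prev = np.zeros(d)   # amortized gradient average from the previous epoch
mu = 0.02              # step size (illustrative assumption)
for epoch in range(200):
    w0 = w.copy()      # epoch-start iterate anchoring the correction term
    g_next = np.zeros(d)
    for i in rng.permutation(N):              # random reshuffling
        gi = grad(i, w)
        g_hat = gi - grad(i, w0) + g_prev     # variance-reduced estimate
        g_next += gi / N                      # accumulate for the next epoch
        w -= mu * g_hat
    g_prev = g_next

print(np.linalg.norm(w - w_star))
```

Note the balanced cost: every iteration performs exactly two gradient evaluations, with no separate full-gradient sweep, which is the property the abstract credits with reducing idle time across agents.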
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks hold substantial potential for supporting a broad
range of complex, compelling applications in both military and civilian
fields, where users can enjoy high-rate, low-latency, low-cost and reliable
information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big
data analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M)
networks, and so on. This article aims to assist readers in understanding the
motivation and methodology of the various ML algorithms, so that they can be
invoked for hitherto unexplored services and scenarios in future wireless
networks.
Comment: 46 pages, 22 figures
Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability
Internet-of-Things (IoT) envisions an intelligent infrastructure of networked
smart devices offering task-specific monitoring and control services. The
unique features of IoT include extreme heterogeneity, a massive number of
devices, and unpredictable dynamics, partly due to human interaction. These
call for foundational innovations in network design and management. Ideally,
such a design should allow efficient adaptation to changing environments and
low-cost implementation scalable to a massive number of devices, subject to stringent
latency constraints. To this end, the overarching goal of this paper is to
outline a unified framework for online learning and management policies in IoT
through joint advances in communication, networking, learning, and
optimization. From the network architecture vantage point, the unified
framework leverages a promising fog architecture that enables smart devices to
have proximity access to cloud functionalities at the network edge, along the
cloud-to-things continuum. From the algorithmic perspective, key innovations
target online approaches adaptive to different degrees of nonstationarity in
IoT dynamics, and their scalable model-free implementation under limited
feedback that motivates blind or bandit approaches. The proposed framework
aspires to offer a stepping stone that leads to systematic designs and analysis
of task-specific learning and management schemes for IoT, along with a host of
new research directions to build on.
Comment: Submitted on June 15 to Proceedings of the IEEE Special Issue on
Adaptive and Scalable Communication Networks
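The limited-feedback regime that motivates the bandit approaches above can be illustrated with a minimal sketch. The setting below is hypothetical (an IoT controller routing tasks to one of several fog nodes and observing only the chosen node's latency), and epsilon-greedy stands in for the simplest model-free policy of this kind; none of the specific numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4
true_latency = np.array([0.9, 0.5, 0.7, 0.3])  # unknown to the learner
counts = np.zeros(K)
est = np.zeros(K)                              # running mean latency per node
eps = 0.1                                      # exploration rate (assumption)

picks = []
for t in range(5000):
    if rng.random() < eps:
        k = int(rng.integers(K))               # explore a random fog node
    else:
        # exploit the node with the lowest estimated latency;
        # -inf forces each unseen node to be tried once first
        k = int(np.argmin(np.where(counts == 0, -np.inf, est)))
    r = true_latency[k] + 0.05 * rng.standard_normal()  # bandit feedback only
    counts[k] += 1
    est[k] += (r - est[k]) / counts[k]         # incremental mean update
    picks.append(k)
```

After a short warm-up, the policy concentrates its choices on the lowest-latency node while spending only an epsilon fraction of tasks on exploration, which is the adaptivity-versus-feedback trade-off the framework targets.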
A federated learning framework for the next-generation machine learning systems
MSc dissertation in Industrial Electronics and Computers Engineering (specialization in Embedded Systems and Computers)
The end of Moore's Law, combined with rising concerns about data privacy, is forcing machine learning
(ML) to shift from the cloud to the deep edge, near to the data source. In the next-generation ML systems,
the inference and part of the training process will be performed right on the edge, while the cloud will be
responsible for major ML model updates. This new computing paradigm, referred to by academia and
industry researchers as federated learning, alleviates the cloud and network infrastructure while
increasing data privacy. Recent advances have made it possible to efficiently execute the inference pass
of quantized artificial neural networks on Arm Cortex-M and RISC-V (RV32IMCXpulp) microcontroller units
(MCUs). Nevertheless, training is still confined to the cloud, imposing the
transmission of high volumes of private data over the network.
To tackle this issue, this MSc thesis makes the first attempt to run decentralized training on Arm
Cortex-M MCUs. To port part of the training process to the deep edge, it proposes L-SGD, a lightweight
version of stochastic gradient descent optimized for maximum speed and minimal memory footprint
on Arm Cortex-M MCUs. L-SGD is 16.35x faster than the TensorFlow solution while registering a
memory footprint reduction of 13.72%, at the cost of a negligible accuracy drop of only 0.12%.
To merge the local model updates returned by edge devices, this MSc thesis proposes R-FedAvg, an
implementation of the FedAvg algorithm that reduces the impact of faulty model updates returned by
malicious devices.
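The robust-aggregation idea behind R-FedAvg can be sketched as follows. The abstract does not specify R-FedAvg's actual rule, so a coordinate-wise median stands in here as a generic outlier-resistant aggregator; the local update, client count, and malicious behavior are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10
global_w = np.zeros(d)

def local_update(w, honest=True):
    # hypothetical local training step: honest clients move the model
    # halfway toward the all-ones target; a malicious client returns a
    # wildly scaled (faulty) update
    delta = 0.5 * (np.ones(d) - w) + 0.01 * rng.standard_normal(d)
    return w + (delta if honest else 100.0 * delta)

for rnd in range(20):
    # client 0 is malicious in every round
    updates = [local_update(global_w, honest=(c != 0)) for c in range(5)]
    # plain FedAvg would take np.mean(updates, axis=0), which the single
    # malicious client can drag arbitrarily far; the coordinate-wise
    # median stays inside the honest cluster
    global_w = np.median(np.stack(updates), axis=0)

print(np.abs(global_w - 1.0).max())
```

With mean aggregation the single faulty client dominates every round, while the median-based round converges to the honest clients' target, which is the failure mode R-FedAvg is described as mitigating.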