Boltzmann meets Nash: Energy-efficient routing in optical networks under uncertainty
Motivated by the massive deployment of power-hungry data centers for service
provisioning, we examine the problem of routing in optical networks with the
aim of minimizing traffic-driven power consumption. To tackle this issue,
routing must take into account energy efficiency as well as capacity
considerations; moreover, in rapidly-varying network environments, this must be
accomplished in a real-time, distributed manner that remains robust in the
presence of random disturbances and noise. In view of this, we derive a pricing
scheme whose Nash equilibria coincide with the network's socially optimum
states, and we propose a distributed learning method based on the Boltzmann
distribution of statistical mechanics. Using tools from stochastic calculus, we
show that the resulting Boltzmann routing scheme exhibits remarkable
convergence properties under uncertainty: specifically, the long-term average
of the network's power consumption converges within $\varepsilon$ of its
minimum value in time which is at most $\mathcal{O}(1/\varepsilon^2)$,
irrespective of the fluctuations' magnitude; additionally, if the network
admits a strict, non-mixing optimum state, the algorithm converges to it,
again, no matter the noise level. Our analysis is supplemented by extensive
numerical simulations which show that Boltzmann routing can lead to a
significant decrease in power consumption over basic, shortest-path routing
schemes in realistic network conditions.
Comment: 24 pages, 4 figures
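The abstract describes the learning rule only at a high level; below is a minimal Python sketch of the underlying idea, Boltzmann (Gibbs) sampling over noisy per-path power-cost estimates. The function names, temperature value, and running-average feedback update are illustrative assumptions, not the paper's actual scheme.

```python
import math
import random

def boltzmann_route(cost_estimates, temperature):
    """Sample a path index from the Boltzmann distribution exp(-cost/T).

    cost_estimates: running estimates of each candidate path's
        traffic-driven power cost (assumed to come from noisy feedback).
    temperature: exploration parameter; low values concentrate the
        choice on the cheapest path, higher values explore more.
    """
    # Shift by the minimum cost so exp() stays numerically stable.
    low = min(cost_estimates)
    weights = [math.exp(-(c - low) / temperature) for c in cost_estimates]
    # random.choices normalizes the weights into a probability distribution.
    return random.choices(range(len(cost_estimates)), weights=weights)[0]

def update_estimate(old, observed, step=0.1):
    """Running average that damps random disturbances in the feedback."""
    return old + step * (observed - old)

# Hypothetical usage: three candidate lightpaths with noisy cost feedback.
costs = [3.2, 2.7, 4.1]
path = boltzmann_route(costs, temperature=0.5)
noisy_cost = costs[path] + random.gauss(0.0, 0.3)
costs[path] = update_estimate(costs[path], noisy_cost)
```

Driving the temperature toward zero recovers greedy shortest-path selection; keeping it positive is what lets the scheme remain robust to the random disturbances the abstract emphasizes.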
Asynchronous Parallel Stochastic Gradient Descent - A Numeric Core for Scalable Distributed Machine Learning Algorithms
The implementation of the vast majority of machine learning (ML) algorithms
boils down to solving a numerical optimization problem. In this context,
Stochastic Gradient Descent (SGD) methods have long proven to provide good
results, both in terms of convergence and accuracy. Recently, several
parallelization approaches have been proposed in order to scale SGD to solve
very large ML problems. At their core, most of these approaches follow a
map-reduce scheme. This paper presents a novel parallel updating algorithm for
SGD, which utilizes the asynchronous single-sided communication paradigm.
Compared to existing methods, Asynchronous Parallel Stochastic Gradient Descent
(ASGD) provides faster (or at least equal) convergence, close-to-linear scaling,
and stable accuracy.
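Since the abstract names the algorithm but not its code, here is a minimal shared-memory sketch of the asynchronous update pattern: workers read and write one shared parameter vector with no locks or barriers. This is a stand-in for the paper's single-sided communication paradigm (which targets distributed nodes); the least-squares gradient and the two-shard split are assumptions for illustration.

```python
import threading
import numpy as np

def asgd(grad_fn, w, shards, lr=0.01, epochs=5):
    """Asynchronous SGD: each worker applies stochastic-gradient updates
    directly to the shared vector w, without synchronizing with the
    others (a shared-memory stand-in for single-sided communication)."""
    def worker(shard):
        for _ in range(epochs):
            for x, y in shard:
                # The gradient may be computed against slightly stale
                # parameters; the update is written straight into w.
                w[:] -= lr * grad_fn(w, x, y)
    threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w

# Hypothetical usage: least-squares regression split across two workers.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0])
data = list(zip(X, y))
grad = lambda w, x, t: (w @ x - t) * x  # per-sample least-squares gradient
w_fit = asgd(grad, np.zeros(2), [data[:100], data[100:]], lr=0.05)
```

Note that CPython's GIL serializes these threads, so the sketch shows the lock-free update pattern rather than real speedup; the paper's contribution is achieving that pattern across distributed nodes.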
- …