Psychophysical identity and free energy
An approach to implementing variational Bayesian inference in biological
systems is considered, under which the thermodynamic free energy of a system
directly encodes its variational free energy. In the case of the brain, this
assumption places constraints on the neuronal encoding of generative and
recognition densities, in particular requiring a stochastic population code.
The resulting relationship between thermodynamic and variational free energies
is prefigured in mind-brain identity theses in philosophy and in the Gestalt
hypothesis of psychophysical isomorphism.
Comment: 22 pages; published as a research article on 8/5/2020 in Journal of
the Royal Society Interface.
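As a reminder of the quantity at issue (a standard definition from the variational inference literature, not quoted from the paper): for sensory data s, hidden causes ϑ, a generative density p(s, ϑ) and a recognition density q(ϑ), the variational free energy is

F[q] = \mathbb{E}_{q(\vartheta)}\left[\ln q(\vartheta) - \ln p(s,\vartheta)\right]
     = \mathrm{KL}\left[q(\vartheta)\,\|\,p(\vartheta \mid s)\right] - \ln p(s),

so minimizing F drives the recognition density toward the posterior while upper-bounding the surprise -\ln p(s); the paper's proposal is that the system's thermodynamic free energy directly encodes this quantity.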
The unlikely Carnot efficiency
The efficiency of a heat engine is traditionally defined as the ratio of its
average output work to its average input heat. Its highest possible value was
discovered by Carnot in 1824 and is a cornerstone concept in thermodynamics. It
led to the discovery of the second law and to the definition of the Kelvin
temperature scale. Small-scale engines operate in the presence of highly
fluctuating input and output energy fluxes. They are therefore much better
characterized by fluctuating efficiencies. In this study, using the fluctuation
theorem, we identify universal features of efficiency fluctuations. While the
standard thermodynamic efficiency is, as expected, the most likely value, we
find that the Carnot efficiency is, surprisingly, the least likely in the long
time limit. Furthermore, the probability distribution for the efficiency
assumes a universal scaling form when the engine operates close to equilibrium.
We illustrate our results analytically and numerically on two model systems.
Comment: 7 pages, 3 figures, v3: as accepted in Nature Communications.
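For reference (standard definitions, not spelled out in the abstract): over a single run of duration t that delivers fluctuating work w_t while drawing fluctuating heat q_t from the hot reservoir, the stochastic efficiency and the Carnot efficiency are

\eta_t = \frac{w_t}{q_t}, \qquad \eta_C = 1 - \frac{T_c}{T_h},

with T_c and T_h the cold and hot reservoir temperatures; the result above says the distribution of \eta_t peaks at the standard value \langle w_t \rangle / \langle q_t \rangle, while \eta_C becomes its least likely value at long times.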
CoCoA: A General Framework for Communication-Efficient Distributed Optimization
The scale of modern datasets necessitates the development of efficient
distributed optimization methods for machine learning. We present a
general-purpose framework for distributed computing environments, CoCoA, that
has an efficient communication scheme and is applicable to a wide variety of
problems in machine learning and signal processing. We extend the framework to
cover general non-strongly-convex regularizers, including L1-regularized
problems like lasso, sparse logistic regression, and elastic net
regularization, and show how earlier work can be derived as a special case. We
provide convergence guarantees for the class of convex regularized loss
minimization objectives, leveraging a novel approach in handling
non-strongly-convex regularizers and non-smooth loss functions. The resulting
framework has markedly improved performance over state-of-the-art methods, as
we illustrate with an extensive set of experiments on real distributed
datasets.
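As a hedged illustration of the communication pattern the abstract describes (a toy numpy sketch under assumed simplifications, not the authors' implementation): coordinates of a lasso problem are partitioned across workers, each worker runs a few coordinate-descent steps against a frozen copy of the shared vector v = A x, and only the small Delta-v vectors cross the network, here combined by conservative averaging.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm.
    return np.sign(z) * max(abs(z) - t, 0.0)

def local_solver(A_k, idx, x, v, b, lam, n_local_steps=5):
    # Approximately solve this worker's subproblem for the lasso
    #   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
    # by coordinate descent on the worker's own coordinates only,
    # against a frozen local copy of the shared vector v = A x.
    x_k = x[idx].copy()
    v_loc = v.copy()
    for _ in range(n_local_steps):
        for j_loc in range(len(idx)):
            a_j = A_k[:, j_loc]
            denom = a_j @ a_j
            if denom == 0.0:
                continue
            # Residual with coordinate j's own contribution removed.
            r = b - v_loc + a_j * x_k[j_loc]
            x_new = soft_threshold(a_j @ r, lam) / denom
            v_loc += a_j * (x_new - x_k[j_loc])
            x_k[j_loc] = x_new
    return x_k - x[idx], v_loc - v   # local updates: Delta-x, Delta-v

def cocoa_round(A, b, x, v, partition, lam):
    # One communication round: workers compute updates independently
    # (in parallel in a real deployment); only the Delta vectors are
    # then exchanged and combined by averaging (gamma = 1/K).
    deltas = [(idx,) + local_solver(A[:, idx], idx, x, v, b, lam)
              for idx in partition]
    gamma = 1.0 / len(partition)
    for idx, dx, dv in deltas:
        x[idx] += gamma * dx
        v += gamma * dv
    return x, v

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 60))
b = rng.normal(size=200)
partition = np.array_split(np.arange(A.shape[1]), 6)  # 6 workers
x = np.zeros(A.shape[1])
v = A @ x
for _ in range(100):
    x, v = cocoa_round(A, b, x, v, partition, lam=0.5)

Averaging the updates is the conservative choice; the CoCoA line of work also supports more aggressive "adding" aggregation, which requires adjusting the local subproblem accordingly.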
A Hybrid Search Algorithm for the Whitehead Minimization Problem
The Whitehead minimization problem is the problem of finding elements of
minimal length in the automorphic orbit of a given element of a free group. The
classical algorithm of Whitehead that solves the problem runs in time
exponential in the rank of the group. Moreover, it is easy to show that this
exponential blowup occurs precisely when a word of minimal length has been
reached, and is therefore unavoidable except in some trivial cases.
In this paper we introduce a deterministic hybrid search algorithm and a
stochastic variant of it for solving the Whitehead minimization problem. Both
algorithms use search heuristics that allow one to find a length-reducing
automorphism in polynomial time on most inputs and significantly improve the
reduction procedure. The stochastic version of the algorithm employs a
probabilistic system that decides in polynomial time whether or not a word is
minimal. The stochastic algorithm is very robust: in our experiments it has
never declared a non-minimal element to be minimal.
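To make the search loop concrete, here is a minimal sketch of greedy length reduction in the automorphic orbit, under stated simplifications: rank 2, words over x and y with capital letters denoting inverses, and elementary Nielsen automorphisms as the move set instead of the full set of Whitehead automorphisms used by the paper's algorithms (so, unlike Whitehead's algorithm, this sketch can stall at a non-minimal word).

def inv(w):
    # Inverse of a word: reverse it and invert each letter.
    return w[::-1].swapcase()

def freely_reduce(w):
    # Cancel adjacent inverse pairs (e.g. 'xX', 'Yy') until none remain.
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def apply_aut(phi, w):
    # Apply the automorphism given by the images of the generators x, y.
    img = {'x': phi['x'], 'y': phi['y'],
           'X': inv(phi['x']), 'Y': inv(phi['y'])}
    return freely_reduce(''.join(img[c] for c in w))

# Elementary Nielsen automorphisms of the free group F(x, y).
MOVES = [
    {'x': 'X',  'y': 'y'},   # x -> x^{-1}
    {'x': 'y',  'y': 'x'},   # swap x and y
    {'x': 'xy', 'y': 'y'},   # x -> xy
    {'x': 'xY', 'y': 'y'},   # x -> x y^{-1}
    {'x': 'x',  'y': 'yx'},  # y -> yx
    {'x': 'x',  'y': 'yX'},  # y -> y x^{-1}
]

def greedy_minimize(w):
    # Greedily apply the move that shortens the word the most;
    # stop when no move reduces the length.
    w = freely_reduce(w)
    while True:
        best = min((apply_aut(phi, w) for phi in MOVES), key=len)
        if len(best) >= len(w):
            return w
        w = best

print(greedy_minimize('xyXY' * 3))  # demo on a sample word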
A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
Based on the idea of randomized coordinate descent of α-averaged
operators, a randomized primal-dual optimization algorithm is introduced, where
a random subset of coordinates is updated at each iteration. The algorithm
builds upon a variant of a recent (deterministic) algorithm proposed by Vũ
and Condat that includes the well-known ADMM as a particular case. The obtained
algorithm is used to solve asynchronously a distributed optimization problem. A
network of agents, each having a separate cost function containing a
differentiable term, seeks a consensus on the minimum of the aggregate
objective. The method yields an algorithm where, at each iteration, a random
subset of agents wake up, update their local estimates, exchange some data with
their neighbors, and go idle. Numerical results demonstrate the attractive
performance of the method. The general approach can be naturally adapted to
other situations where coordinate descent convex optimization algorithms are
used with a random choice of the coordinates.
Comment: 10 pages.
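As a toy sketch of the randomized-coordinate idea (an illustrative example under assumptions, not the Vũ–Condat primal-dual iteration itself): take a fixed-point map T that is averaged, so that the full update x <- T(x) converges, and update only a random subset of coordinates at each iteration, as in the paper's setting.

import numpy as np

rng = np.random.default_rng(1)

# Toy problem: solve min_x 0.5 * x^T Q x - c^T x via the gradient map
# T(x) = x - step * (Q x - c), which is an averaged operator for a
# sufficiently small step size (step < 2 / L, L = lambda_max(Q)).
n = 20
M = rng.normal(size=(n, n))
Q = M @ M.T + np.eye(n)            # positive definite
c = rng.normal(size=n)
L = np.linalg.eigvalsh(Q)[-1]      # largest eigenvalue
step = 1.0 / L

def T(x):
    return x - step * (Q @ x - c)

x = np.zeros(n)
for _ in range(5000):
    coords = rng.choice(n, size=n // 4, replace=False)  # random subset
    x[coords] = T(x)[coords]       # update only the chosen coordinates

print(np.linalg.norm(Q @ x - c))   # ~0: the fixed point is reached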