Subsampling Algorithms for Semidefinite Programming
We derive a stochastic gradient algorithm for semidefinite optimization using
randomization techniques. The algorithm uses subsampling to reduce the
computational cost of each iteration and the subsampling ratio explicitly
controls granularity, i.e. the tradeoff between cost per iteration and total
number of iterations. Furthermore, the total computational cost is directly
proportional to the complexity (i.e. rank) of the solution. We study numerical
performance on some large-scale problems arising in statistical learning. Comment: Final version, to appear in Stochastic Systems.
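The core idea the abstract describes, estimating the gradient from a random subsample of entries (rescaled to stay unbiased) and projecting back onto the feasible set, can be sketched on a toy problem: minimising ⟨C, X⟩ over PSD matrices of unit trace. All names and parameters below are illustrative, not the authors' code:

```python
import numpy as np

def subsample(C, ratio, rng):
    # Keep each entry with probability `ratio`; rescale so the
    # estimate is unbiased: E[subsample(C)] = C.
    mask = rng.random(C.shape) < ratio
    return np.where(mask, C / ratio, 0.0)

def project_psd_trace1(X):
    # Project onto the PSD cone (clip negative eigenvalues),
    # then renormalize to unit trace.
    w, V = np.linalg.eigh((X + X.T) / 2)
    X = (V * np.maximum(w, 0.0)) @ V.T
    return X / max(np.trace(X), 1e-12)

def subsampled_sgd(C, iters=300, ratio=0.3, step=0.1, seed=0):
    # Toy projected stochastic gradient for: min <C, X>
    # s.t. X PSD, trace(X) = 1. A smaller `ratio` gives cheaper
    # but noisier iterations -- the granularity tradeoff the
    # abstract refers to.
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    X = np.eye(n) / n
    for _ in range(iters):
        G = subsample(C, ratio, rng)
        X = project_psd_trace1(X - step * (G + G.T) / 2)
    return X
```

On C = diag(1, -1) the minimiser is the rank-one matrix diag(0, 1); the iterates concentrate there, illustrating the claim that cost tracks the rank of the solution.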
Fast Quantum Methods for Optimization
Discrete combinatorial optimization consists in finding the optimal
configuration that minimizes a given discrete objective function. An
interpretation of such a function as the energy of a classical system allows us
to reduce the optimization problem into the preparation of a low-temperature
thermal state of the system. Motivated by the quantum annealing method, we
present three strategies to prepare the low-temperature state that exploit
quantum mechanics in remarkable ways. We focus on implementations without
uncontrolled errors induced by the environment. This allows us to rigorously
prove a quantum advantage. The first strategy uses a classical-to-quantum
mapping, where the equilibrium properties of a classical system in spatial
dimensions can be determined from the ground state properties of a quantum
system also in spatial dimensions. We show how such a ground state can be
prepared by means of quantum annealing, including quantum adiabatic evolutions.
This mapping also allows us to unveil some fundamental relations between
simulated and quantum annealing. The second strategy builds upon the first one
and introduces a technique called spectral gap amplification to reduce the time
required to prepare the same quantum state adiabatically. If implemented on a
quantum device that exploits quantum coherence, this strategy leads to a
quadratic improvement in complexity over the well-known bound of the classical
simulated annealing method. The third strategy is not purely adiabatic;
instead, it exploits diabatic processes between the low-energy states of the
corresponding quantum system. For some problems it results in an exponential
speedup (in the oracle model) over the best classical algorithms. Comment: 15 pages (2 figures).
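The classical baseline against which the quadratic improvement is measured, simulated annealing on a discrete energy function, is easy to sketch. The following toy (a Metropolis single-bit-flip chain with geometric cooling; all details are illustrative, not from the paper) shows the structure:

```python
import math
import random

def simulated_annealing(energy, n_bits, steps=2000,
                        t0=2.0, alpha=0.995, seed=0):
    # Metropolis chain with geometric cooling t_k = t0 * alpha**k.
    # Returns the best configuration seen along the run.
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    e = energy(x)
    best, best_e = x[:], e
    t = t0
    for _ in range(steps):
        i = rng.randrange(n_bits)
        x[i] ^= 1                        # propose a single-bit flip
        e_new = energy(x)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new                    # accept the move
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                    # reject: undo the flip
        t *= alpha
    return best, best_e
```

On an easy landscape (energy = Hamming distance to a target string) the chain reaches the ground state quickly; the quantum strategies in the paper are compared against the known complexity bound of this kind of method.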
A convergence acceleration operator for multiobjective optimisation
A novel multiobjective optimisation accelerator is introduced that uses direct manipulation in objective space together with neural network mappings from objective space to decision space. This operator is a portable component that can be hybridised with any multiobjective optimisation algorithm. The purpose of this Convergence Acceleration Operator (CAO) is to enhance the search capability and the speed of convergence of the host algorithm. The operator acts directly in objective space to suggest improvements to solutions obtained by a multiobjective evolutionary algorithm (MOEA). These suggested improved objective vectors are then mapped into decision variable space and tested. The CAO is incorporated into two leading MOEAs, the Non-Dominated Sorting Genetic Algorithm (NSGA-II) and the Strength Pareto Evolutionary Algorithm (SPEA2), and tested. Results show that the hybridised algorithms consistently improve the speed of convergence of the original algorithm whilst maintaining the desired distribution of solutions.
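The operator's pipeline, fit a map from objective space back to decision space, nudge objective vectors toward the ideal point, map the nudged targets back, and re-evaluate, can be sketched minimally. Here a linear least-squares fit stands in for the paper's neural-network mapping, and all names and parameters are illustrative:

```python
import numpy as np

def cao_step(objs, decs, evaluate, pull=0.2):
    # objs: (n, m) objective vectors of the current population;
    # decs: (n, d) matching decision vectors; evaluate: R^d -> R^m.
    # Fit an affine map objective -> decision (a stand-in for the
    # paper's neural network), pull objectives toward the ideal
    # point, map back, and re-evaluate the suggested solutions.
    A = np.hstack([objs, np.ones((len(objs), 1))])
    W, *_ = np.linalg.lstsq(A, decs, rcond=None)
    ideal = objs.min(axis=0)                   # best seen per objective
    targets = objs + pull * (ideal - objs)     # nudged toward the ideal
    cand = np.hstack([targets, np.ones((len(targets), 1))]) @ W
    return cand, np.vstack([evaluate(x) for x in cand])
```

Candidates that dominate their parents would be inserted back into the host MOEA's population; the rest are discarded.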
Ergodic Randomized Algorithms and Dynamics over Networks
Algorithms and dynamics over networks often involve randomization, and
randomization may result in oscillating dynamics which fail to converge in a
deterministic sense. In this paper, we observe this undesired feature in three
applications, in which the dynamics is the randomized asynchronous counterpart
of a well-behaved synchronous one. These three applications are network
localization, PageRank computation, and opinion dynamics. Motivated by their
formal similarity, we show the following general fact, under the assumptions of
independence across time and linearities of the updates: if the expected
dynamics is stable and converges to the same limit as the original synchronous
dynamics, then the oscillations are ergodic and the desired limit can be
locally recovered via time-averaging. Comment: 11 pages; submitted for publication. Revised version with fixed technical flaw and updated reference.
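The general fact is easy to see in a scalar toy: an i.i.d. affine random update whose individual trajectory keeps oscillating, but whose expected dynamics is stable. The running time-average of the iterates recovers the fixed point of the expected dynamics (the maps and constants below are illustrative, not from the paper):

```python
import random

def time_averaged_limit(steps=20000, seed=0):
    # Each step applies one of two affine maps, chosen at random:
    #   f1(x) = 0.2*x + 1.6   (fixed point 2.0)
    #   f2(x) = 0.2*x         (fixed point 0.0)
    # The trajectory oscillates and never converges, but the expected
    # map 0.2*x + 0.8 is stable with fixed point 1.0, and the ergodic
    # time-average of the iterates converges to that fixed point.
    rng = random.Random(seed)
    x, running_sum = 0.0, 0.0
    for _ in range(steps):
        x = 0.2 * x + (1.6 if rng.random() < 0.5 else 0.0)
        running_sum += x
    return running_sum / steps
```

This mirrors the paper's setting: independence across time, linear (here affine) updates, and a stable expected dynamics.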
Simple Local Computation Algorithms for the General Lovasz Local Lemma
We consider the task of designing Local Computation Algorithms (LCA) for
applications of the Lov\'{a}sz Local Lemma (LLL). LCAs are a class of sublinear
algorithms proposed by Rubinfeld et al.~\cite{Ronitt} that have received a lot
of attention in recent years. The LLL is an existential, sufficient condition
for a collection of sets to have non-empty intersection (in applications,
often, each set comprises all objects having a certain property). The
ground-breaking algorithm of Moser and Tardos~\cite{MT} made the LLL fully
constructive, following earlier results by Beck~\cite{beck_lll} and
Alon~\cite{alon_lll} giving algorithms under significantly stronger LLL-like
conditions. LCAs under those stronger conditions were given in~\cite{Ronitt},
where it was asked if the Moser-Tardos algorithm can be used to design LCAs
under the standard LLL condition. The main contribution of this paper is to
answer this question affirmatively. In fact, our techniques yield LCAs for
settings beyond the standard LLL condition.
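The Moser-Tardos algorithm at the heart of the question is short enough to state: sample all variables, and while some bad event holds, resample just that event's variables. A minimal sketch, with a toy 3-coloring instance (the instance and all names are illustrative):

```python
import random

def moser_tardos(n_vars, sample, bad_events, seed=0):
    # bad_events: list of (scope, holds) pairs, where `scope` is the
    # tuple of variable indices the event depends on and `holds`
    # tests the event against the current assignment.
    rng = random.Random(seed)
    x = [sample(rng) for _ in range(n_vars)]
    while True:
        bad = next((ev for ev in bad_events if ev[1](x)), None)
        if bad is None:
            return x            # no bad event holds: done
        for v in bad[0]:        # resample only that event's variables
            x[v] = sample(rng)

# Toy use: proper 3-coloring of a 5-cycle, one bad event per edge
# (endpoints sharing a color).
edges = [(i, (i + 1) % 5) for i in range(5)]
events = [((u, v), (lambda x, u=u, v=v: x[u] == x[v])) for u, v in edges]
coloring = moser_tardos(5, lambda rng: rng.randrange(3), events)
```

Under the LLL condition, Moser and Tardos proved the expected number of resamplings is linear in the number of events; the LCA question is whether a single variable's final value can be computed with sublinear work.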