Advantages of Unfair Quantum Ground-State Sampling
The debate around the potential superiority of quantum annealers over their
classical counterparts has been ongoing since the inception of the field by
Kadowaki and Nishimori close to two decades ago. Recent technological
breakthroughs in the field, which have led to the manufacture of experimental
prototypes of quantum annealing optimizers with sizes approaching the practical
regime, have reignited this discussion. However, the demonstration of quantum
annealing speedups remains to this day an elusive albeit coveted goal. Here, we
examine the power of quantum annealers to provide a different type of quantum
enhancement of practical relevance, namely, their ability to serve as useful
samplers from the ground-state manifolds of combinatorial optimization
problems. We study, both numerically by simulating ideal stoquastic and
non-stoquastic quantum annealing processes, and experimentally, using a
commercially available quantum annealing processor, the ability of quantum
annealers to sample the ground-states of spin glasses differently than
classical thermal samplers. We demonstrate that i) quantum annealers in general
sample the ground-state manifolds of spin glasses very differently than thermal
optimizers, ii) the nature of the quantum fluctuations driving the annealing
process has a decisive effect on the final distribution over ground-states, and
iii) the experimental quantum annealer samples ground-state manifolds
significantly differently than thermal and ideal quantum annealers. We
illustrate how quantum annealers may serve as powerful tools when complementing
standard sampling algorithms.
Comment: 13 pages, 11 figures
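The thermal baseline in this comparison can be sketched in a few lines. The toy 4-spin Ising instance below (couplings invented purely for illustration, not taken from the paper) has a doubly degenerate ground-state manifold, and a low-temperature Metropolis sampler approximates the classical Boltzmann distribution over it:

```python
import itertools
import math
import random

# Hypothetical 4-spin Ising instance (couplings invented for illustration,
# not taken from the paper); J[(i, j)] couples spins i and j.
J = {(0, 1): 1.0, (1, 2): -1.0, (2, 3): 1.0}

def energy(s):
    """Ising energy E(s) = sum over (i, j) of J_ij * s_i * s_j, s_i in {-1, +1}."""
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

# Exhaustive enumeration of the ground-state manifold (16 configurations).
configs = list(itertools.product((-1, 1), repeat=4))
e_min = min(energy(s) for s in configs)
ground = [s for s in configs if energy(s) == e_min]  # doubly degenerate here

def metropolis_counts(beta=5.0, steps=20000, seed=0):
    """Low-temperature Metropolis sampler; returns visit counts for each
    ground state, approximating the thermal (Boltzmann) distribution."""
    rng = random.Random(seed)
    s = list(rng.choice(configs))
    e = energy(s)
    counts = {g: 0 for g in ground}
    for _ in range(steps):
        i = rng.randrange(4)
        s[i] = -s[i]                      # propose a single spin flip
        e_new = energy(s)
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            e = e_new                     # accept
        else:
            s[i] = -s[i]                  # reject: undo the flip
        t = tuple(s)
        if t in counts:
            counts[t] += 1
    return counts
```

At low temperature the chain spends almost all of its time in the ground-state manifold; the interesting question raised by the abstract is how a quantum annealer's distribution over that same manifold differs from this thermal one.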
Average-case Hardness of RIP Certification
The restricted isometry property (RIP) for design matrices gives guarantees
for optimal recovery in sparse linear models. It is of high interest in
compressed sensing and statistical learning. This property is particularly
important for computationally efficient recovery methods. As a consequence,
even though it is in general NP-hard to check that RIP holds, there have been
substantial efforts to find tractable proxies for it. These would allow the
construction of RIP matrices and the polynomial-time verification of RIP given
an arbitrary matrix. We consider the framework of average-case certifiers, which
never wrongly declare that a matrix is RIP while often being correct on random
instances. While there are such functions that are tractable in a
suboptimal parameter regime, we show that this is a computationally hard task
in any better regime. Our results are based on a new, weaker assumption on the
problem of detecting dense subgraphs.
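A classical tractable proxy of the kind alluded to above is mutual coherence: by Gershgorin's theorem, the order-k restricted isometry constant is at most (k - 1) times the coherence mu, so small coherence certifies RIP while large coherence is merely inconclusive. A minimal pure-Python sketch (function names are ours, not from the paper):

```python
import itertools
import math

def mutual_coherence(cols):
    """Largest |<a_i, a_j>| over distinct l2-normalised columns
    (columns given as lists of floats)."""
    unit = []
    for c in cols:
        norm = math.sqrt(sum(x * x for x in c))
        unit.append([x / norm for x in c])
    return max(abs(sum(x * y for x, y in zip(u, v)))
               for u, v in itertools.combinations(unit, 2))

def certify_rip(cols, k, delta):
    """One-sided certifier in the spirit of the average-case framework:
    returns True only when the Gershgorin bound (k - 1) * mu already
    guarantees delta_k <= delta. A False answer is inconclusive -- the
    certifier never wrongly declares that a matrix satisfies RIP."""
    return (k - 1) * mutual_coherence(cols) <= delta
```

For instance, two orthogonal columns are certified for k = 2 and delta = 0.5, while two columns at 45 degrees are not. The coherence proxy is exactly the kind of certifier that works only in a suboptimal parameter regime; the abstract's hardness result concerns doing better.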
Computers from plants we never made. Speculations
We discuss possible designs and prototypes of computing systems that could be
based on the morphological development of roots, the interaction of roots, analog
electrical computation with plants, and plant-derived electronic components. In
morphological plant processors, data are represented by the initial configuration of
roots and the configuration of sources of attractants and repellents; the results of
computation are represented by the topology of the roots' network. Computation is
implemented by the roots following gradients of attractants and repellents, as
well as interacting with each other. Problems solvable by plant roots, in
principle, include shortest-path, minimum spanning tree, Voronoi diagram,
α-shapes, convex subdivision of concave polygons. Electrical properties
of plants can be modified by loading the plants with functional nanoparticles
or coating parts of plants with conductive polymers. Thus, we are in a position to
make living variable resistors, capacitors, operational amplifiers,
multipliers, potentiometers and fixed-function generators. The electrically
modified plants can implement summation, integration with respect to time,
inversion, multiplication, exponentiation, logarithm, division. Mathematical
and engineering problems to be solved can be represented in plant root networks
of resistive or reaction elements. Developments in plant-based computing
architectures will trigger the emergence of a unique community of biologists,
electronic engineers, and computer scientists working together to produce
living electronic devices of which future green computers will be made.
Comment: The chapter will be published in "Inspired by Nature. Computing
inspired by physics, chemistry and biology. Essays presented to Julian Miller
on the occasion of his 60th birthday", Editors: Susan Stepney and Andrew
Adamatzky (Springer, 2017).
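The shortest-path claim can be illustrated with a toy chemotaxis model (the grid, obstacle layout, and function names below are ours, not from the chapter): attractant spreading from a source is modelled as BFS hop distance, and a simulated root tip greedily moves toward higher attractant concentration, tracing a shortest path.

```python
from collections import deque

# Hypothetical soil grid: 0 = free soil, 1 = obstacle.
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def attractant_field(grid, source):
    """Model attractant diffusion as BFS distance from the source:
    concentration decreases with hop distance."""
    rows, cols = len(grid), len(grid[0])
    dist = {source: 0}
    q = deque([source])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return dist

def root_path(grid, start, target):
    """The root tip repeatedly moves to the neighbour with the highest
    attractant concentration (lowest hop distance); each move decreases
    the distance by one, so the traced path is shortest."""
    dist = attractant_field(grid, target)
    if start not in dist:
        return None  # target unreachable from start
    path = [start]
    while path[-1] != target:
        r, c = path[-1]
        nxt = min(((r + dr, c + dc)
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if (r + dr, c + dc) in dist),
                  key=dist.get)
        path.append(nxt)
    return path
```

This is only a caricature of what a living root does, but it captures the computational principle the abstract describes: the gradient field does the global work, and the root's local behaviour reads the answer off it.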
Spectral methods and computational trade-offs in high-dimensional statistical inference
Spectral methods have become increasingly popular in designing fast algorithms for modern high-dimensional datasets. This thesis looks at several problems in which spectral methods play a central role. In some cases, we also show that such procedures have essentially the best performance among all randomised polynomial-time algorithms by exhibiting statistical and computational trade-offs in those problems.

In the first chapter, we prove a useful variant of the well-known Davis-Kahan theorem, a spectral perturbation result that allows us to bound the distance between population eigenspaces and their sample versions. We then propose a semi-definite programming algorithm for the sparse principal component analysis (PCA) problem, and analyse its theoretical performance using the perturbation bounds derived earlier. It turns out that the parameter regime in which our estimator is consistent is strictly smaller than the consistency regime of a minimax optimal (yet computationally intractable) estimator. We show, through reduction from a well-known hard problem in computational complexity theory, that the difference in consistency regimes is unavoidable for any randomised polynomial-time estimator, revealing subtle statistical and computational trade-offs in this problem.

Such computational trade-offs also exist in the problem of restricted isometry certification. Certifiers for restricted isometry properties can be used to construct design matrices for sparse linear regression problems. Similar to the sparse PCA problem, we show that there is an intrinsic gap between the class of matrices certifiable using unrestricted algorithms and using polynomial-time algorithms.

Finally, we consider the problem of high-dimensional changepoint estimation, where we estimate the time of change in the mean of a high-dimensional time series with piecewise constant mean structure.
Motivated by real-world applications, we assume that changes occur only in a sparse subset of all coordinates. We apply a variant of the semi-definite programming algorithm from the sparse PCA chapter to aggregate the signals across different coordinates in a near-optimal way, so as to estimate the changepoint location as accurately as possible. Our statistical procedure shows superior performance compared to existing methods for this problem.
St John's College and Cambridge Overseas Trust
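The changepoint step can be caricatured in a few lines. The sketch below replaces the thesis's semi-definite programming projection with naive sum-of-squares aggregation of per-coordinate CUSUM statistics (a deliberately crude stand-in, with hypothetical function names); it still localises a mean shift shared by a sparse subset of coordinates.

```python
import math

def cusum(x):
    """Classical CUSUM statistic for one series: the scaled absolute
    difference of means on either side of each candidate split; a large
    value at t suggests a mean change there."""
    n = len(x)
    total = sum(x)
    prefix, stats = 0.0, []
    for t in range(1, n):
        prefix += x[t - 1]
        scale = math.sqrt(t * (n - t) / n)
        stats.append(abs(prefix / t - (total - prefix) / (n - t)) * scale)
    return stats  # stats[t - 1] corresponds to a split after position t

def estimate_changepoint(series):
    """Aggregate squared CUSUM statistics across coordinates and return
    the split location maximising the sum -- a crude stand-in for the
    sparsity-aware projection described in the thesis."""
    agg = [sum(s ** 2 for s in col)
           for col in zip(*(cusum(x) for x in series))]
    return agg.index(max(agg)) + 1
```

On a noiseless example with a shift at time 10 in two of three coordinates, the aggregated statistic peaks exactly at the true changepoint; the thesis's contribution is doing this near-optimally when the shift is buried in noise across many coordinates.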