Trading classical and quantum computational resources
We propose examples of a hybrid quantum-classical simulation where a
classical computer assisted by a small quantum processor can efficiently
simulate a larger quantum system. First we consider sparse quantum circuits
such that each qubit participates in O(1) two-qubit gates. It is shown that any
sparse circuit on n+k qubits can be simulated by sparse circuits on n qubits
and a classical processing that takes time $2^{O(k)}\,\mathrm{poly}(n)$. Secondly, we
study Pauli-based computation (PBC) where allowed operations are
non-destructive eigenvalue measurements of n-qubit Pauli operators. The
computation begins by initializing each qubit in the so-called magic state.
This model is known to be equivalent to the universal quantum computer. We show
that any PBC on n+k qubits can be simulated by PBCs on n qubits and a classical
processing that takes time $2^{O(k)}\,\mathrm{poly}(n)$. Finally, we propose a purely
classical algorithm that can simulate a PBC on n qubits in a time
$2^{cn}\,\mathrm{poly}(n)$, where $c \approx 0.94$. This improves upon the brute-force
simulation method, which takes time $2^n\,\mathrm{poly}(n)$. Our algorithm exploits the fact that
n-fold tensor products of magic states admit a low-rank decomposition into
n-qubit stabilizer states. Comment: 14 pages, 4 figures
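The low-rank decomposition mentioned in the last sentence can be checked directly in the smallest case: two copies of the magic state $|T\rangle$ decompose into just two stabilizer states. A minimal numerical sketch (illustrative only, not the paper's general algorithm):

```python
import numpy as np

# magic state |T> = (|0> + e^{i pi/4}|1>)/sqrt(2)
T = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
TT = np.kron(T, T)  # |T>|T>, a 4-amplitude vector

# two 2-qubit stabilizer states
s1 = np.array([1, 0, 0, 1j]) / np.sqrt(2)  # (|00> + i|11>)/sqrt(2)
s2 = np.array([0, 1, 1, 0]) / np.sqrt(2)   # (|01> + |10>)/sqrt(2)

# |T>|T> has stabilizer rank 2: it is a sum of the two states above,
# even though a brute-force expansion would use all 4 basis states
decomp = s1 / np.sqrt(2) + np.exp(1j * np.pi / 4) * s2 / np.sqrt(2)
assert np.allclose(TT, decomp)
```

The same idea, applied to $n$-fold tensor products, is what lets the classical simulation beat the $2^n$ brute-force scaling.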
Discord and quantum computational resources
Discordant states appear in a large number of quantum phenomena and seem to
be a good indicator of divergence from classicality. While there is evidence
that they are essential for a quantum algorithm to have an advantage over a
classical one, their precise role is unclear. We examine the role of discord in
quantum algorithms using the paradigmatic framework of 'restricted distributed
quantum gates' and show that manipulating discordant states using local
operations has an associated cost in terms of entanglement and communication
resources. Changing discord reduces the total correlations, and reversible
operations on discordant states usually require non-local resources. Discord
alone is, however, not enough to determine the need for entanglement. A more
general class of similar quantities, which we call K-discord, is introduced as a
further constraint on the kinds of operations that can be performed without
entanglement resources. Comment: Closer to published version
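The point that discordant states carry correlations without entanglement can be illustrated numerically. A sketch (the state below is a standard example of a separable but discordant state, not taken from this paper) computing the total correlations as quantum mutual information:

```python
import numpy as np

def entropy(rho):
    # von Neumann entropy in bits, from the eigenvalues of rho
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# A separable but discordant two-qubit state:
# rho = 1/2 (|0><0| x |0><0| + |+><+| x |1><1|)
ket0 = np.array([[1.0], [0.0]])
plus = np.array([[1.0], [1.0]]) / np.sqrt(2)
P0, Pp = ket0 @ ket0.T, plus @ plus.T
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])
rho = 0.5 * (np.kron(P0, P0) + np.kron(Pp, P1))

# total correlations = quantum mutual information I(A:B)
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
rho_B = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
mutual_info = entropy(rho_A) + entropy(rho_B) - entropy(rho)
assert mutual_info > 0  # correlated, yet unentangled (a mixture of products)
```

Local operations that change the discord of such a state generally cannot be reversed without non-local resources, which is the cost the abstract quantifies.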
Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees
Algorithms typically come with tunable parameters that have a considerable
impact on the computational resources they consume. Too often, practitioners
must hand-tune the parameters, a tedious and error-prone task. A recent line of
research provides algorithms that return nearly-optimal parameters from within
a finite set. These algorithms can be used when the parameter space is infinite
by providing as input a random sample of parameters. This data-independent
discretization, however, might miss pockets of nearly-optimal parameters: prior
research has presented scenarios where the only viable parameters lie within an
arbitrarily small region. We provide an algorithm that learns a finite set of
promising parameters from within an infinite set. Our algorithm can help
compile a configuration portfolio, or it can be used to select the input to a
configuration algorithm for finite parameter spaces. Our approach applies to
any configuration problem that satisfies a simple yet ubiquitous structure: the
algorithm's performance is a piecewise constant function of its parameters.
Prior research has exhibited this structure in domains from integer programming
to clustering.
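The piecewise-constant structure is what makes a learned discretization useful. A toy sketch (all numbers invented) of how a uniform, data-independent grid can miss an arbitrarily small region of good parameters that a data-driven candidate set finds:

```python
# Toy configuration problem: the algorithm's cost as a function of its
# parameter p is piecewise constant, and it is cheap only inside a narrow
# window carried by each problem instance (window values are invented).
def cost(p, window):
    lo, hi = window
    return 1.0 if lo <= p <= hi else 10.0

instances = [(0.501, 0.504)] * 20  # the only good parameters lie in [0.501, 0.504]

# data-independent discretization: a uniform grid misses the narrow window
grid = [i / 10 for i in range(11)]
grid_cost = min(sum(cost(p, w) for w in instances) for p in grid)

# data-driven discretization: take candidate parameters from the piece
# boundaries observed on sampled instances
candidates = sorted({b for w in instances for b in w})
learned_cost = min(sum(cost(p, w) for w in instances) for p in candidates)

assert learned_cost < grid_cost
```

Because the cost is piecewise constant, only finitely many candidate parameters (one per piece boundary seen in the sample) ever need to be evaluated.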
Computational aeroelasticity challenges and resources
In the past decade, there has been much activity in the development of computational methods for the analysis of unsteady transonic aerodynamics about airfoils and wings. Significant features that must be addressed in the treatment of computational transonic unsteady aerodynamics are illustrated. The flow regimes for an aircraft are indicated on a plot of lift coefficient vs. Mach number, and the sequence of events occurring in air combat maneuvers is illustrated, as are further features of transonic flutter. Also illustrated are several types of aeroelastic response which were encountered and which offer challenges for computational methods. The four cases illustrate problem areas encountered near the boundaries of aircraft envelopes, as operating conditions change from high-speed, low-angle conditions to lower-speed, higher-angle conditions.
Quantifying Resource Use in Computations
It is currently not possible to quantify the resources needed to perform a
computation. As a consequence, it is not possible to reliably evaluate the
hardware resources needed for the application of algorithms or the running of
programs. This is apparent in both computer science, for instance, in
cryptanalysis, and in neuroscience, for instance, comparative neuro-anatomy. A
System versus Environment game formalism is proposed, based on Computability
Logic, that allows one to define a computational work function describing the
theoretical and physical resources needed to perform any purely algorithmic
computation. Within this formalism, the cost of a computation is defined as the
sum of information storage over the steps of the computation. The size of the
computational device, e.g., the action table of a Universal Turing Machine, the
number of transistors in silicon, or the number and complexity of synapses in a
neural net, is explicitly included in the computational cost. The proposed cost
function leads in a natural way to known computational trade-offs and can be
used to estimate the computational capacity of real silicon hardware and neural
nets. The theory is applied to a historical case of 56-bit DES key recovery, as
an example of application to cryptanalysis. Furthermore, the relative
computational capacities of human brain neurons and the C. elegans nervous
system are estimated as an example of application to neural nets. Comment: 26 pages, no figures
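The proposed cost function, the sum of information storage over the steps of a computation with the device size included, can be sketched as follows (all names and numbers here are invented for illustration):

```python
# Illustrative sketch of the proposed cost function: the cost of a run is
# the sum, over its steps, of the bits of working storage in use plus the
# (fixed) size of the computing device itself, e.g. its action table,
# transistor count, or synapse complexity.
def computation_cost(storage_per_step, device_bits):
    return sum(bits + device_bits for bits in storage_per_step)

# Two hypothetical devices computing the same function: a small machine
# that needs many steps vs. a large machine (bigger "action table") that
# finishes quickly, exposing the hardware/time trade-off the abstract
# says the cost function captures.
small_machine = computation_cost(storage_per_step=[8] * 100, device_bits=64)
big_machine = computation_cost(storage_per_step=[8] * 10, device_bits=4096)
```

Because the device size enters every step, neither shrinking the machine nor shortening the run is free; the formalism makes the trade-off between the two explicit.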
Detecting gravitational-wave transients at five sigma: a hierarchical approach
As second-generation gravitational-wave detectors prepare to analyze data at
unprecedented sensitivity, there is great interest in searches for unmodeled
transients, commonly called bursts. Significant effort has yielded a variety of
techniques to identify and characterize such transient signals, and many of
these methods have been applied to produce astrophysical results using data
from first-generation detectors. However, the computational cost of background
estimation remains a challenging problem; it is difficult to claim a $5\sigma$
detection with reasonable computational resources without paying for efficiency
with reduced sensitivity. We demonstrate a hierarchical approach to
gravitational-wave transient detection, focusing on long-lived signals, which
can be used to detect transients with significance in excess of $5\sigma$ using
modest computational resources. In particular, we show how previously developed
seedless clustering techniques can be applied to large datasets to identify
high-significance candidates without having to trade sensitivity for speed. Comment: 5 pages, 1 figure
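The hierarchical idea, a cheap first pass that restricts the expensive significance follow-up to a few candidates, can be sketched generically (the statistics and thresholds below are invented for illustration, not the seedless-clustering pipeline itself):

```python
import random

random.seed(1)

# two-stage search: a fast, coarse statistic scans every data segment;
# only segments passing a loose cut pay for the expensive statistic
# needed to estimate background and claim high significance
def cheap_stat(seg):      # fast but coarse (here: segment mean)
    return sum(seg) / len(seg)

def expensive_stat(seg):  # slower, more sensitive (here: peak value)
    return max(seg)

segments = [[random.gauss(0, 1) for _ in range(64)] for _ in range(1000)]
stage1 = [s for s in segments if cheap_stat(s) > 0.3]      # loose cut
candidates = [s for s in stage1 if expensive_stat(s) > 3]  # follow-up

# background estimation now only needs the expensive statistic on the
# small stage-1 survivor set, not on all 1000 segments
```

The computational saving is the ratio of stage-1 survivors to total segments; sensitivity is preserved as long as the loose cut rarely rejects real signals.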
Computational Resources to Filter Gravitational Wave Data with P-approximant Templates
The prior knowledge of the gravitational waveform from compact binary systems
makes matched filtering an attractive detection strategy. This detection method
involves the filtering of the detector output with a set of theoretical
waveforms or templates. One of the most important factors in this strategy is
knowing how many templates are needed in order to reduce the loss of possible
signals. In this study we calculate the number of templates and computational
power needed for a one-step search for gravitational waves from inspiralling
binary systems. We build on previous works by firstly expanding the
post-Newtonian waveforms to 2.5-PN order and secondly, for the first time,
calculating the number of templates needed when using P-approximant waveforms.
The analysis is carried out for the four main first-generation interferometers,
LIGO, GEO600, VIRGO and TAMA. As well as template number, we also calculate the
computational cost of generating banks of templates for filtering GW data. We
carry out the calculations for two initial conditions. In the first case we
assume a minimum individual mass of … and in the second, we assume
a minimum individual mass of …. We find that, in general, we need
more P-approximant templates to carry out a search than if we use standard PN
templates. This increase varies according to the order of PN-approximation, but
can be as high as a factor of 3 and is explained by the smaller span of the
P-approximant templates as we go to higher masses. The promising outcome is
that for 2-PN templates the increase is small and is outweighed by the known
robustness of the 2-PN P-approximant templates. Comment: 17 pages, 8 figures, Submitted to Class. Quant. Grav.
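The link between the required match and the number of templates can be sketched in one dimension (the metric value and parameter range below are invented for illustration, not the paper's numbers):

```python
import math

# Template-bank counting sketch: with a match metric g on the parameter,
# the mismatch between a signal and the nearest template a distance dx/2
# away is roughly g*(dx/2)**2, so demanding a minimal match MM fixes the
# template spacing and hence the number of templates in the bank.
def n_templates(param_range, g, minimal_match):
    dx = 2.0 * math.sqrt((1.0 - minimal_match) / g)
    return math.ceil(param_range / dx)

n_97 = n_templates(param_range=100.0, g=1.0, minimal_match=0.97)
n_95 = n_templates(param_range=100.0, g=1.0, minimal_match=0.95)
assert n_97 > n_95  # a tighter minimal match needs a denser bank
```

A template family that spans a smaller region of signal space, as the abstract notes for P-approximants at higher masses, effectively shrinks the allowed spacing and drives the count up in the same way.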