3,241 research outputs found
Energy and centrality dependence of particle multiplicity in heavy ion collisions from $\sqrt{s_{NN}} = 20$ to 2760 GeV
The centrality dependence of midrapidity charged-particle multiplicities at a
nucleon-nucleon center-of-mass energy of 2.76 TeV from CMS is compared to
PHOBOS data at 200 and 19.6 GeV. The results are first fitted with a
two-component model which parameterizes the separate contributions of nucleon
participants and nucleon-nucleon collisions. A more direct comparison involves
ratios of multiplicity densities per participant pair between the different
collision energies. The results support and extend earlier indications that the
influences of centrality and collision energy on midrapidity charged-particle
multiplicities are to a large degree independent.
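For reference, a commonly used parameterization of such a two-component model
(one standard convention, e.g. Kharzeev-Nardi; the paper's exact form may
differ) is

```latex
\frac{dN_{\mathrm{ch}}}{d\eta}
  = n_{pp}\left[(1-x)\,\frac{\langle N_{\mathrm{part}}\rangle}{2}
  + x\,\langle N_{\mathrm{coll}}\rangle\right],
```

where $n_{pp}$ is the multiplicity of a single nucleon-nucleon collision,
$\langle N_{\mathrm{part}}\rangle$ and $\langle N_{\mathrm{coll}}\rangle$ are
the mean numbers of participants and binary collisions at a given centrality,
and the fitted fraction $x$ weights the collision-scaling ("hard") component.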
Symmetry-protected dissipative preparation of matrix product states
We propose and analyze a method for efficient dissipative preparation of
matrix product states that exploits their symmetry properties. Specifically, we
construct an explicit protocol that makes use of driven-dissipative dynamics to
prepare the Affleck-Kennedy-Lieb-Tasaki (AKLT) states, which feature
symmetry-protected topological order and non-trivial edge excitations. We show
that the use of symmetry allows for robust experimental implementation without
fine-tuned control parameters. Numerical simulations show that the preparation
time scales polynomially in system size $N$. Furthermore, we demonstrate that
this scaling can be improved to $\mathcal{O}(\log N)$ by using parallel
preparation of AKLT segments and fusing them via quantum feedback. A concrete
scheme using excitation of trapped neutral atoms into a Rydberg state via
Electromagnetically Induced Transparency is proposed, and generalizations to a
broader class of matrix product states are discussed.
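As a concrete illustration of the target states, the following minimal NumPy
sketch builds the AKLT state as a bond-dimension-2 matrix product state (using
one standard convention for the tensors; the boundary vectors, chain length,
and overall normalization are illustrative choices, not the paper's protocol)
and checks that it is annihilated by every term of the AKLT parent
Hamiltonian:

```python
import itertools
import numpy as np

# Spin-1 operators (dimension 3, basis ordered m = +1, 0, -1).
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)   # raising operator S+
Sx = (Sp + Sp.T) / 2.0
Sy = (Sp - Sp.T) / 2.0j

# Standard AKLT MPS tensors (one common convention; bond dimension 2):
# A[s] is the 2x2 matrix for physical state s in {+1, 0, -1}.
A = [np.sqrt(2/3) * np.array([[0.0, 1.0], [0.0, 0.0]]),    # s = +1
     -np.sqrt(1/3) * np.array([[1.0, 0.0], [0.0, -1.0]]),  # s = 0
     -np.sqrt(2/3) * np.array([[0.0, 0.0], [1.0, 0.0]])]   # s = -1

n = 5                      # chain length (open boundaries)
bl = np.array([1.0, 0.0])  # left boundary vector (selects one edge state)
br = np.array([0.0, 1.0])  # right boundary vector

# Full state vector: psi[s1,...,sn] = bl^T A[s1] ... A[sn] br.
psi = np.zeros((3,) * n)
for idx in itertools.product(range(3), repeat=n):
    M = bl
    for s in idx:
        M = M @ A[s]
    psi[idx] = M @ br
psi /= np.linalg.norm(psi)

# Two-site projector onto total spin 2: P2 = T2 (T2 - 2) / 24, where
# T2 = (S_1 + S_2)^2 has eigenvalues 0, 2, 6 for total spin 0, 1, 2.
I3 = np.eye(3)
T2 = sum(np.kron(S, I3) @ np.kron(S, I3) + 2.0 * np.kron(S, S)
         + np.kron(I3, S) @ np.kron(I3, S) for S in (Sx, Sy, Sz)).real
P2 = T2 @ (T2 - 2.0 * np.eye(9)) / 24.0

# The AKLT parent Hamiltonian is (up to constants) sum_i P2_{i,i+1};
# every state of this MPS family should be annihilated by each term.
v = psi.reshape(-1)
for i in range(n - 1):
    op = np.kron(np.kron(np.eye(3**i), P2), np.eye(3**(n - i - 2)))
    print(f"bond {i}: <P2> = {v @ op @ v:.2e}")   # ~0 up to rounding
```

Since each two-site term is a positive semidefinite projector, the printed
expectation values being ~0 confirms the state lies in the ground space for
any choice of boundary vectors; this boundary degeneracy is exactly the
non-trivial edge structure mentioned above.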
Error suppression in Hamiltonian-based quantum computation using energy penalties
We consider the use of quantum error detecting codes, together with energy
penalties against leaving the codespace, as a method for suppressing
environmentally induced errors in Hamiltonian-based quantum computation. This
method was introduced in [1] in the context of quantum adiabatic computation,
but we consider it more generally. Specifically, we consider a computational
Hamiltonian, which has been encoded using the logical qubits of a single-qubit
error detecting code, coupled to an environment of qubits by interaction terms
that act one-locally on the system. Energy penalty terms are added that
penalize states outside of the codespace. We prove that in the limit of
infinitely large penalties, one-local errors are completely suppressed, and we
derive some bounds for the finite penalty case. Our proof technique involves
exact integration of the Schrödinger equation, making no use of master
equations or their assumptions. We perform long-time numerical simulations on a
small (one logical qubit) computational system coupled to an environment, and
the results suggest that the energy penalty method achieves even greater
protection than our bounds indicate.
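A toy sketch of the penalty construction may clarify the mechanism. This is my
simplification, not the paper's model: the qubit environment is replaced by a
static one-local perturbation, and the code is the two-qubit bit-flip
detecting code span{|00>, |11>}:

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
kron = np.kron

# Logical X for the code span{|00>, |11>} is X(x)X (logical Z is Z(x)I).
X_L = kron(X, X)

# Projector onto the codespace (basis indices 0 = |00>, 3 = |11>).
P = np.zeros((4, 4))
P[0, 0] = P[3, 3] = 1.0

H_comp = X_L              # toy "computational" Hamiltonian (logical drive)
V = 0.2 * kron(X, I2)     # one-local error term; stands in for the
                          # system-environment coupling of the paper

psi0 = np.zeros(4)
psi0[0] = 1.0             # start in logical |0> = |00>

for E_P in (0.0, 5.0, 50.0):
    H = H_comp + E_P * (np.eye(4) - P) + V   # add the energy penalty
    leak = 0.0
    for t in np.linspace(0.0, 20.0, 400):
        psi = expm(-1j * H * t) @ psi0
        leak = max(leak, 1.0 - np.vdot(psi, P @ psi).real)
    print(f"E_P = {E_P:5.1f}: max codespace leakage = {leak:.4f}")
```

Increasing E_P pushes the non-code states off resonance, so the maximal
leakage out of the codespace drops roughly as the square of the
perturbation-to-penalty ratio.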
Performance and limitations of the QAOA at constant levels on large sparse hypergraphs and spin glass models
The Quantum Approximate Optimization Algorithm (QAOA) is a general-purpose
quantum algorithm designed for combinatorial optimization. We analyze its
expected performance and prove concentration properties at any constant level
(number of layers) on ensembles of random combinatorial optimization problems
in the infinite size limit. These ensembles include mixed spin models and
Max-$q$-XORSAT on sparse random hypergraphs. To enable our analysis, we prove a
generalization of the multinomial theorem, which is a technical result of
independent interest. We then show that the performance of the QAOA at constant
levels for the pure $q$-spin model matches asymptotically the ones for
Max-$q$-XORSAT on random sparse Erdős–Rényi hypergraphs and every
large-girth regular hypergraph. Through this correspondence, we establish that
the average-case value produced by the QAOA at constant levels is bounded away
from optimality for pure $q$-spin models when $q \ge 4$ is even. This limitation
gives a hardness of approximation result for quantum algorithms in a new regime
where the whole graph is seen.
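For reference, the pure $q$-spin Hamiltonian in one standard normalization
(conventions vary; this may differ from the paper's) is

```latex
H_n(\sigma) = \frac{1}{n^{(q-1)/2}}
  \sum_{i_1,\dots,i_q=1}^{n} J_{i_1\cdots i_q}\,
  \sigma_{i_1}\cdots\sigma_{i_q},
\qquad J_{i_1\cdots i_q}\overset{\mathrm{iid}}{\sim}\mathcal{N}(0,1),
\quad \sigma\in\{\pm 1\}^n,
```

while an instance of Max-$q$-XORSAT asks for an assignment $x\in\{0,1\}^n$
maximizing the number of satisfied parity constraints
$x_{i_1}\oplus\cdots\oplus x_{i_q}=b$ over a sparse random set of clauses.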
The Quantum Approximate Optimization Algorithm and the Sherrington-Kirkpatrick Model at Infinite Size
The Quantum Approximate Optimization Algorithm (QAOA) is a general-purpose
algorithm for combinatorial optimization problems whose performance can only
improve with the number of layers $p$. While QAOA holds promise as an algorithm
that can be run on near-term quantum computers, its computational power has not
been fully explored. In this work, we study the QAOA applied to the
Sherrington-Kirkpatrick (SK) model, which can be understood as energy
minimization of $n$ spins with all-to-all random signed couplings. There is a
recent classical algorithm by Montanari that, assuming a widely believed
conjecture, can be tailored to efficiently find an approximate solution for a
typical instance of the SK model to within $(1-\epsilon)$ times the ground
state energy. We hope to match its performance with the QAOA. Our main result
is a novel technique that allows us to evaluate the typical-instance energy of
the QAOA applied to the SK model. We produce a formula for the expected value
of the energy, as a function of the QAOA parameters, in the infinite size
limit that can be evaluated on a computer with $O(16^p)$ complexity. We
evaluate the formula up to $p=12$, and find that the QAOA at $p=11$ outperforms
the standard semidefinite programming algorithm. Moreover, we show
concentration: With probability tending to one as $n \to \infty$, measurements of
the QAOA will produce strings whose energies concentrate at our calculated
value. As an algorithm running on a quantum computer, there is no need to
search for optimal parameters on an instance-by-instance basis since we can
determine them in advance. What we have here is a new framework for analyzing
the QAOA, and our techniques can be of broad interest for evaluating its
performance on more general problems where classical algorithms may fail.
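A minimal statevector sketch of depth-1 ($p=1$) QAOA on a single small SK
instance may help fix ideas; the instance size, angles, and grid search below
are illustrative choices (the paper evaluates the $n \to \infty$ expectation
analytically rather than by simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                  # small instance; p = 1 layer

# SK cost: C(z) = (1/sqrt(n)) * sum_{i<j} J_ij z_i z_j, with J_ij ~ N(0,1).
J = np.triu(rng.standard_normal((n, n)), k=1)
bits = (np.arange(2**n)[:, None] >> np.arange(n)) & 1
z = 1 - 2 * bits                        # spins in {+1,-1}, one row per basis state
C = ((z @ J) * z).sum(axis=1) / np.sqrt(n)

def qaoa_energy(gamma, beta):
    """<gamma,beta| C |gamma,beta> for depth-1 QAOA, by statevector simulation."""
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)   # |+>^n
    psi *= np.exp(-1j * gamma * C)                    # phase with the cost operator
    # Mixer: apply exp(-i*beta*X) to every qubit.
    RX = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    psi = psi.reshape((2,) * n)
    for q in range(n):
        psi = np.moveaxis(
            np.tensordot(RX, np.moveaxis(psi, q, 0), axes=(1, 0)), 0, q)
    psi = psi.reshape(-1)
    return float(np.abs(psi)**2 @ C)

# Crude grid search over (gamma, beta) for this single instance.
grid = np.linspace(0.0, np.pi / 2, 25)
best = min((qaoa_energy(g, b), g, b) for g in grid for b in grid)
print(f"best <C>/n = {best[0]/n:.4f} at gamma={best[1]:.3f}, beta={best[2]:.3f}")
print(f"ground-state C/n   = {C.min()/n:.4f}")
```

For one layer the state is $e^{-i\beta B}e^{-i\gamma C}\,|+\rangle^{\otimes n}$
with mixer $B=\sum_j X_j$; deeper circuits simply repeat the phase/mixer pair
with fresh angles.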
Bayesian Inference using the Proximal Mapping: Uncertainty Quantification under Varying Dimensionality
In statistical applications, it is common to encounter parameters supported
on a space of varying or unknown dimension. Examples include fused lasso
regression and matrix recovery under an unknown low rank. Despite the
ease of obtaining a point estimate via optimization, it is much more
challenging to quantify the uncertainty -- in the Bayesian framework, a major
difficulty is that if one assigns a prior associated with a $p$-dimensional
measure, then there is zero posterior probability on any lower-dimensional
subset of dimension $d < p$; to avoid this issue, one needs to choose a separate
dimension-selection prior on $d$, which often involves a highly combinatorial
problem. To significantly reduce the modeling burden, we propose a new
generative process for the prior: starting from a continuous random variable
such as a multivariate Gaussian, we transform it into a varying-dimensional space
using the proximal mapping.
This leads to a large class of new Bayesian models that can directly exploit
popular frequentist regularizations and their algorithms, such as the
nuclear norm penalty and the alternating direction method of multipliers, while
providing principled, probabilistic uncertainty estimation.
We show that this framework is well justified in geometric measure
theory, and enjoys convenient posterior computation via standard
Hamiltonian Monte Carlo. We demonstrate its use in the analysis of dynamic
flow network data.
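A minimal sketch of this generative process, using the L1 penalty, whose
proximal mapping is soft-thresholding (the penalty choice, threshold lam, and
dimensions are illustrative assumptions; the same idea applies to e.g. the
nuclear-norm prox):

```python
import numpy as np

rng = np.random.default_rng(1)
p, lam = 10, 1.0

def prox_l1(beta, lam):
    """Proximal mapping of the L1 penalty: elementwise soft-thresholding."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

# Push a continuous Gaussian prior through the prox: the image measure
# puts positive mass on sparsity patterns of every dimension.
draws = prox_l1(rng.standard_normal((100_000, p)), lam)
dims = (draws != 0).sum(axis=1)        # "dimension" of each prior draw
for d in range(p + 1):
    frac = (dims == d).mean()
    if frac > 0:
        print(f"dim {d:2d}: prior mass ~ {frac:.3f}")
```

Although the input measure is a continuous Gaussian, the pushforward places
strictly positive prior mass on every sparsity level, which is exactly the
varying-dimensional support the construction is designed to provide.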