On Some Integrated Approaches to Inference
We present arguments for the formulation of a unified approach to different
standard continuous inference methods from partial information. It is claimed
that an explicit partition of information into a priori (prior knowledge) and a
posteriori information (data) is an important way of standardizing inference
approaches so that they can be compared on a normative scale, and so that
notions of optimal algorithms become farther-reaching. The inference methods
considered include neural network approaches, information-based complexity, and
Monte Carlo, spline, and regularization methods. The model is an extension of
currently used continuous complexity models, with a class of algorithms in the
form of optimization methods, in which an optimization functional (involving
the data) is minimized. This extends the family of current approaches in
continuous complexity theory, which include the use of interpolatory algorithms
in worst- and average-case settings.
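The optimization-method view of algorithms described above can be sketched numerically. The following is a minimal illustration, assuming one concrete instance (Tikhonov regularization) rather than the paper's general model: the data enter through a fit term, the prior through a penalty, and the algorithm minimizes their sum.

```python
import numpy as np

# Hypothetical sketch: recover x from noisy data y = A x + noise by minimizing
# the optimization functional ||A x - y||^2 + lam * ||x||^2, whose minimizer
# has the closed form x_hat = (A^T A + lam I)^{-1} A^T y.
rng = np.random.default_rng(0)
n, m = 20, 50                               # unknowns, observations
A = rng.normal(size=(m, n))                 # forward operator
x_true = rng.normal(size=n)                 # ground truth
y = A @ x_true + 0.1 * rng.normal(size=m)   # a posteriori information (data)

lam = 1.0                                   # a priori information (prior strength)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3f}")
```

The split between the data-fit term and the regularizer mirrors the paper's partition of information into data and prior knowledge.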
Strong converse exponents for the feedback-assisted classical capacity of entanglement-breaking channels
Quantum entanglement can be used in a communication scheme to establish a
correlation between successive channel inputs that is impossible by classical
means. It is known that the classical capacity of quantum channels can be
enhanced by such entangled encoding schemes, but this is not always the case.
In this paper, we prove that a strong converse theorem holds for the classical
capacity of an entanglement-breaking channel even when it is assisted by a
classical feedback link from the receiver to the transmitter. In doing so, we
identify a bound on the strong converse exponent, which determines the
exponentially decaying rate at which the success probability tends to zero, for
a sequence of codes with communication rate exceeding capacity. Proving a
strong converse, along with an achievability theorem, shows that the classical
capacity is a sharp boundary between reliable and unreliable communication
regimes. One of the main tools in our proof is the sandwiched Renyi relative
entropy. The same method of proof is used to derive an exponential bound on the
success probability when communicating over an arbitrary quantum channel
assisted by classical feedback, provided that the transmitter does not use
entangled encoding schemes.
Comment: 24 pages, 2 figures, v4: final version accepted for publication in
Problems of Information Transmission
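The sandwiched Rényi relative entropy mentioned above has a standard closed form, D̃_α(ρ‖σ) = (α−1)⁻¹ log Tr[(σ^((1−α)/2α) ρ σ^((1−α)/2α))^α]. A minimal numerical sketch of that definition follows; the helper names and example states are illustrative choices, not taken from the paper.

```python
import numpy as np

def mpow(H, p):
    # Power of a Hermitian positive-definite matrix via eigendecomposition.
    w, V = np.linalg.eigh(H)
    return (V * w**p) @ V.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    # D_alpha(rho||sigma) = log Tr[(sigma^{(1-a)/2a} rho sigma^{(1-a)/2a})^a] / (a-1)
    s = mpow(sigma, (1 - alpha) / (2 * alpha))
    M = s @ rho @ s
    return np.log(np.trace(mpow(M, alpha)).real) / (alpha - 1)

# Arbitrary qubit density matrix and the maximally mixed state.
rho = np.array([[0.75, 0.25], [0.25, 0.25]])
sigma = np.eye(2) / 2

d06 = sandwiched_renyi(rho, sigma, 0.6)
d20 = sandwiched_renyi(rho, sigma, 2.0)
print(d06, d20)
```

The quantity is zero when ρ = σ and is nondecreasing in α, which is what makes it suitable for bounding strong converse exponents.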
Entanglement, quantum randomness, and complexity beyond scrambling
Scrambling is a process by which the state of a quantum system is effectively
randomized due to the global entanglement that "hides" initially localized
quantum information. In this work, we lay the mathematical foundations of
studying randomness complexities beyond scrambling by entanglement properties.
We do so by analyzing the generalized (in particular R\'enyi) entanglement
entropies of designs, i.e. ensembles of unitary channels or pure states that
mimic the uniformly random distribution (given by the Haar measure) up to
certain moments. A main collective conclusion is that the R\'enyi entanglement
entropies averaged over designs of the same order are almost maximal. This
links the orders of entropy and design, and therefore suggests R\'enyi
entanglement entropies as diagnostics of the randomness complexity of
corresponding designs. Such complexities form a hierarchy between information
scrambling and Haar randomness. As a strong separation result, we prove the
existence of (state) 2-designs such that the R\'enyi entanglement entropies of
higher orders can be bounded away from the maximum. However, we also show that
the min entanglement entropy is maximized by designs of order only logarithmic
in the dimension of the system. In other words, logarithmic-designs already
achieve the complexity of Haar in terms of entanglement, which we also call
max-scrambling. This result leads to a generalization of the fast scrambling
conjecture, that max-scrambling can be achieved by physical dynamics in time
roughly linear in the number of degrees of freedom.
Comment: 72 pages, 4 figures. Rewritten version with new title. v3: published
version
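The Rényi entanglement entropies used as diagnostics above are computed from the Schmidt spectrum of a pure state, S_α(ρ_A) = (1−α)⁻¹ log Tr ρ_A^α. A minimal sketch of that computation (the function name and test state are illustrative, not from the paper):

```python
import numpy as np

def renyi_entanglement_entropy(psi, dimA, dimB, alpha):
    # Schmidt decomposition via SVD of the dimA x dimB coefficient matrix;
    # squared singular values are the eigenvalues of the reduced state rho_A.
    C = psi.reshape(dimA, dimB)
    p = np.linalg.svd(C, compute_uv=False) ** 2
    p = p[p > 1e-12]                       # drop numerical zeros
    if alpha == 1.0:
        return -np.sum(p * np.log(p))      # von Neumann limit
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, so the Renyi
# entanglement entropy equals log 2 for every order alpha.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(renyi_entanglement_entropy(bell, 2, 2, 2.0))
```

A design of order k reproduces Haar averages of such entropies up to order roughly k, which is the link between entropy order and design order drawn in the abstract.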
Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.
Perspectives on Multi-Level Dynamics
As Physics did in previous centuries, there is currently a common dream of
extracting generic laws of nature in economics, sociology, neuroscience, by
narrowing the description of phenomena to a minimal set of variables and
parameters, linked together by causal equations of evolution whose structure
may reveal hidden principles. This requires a huge reduction of dimensionality
(number of degrees of freedom) and a change in the level of description. Beyond
the mere necessity of developing accurate techniques affording this reduction,
there is the question of the correspondence between the initial system and the
reduced one. In this paper, we offer a perspective towards a common framework
for discussing and understanding multi-level systems exhibiting structures at
various spatial and temporal levels. We propose a common foundation and
illustrate it with examples from different fields. We also point out the
difficulties in constructing such a general setting and its limitations.
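The dimensionality reduction the abstract refers to can be illustrated generically. The sketch below uses principal component analysis as one assumed example of a level change (it is not the authors' framework): many observed variables collapse onto a few collective coordinates.

```python
import numpy as np

# Illustrative setup: 50 observed variables generated by only 2 hidden
# degrees of freedom plus small noise. PCA (via SVD of the centered data)
# recovers the reduced description.
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))                       # hidden variables
mixing = rng.normal(size=(2, 50))                        # observation map
X = latent @ mixing + 0.01 * rng.normal(size=(500, 50))  # observed data

Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:3])   # variance fraction per principal component
```

Nearly all the variance sits in the first two components, so the 50-variable description reduces faithfully to 2 degrees of freedom; the paper's concern is when and why such a correspondence between levels holds in general.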