Minimum and maximum entropy distributions for binary systems with known means and pairwise correlations
Maximum entropy models are increasingly being used to describe the collective
activity of neural populations with measured mean neural activities and
pairwise correlations, but the full space of probability distributions
consistent with these constraints has not been explored. We provide upper and
lower bounds on the entropy for the minimum entropy distribution over
arbitrarily large collections of binary units with any fixed set of mean values
and pairwise correlations. We also construct specific low-entropy distributions
for several relevant cases. Surprisingly, the minimum entropy solution has
entropy scaling logarithmically with system size for any set of first- and
second-order statistics consistent with arbitrarily large systems. We further
demonstrate that some sets of these low-order statistics can only be realized
by small systems. Our results show how only small amounts of randomness are
needed to mimic low-order statistical properties of highly entropic
distributions, and we discuss some applications for engineered and biological
information transmission systems.
Comment: 34 pages, 7 figures
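For context, the pairwise maximum entropy model this abstract refers to is the Ising-type distribution over binary units (a standard form, stated here in our own notation, not quoted from the paper):

```latex
P(\mathbf{x}) \;=\; \frac{1}{Z}\,\exp\Big(\sum_i h_i x_i \;+\; \sum_{i<j} J_{ij}\, x_i x_j\Big),
\qquad x_i \in \{0,1\},
```

where the fields h_i and couplings J_ij are fitted so that the model's means and pairwise correlations match the measured ones, and Z normalizes. The paper's complementary question is how little entropy a distribution can have while matching those same first- and second-order constraints.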
Convex reconstruction from structured measurements
Convex signal reconstruction is the art of solving ill-posed inverse problems via convex optimization. It is applicable to a great number of problems from engineering, signal analysis, quantum mechanics, and beyond. The most prominent example is compressed sensing, where one aims at reconstructing sparse vectors from an under-determined set of linear measurements. In many cases, one can prove rigorous performance guarantees for these convex algorithms. The combination of practical importance and theoretical tractability has directed a significant amount of attention to this young field of applied mathematics.
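To make the compressed sensing setup concrete, here is a minimal, self-contained sketch (illustrative only, not code from the thesis) of basis pursuit with Gaussian measurements, using the cvxpy modeling package; all dimensions are placeholders:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, s = 200, 80, 5  # ambient dimension, number of measurements, sparsity

# Ground-truth s-sparse vector and Gaussian measurement matrix.
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true  # under-determined set of linear measurements

# Basis pursuit: minimize the l1-norm subject to the measurements,
# the convex surrogate for the (intractable) sparsest-solution problem.
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b])
problem.solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```

With Gaussian measurements and m on the order of s log(n/s), this program provably recovers x_true exactly; it is precisely this kind of guarantee that the thesis extends beyond the Gaussian case.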
However, rigorous proofs are usually only available for certain "generic cases", for instance situations where all measurements are represented by random Gaussian vectors.
The focus of this thesis is to overcome this drawback by devising mathematical proof techniques that can be applied to more "structured" measurements. Here, structure can have various meanings. For example, it could refer to the type of measurements that occur in a given concrete application. Or, more abstractly, it could mean that a measurement ensemble is small and exhibits rich geometric features.
The main focus of this thesis is phase retrieval: the problem of inferring phase information from amplitude measurements. This task is ubiquitous in, for instance, crystallography, astronomy, and diffraction imaging.
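In symbols (our notation, stated for orientation): one observes intensities of inner products with measurement vectors a_i and seeks the signal x up to a global phase. A standard convex approach, PhaseLift, lifts the unknown to a matrix:

```latex
y_i \;=\; |\langle a_i, x\rangle|^2 \;=\; \operatorname{tr}\!\big(a_i a_i^*\, X\big)
\quad \text{with } X = xx^*,
\qquad\text{relaxed to}\qquad
\min_{X \succeq 0}\ \operatorname{tr}(X)
\ \ \text{s.t.}\ \ \operatorname{tr}\!\big(a_i a_i^* X\big) = y_i,\ i = 1,\dots,m.
```

The measurements enter only through the rank-one matrices a_i a_i^*, which is why the geometry of the measurement ensemble, and in particular its design properties, governs when the relaxation succeeds.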
Throughout this project, a series of increasingly strong convex reconstruction guarantees has been established.
On the one hand, we improved results for certain measurement models that mimic typical experimental setups in diffraction imaging. On the other hand, we identified spherical t-designs as a general-purpose tool for the derandomization of data recovery schemes. Loosely speaking, a t-design is a finite configuration of vectors that is "evenly distributed" in the sense that it reproduces the first 2t moments of the uniform measure. Such configurations have been studied, for instance, in algebraic combinatorics, coding theory, and quantum information. We have shown that spherical 4-designs already suffice to prove close-to-optimal convex reconstruction guarantees for phase retrieval.
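For concreteness, one standard way to state the defining property of a complex projective t-design, a set of unit vectors {v_1, ..., v_N} in C^d (a textbook definition, not specific to this thesis), is moment reproduction:

```latex
\frac{1}{N}\sum_{i=1}^{N} \big(v_i v_i^{*}\big)^{\otimes t}
\;=\; \int \big(v v^{*}\big)^{\otimes t}\,\mathrm{d}\mu(v),
```

where μ is the uniform measure on the unit sphere. Since each factor v v^* is quadratic in the coordinates of v, matching these t-th projective moments amounts to reproducing the first 2t moments of the uniform measure, as described above.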
The success of this program depends on explicit constructions of spherical t-designs. In this regard, we have studied the design properties of stabilizer states. These are configurations of vectors that feature prominently in quantum information theory. Mathematically, they can be related to objects in discrete symplectic vector spaces---a structure we use heavily. We have shown that these vectors form a spherical 3-design and are, in some sense, close to a spherical 4-design. Putting these efforts together, we establish tight bounds on phase retrieval from stabilizer measurements.
While working on the derandomization of phase retrieval, I obtained a number of results on other convex signal reconstruction problems. These include compressed sensing from anisotropic measurements, non-negative compressed sensing in the presence of noise, and improved convex regularizers for low-rank matrix reconstruction. Going even further, the mathematical methods I used to tackle ill-posed inverse problems can be applied to a plethora of problems from quantum information theory.
In particular, these methods yield results on the causal structure behind Bell inequalities, new ways to compare experiments to fault-tolerance thresholds in quantum error correction, a novel benchmark for quantum state tomography via Bayesian estimation, and the task of distinguishing quantum states.
From Classical to Quantum Shannon Theory
The aim of this book is to develop "from the ground up" many of the major,
exciting, pre- and post-millennium developments in the general area of study
known as quantum Shannon theory. As such, we spend a significant amount of time
on quantum mechanics for quantum information theory (Part II), we give a
careful study of the important unit protocols of teleportation, super-dense
coding, and entanglement distribution (Part III), and we develop many of the
tools necessary for understanding information transmission or compression (Part
IV). Parts V and VI are the culmination of this book, where all of the tools
developed come into play for understanding many of the important results in
quantum Shannon theory.
Comment: v8: 774 pages, 301 exercises, 81 figures, several corrections; this draft, pre-publication copy is available under a Creative Commons Attribution-NonCommercial-ShareAlike license (see http://creativecommons.org/licenses/by-nc-sa/3.0/); "Quantum Information Theory, Second Edition" is available for purchase from Cambridge University Press.
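For a taste of the unit protocols covered in Part III, they are commonly summarized as resource inequalities (standard notation, stated here for orientation: [c→c] is one classical bit of communication, [q→q] one qubit of communication, and [qq] one ebit of shared entanglement):

```latex
\begin{aligned}
\text{entanglement distribution:}&\quad [q \to q] \;\geq\; [qq],\\
\text{super-dense coding:}&\quad [q \to q] + [qq] \;\geq\; 2\,[c \to c],\\
\text{teleportation:}&\quad 2\,[c \to c] + [qq] \;\geq\; [q \to q].
\end{aligned}
```

Each inequality asserts that the resources on the left suffice to simulate those on the right; the capacity theorems treated later in the book are asymptotic refinements of the same accounting.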
Quantum information theory
Finally, here is a modern, self-contained text on quantum information theory suitable for graduate-level courses. Developing the subject 'from the ground up', it covers classical results as well as major advances of the past decade. Beginning with an extensive overview of classical information theory suitable for the non-expert, the author then turns his attention to quantum mechanics for quantum information theory, and the important protocols of teleportation, super-dense coding and entanglement distribution. He develops all of the tools necessary for understanding important results in quantum information theory, including capacity theorems for classical, entanglement-assisted, private and quantum communication. The book also covers important recent developments such as superadditivity of private, coherent and Holevo information, and the superactivation of quantum capacity. This book will be warmly welcomed by the upcoming generation of quantum information theorists and the already established community of classical information theorists.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Classical processing algorithms for Quantum Information Security
In this thesis, we investigate how the combination of quantum physics and information theory could deliver solutions at the forefront of information security, and, in particular, we consider two focus applications: randomness extraction as applied to quantum random number generators and classical processing algorithms for quantum key distribution (QKD).
We concentrate on practical applications for such tools.
We detail the implementation of a randomness extractor for a commercial quantum random number generator, and we evaluate its performance in information-theoretic terms.
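As an illustrative sketch of the kind of post-processing involved (not the implementation described in the thesis), a common seeded randomness extractor for QRNGs is Toeplitz hashing, a two-universal family; the sizes below are placeholders:

```python
import numpy as np

def toeplitz_extract(raw_bits: np.ndarray, seed_bits: np.ndarray, out_len: int) -> np.ndarray:
    """Multiply the raw bit string by a random binary Toeplitz matrix (mod 2).

    An out_len x n Toeplitz matrix is determined by its first column and
    first row, read off from a seed of n + out_len - 1 uniform bits.
    """
    n = raw_bits.size
    assert seed_bits.size == n + out_len - 1
    # T[i, j] = seed_bits[i - j + n - 1]: constant along each diagonal.
    idx = np.subtract.outer(np.arange(out_len), np.arange(n)) + n - 1
    T = seed_bits[idx]
    return (T @ raw_bits) % 2

rng = np.random.default_rng(1)
raw = rng.integers(0, 2, size=1024)              # raw, possibly biased bits
seed = rng.integers(0, 2, size=1024 + 256 - 1)   # fresh uniform seed
key = toeplitz_extract(raw, seed, out_len=256)
```

In practice the output length is dictated by the leftover hash lemma applied to a conservative min-entropy estimate of the raw source; the fixed 256 bits here are purely for illustration.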
Then, we focus on QKD as applied to a specific experimental scenario, namely that of free-space quantum links. Commercial solutions with quantum links operating over optical fibers already exist, but they suffer from severe infrastructure complexity and cost overheads. Free-space QKD allows for greater flexibility, for both terrestrial and satellite links, whilst experiencing higher attenuation and noise at the receiver. In this work, its feasibility is investigated and proven in multiple experiments over links of different lengths and in various channel conditions. In particular, after a thorough analysis of information reconciliation protocols, we consider finite-key effects as applied to key distillation, and we propose a novel adaptive real-time selection algorithm which, by leveraging the turbulence of the channel as a resource, extends the feasibility of QKD to new noise thresholds.
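For orientation (our notation, not a formula quoted from the thesis), finite-key analyses of BB84-like protocols bound the length ℓ of the distillable secret key roughly as

```latex
\ell \;\lesssim\; n\Big[1 - h\big(e_{\mathrm{ph}} + \delta(n)\big)\Big]
\;-\; \mathrm{leak}_{\mathrm{EC}} \;-\; O\!\big(\log(1/\varepsilon)\big),
```

where n is the number of sifted bits, h the binary entropy function, e_ph the estimated phase error rate, δ(n) a statistical fluctuation term that shrinks as n grows, leak_EC the number of bits disclosed during information reconciliation, and ε the security parameter. An adaptive selection algorithm of the kind described above effectively improves the error rate entering such a bound by discarding transmission intervals in which the turbulent channel is at its worst.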
By using full-fledged classical-processing software tailored for the considered application scenario, the obtained results are analyzed and validated, showing that quantum information security can be ensured in realistic conditions with free-space quantum links.
Integrity and Privacy of Large Data
There has been considerable recent interest in "cloud storage", wherein a user asks a server to store a large file. One issue is whether the user can verify that the server is actually storing the file, and typically a challenge-response protocol is employed to convince the user that the file is indeed being stored correctly. The security of these schemes is phrased in terms of an extractor which will recover the file given any "proving algorithm" that has a sufficiently high success probability. This forms the basis of proof-of-retrievability (PoR) and proof-of-data-possession (PDP) systems. The contributions of this thesis in secure cloud storage are as follows; a toy sketch of the basic challenge-response idea appears after the list.
1. We provide a general analytical framework for various known PoR schemes that yields exact reductions that precisely quantify conditions for extraction to succeed as a function of the success probability of a proving algorithm. We apply this analysis to several archetypal schemes. In addition, we provide a new methodology for the analysis of keyed PoR schemes in an unconditionally secure setting, and use it to prove the security of a modified version of a scheme due to Shacham and Waters (ASIACRYPT, 2009) under a slightly restricted attack model, thus providing the first example of a keyed PoR scheme with unconditional security. We also show how classical statistical techniques can be used to evaluate whether the responses of the prover on the storage are accurate enough to permit successful extraction. Finally, we prove a new lower bound on the storage and communication complexity of PoR schemes.
2. We propose a new type of scheme that we term a proof-of-data-observability scheme. Our definition tries to capture the stronger requirement that the server must have an actual copy of the stored file M in its memory space while it executes the challenge-response protocol. We give some examples of schemes that satisfy this new security definition. We also analyze the efficiency and security of the protocols we present, and we prove some necessary conditions for the existence of these kinds of protocols.
3. We study secure storage on multiple servers. Our contribution in multiple-server PoR systems is twofold. We formalize security definitions for two possible scenarios: (i) when a threshold number of servers succeeds with high enough probability (worst-case) and (ii) when the average success probability of all the servers is above a threshold (average-case). Using coding theory, we show instances of protocols that are secure in both the average-case and the worst-case scenarios.
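As promised above, here is a minimal toy sketch of the keyed challenge-response idea underlying PoR/PDP audits (an illustration under our own simplifications, not a scheme analyzed in this thesis): the client tags each file block with an index-bound MAC, hands everything to the server, and later spot-checks random positions.

```python
import hmac, hashlib, os, secrets

BLOCK = 4096  # bytes per file block

def tag(key: bytes, index: int, block: bytes) -> bytes:
    # Bind each MAC to its block index so the server cannot swap blocks around.
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

def setup(key: bytes, data: bytes):
    """Split the file into blocks and tag each; the server stores blocks and tags."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [tag(key, i, b) for i, b in enumerate(blocks)]
    return blocks, tags  # handed to the server; the client keeps only the key

def audit(key: bytes, server_blocks, server_tags, challenges: int = 10) -> bool:
    """Challenge random indices; the server must answer with matching (block, tag)."""
    for _ in range(challenges):
        i = secrets.randbelow(len(server_blocks))
        if not hmac.compare_digest(server_tags[i], tag(key, i, server_blocks[i])):
            return False
    return True

key = os.urandom(32)
blocks, tags = setup(key, os.urandom(100 * BLOCK))
assert audit(key, blocks, tags)
```

A server that has discarded a fraction of the blocks passes each challenge with probability roughly one minus that fraction, and it is exactly this success probability over which the extractor-based security definitions quantify.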