Flexible resources for quantum metrology
Quantum metrology offers a quadratic advantage over classical approaches to
parameter estimation problems by utilizing entanglement and nonclassicality.
However, the hurdle of actually implementing the necessary quantum probe states
and measurements, which vary drastically for different metrological scenarios,
is usually not taken into account. We show that for a wide range of tasks in
metrology, 2D cluster states (a particular family of states useful for
measurement-based quantum computation) can serve as flexible resources that
allow one to efficiently prepare any required state for sensing, and perform
appropriate (entangled) measurements using only single qubit operations.
Crucially, the overhead in the number of qubits is less than quadratic, thus
preserving the quantum scaling advantage. This is ensured by using a
compression to a logarithmically sized space that contains all relevant
information for sensing. We specifically demonstrate how our method can be used
to obtain optimal scaling for phase and frequency estimation in local
estimation problems, as well as for the Bayesian equivalents with Gaussian
priors of varying widths. Furthermore, we show that in the paradigmatic case of
local phase estimation 1D cluster states are sufficient for optimal state
preparation and measurement.
Comment: 9+18 pages, many figures
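The quadratic scaling advantage mentioned above can be made concrete with a textbook calculation (this is not the cluster-state construction of the abstract): for an N-qubit GHZ probe accumulating phase, the quantum Fisher information is N^2, so the Cramér-Rao bound gives an uncertainty of 1/N rather than the 1/sqrt(N) achievable with uncorrelated probes. A minimal numerical check, with hypothetical helper names:

```python
import numpy as np

def qfi_pure(psi, dpsi):
    # Quantum Fisher information of a pure state:
    # F = 4 (<dpsi|dpsi> - |<psi|dpsi>|^2)
    return 4 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi, dpsi)) ** 2)

def ghz_qfi(n, phi=0.3):
    # GHZ probe written in span{|0...0>, |1...1>}:
    # |psi(phi)> = (|0..0> + e^{i n phi} |1..1>) / sqrt(2)
    psi = np.array([1, np.exp(1j * n * phi)]) / np.sqrt(2)
    dpsi = np.array([0, 1j * n * np.exp(1j * n * phi)]) / np.sqrt(2)
    return qfi_pure(psi, dpsi)

for n in (2, 4, 8):
    # Cramer-Rao: delta_phi >= 1/sqrt(F) = 1/n (Heisenberg scaling),
    # versus 1/sqrt(n) for n independent qubits (F = n).
    print(n, ghz_qfi(n))  # F ≈ n**2
```

The less-than-quadratic qubit overhead claimed in the abstract is what keeps this N^2-versus-N gap meaningful after accounting for the resource cost of the cluster state.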
Applications of Derandomization Theory in Coding
Randomized techniques play a fundamental role in theoretical computer science
and discrete mathematics, in particular for the design of efficient algorithms
and construction of combinatorial objects. The basic goal in derandomization
theory is to eliminate or reduce the need for randomness in such randomized
constructions. In this thesis, we explore some applications of the fundamental
notions in derandomization theory to problems outside the core of theoretical
computer science, and in particular, certain problems related to coding theory.
First, we consider the wiretap channel problem, which involves a communication
system in which an intruder can eavesdrop on a limited portion of the
transmissions, and we construct efficient and information-theoretically optimal
communication protocols for this model. Then we consider the combinatorial
communication protocols for this model. Then we consider the combinatorial
group testing problem. In this classical problem, one aims to determine a set
of defective items within a large population by asking a number of queries,
where each query reveals whether a defective item is present within a specified
group of items. We use randomness condensers to explicitly construct optimal,
or nearly optimal, group testing schemes for a setting where the query outcomes
can be highly unreliable, as well as for the threshold model, where a query returns
positive if the number of defectives passes a certain threshold. Finally, we
design ensembles of error-correcting codes that achieve the
information-theoretic capacity of a large class of communication channels, and
then use the obtained ensembles for construction of explicit capacity achieving
codes.
[This is a shortened version of the actual abstract in the thesis.]
Comment: EPFL PhD Thesis
Achieving Efficiency in Black Box Simulation of Distribution Tails with Self-structuring Importance Samplers
Motivated by the increasing adoption of models which facilitate greater
automation in risk management and decision-making, this paper presents a novel
Importance Sampling (IS) scheme for measuring distribution tails of objectives
modelled with enabling tools such as feature-based decision rules, mixed
integer linear programs, deep neural networks, etc. Conventional efficient IS
approaches suffer from feasibility and scalability concerns due to the need to
intricately tailor the sampler to the underlying probability distribution and
the objective. This challenge is overcome in the proposed black-box scheme by
automating the selection of an effective IS distribution with a transformation
that implicitly learns and replicates the concentration properties observed in
less rare samples. This novel approach is guided by a large deviations
principle that brings out the phenomenon of self-similarity of optimal IS
distributions. The proposed sampler is the first to attain asymptotically
optimal variance reduction across a spectrum of multivariate distributions
despite being oblivious to the underlying structure. The large deviations
principle additionally results in new distribution tail asymptotics capable of
yielding operational insights. The applicability is illustrated with examples from
product distribution networks and portfolio credit risk models informed by
neural networks.
Comment: 51 pages
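The variance-reduction goal can be illustrated with the textbook precursor to such schemes: exponential tilting for a Gaussian tail probability. This is not the paper's self-structuring sampler, which notably requires no such distributional knowledge; all names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_mc(t, n):
    # Plain Monte Carlo estimate of P(X > t) for X ~ N(0,1); at t = 4
    # essentially no samples land in the tail, so the estimate is useless.
    return (rng.standard_normal(n) > t).mean()

def tilted_is(t, n):
    # Importance sampling with the exponentially tilted proposal N(t, 1).
    # Likelihood ratio: dP/dQ(x) = exp(-t*x + t**2/2).
    x = rng.normal(t, 1.0, n)
    w = np.exp(-t * x + t**2 / 2)
    return (w * (x > t)).mean()

t, n = 4.0, 100_000
# True value is about 3.17e-5; the tilted estimator concentrates samples
# where the rare event happens and reweights them back.
print(naive_mc(t, n), tilted_is(t, n))
```

Designing such a tilting by hand is exactly the "intricate tailoring" the abstract identifies as the feasibility bottleneck that the black-box scheme removes.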
Identifying infection processes with incomplete information
Infections frequently occur on both networks of devices and networks of people, and can model not only viruses but also information, rumors, and product use. However, in many circumstances the infection process itself is hidden, and only its effects, e.g. sickness or knowledge, can be observed. In addition, this information is likely incomplete, missing many sick nodes, as well as inaccurate, with false positives. To use this data effectively, it is often essential to identify the infection process causing the sickness, or even whether the cause is an infection at all. For our purposes, we consider the susceptible-infected (SI) infection model. We seek to distinguish between infections and random sickness, as well as between different infection (or infection-like) processes, in a limited-information setting. We formulate this as a hypothesis testing problem, where (typically) in the null the sickness affects nodes at random, and in the alternative the infection spreads through the network. Similarly, we consider the case where the sickness may be caused by one of two infection (or infection-like) processes, and we wish to find which is the causative process. We do this in a setting with very limited information, given only a single snapshot of the infection, in which only a small portion of the infected population reports the sickness. In addition, there are several other limitations we consider. There may be false positives, obfuscating the infection. Similarly, a random sickness and an epidemic process may occur simultaneously. Knowledge of the graph topology may be incomplete, with unknown edges over which the infection may spread. The graph may also be weighted, affecting the way the infection spreads over the graph. In all these cases, we develop algorithms to identify the causative process of the infection, exploiting the fact that infected nodes will be clustered.
We demonstrate that under reasonable conditions, these algorithms detect an infection with asymptotically zero error probability as the graph size increases.
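The clustering idea underlying these algorithms can be illustrated with a toy statistic (not the thesis's actual tests): given a snapshot, count the graph edges whose endpoints are both reported sick. An SI epidemic, which spreads along edges, produces far more such edges than the random-sickness null. A small sketch under these assumptions:

```python
import random

random.seed(1)

def grid_graph(n):
    # n x n grid graph, adjacency stored as a dict of neighbor sets
    adj = {(i, j): set() for i in range(n) for j in range(n)}
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                if i + di < n and j + dj < n:
                    adj[(i, j)].add((i + di, j + dj))
                    adj[(i + di, j + dj)].add((i, j))
    return adj

def si_spread(adj, k):
    # Susceptible-infected spread from a random seed until k nodes infected
    seed = random.choice(list(adj))
    infected, frontier = {seed}, set(adj[seed])
    while len(infected) < k:
        v = random.choice(sorted(frontier))
        infected.add(v)
        frontier |= adj[v]
        frontier -= infected
    return infected

def edge_count(adj, sick):
    # Test statistic: number of edges with both endpoints sick
    return sum(len(adj[v] & sick) for v in sick) // 2

adj, k = grid_graph(20), 40
epidemic = si_spread(adj, k)
rand_sick = set(random.sample(list(adj), k))
# The epidemic snapshot is clustered, so it induces many more internal edges
print(edge_count(adj, epidemic), edge_count(adj, rand_sick))
```

Thresholding such a statistic gives a hypothesis test in the spirit described above; the thesis's contribution is making this work under partial reporting, false positives, and incomplete topology.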
An efficient implementation of an implicit FEM scheme for fractional-in-space reaction-diffusion equations
Fractional differential equations are becoming increasingly used as a modelling tool for processes with anomalous diffusion or spatial heterogeneity. However, the presence of a fractional differential operator causes memory (time fractional) or nonlocality (space fractional) issues, which impose a number of computational constraints. In this paper we develop efficient, scalable techniques for solving fractional-in-space reaction-diffusion equations using the finite element method on both structured and unstructured grids, and robust techniques for computing the fractional power of a matrix times a vector. Our approach is showcased by solving the fractional Fisher and fractional Allen-Cahn reaction-diffusion equations in two and three spatial dimensions, and analysing the speed of the travelling wave and size of the interface in terms of the fractional power of the underlying Laplacian operator.
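For small symmetric problems, the fractional-power action A^alpha v can be computed directly from a dense eigendecomposition; the point of the paper is precisely the scalable techniques that avoid this for large grids. An illustrative sketch with a 1D Dirichlet Laplacian (all names hypothetical):

```python
import numpy as np

def laplacian_1d(m, h):
    # Standard second-difference Laplacian with Dirichlet boundaries
    return (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2

def frac_power_apply(A, alpha, v):
    # Dense route: A^alpha v = Q diag(lam**alpha) Q^T v for symmetric A.
    # Fine for small matrices; large-scale FEM discretizations require
    # matrix-free alternatives since A^alpha is dense even when A is sparse.
    lam, Q = np.linalg.eigh(A)
    return Q @ (lam**alpha * (Q.T @ v))

m = 50
h = 1.0 / (m + 1)
A = laplacian_1d(m, h)
v = np.sin(np.pi * np.linspace(h, m * h, m))  # smooth test vector
u = frac_power_apply(A, 0.5, v)
# alpha = 1 recovers the ordinary Laplacian action
assert np.allclose(frac_power_apply(A, 1.0, v), A @ v)
```

The semigroup property A^{1/2}(A^{1/2} v) = A v gives a convenient correctness check for any such solver.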
Interference Mitigation in Large Random Wireless Networks
A central problem in the operation of large wireless networks is how to deal
with interference -- the unwanted signals being sent by transmitters that a
receiver is not interested in. This thesis looks at ways of combating such
interference.
In Chapters 1 and 2, we outline the necessary information and communication
theory background, including the concept of capacity. We also include an
overview of a new set of schemes for dealing with interference known as
interference alignment, paying special attention to a channel-state-based
strategy called ergodic interference alignment.
In Chapter 3, we consider the operation of large regular and random networks
by treating interference as background noise. We consider the local performance
of a single node, and the global performance of a very large network.
In Chapter 4, we use ergodic interference alignment to derive the asymptotic
sum-capacity of large random dense networks. These networks are derived from a
physical model of node placement where signal strength decays over the distance
between transmitters and receivers. (See also arXiv:1002.0235 and
arXiv:0907.5165.)
In Chapter 5, we look at methods of reducing the long time delays incurred by
ergodic interference alignment. We analyse the tradeoff between reducing delay
and lowering the communication rate. (See also arXiv:1004.0208.)
In Chapter 6, we outline a problem that is equivalent to the problem of
pooled group testing for defective items. We then present some new work that
uses information theoretic techniques to attack group testing. We introduce for
the first time the concept of the group testing channel, which allows for
modelling of a wide range of statistical error models for testing. We derive
new results on the number of tests required to accurately detect defective
items, including when using sequential `adaptive' tests.
Comment: PhD thesis, University of Bristol, 201
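The group testing problem of Chapter 6 can be sketched with the simplest noiseless scheme: Bernoulli random pooling decoded by the classical COMP rule. This is a textbook baseline, not the channel-based constructions developed in the thesis, and all names below are illustrative:

```python
import random

random.seed(0)

def random_pools(n_items, n_tests, p):
    # Bernoulli(p) design: each item joins each test independently
    return [[i for i in range(n_items) if random.random() < p]
            for _ in range(n_tests)]

def run_tests(pools, defectives):
    # Noiseless OR channel: a test is positive iff it contains a defective
    return [any(i in defectives for i in pool) for pool in pools]

def comp_decode(pools, outcomes, n_items):
    # COMP: any item appearing in a negative test is certainly non-defective;
    # everything that survives every negative test is declared defective.
    candidates = set(range(n_items))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= set(pool)
    return candidates

n, defectives = 200, {17, 80, 143}
pools = random_pools(n, n_tests=60, p=1 / 3)
outcomes = run_tests(pools, defectives)
print(sorted(comp_decode(pools, outcomes, n)))  # a superset of the defectives
```

COMP never misses a defective but may keep false positives; the noisy and threshold models treated in the thesis replace the deterministic OR outcome with a "group testing channel" and require correspondingly stronger decoders.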
Universal Privacy Guarantees for Smart Meters
Smart meters (SMs) provide advanced monitoring of consumer energy usage, thereby enabling optimized management and control of electricity distribution systems. Unfortunately, the data collected by SMs can reveal information about consumer activity, such as the times at which they run individual appliances. Two approaches have been proposed to tackle the privacy threat posed by such information leakage. One strategy involves manipulating user data before sending it to the utility provider (UP); this approach improves privacy at the cost of reducing the operational insight provided by the SM data to the UP. The alternative strategy employs rechargeable batteries or local energy sources at each consumer site to try to decouple energy usage from energy requests. This thesis investigates the latter approach.
Understanding the privacy implications of any strategy requires an appropriate privacy metric.
A variety of metrics are used to study privacy in energy distribution systems. These include statistical distance metrics, differential privacy, distortion metrics, maximal leakage, maximal α-leakage, and information measures such as mutual information. We use mutual information to measure privacy both because of its well-understood fundamental properties and because it provides a useful bridge to adjacent fields such as hypothesis testing, estimation, and statistical or machine learning.
Privacy leakage under mutual information measures has been studied under a variety of assumptions on the energy consumption of the user, with a strong focus on i.i.d. models and some exploration of Markov processes. Since user energy consumption may be non-stationary, we seek privacy guarantees that apply to general random process models of energy consumption. Moreover, we impose finite capacity bounds on batteries and account for the price of the energy requested from the grid, thus minimizing the information leakage subject to a bound on the resulting energy bill. To that end, we model the energy management unit (EMU) as a deterministic finite-state channel, and adapt the Ahlswede-Kaspi coding strategy, originally proposed for permuting channels, to the SM privacy setting.
Within this setting, we derive battery policies providing privacy guarantees that hold for any bounded process modelling the energy consumption of the user, including non-ergodic and non-stationary processes. These guarantees are also presented for bounded processes with a known expected average consumption. The optimality of the battery policy is characterized by presenting the probability law of a random process that is tight with respect to the upper bound. Moreover, we derive single-letter bounds characterizing the privacy-cost trade-off in the presence of a variable market price. Finally, it is shown that the provided results hold for mutual information, maximal leakage, maximal α-leakage, and the Arimoto and Sibson channel capacities.
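The battery-based decoupling idea can be illustrated with a toy policy (purely illustrative, not the finite-state-channel scheme of the thesis): if the grid is only ever asked for energy in fixed-size blocks buffered through the battery, it observes a quantized request sequence rather than the fine-grained appliance signature. All names and constants below are hypothetical:

```python
import math

BLOCK = 5.0      # grid energy is requested only in multiples of this
CAPACITY = 10.0  # finite battery capacity bound

def step(consumption, charge):
    # Request just enough whole blocks to cover consumption the battery
    # cannot supply, then store any surplus up to the capacity bound.
    need = max(0.0, consumption - charge)
    grid = BLOCK * math.ceil(need / BLOCK)
    charge = min(CAPACITY, charge + grid - consumption)
    return grid, charge

charge = 0.0
usage = [1.2, 0.4, 3.7, 6.1, 0.0, 2.5]  # hypothetical consumption trace
grid_trace = []
for u in usage:
    g, charge = step(u, charge)
    grid_trace.append(g)
print(grid_trace)  # the grid sees only multiples of BLOCK
```

Even this toy policy shows the central tension the thesis quantifies: a coarser request alphabet leaks less about the consumption process, but the finite capacity bound and the energy bill constrain how coarse the requests can be.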