On the impossibility of coin-flipping in generalized probabilistic theories via discretizations of semi-infinite programs
Coin-flipping is a fundamental cryptographic task where a spatially separated
Alice and Bob wish to generate a fair coin-flip over a communication channel.
It is known that ideal coin-flipping is impossible in both classical and
quantum theory. In this work, we give a short proof that it is also impossible
in generalized probabilistic theories under the Generalized No-Restriction
Hypothesis. Our proof relies crucially on a formulation of cheating strategies
as semi-infinite programs, i.e., cone programs with infinitely many
constraints. This introduces a new formalism which may be of independent
interest to the quantum community.
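The discretization idea behind the abstract can be illustrated with a toy one-variable semi-infinite program: minimise x subject to x >= f(t) for every t in [0, 1], which has infinitely many constraints but is approximated by sampling finitely many of them. The function f and grid sizes below are hypothetical and purely illustrative, not the paper's construction:

```python
import math

def f(t):
    # example constraint function; the semi-infinite constraint is x >= f(t) for all t in [0, 1]
    return math.sin(3 * t) - t * t

def discretized_optimum(n_points):
    """Optimal x for the discretization that keeps only n_points
    uniformly spaced constraints x >= f(t_i)."""
    grid = [i / (n_points - 1) for i in range(n_points)]
    return max(f(t) for t in grid)

# A coarser grid drops constraints, so it under-constrains the problem:
# its optimum can only be below (or equal to) that of a finer grid.
coarse = discretized_optimum(5)
fine = discretized_optimum(5001)
assert coarse <= fine + 1e-12
```

Refining the grid yields a monotone sequence of relaxations converging to the true semi-infinite optimum, which is the sense in which finite cone programs can stand in for the infinite family of cheating-strategy constraints.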
How to make unforgeable money in generalised probabilistic theories
We discuss the possibility of creating money that is physically impossible to
counterfeit. Of course, "physically impossible" is dependent on the theory that
is a faithful description of nature. Currently there are several proposals for
quantum money which have their security based on the validity of quantum
mechanics. In this work, we examine Wiesner's money scheme in the framework of
generalised probabilistic theories. This framework is broad enough to allow for
essentially any potential theory of nature, provided that it admits an
operational description. We prove that under a quantifiable version of the
no-cloning theorem, one can create physical money which has an exponentially
small chance of being counterfeited. Our proof relies on cone programming, a
natural generalisation of semidefinite programming. Moreover, we discuss some
of the difficulties that arise when considering non-quantum theories.
Comment: 27 pages, many diagrams. Comments welcome.
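As a quantum reference point for the kind of exponential security bound the abstract describes: for Wiesner's original scheme, the optimal probability of counterfeiting a single verification of an n-qubit note is known to be (3/4)^n (Molina, Vidick and Watrous). A minimal sketch of that arithmetic, with an illustrative helper for choosing the note length:

```python
import math

def wiesner_counterfeit_bound(n):
    """Optimal counterfeiting probability for an n-qubit Wiesner note
    under a single independent verification: (3/4)**n."""
    return 0.75 ** n

def min_note_length(eps):
    """Smallest n with (3/4)**n < eps, i.e. the note length needed
    for a target counterfeiting probability eps. (Illustrative helper.)"""
    return math.floor(math.log(eps) / math.log(0.75)) + 1

# e.g. reaching 2**-128 security requires a note of 309 qubits
n = min_note_length(2 ** -128)
```

The abstract's point is that a quantifiable no-cloning bound in a generalised probabilistic theory plays the role that the 3/4-per-system figure plays in the quantum case, with the cone program certifying the optimal cheating value.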
Deriving Grover's lower bound from simple physical principles
Grover's algorithm constitutes the optimal quantum solution to the search
problem and provides a quadratic speed-up over all possible classical search
algorithms. Quantum interference between computational paths has been posited
as a key resource behind this computational speed-up. However, there is a limit
to this interference: at most pairs of paths can ever interact in a fundamental
way. Could more interference imply more computational power? Sorkin has defined
a hierarchy of possible interference behaviours---currently under experimental
investigation---where classical theory is at the first level of the hierarchy
and quantum theory belongs to the second. Informally, the order in the
hierarchy corresponds to the number of paths that have an irreducible
interaction in a multi-slit experiment. In this work, we consider how Grover's
speed-up depends on the order of interference in a theory. Surprisingly, we
show that the quadratic lower bound holds regardless of the order of
interference. Thus, at least from the point of view of the search problem,
post-quantum interference does not imply a computational speed-up over quantum
theory.
Comment: Updated title and exposition in response to referee comments. 6+2 pages, 5 figures.
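The quadratic gap the abstract refers to can be made concrete with standard query counts (illustrative arithmetic, not taken from the paper): unstructured search over N items needs N queries classically in the worst case, while Grover's algorithm succeeds with roughly ceil(pi/4 * sqrt(N)) oracle queries:

```python
import math

def classical_queries(N):
    # worst-case number of oracle queries for unstructured search over N items
    return N

def grover_queries(N):
    # Grover iteration count that (near-)maximises success probability
    return math.ceil(math.pi / 4 * math.sqrt(N))

# Quadratic separation for a database of about a million entries:
# roughly 805 quantum queries versus 1,048,576 classical ones.
N = 1 << 20
quantum, classical = grover_queries(N), classical_queries(N)
```

The paper's result is that this square-root scaling is a lower bound at every level of Sorkin's hierarchy, so post-quantum interference cannot beat it on the search problem.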
Higher-order interference in extensions of quantum theory
Quantum interference lies at the heart of several quantum computational
speed-ups and provides a striking example of a phenomenon with no classical
counterpart. An intriguing feature of quantum interference arises in a three
slit experiment. In this set-up, the interference pattern can be written in
terms of the two and one slit patterns obtained by blocking some of the slits.
This is in stark contrast with the standard two slit experiment, where the
interference pattern is irreducible. This was first noted by Rafael Sorkin, who
asked why quantum theory only exhibits irreducible interference in the two slit
experiment. One approach to this problem is to compare the predictions of
quantum theory to those of operationally-defined `foil' theories, in the hope
of determining whether theories exhibiting higher-order interference suffer
from pathological--or at least undesirable--features. In this paper two
proposed extensions of quantum theory are considered: the theory of Density
Cubes proposed by Dakic et al., which has been shown to exhibit irreducible
interference in the three slit set-up, and the Quartic Quantum Theory of
Zyczkowski. The theory of Density Cubes will be shown to provide an advantage
over quantum theory in a certain computational task and to possess a
well-defined mechanism which leads to the emergence of quantum theory. Despite
this, the axioms used to define Density Cubes will be shown to be insufficient
to uniquely characterise the theory. In comparison, Quartic Quantum Theory is
well-defined and we show that it exhibits irreducible interference to all
orders. This feature of the theory is argued not to be a genuine phenomenon,
but to arise from an ambiguity in the current definition of higher-order
interference. To understand why quantum theory has limited interference,
therefore, a new operational definition of higher-order interference is needed.
Comment: Updated in response to referee comments. 17 pages. Comments welcome.
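Sorkin's distinction between reducible and irreducible interference can be sketched directly from the Born rule. With three slits and complex amplitudes (the numerical values below are arbitrary), the two-slit interference term is generically nonzero, while the third-order term always vanishes in quantum theory, which is exactly the three-slit decomposition the abstract describes:

```python
# Toy three-slit amplitudes (arbitrary, hypothetical values)
amps = {"A": 0.6 + 0.2j, "B": -0.3 + 0.5j, "C": 0.1 - 0.4j}

def prob(open_slits):
    """Born-rule probability (up to normalisation) with the given slits open."""
    return abs(sum(amps[s] for s in open_slits)) ** 2

# Second-order (two-slit) interference: generically nonzero in quantum theory
I2 = prob("AB") - prob("A") - prob("B")

# Sorkin's third-order term: the three-slit pattern minus everything
# recoverable from one- and two-slit patterns. Identically zero for
# squared moduli of summed amplitudes.
I3 = (prob("ABC")
      - prob("AB") - prob("AC") - prob("BC")
      + prob("A") + prob("B") + prob("C"))
```

Theories such as Density Cubes are candidates in which an analogue of I3 fails to vanish; the abstract's point is that deciding whether such behaviour is genuine requires a sharper operational definition of the I3-style expression above.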
Oracles and query lower bounds in generalised probabilistic theories
We investigate the connection between interference and computational power
within the operationally defined framework of generalised probabilistic
theories. To compare the computational abilities of different theories within
this framework, we show that any theory satisfying three natural physical
principles possesses a well-defined oracle model. Indeed, we prove a subroutine
theorem for oracles in such theories which is a necessary condition for the
oracle to be well-defined. The three principles are: causality (roughly, no
signalling from the future), purification (each mixed state arises as the
marginal of a pure state of a larger system), and strong symmetry (the existence of
non-trivial reversible transformations). Sorkin has defined a hierarchy of
conceivable interference behaviours, where the order in the hierarchy
corresponds to the number of paths that have an irreducible interaction in a
multi-slit experiment. Given our oracle model, we show that if a classical
computer requires at least n queries to solve a learning problem, then the
corresponding lower bound in theories lying at the kth level of Sorkin's
hierarchy is n/k. Hence, lower bounds on the number of queries to a quantum
oracle needed to solve certain problems are not optimal in the space of all
generalised probabilistic theories, although it is not yet known whether the
optimal bounds are achievable in general. Hence searches for higher-order
interference are not only foundationally motivated, but constitute a search for
a computational resource beyond that offered by quantum computation.
Comment: 17+7 pages. Comments welcome. Published in special issue "Foundational Aspects of Quantum Information" in Foundations of Physics.
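The n/k scaling stated in the abstract is simple enough to write down directly (the function name is mine, not the paper's): given a classical lower bound of n queries, a theory at level k of Sorkin's hierarchy is only guaranteed to need n/k queries:

```python
def level_k_lower_bound(n_classical, k):
    """Query lower bound at level k of Sorkin's hierarchy, given a
    classical lower bound of n_classical queries (the n/k bound
    stated in the abstract)."""
    return n_classical / k

# Quantum theory sits at level k = 2: a problem with an n-query
# classical bound is only guaranteed to need n/2 quantum queries
# (this halving is tight for PARITY).
bound_quantum = level_k_lower_bound(100, 2)
bound_fourth_order = level_k_lower_bound(100, 4)
```

A hypothetical fourth-order theory would thus only be guaranteed n/4 queries, which is why the abstract frames searches for higher-order interference as searches for a post-quantum computational resource.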
Past and Ongoing Tsetse and Animal Trypanosomiasis Control Operations in Five African Countries: A Systematic Review
Background
Control operations targeting Animal African Trypanosomiasis and its primary vector, the tsetse, covered approximately 128,000 km² of Africa in 2001, a mere 1.3% of the tsetse-infested area. Although extensive trypanosomiasis and tsetse (T&T) control operations have been running since the beginning of the 20th century, Animal African Trypanosomiasis is still a major constraint on livestock production in sub-Saharan Africa.
Methodology/Principal Findings
We performed a systematic review of the existing literature describing T&T control programmes conducted in a selection of five African countries, namely Burkina Faso, Cameroon, Ethiopia, Uganda and Zambia, between 1980 and 2015. Sixty-eight documents were eventually selected from those identified by the database search. This was supplemented with information gathered through semi-structured interviews conducted with twelve key informants recruited in the study countries and selected based on their experience and knowledge of T&T control. The combined information from these two sources was used to describe the inputs, processes and outcomes from 23 major T&T control programmes implemented in the study countries. Although there were some data gaps, involvement of the target communities and sustainability of the control activities were identified as the two main issues faced by these programmes. Further, there was a lack of evaluation of these control programmes, as well as a lack of a standardised methodology to conduct such evaluations.
Conclusions/Significance
Past experience demonstrates that coordinated and sustained control activities require careful planning, and the evidence of successes, failures and setbacks from past control programmes represents a rich mine of information. As these programmes have largely gone unevaluated, these data have not been fully exploited in the design, analysis and justification of future control programmes.