Hide-and-Seek with Directional Sensing
We consider a game played between a hider, who hides a static object in one
of several possible positions in a bounded planar region, and a searcher, who
wishes to reach the object by querying sensors placed in the plane. The
searcher is a mobile agent, and whenever it physically visits a sensor, the
sensor returns a random direction, corresponding to a half-plane in which the
hidden object is located. We first present a novel search heuristic and
characterize bounds on the expected distance covered before reaching the
object. Next, we model this game as a large-dimensional zero-sum dynamic game
and we apply a recently introduced randomized sampling technique that provides
a probabilistic level of security to the hider. We observe that, when the
randomized sampling approach is only allowed to select a very small number of
samples, the cost of the heuristic is comparable to the security level provided
by the randomized procedure. However, as we allow the number of samples to
increase, the randomized procedure provides a higher probabilistic security
level.

Comment: A short version of this paper (without proofs) will be presented at
the 18th IFAC World Congress (IFAC 2011), Milan, Italy, August 28-September
2, 2011.
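As a rough illustration of the sensing model, the sketch below maintains a
set of candidate positions and prunes it after each half-plane response; the
greedy nearest-sensor rule and all names are illustrative assumptions, not
the heuristic analyzed in the paper.

```python
import math
import random

def query_sensor(sensor, target):
    """Return a unit normal n defining the half-plane
    {x : n . (x - sensor) >= 0}, which contains the target."""
    dx, dy = target[0] - sensor[0], target[1] - sensor[1]
    base = math.atan2(dy, dx)
    # Any direction within 90 degrees of the true bearing yields a valid
    # half-plane containing the object; pick one uniformly at random.
    theta = base + random.uniform(-math.pi / 2, math.pi / 2)
    return (math.cos(theta), math.sin(theta))

def search(sensors, candidates, target, start):
    """Visit sensors greedily, pruning candidate positions with each
    half-plane response; return the total distance travelled."""
    pos, travelled = start, 0.0
    sensors = list(sensors)
    while len(candidates) > 1 and sensors:
        s = min(sensors, key=lambda t: math.dist(pos, t))  # nearest sensor
        sensors.remove(s)
        travelled += math.dist(pos, s)
        pos = s
        n = query_sensor(s, target)
        # Keep only candidates lying in the reported half-plane; the true
        # position always survives, since its dot product is >= 0.
        candidates = [c for c in candidates
                      if n[0] * (c[0] - s[0]) + n[1] * (c[1] - s[1]) >= 0]
    # Head to the (unique, or arbitrarily chosen) remaining candidate.
    travelled += math.dist(pos, candidates[0])
    return travelled
```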
Efficient discrete-time simulations of continuous-time quantum query algorithms
The continuous-time query model is a variant of the discrete query model in
which queries can be interleaved with known operations (called "driving
operations") continuously in time. Interesting algorithms have been discovered
in this model, such as an algorithm for evaluating NAND trees more efficiently
than any classical algorithm. Subsequent work has shown that there also exists
an efficient algorithm for NAND trees in the discrete query model; however,
there is no efficient conversion known for continuous-time query algorithms for
arbitrary problems.
We show that any quantum algorithm in the continuous-time query model whose
total query time is T can be simulated by a quantum algorithm in the discrete
query model that makes O[T log(T) / log(log(T))] queries. This is the first
upper bound that is independent of the driving operations (i.e., it holds even
if the norm of the driving Hamiltonian is very large). A corollary is that any
lower bound of T queries for a problem in the discrete-time query model
immediately carries over to a lower bound of \Omega[T log(log(T))/log(T)] in
the continuous-time query model.

Comment: 12 pages, 6 figures
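To spell out how the corollary follows, the sketch below (in assumed
notation, with Q a known lower bound for the discrete-time model) simply
inverts the simulation's query count:

```latex
% Sketch: transferring a discrete lower bound to the continuous-time model.
% Suppose a problem requires \Omega(Q) queries in the discrete model, and
% any continuous-time algorithm of total query time T can be simulated with
% N(T) = O(T \log T / \log\log T) discrete queries. Then N(T) = \Omega(Q):
\frac{T \log T}{\log\log T} = \Omega(Q)
\;\Longrightarrow\;
T = \Omega\!\left(\frac{Q \log\log Q}{\log Q}\right),
% which is the \Omega[T log(log(T))/log(T)] bound quoted in the abstract
% (with Q playing the role of T there).
```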
Great SCO2T! Rapid tool for carbon sequestration science, engineering, and economics
CO2 capture and storage (CCS) technology is likely to be widely deployed in
coming decades in response to major climate and economics drivers: CCS is part
of every clean energy pathway that limits global warming to 2°C or less and
receives significant CO2 tax credits in the United States. These drivers are
likely to stimulate capture, transport, and storage of hundreds of millions or
billions of tonnes of CO2 annually. A key part of the CCS puzzle will be
identifying and characterizing suitable storage sites for vast amounts of CO2.
We introduce a new software tool called SCO2T (Sequestration of CO2 Tool,
pronounced "Scott") to rapidly characterize saline storage reservoirs. The
tool is designed to rapidly screen hundreds of thousands of reservoirs, perform
sensitivity and uncertainty analyses, and link sequestration engineering
(injection rates, reservoir capacities, plume dimensions) to sequestration
economics (costs constructed from around 70 separate economic inputs). We
describe the novel science developments supporting SCO2T including a new
approach to estimating CO2 injection rates and CO2 plume dimensions as well as
key advances linking sequestration engineering with economics. Next, we perform
a sensitivity and uncertainty analysis of geology combinations (including
formation depth, thickness, permeability, porosity, and temperature) to
understand the impact on carbon sequestration. Through the sensitivity analysis
we show that increasing depth and permeability both can lead to increased CO2
injection rates, increased storage potential, and reduced costs, while
increasing porosity reduces costs by increasing reservoir capacity, without
affecting the injection rate (CO2 is injected at constant pressure in all
cases).

Comment: CO2 capture and storage; carbon sequestration; reduced-order
modeling; climate change; economics
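A minimal sketch of such a one-at-a-time sensitivity sweep is given below;
the injectivity, capacity, and cost formulas are illustrative stand-ins with
nominal units, not SCO2T's actual reduced-order models.

```python
def injection_rate(depth_m, perm_md, thickness_m):
    # Stand-in: rate grows with permeability-thickness, and with depth
    # because a deeper formation leaves more pressure headroom when CO2 is
    # injected at constant pressure.
    return 1e-4 * perm_md * thickness_m * (depth_m / 1000.0)

def storage_capacity(thickness_m, porosity):
    # Stand-in: capacity scales with pore volume per unit area.
    return 5.0 * thickness_m * porosity

def unit_cost(rate, capacity):
    # Stand-in: fixed well cost amortized over rate, plus a capacity term.
    return 10.0 / rate + 2.0 / capacity

baseline = dict(depth_m=1500, perm_md=100, thickness_m=50, porosity=0.15)
sweeps = {
    "depth_m": [1000, 1500, 2500],
    "perm_md": [10, 100, 1000],
    "porosity": [0.05, 0.15, 0.30],
}

# Vary one geologic parameter at a time, holding the rest at baseline.
for name, values in sweeps.items():
    for v in values:
        p = dict(baseline, **{name: v})
        r = injection_rate(p["depth_m"], p["perm_md"], p["thickness_m"])
        c = storage_capacity(p["thickness_m"], p["porosity"])
        print(f"{name}={v}: rate={r:.2f} Mt/yr, capacity={c:.1f} Mt, "
              f"unit cost={unit_cost(r, c):.3f} $/t")
```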
Postselection threshold against biased noise
The highest current estimates for the amount of noise a quantum computer can
tolerate are based on fault-tolerance schemes relying heavily on postselecting
on no detected errors. However, there has been no proof that these schemes give
even a positive tolerable noise threshold. A technique to prove a positive
threshold, for probabilistic noise models, is presented. The main idea is to
maintain strong control over the distribution of errors in the quantum state at
all times. This distribution has correlations which conceivably could grow out
of control with postselection. But in fact, the error distribution can be
written as a mixture of nearby distributions each satisfying strong
independence properties, so there are no correlations for postselection to
amplify.

Comment: 13 pages, FOCS 2006; conference version
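A schematic of the mixture idea, in assumed notation rather than the
paper's: the distribution P over sets E of error locations is written as a
convex combination of product distributions, so postselection can reweight
the mixture but cannot create correlations within any component.

```latex
% Mixture-of-products decomposition (schematic, assumed notation):
P(E) = \sum_j q_j \, P_j(E), \qquad \sum_j q_j = 1,
\qquad
P_j(E) = \prod_{\ell \in E} p_{j,\ell}
         \prod_{\ell \notin E} \bigl(1 - p_{j,\ell}\bigr).
% Each component P_j has independent marginals p_{j,\ell}; conditioning on
% "no detected errors" reweights the q_j but leaves every component a
% product distribution, so there are no correlations to amplify.
```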
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct "hard-core predicates" for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
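As a concrete textbook instance of local decodability (standard material,
not taken from the survey itself): the Hadamard code, whose codeword entries
are the parities <x, a> over GF(2), admits a two-query local decoder that
recovers x_i with probability at least 1 - 2*delta when a delta fraction of
entries is corrupted.

```python
import random

def parity(v):
    return bin(v).count("1") % 2

def hadamard_encode(x_int, n):
    # Codeword entry at index a (for every a in {0,1}^n) is <x, a> over GF(2).
    return [parity(x_int & a) for a in range(2 ** n)]

def local_decode_bit(codeword, n, i, num_trials=25):
    # Two-query local decoder: <x, a> XOR <x, a XOR e_i> = x_i for any a,
    # so each trial reads only two (possibly corrupted) positions.
    votes = 0
    for _ in range(num_trials):
        a = random.randrange(2 ** n)
        votes += codeword[a] ^ codeword[a ^ (1 << i)]
    return int(votes * 2 > num_trials)  # majority vote over trials

# Usage: encode x = 0b1011 (n = 4), corrupt ~5% of entries, decode bit 1.
n, x = 4, 0b1011
cw = hadamard_encode(x, n)
for j in random.sample(range(len(cw)), max(1, len(cw) // 20)):
    cw[j] ^= 1
print(local_decode_bit(cw, n, i=1), (x >> 1) & 1)  # decoded vs. true bit
```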
Probabilistic simulation for the certification of railway vehicles
The present dynamic certification process, which is based on experiments, has essentially been built up from experience. The introduction of simulation techniques into this process would be of great interest. However, an accurate simulation of complex, nonlinear systems is a difficult task, in particular when rare events (for example, unstable behaviour) are considered. After analysing the system and the currently used procedure, this paper proposes a method to achieve, in some particular cases, a simulation-based certification. It focuses on the need for precise and representative excitations (running conditions) and on their variable nature. A probabilistic approach is therefore proposed and illustrated with an example.
First, this paper presents a short description of the vehicle/track system and of the experimental procedure. The proposed simulation process is then described. The requirement to analyse a set of running conditions at least as large as the one tested experimentally is explained. In the third section, a sensitivity analysis to determine the most influential parameters of the system is reported. Finally, the proposed method is summarized and an application is presented.
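As a minimal sketch of the probabilistic approach, the Monte Carlo loop
below samples running conditions, evaluates a safety criterion per run, and
estimates the exceedance probability; the surrogate vehicle response, limit,
and distributions are all assumptions standing in for a real vehicle/track
simulation.

```python
import random

def vehicle_response(speed_kmh, track_irregularity_mm):
    # Stand-in surrogate: a lateral-acceleration proxy that worsens with
    # speed and track irregularity (replace with a multibody simulation).
    noise = random.gauss(0.0, 0.1)
    return 0.002 * speed_kmh * track_irregularity_mm + noise

LIMIT = 1.0      # assumed safety limit on the criterion
N = 100_000      # number of sampled running conditions

exceedances = 0
for _ in range(N):
    speed = random.uniform(140, 200)            # running-condition sample
    irregularity = abs(random.gauss(2.0, 1.0))  # track-quality sample
    if vehicle_response(speed, irregularity) > LIMIT:
        exceedances += 1

p_hat = exceedances / N
# Rough 95% confidence half-width for the estimated exceedance probability.
half_width = 1.96 * (p_hat * (1 - p_hat) / N) ** 0.5
print(f"P(exceed limit) = {p_hat:.4f} +/- {half_width:.4f}")
```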