Quantum random walks with history dependence
We introduce a multi-coin discrete quantum random walk where the amplitude
for a coin flip depends upon previous tosses. Although the corresponding
classical random walk is unbiased, a bias can be introduced into the quantum
walk by varying the history dependence. By mixing the biased random walk with
an unbiased one, the direction of the bias can be reversed leading to a new
quantum version of Parrondo's paradox. Comment: 8 pages, 6 figures, RevTeX
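The paper's multi-coin, history-dependent construction is too involved for a short sketch, but the memoryless single-coin discrete walk it modifies is easy to simulate. Below is a minimal sketch of a Hadamard-coined walk on the line (the function name and conventions are illustrative, not taken from the paper); even this base case shows the characteristically asymmetric quantum distribution for a basis-state initial coin.

```python
import numpy as np

def hadamard_walk(steps, coin0=(1.0, 0.0)):
    """Simulate a single-coin discrete-time quantum walk on the line.

    Amplitudes are stored as amp[x, c] for position x and coin c.
    A history-dependent multi-coin walk, as in the paper, would carry
    one coin register per remembered toss; this is the base case."""
    n = 2 * steps + 1                     # positions -steps..steps
    amp = np.zeros((n, 2), dtype=complex)
    amp[steps, 0], amp[steps, 1] = coin0  # walker starts at the origin
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin flip
    for _ in range(steps):
        amp = amp @ H.T                   # flip the coin at every site
        shifted = np.zeros_like(amp)
        shifted[1:, 0] = amp[:-1, 0]      # coin 0 moves right
        shifted[:-1, 1] = amp[1:, 1]      # coin 1 moves left
        amp = shifted
    return (np.abs(amp) ** 2).sum(axis=1) # position distribution
```

With the initial coin state (1, 0) the distribution after many steps is visibly skewed to one side, unlike the unbiased classical walk; the quantum bias comes from interference, not from an unfair coin.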
Efficiently Extracting Randomness from Imperfect Stochastic Processes
We study the problem of extracting a prescribed number of random bits by
reading the smallest possible number of symbols from non-ideal stochastic
processes. The related interval algorithm proposed by Han and Hoshi has
asymptotically optimal performance; however, it assumes that the distribution
of the input stochastic process is known. The motivation for our work is the
fact that, in practice, sources of randomness have inherent correlations and
are affected by measurement noise, so it is hard to obtain an accurate
estimation of the distribution. This challenge was addressed by the concepts of
seeded and seedless extractors that can handle general random sources with
unknown distributions. However, known seeded and seedless extractors provide
extraction efficiencies that are substantially smaller than Shannon's entropy
limit. Our main contribution is the design of extractors that have a variable
input-length and a fixed output length, are efficient in the consumption of
symbols from the source, are capable of generating random bits from general
stochastic processes and approach the information theoretic upper bound on
efficiency. Comment: 2 columns, 16 pages
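The paper's variable-to-fixed-length extractors are not reproduced here, but the classic von Neumann trick below illustrates the baseline idea of turning biased bits into unbiased ones, and why efficiency is the hard part: for i.i.d. biased bits the output is exactly unbiased, yet the extraction rate falls well short of the Shannon entropy limit the paper targets.

```python
def von_neumann_extract(bits):
    """Classic von Neumann debiasing: read input bits in pairs and emit
    0 for the pair (0, 1), 1 for (1, 0); discard (0, 0) and (1, 1).
    Correct for i.i.d. biased bits, but wasteful: at bias p the output
    rate is p(1-p) bits per input bit, far below the entropy H(p)."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)  # (0,1) -> 0, (1,0) -> 1
    return out
```

For example, the input 0,1,1,0,1,1,0,0 splits into pairs (0,1), (1,0), (1,1), (0,0) and yields the two output bits 0, 1.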
Controlling discrete quantum walks: coins and initial states
In discrete-time coined quantum walks, the coin degrees of freedom offer the
potential for a wider range of controls over the evolution of the walk than are
available in the continuous time quantum walk. This paper explores some of the
possibilities on regular graphs, and also reports periodic behaviour on small
cyclic graphs. Comment: 10 (+epsilon) pages, 10 embedded eps figures, typos corrected,
references added and updated, corresponds to published version (except figs
5-9 optimised for b&w printing here)
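The periodic behaviour on small cycles mentioned above can be probed numerically. The sketch below builds one step of a Hadamard-coined walk on an n-cycle under one common convention (basis ordering and shift direction are assumptions, not taken from the paper) and searches for the smallest power of the step operator that returns to the identity; for the 4-cycle every eigenvalue of the step operator is an 8th root of unity, so the walk revives exactly after 8 steps.

```python
import numpy as np

def cycle_walk_operator(n):
    """One step U = S (I_n x H) of a Hadamard-coined walk on the n-cycle.
    Basis: |x, c> at index 2*x + c; coin 0 shifts x -> x+1 (mod n),
    coin 1 shifts x -> x-1 (mod n)."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    C = np.kron(np.eye(n), H)                    # coin flip at every vertex
    S = np.zeros((2 * n, 2 * n))
    for x in range(n):
        S[2 * ((x + 1) % n), 2 * x] = 1          # |x,0> -> |x+1,0>
        S[2 * ((x - 1) % n) + 1, 2 * x + 1] = 1  # |x,1> -> |x-1,1>
    return S @ C

def find_period(U, kmax=64):
    """Smallest k <= kmax with U^k = I up to a global phase, else None."""
    M = np.eye(U.shape[0], dtype=complex)
    for k in range(1, kmax + 1):
        M = U @ M
        phase = M[0, 0]
        if abs(abs(phase) - 1) < 1e-9 and np.allclose(M, phase * np.eye(U.shape[0])):
            return k
    return None
```

Checking unitarity of U is a useful sanity test before trusting any period the search reports.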
Randomized protocols for asynchronous consensus
The famous Fischer, Lynch, and Paterson impossibility proof shows that it is
impossible to solve the consensus problem in a natural model of an asynchronous
distributed system if even a single process can fail. Since its publication,
two decades of work on fault-tolerant asynchronous consensus algorithms have
evaded this impossibility result by using extended models that provide (a)
randomization, (b) additional timing assumptions, (c) failure detectors, or (d)
stronger synchronization mechanisms than are available in the basic model.
Concentrating on the first of these approaches, we illustrate the history and
structure of randomized asynchronous consensus protocols by giving detailed
descriptions of several such protocols. Comment: 29 pages; survey paper written for PODC 20th anniversary issue of
Distributed Computing
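As a concrete instance of approach (a), here is a compact simulation of a Ben-Or-style randomized binary consensus round structure for crash faults with n > 2f. The scheduling model is deliberately simplified: the asynchronous adversary is stood in for by hiding up to f messages per wait, chosen at random, so this is an illustration of the round structure rather than a faithful asynchronous implementation. All names are ours, not from the survey.

```python
import random

def ben_or_sim(inputs, f, rng, max_rounds=10000):
    """Ben-Or-style randomized binary consensus, crash-fault setting.
    Each round has a report phase and a proposal phase; every process
    sees only n - f of the n messages in each phase."""
    n = len(inputs)
    assert n > 2 * f
    x = list(inputs)                 # current preference of each process
    decided = [None] * n
    see = lambda msgs: rng.sample(msgs, n - f)  # adversary hides f messages
    for _ in range(max_rounds):
        if all(d is not None for d in decided):
            break
        # Phase 1: report preferences; propose v only if it is seen
        # more than n/2 times (so at most one value is ever proposed).
        reports = list(x)
        proposals = []
        for _p in range(n):
            seen = see(reports)
            maj = [v for v in (0, 1) if seen.count(v) > n / 2]
            proposals.append(maj[0] if maj else None)
        # Phase 2: decide on f+1 matching proposals; adopt on at least
        # one; otherwise flip a fair local coin.
        for p in range(n):
            if decided[p] is not None:
                continue
            seen = see(proposals)
            for v in (0, 1):
                if seen.count(v) >= f + 1:
                    decided[p] = v
            backed = [v for v in seen if v is not None]
            x[p] = backed[0] if backed else rng.randrange(2)
    return decided
```

The two thresholds carry the safety argument: a proposal needs a count above n/2, so conflicting proposals cannot coexist, and a decision needs f + 1 matching proposals, so every other process sees at least one of them and adopts the decided value for the next round. The coin flips supply termination with probability 1.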
Low-Cost Learning via Active Data Procurement
We design mechanisms for online procurement of data held by strategic agents
for machine learning tasks. The challenge is to use past data to actively price
future data and give learning guarantees even when an agent's cost for
revealing her data may depend arbitrarily on the data itself. We achieve this
goal by showing how to convert a large class of no-regret algorithms into
online posted-price and learning mechanisms. Our results in a sense parallel
classic sample complexity guarantees, but with the key resource being money
rather than quantity of data: With a budget constraint , we give robust risk
(predictive error) bounds on the order of . Because we use an
active approach, we can often guarantee to do significantly better by
leveraging correlations between costs and data.
Our algorithms and analysis go through a model of no-regret learning with
arriving pairs (cost, data) and a budget constraint of . Our regret bounds
for this model are on the order of and we give lower bounds on the
same order. Comment: Full version of EC 2015 paper. Color recommended for figures but
nonessential. 36 pages, of which 12 appendix
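The conversion described above sits on top of a no-regret learner. The paper's posted-price layer is not reproduced here, but a minimal sketch of one standard no-regret algorithm (Hedge / multiplicative weights over N experts, with the classical regret guarantee of at most sqrt(T ln N / 2) for losses in [0, 1]) shows the kind of building block being converted; the names and the toy loss sequence are ours.

```python
import math

def hedge(loss_rows, eta):
    """Multiplicative-weights (Hedge) over N experts.
    loss_rows: T rows of N losses in [0, 1]. Returns the learner's total
    expected loss; with eta = sqrt(8 ln N / T) the classical bound is
    total <= best expert's total + sqrt(T ln N / 2)."""
    N = len(loss_rows[0])
    w = [1.0] * N
    total = 0.0
    for losses in loss_rows:
        s = sum(w)
        p = [wi / s for wi in w]                          # play normalized weights
        total += sum(pi * li for pi, li in zip(p, losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return total
```

Even on an adversarial alternating loss sequence, the learner's total loss stays within the sqrt(T ln N / 2) regret bound of the best fixed expert in hindsight.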
Directionally-unbiased unitary optical devices in discrete-time quantum walks
The optical beam splitter is a widely-used device in photonics-based quantum information processing. Specifically, linear optical networks demand large numbers of beam splitters for unitary matrix realization. This requirement comes from the beam splitter property that a photon cannot go back out of the input ports, which we call "directionally-biased". Because of this property, higher-dimensional information processing tasks suffer from rapid device resource growth when beam splitters are used in a feed-forward manner. Directionally-unbiased linear-optical devices have been introduced recently to eliminate the directional bias, greatly reducing the number of required beam splitters when implementing complicated tasks. Analysis of some originally directional optical devices and the basic principles of their conversion into directionally-unbiased systems form the basis of this paper. Photonic quantum walk implementations are investigated as a main application of directionally-unbiased systems. Several quantum walk procedures executed on graph networks constructed from directionally-unbiased nodes are discussed. A significant savings in hardware and other required resources, compared with traditional directionally-biased beam-splitter-based optical networks, is demonstrated.
Testing probability distributions underlying aggregated data
In this paper, we analyze and study a hybrid model for testing and learning
probability distributions. Here, in addition to samples, the testing algorithm
is provided with one of two different types of oracles to the unknown
distribution over . More precisely, we define both the dual and
cumulative dual access models, in which the algorithm can both sample from
and respectively, for any ,
- query the probability mass (query access); or
- get the total mass of , i.e. (cumulative access).
These two models, by generalizing the previously studied sampling and query
oracle models, allow us to bypass the strong lower bounds established for a
number of problems in these settings, while capturing several interesting
aspects of these problems -- and providing new insight on the limitations of
the models. Finally, we show that while the testing algorithms can in most
cases be strictly more efficient, some tasks remain hard even with this
additional power.
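The two access models are easiest to see as an oracle interface. The toy class below (our own naming, with a concrete finite distribution standing in for the unknown one) exposes all three operations the abstract describes: sampling, probability-mass queries, and cumulative-mass queries over a distribution on {1, ..., n}.

```python
import bisect
import random

class CumulativeDualOracle:
    """Toy 'cumulative dual' access to a distribution over {1, ..., n}:
    the tester can draw samples, query the mass of a point, and query
    the cumulative mass of a prefix {1, ..., i}."""
    def __init__(self, pmf, rng):
        self.pmf_ = list(pmf)
        self.cdf_ = []
        acc = 0.0
        for p in pmf:
            acc += p
            self.cdf_.append(acc)
        self.rng = rng

    def sample(self):
        # inverse-transform sampling via the stored cumulative masses
        return bisect.bisect_left(self.cdf_, self.rng.random()) + 1

    def pmf(self, i):
        return self.pmf_[i - 1]   # query access: mass of point i

    def cdf(self, i):
        return self.cdf_[i - 1]   # cumulative access: mass of {1..i}
```

Cumulative access is strictly more useful than point queries alone: for instance, a tester can locate any quantile of the distribution with O(log n) cdf queries by binary search, which is one source of the speedups over sample-only models.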