On the multiresolution structure of Internet traffic traces
Internet traffic on a network link can be modeled as a stochastic process.
After detecting and quantifying the properties of this process using
statistical tools, a series of mathematical models is developed, culminating in
one that is able to generate "traffic" exhibiting, as a key feature, the
same difference in behavior across time scales as observed in real
traffic, and that is moreover indistinguishable from real traffic by other
statistical tests as well. Tools inspired from the models are then used to
determine and calibrate the type of activity taking place in each of the time
scales. Surprisingly, the above procedure does not require any detailed
information originating from either the network dynamics, or the decomposition
of the total traffic into its constituent user connections, but rather only the
compliance of these connections to very weak conditions.
Comment: 57 pages, color figures. Figures are of low quality due to space
considerations.
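The difference in behavior across time scales that the abstract highlights is commonly quantified through self-similarity. As a hedged illustration (the aggregated-variance method is a standard diagnostic for traffic traces, not necessarily the specific tool used in this work), here is a sketch that estimates a Hurst exponent from a trace:

```python
import numpy as np

def aggregated_variance_hurst(trace, scales=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst exponent by the aggregated-variance method:
    for self-similar traffic, Var(X^(m)) ~ m^(2H - 2), so the slope of
    log Var versus log m gives 2H - 2."""
    log_m, log_v = [], []
    for m in scales:
        n = len(trace) // m
        agg = trace[: n * m].reshape(n, m).mean(axis=1)  # block means at scale m
        log_m.append(np.log(m))
        log_v.append(np.log(agg.var()))
    slope, _ = np.polyfit(log_m, log_v, 1)
    return 1.0 + slope / 2.0

# With an i.i.d. (memoryless) surrogate trace the estimate should sit
# near H = 0.5; long-range-dependent traffic gives H closer to 1.
rng = np.random.default_rng(0)
h = aggregated_variance_hurst(rng.normal(size=1 << 16))
```

Real packet traces typically yield H well above 0.5, which is one signature of the scale-dependent behavior the models must reproduce.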
Commitment and Oblivious Transfer in the Bounded Storage Model with Errors
The bounded storage model restricts the memory of an adversary in a
cryptographic protocol, rather than restricting its computational power, making
information theoretically secure protocols feasible. We present the first
protocols for commitment and oblivious transfer in the bounded storage model
with errors, i.e., the model where the public random sources available to the
two parties are not exactly the same, but instead are only required to have a
small Hamming distance between themselves. Commitment and oblivious transfer
protocols were known previously only for the error-free variant of the bounded
storage model, which is harder to realize.
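To make the model concrete, here is a toy numerical sketch, not the paper's protocols: a long public random source streams past, each party can store only a few positions (the memory bound), and the two parties' views of the source differ in a small fraction of bits (the bounded Hamming distance). Positions stored by both still mostly agree, which is the resource the protocols build on. All names and parameters here are illustrative.

```python
import random

def bounded_storage_demo(source_len=10_000, stored=500, flip=0.001, seed=1):
    """Toy illustration of the bounded storage model with errors."""
    rng = random.Random(seed)
    source = [rng.getrandbits(1) for _ in range(source_len)]
    # Each party's slightly noisy view of the same public source
    # (small Hamming distance between the two copies).
    alice_view = [b ^ (rng.random() < flip) for b in source]
    bob_view = [b ^ (rng.random() < flip) for b in source]
    # Each party stores bits at `stored` random positions (memory bound).
    a_pos = set(rng.sample(range(source_len), stored))
    b_pos = set(rng.sample(range(source_len), stored))
    common = sorted(a_pos & b_pos)  # later announced over a public channel
    a_bits = [alice_view[i] for i in common]
    b_bits = [bob_view[i] for i in common]
    agree = sum(x == y for x, y in zip(a_bits, b_bits))
    return len(common), agree

n_common, n_agree = bounded_storage_demo()
```

An adversary bound to less memory than the source length cannot store the whole stream, so most of the commonly held positions remain unpredictable to it; the error model only corrupts a small fraction of the shared bits.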
Samplers and Extractors for Unbounded Functions
Blasiok (SODA'18) recently introduced the notion of a subgaussian sampler, defined as an averaging sampler for approximating the mean of functions f from {0,1}^m to the real numbers such that f(U_m) has subgaussian tails, and asked for explicit constructions. In this work, we give the first explicit constructions of subgaussian samplers (and in fact averaging samplers for the broader class of subexponential functions) that match the best known constructions of averaging samplers for [0,1]-bounded functions in the regime of parameters where the approximation error epsilon and failure probability delta are subconstant. Our constructions are established via an extension of the standard notion of randomness extractor (Nisan and Zuckerman, JCSS'96) where the error is measured by an arbitrary divergence rather than total variation distance, and a generalization of Zuckerman's equivalence (Random Struct. Alg.'97) between extractors and samplers. We believe that the framework we develop, and specifically the notion of an extractor for the Kullback-Leibler (KL) divergence, are of independent interest. In particular, KL-extractors are stronger than both standard extractors and subgaussian samplers, but we show that they exist with essentially the same parameters (constructively and non-constructively) as standard extractors.
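The estimation task an averaging sampler solves can be illustrated with fully independent sample points (explicit samplers achieve comparable guarantees with far fewer random bits, which is the whole point of the construction; this sketch only shows the task, not the paper's extractor-based machinery). The test function below is an assumption chosen for illustration: the normalized Hamming-weight deviation of a 64-bit string, which has mean zero and subgaussian tails.

```python
import random

def mean_estimate(f, m_bits, samples, rng):
    """Naive averaging 'sampler': estimate E[f(U_m)] over uniform
    m-bit strings by an empirical mean of independent samples."""
    total = 0.0
    for _ in range(samples):
        x = rng.getrandbits(m_bits)
        total += f(x)
    return total / samples

def f(x):
    # Hamming weight of a uniform 64-bit string has mean 32 and
    # standard deviation 4; normalizing gives a subgaussian variable.
    return (bin(x).count("1") - 32) / 8.0

rng = random.Random(0)
est = mean_estimate(f, 64, 4000, rng)
```

For subgaussian f the empirical mean concentrates exponentially fast around the true mean; the paper's contribution is achieving this kind of guarantee *explicitly* with a derandomized choice of sample points.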
Exact Mean Computation in Dynamic Time Warping Spaces
Dynamic time warping constitutes a major tool for analyzing time series. In
particular, computing a mean series of a given sample of series in dynamic time
warping spaces (by minimizing the Fr\'echet function) is a challenging
computational problem, so far solved by several heuristic and inexact
strategies. We spot some inaccuracies in the literature on exact mean
computation in dynamic time warping spaces. Our contributions comprise an exact
dynamic program computing a mean (useful for benchmarking and evaluating known
heuristics). Based on this dynamic program, we empirically study properties
like uniqueness and length of a mean. Moreover, experimental evaluations reveal
substantial deficits of state-of-the-art heuristics in terms of their output
quality. We also give an exact polynomial-time algorithm for the special case
of binary time series.
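The objects involved can be sketched briefly (this is only the standard DTW dynamic program and the Fréchet objective it induces, not the paper's exact mean-computation algorithm, which searches over candidate mean series):

```python
import numpy as np

def dtw(a, b):
    """Standard O(len(a) * len(b)) dynamic program for the squared
    dynamic time warping cost between two univariate series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def frechet(z, sample):
    """Frechet function: average DTW cost from candidate z to the
    sample. A mean in DTW space is any series minimizing this."""
    return sum(dtw(z, x) for x in sample) / len(sample)

sample = [np.array([0., 1., 2.]), np.array([0., 2., 2.]), np.array([0., 0., 2.])]
f_val = frechet(np.array([0., 1., 2.]), sample)
```

Because the minimization runs over series of arbitrary content (and, in general, length), exact mean computation is far harder than evaluating the Fréchet function at a given candidate, which is what makes an exact dynamic program valuable for benchmarking heuristics.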
Efficiency of Producing Random Unitary Matrices with Quantum Circuits
We study the scaling of the convergence of several statistical properties of
a recently introduced random unitary circuit ensemble towards their limits
given by the circular unitary ensemble (CUE). Our study includes the full
distribution of the absolute square of a matrix element, moments of that
distribution up to order eight, as well as correlators containing up to 16
matrix elements in a given column of the unitary matrices. Our numerical
scaling analysis shows that all of these quantities can be reproduced
efficiently, with a number of random gates that scales at most polynomially
with the number of qubits for a given fixed precision. This suggests that
quantities which require an exponentially large number of gates are of a more
complex nature.
Comment: 18 pages, 10 figures
S-DIMM+ height characterization of day-time seeing using solar granulation
To evaluate site quality and to develop multi-conjugative adaptive optics
systems for future large solar telescopes, characterization of contributions to
seeing from heights up to at least 12 km above the telescope is needed. We
describe a method for evaluating contributions to seeing from different layers
along the line-of-sight to the Sun. The method is based on Shack-Hartmann
wavefront sensor data recorded over a large field-of-view with solar
granulation and uses only measurements of differential image displacements from
individual exposures, such that the measurements are not degraded by residual
tip-tilt errors. We conclude that the proposed method allows good measurements
when Fried's parameter r_0 is larger than about 7.5 cm for the ground layer and
that these measurements should provide valuable information for site selection
and multi-conjugate development for the future European Solar Telescope. A
major limitation is the large field of view presently used for wavefront
sensing, leading to uncomfortably large uncertainties in r_0 at 30 km distance.
Comment: Accepted by AA 22/01/2010 (12 pages, 11 figures)
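Differential-image-motion methods of this family recover Fried's parameter from the variance of relative displacements via a DIMM-type relation of the form sigma^2 = C * lambda^2 * r0^(-5/3) * d^(-1/3), where d is the subaperture separation. The sketch below only inverts that generic scaling; the constant C depends on subaperture geometry and on longitudinal versus transverse motion, and the value used here is a placeholder, not taken from the paper.

```python
def r0_from_differential_motion(sigma2, wavelength, d, C=0.364):
    """Invert a DIMM-type relation
        sigma^2 = C * wavelength**2 * r0**(-5/3) * d**(-1/3)
    for Fried's parameter r0. C (here 0.364) is an illustrative
    placeholder for the geometry-dependent constant."""
    return (C * wavelength ** 2 / (sigma2 * d ** (1.0 / 3.0))) ** (3.0 / 5.0)

# Round-trip check at the abstract's quoted threshold r0 = 7.5 cm,
# with an assumed 500 nm wavelength and 10 cm subaperture separation.
lam, d = 500e-9, 0.1
r0_true = 0.075
sigma2 = 0.364 * lam ** 2 * r0_true ** (-5.0 / 3.0) * d ** (-1.0 / 3.0)
r0_est = r0_from_differential_motion(sigma2, lam, d)
```

Because r0 enters with exponent -5/3, modest errors in the measured displacement variance translate into amplified uncertainty in r0, consistent with the large uncertainties the authors report for the distant layer.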
Low-frequency noise as a source of dephasing of a qubit
With the growing efforts in isolating solid-state qubits from external
decoherence sources, material-inherent sources of noise start to play a
crucial role. One representative example is electron traps in the device
material or substrate. Electrons can tunnel or hop between a charged and an
empty trap, or between a trap and a gate electrode. A single trap typically
produces telegraph noise and can hence be modeled as a bistable fluctuator.
Since the distribution of hopping rates is exponentially broad, many traps
produce flicker-noise with spectrum close to 1/f. Here we develop a theory of
decoherence of a qubit in the environment consisting of two-state fluctuators,
which experience transitions between their states induced by interaction with
a thermal bath. Through their interaction with the qubit, the fluctuators
produce 1/f noise in the qubit's eigenfrequency. We calculate the results of
qubit manipulations - free induction and echo signals - in such an
environment. The main
problem is that in many important cases the relevant random process is both
non-Markovian and non-Gaussian. Consequently the results in general cannot be
represented by pair correlation function of the qubit eigenfrequency
fluctuations. Our calculations are based on analysis of the density matrix of
the qubit using methods developed for stochastic differential equations. The
proper generating functional is then averaged over different fluctuators using
the so-called Holtsmark procedure. The analytical results are compared with
simulations, allowing us to check the accuracy of the averaging procedure and
to evaluate mesoscopic fluctuations. The results help explain some observed
features of the echo decay in Josephson qubits.
Comment: 18 pages, 8 figures, Proc. of NATO/Euresco Conf. "Fundamental
Problems of Mesoscopic Physics: Interactions and Decoherence", Granada,
Spain, Sept.200
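The mechanism the abstract describes, many bistable fluctuators with an exponentially broad distribution of switching rates summing to a roughly 1/f spectrum, is easy to demonstrate numerically. This is a minimal illustration of that mechanism only, not of the paper's non-Markovian, non-Gaussian decoherence analysis; all parameters are illustrative.

```python
import numpy as np

def telegraph(rate, n, dt, rng):
    """Random telegraph signal switching between +1 and -1 with
    Poisson switching rate `rate` (discretized with time step dt)."""
    flips = rng.random(n) < rate * dt
    return np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)

rng = np.random.default_rng(2)
n, dt = 1 << 15, 1.0
# Exponentially broad (log-uniform) distribution of switching rates:
# the sum of the resulting Lorentzian spectra approximates 1/f.
rates = np.exp(rng.uniform(np.log(1e-4), np.log(1e-1), size=200))
signal = sum(telegraph(r, n, dt, rng) for r in rates)

psd = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(n, dt)
# Log-log slope over a mid-frequency band, expected to be near -1.
band = (freqs > 3e-4) & (freqs < 1e-2)
slope = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)[0]
```

A single fluctuator instead gives a Lorentzian (telegraph) spectrum; it is the broad rate distribution that turns the ensemble into flicker noise, which is why isolated strongly coupled fluctuators require the non-Gaussian treatment developed in the paper.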