Recurrence and algorithmic information
In this paper we initiate a somewhat detailed investigation of the
relationships between quantitative recurrence indicators and algorithmic
complexity of orbits in weakly chaotic dynamical systems. We mainly focus on
examples.
Comment: 26 pages, no figures
A two-band approach to nπ phase error corrections with LBTI's PHASECam
PHASECam is the Large Binocular Telescope Interferometer's (LBTI) phase
sensor, a near-infrared camera which is used to measure tip/tilt and phase
variations between the two AO-corrected apertures of the Large Binocular
Telescope (LBT). Tip/tilt and phase sensing are currently performed in the H
(1.65 μm) and K (2.2 μm) bands at 1 kHz, and the K band phase telemetry
is used to send tip/tilt and Optical Path Difference (OPD) corrections to the
system. However, phase variations outside the range [-π, π] are not
sensed, and thus are not fully corrected during closed-loop operation.
PHASECam's phase unwrapping algorithm, which attempts to mitigate this issue,
still occasionally fails in the case of fast, large phase variations. This can
cause a fringe jump, in which case the unwrapped phase will be incorrect by a
wavelength or more. This can currently be manually corrected by the observer,
but this is inefficient. A more reliable and automated solution is desired,
especially as the LBTI begins to commission further modes which require robust,
active phase control, including controlled multi-axial (Fizeau) interferometry
and dual-aperture non-redundant aperture masking interferometry. We present a
multi-wavelength method of fringe jump capture and correction which involves
direct comparison between the K band and currently unused H band phase
telemetry.
Comment: 17 pages, 10 figures
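The multi-wavelength idea above can be illustrated with a small sketch: because a fringe jump shifts the K-band OPD by an integer number of K wavelengths while the H-band telemetry is unaffected, comparing the OPD implied by each band reveals the integer offset. This is only an illustrative reconstruction under stated assumptions, not PHASECam's actual algorithm; the function name, the jump cap, and the example values are hypothetical.

```python
import numpy as np

LAM_K = 2.2e-6   # K-band wavelength (m)
LAM_H = 1.65e-6  # H-band wavelength (m)

def detect_fringe_jump(phi_k_unwrapped, phi_h_unwrapped, max_jumps=3):
    """Illustrative sketch (not the LBTI implementation): compare the
    OPD implied by each band's unwrapped phase and return the integer
    correction, in K-band wavelengths, to apply to the K estimate."""
    opd_k = phi_k_unwrapped / (2 * np.pi) * LAM_K
    opd_h = phi_h_unwrapped / (2 * np.pi) * LAM_H
    # A fringe jump offsets opd_k by an integer multiple of LAM_K,
    # so the H/K discrepancy rounds to that integer.
    n = int(np.round((opd_h - opd_k) / LAM_K))
    return n if abs(n) <= max_jumps else 0

# Example: true OPD is 0.5 µm, but the K phase has jumped by +2π.
phi_h = 2 * np.pi * 0.5e-6 / LAM_H
phi_k = 2 * np.pi * 0.5e-6 / LAM_K + 2 * np.pi
print(detect_fringe_jump(phi_k, phi_h))  # → -1 (subtract one K wavelength)
```

In practice the comparison would have to tolerate differential dispersion and measurement noise between the two bands, which is why the sketch rounds to the nearest integer rather than demanding exact agreement.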
When do finite sample effects significantly affect entropy estimates?
An expression is proposed for determining the error in entropy estimates
caused by finite sample effects. This expression is based on the Ansatz that
the ranked distribution of probabilities tends to follow an empirical Zipf law.
Comment: 10 pages, 2 figures
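The finite-sample effect in question is easy to reproduce numerically: when the sample size is comparable to (or smaller than) the number of symbols, the naive plug-in estimator systematically underestimates the entropy. The sketch below, which is illustrative and not the paper's error expression, draws samples from a Zipf-law ranked distribution and compares the plug-in estimate to the true entropy; all parameter choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def zipf_probs(K, s=1.0):
    """Zipf-law ranked distribution p_r ∝ r^(-s) over K symbols."""
    p = 1.0 / np.arange(1, K + 1) ** s
    return p / p.sum()

def plugin_entropy(counts):
    """Naive (plug-in) entropy estimate in nats from sample counts."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

K, N = 1000, 500  # more symbols than samples: strong finite-size bias
p = zipf_probs(K)
true_H = -np.sum(p * np.log(p))
sample = rng.choice(K, size=N, p=p)
counts = np.bincount(sample, minlength=K)
est_H = plugin_entropy(counts)
print(f"true H = {true_H:.3f} nats, plug-in estimate = {est_H:.3f} nats")
# The plug-in estimator underestimates H whenever N is not much larger than K.
```

The downward bias arises because symbols unseen in the sample contribute nothing to the estimate, which is exactly the regime the Zipf-law Ansatz is meant to characterize.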
Occam's Quantum Strop: Synchronizing and Compressing Classical Cryptic Processes via a Quantum Channel
A stochastic process's statistical complexity stands out as a fundamental
property: the minimum information required to synchronize one process generator
to another. How much information is required, though, when synchronizing over a
quantum channel? Recent work demonstrated that representing causal similarity
as quantum state-indistinguishability provides a quantum advantage. We
generalize this to synchronization and offer a sequence of constructions that
exploit extended causal structures, finding substantial increase of the quantum
advantage. We demonstrate that maximum compression is determined by the
process's cryptic order---a classical, topological property closely allied to
Markov order, itself a measure of historical dependence. We introduce an
efficient algorithm that computes the quantum advantage and close by noting that
the advantage comes at a cost---one trades off prediction for generation
complexity.
Comment: 10 pages, 6 figures;
http://csc.ucdavis.edu/~cmg/compmech/pubs/oqs.ht
A "metric" complexity for weakly chaotic systems
We consider the number of Bowen sets which are necessary to cover a large
measure subset of the phase space. This introduces a complexity indicator
characterizing different kinds of (weakly) chaotic dynamics. Since in many
systems its value is given by a sort of local entropy, this indicator is quite
simple to calculate. We give some examples of its calculation in nontrivial
systems (e.g., interval exchanges and piecewise isometries) and a formula
similar to the Ruelle-Pesin one, relating the complexity indicator to some
initial-condition sensitivity indicators playing the role of positive Lyapunov
exponents.
Comment: 15 pages, no figures. Articl
Source-Channel Diversity for Parallel Channels
We consider transmitting a source across a pair of independent, non-ergodic
channels with random states (e.g., slow fading channels) so as to minimize the
average distortion. The general problem is unsolved. Hence, we focus on
comparing two commonly used source and channel encoding systems which
correspond to exploiting diversity either at the physical layer through
parallel channel coding or at the application layer through multiple
description source coding.
For on-off channel models, source coding diversity offers better performance.
For channels with a continuous range of reception quality, we show the reverse
is true. Specifically, we introduce a new figure of merit called the distortion
exponent which measures how fast the average distortion decays with SNR. For
continuous-state models such as additive white Gaussian noise channels with
multiplicative Rayleigh fading, optimal channel coding diversity at the
physical layer is more efficient than source coding diversity at the
application layer in that the former achieves a better distortion exponent.
Finally, we consider a third decoding architecture: multiple description
encoding with a joint source-channel decoding. We show that this architecture
achieves the same distortion exponent as systems with optimal channel coding
diversity for continuous-state channels, and maintains the advantages of
multiple description systems for on-off channels. Thus, the multiple
description system with joint decoding achieves the best performance, from
among the three architectures considered, on both continuous-state and on-off
channels.
Comment: 48 pages, 14 figures
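The distortion exponent defined in this abstract, the rate at which average distortion decays with SNR, can be estimated from simulated or measured distortion curves by a log-log slope fit. The sketch below is an illustrative assumption, not the paper's derivation; the function name and the synthetic SNR^(-2) example data are hypothetical.

```python
import numpy as np

def distortion_exponent(snr_db, avg_distortion):
    """Estimate the distortion exponent Δ = -d(log D)/d(log SNR)
    by a least-squares fit in the high-SNR regime (illustrative sketch)."""
    log_snr = np.array(snr_db) / 10.0          # log10(SNR) from dB values
    log_d = np.log10(np.array(avg_distortion))
    slope, _ = np.polyfit(log_snr, log_d, 1)   # linear fit in log-log space
    return -slope

# Synthetic example: D(SNR) = SNR^(-2) should give Δ ≈ 2.
snr_db = np.array([20.0, 30.0, 40.0, 50.0])
D = 10.0 ** (-2 * snr_db / 10)
print(distortion_exponent(snr_db, D))  # → 2.0
```

A system with a larger distortion exponent dominates at high SNR regardless of constant factors, which is why the abstract uses Δ rather than absolute distortion to rank the three architectures.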