Sky maps without anisotropies in the cosmic microwave background are a better fit to WMAP's uncalibrated time ordered data than the official sky maps
The purpose of this reanalysis of the WMAP uncalibrated time-ordered data
(TOD) was twofold. The first was to reassess the reliability of the detection
of the anisotropies in the official WMAP sky maps of the cosmic microwave
background (CMB). The second was to assess the performance of a proposed
criterion in avoiding systematic error in detecting a signal of interest. The
criterion was implemented by testing the null hypothesis that the uncalibrated
TOD was consistent with no anisotropies when WMAP's hourly calibration
parameters were allowed to vary. It was shown independently for all 20 WMAP
channels that sky maps with no anisotropies were a better fit to the TOD than
those from the official analysis. The recently launched Planck satellite should
help sort out this perplexing result. Comment: 11 pages with 1 figure and 2 tables. Extensively rewritten to explain
the research better.
Information-capacity description of spin-chain correlations
Information capacities achievable in the multi-parallel-use scenarios are
employed to characterize the quantum correlations in unmodulated spin chains.
By studying the qubit amplitude damping channel, we calculate the quantum
capacity , the entanglement assisted capacity , and the classical
capacity of a spin chain with ferromagnetic Heisenberg interactions.Comment: 12 pages, 3 figures; typos corrected (to appear in PRA
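For the qubit amplitude damping channel with damping probability γ ≤ 1/2 (the degradable regime), the quantum capacity has a known closed form, Q(γ) = max_p [h2((1−γ)p) − h2(γp)], with h2 the binary entropy. A minimal numerical sketch (the grid-based maximization is an implementation choice, not from the paper):

```python
import numpy as np

def h2(x):
    """Binary entropy in bits; h2(0) = h2(1) = 0."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def q_amplitude_damping(gamma, grid=10001):
    """Quantum capacity of the qubit amplitude damping channel:
    Q = max_p [h2((1-gamma)*p) - h2(gamma*p)] for gamma <= 1/2
    (where the channel is degradable); Q = 0 for gamma >= 1/2."""
    if gamma >= 0.5:
        return 0.0
    p = np.linspace(0.0, 1.0, grid)
    return float(np.max(h2((1 - gamma) * p) - h2(gamma * p)))

print(round(q_amplitude_damping(0.0), 3))   # noiseless qubit: 1 qubit/use
print(round(q_amplitude_damping(0.25), 3))
print(round(q_amplitude_damping(0.5), 3))   # capacity vanishes at gamma = 1/2
```

The capacity decreases monotonically in γ, interpolating between a perfect quantum channel at γ = 0 and a useless one at γ = 1/2.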
Emergence of Zipf's Law in the Evolution of Communication
Zipf's law seems to be ubiquitous in human languages and appears to be a
universal property of complex communicating systems. Following the early
proposal made by Zipf concerning the presence of a tension between the efforts
of speaker and hearer in a communication system, we introduce evolution by
means of a variational approach to the problem based on Kullback's Minimum
Discrimination of Information Principle. Therefore, using a formalism fully
embedded in the framework of information theory, we demonstrate that Zipf's law
is the only expected outcome of an evolving, communicative system under a
rigorous definition of the communicative tension described by Zipf. Comment: 7 pages, 2 figures
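Zipf's law predicts word frequency f(r) ∝ r^(−1), i.e. a slope near −1 on a log-log rank-frequency plot. A minimal numerical check (the vocabulary size and sample size are arbitrary toy choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
V = 1000                                   # toy vocabulary size (assumption)
ranks = np.arange(1, V + 1)
p = 1.0 / ranks
p /= p.sum()                               # Zipfian word probabilities ~ 1/rank
corpus = rng.choice(ranks, size=200_000, p=p)

# Empirical rank-frequency curve and its log-log slope over the head.
_, counts = np.unique(corpus, return_counts=True)
freq = np.sort(counts)[::-1]
r = np.arange(1, len(freq) + 1)
slope, _ = np.polyfit(np.log(r[:100]), np.log(freq[:100]), 1)
print(f"fitted rank-frequency exponent: {slope:.2f}")   # close to -1
```

Sampling from an exactly Zipfian distribution and recovering the exponent by regression is the standard empirical test applied to real corpora.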
A Static Optimality Transformation with Applications to Planar Point Location
Over the last decade, there have been several data structures that, given a
planar subdivision and a probability distribution over the plane, provide a way
for answering point location queries that is fine-tuned for the distribution.
All these methods suffer from the requirement that the query distribution must
be known in advance.
We present a new data structure for point location queries in planar
triangulations. Our structure is asymptotically as fast as the optimal
structures, but it requires no prior information about the queries. This is a
2D analogue of the jump from Knuth's optimum binary search trees (discovered in
1971) to the splay trees of Sleator and Tarjan in 1985. While the former need
to know the query distribution, the latter are statically optimal. This means
that we can adapt to the query sequence and achieve the same asymptotic
performance as an optimum static structure, without needing any additional
information. Comment: 13 pages, 1 figure; a preliminary version appeared at SoCG 201
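The benchmark behind static optimality is the entropy of the query distribution: an optimum static structure answers queries in O(H(p) + 1) expected comparisons, while a distribution-oblivious balanced structure pays Θ(log n) regardless. A toy comparison of the two costs (the skewed distribution is an arbitrary illustration, not from the paper):

```python
import numpy as np

# A skewed query distribution over n regions (toy numbers, not from the paper).
n = 1024
p = 1.0 / np.arange(1, n + 1) ** 2
p /= p.sum()

entropy = float(-(p * np.log2(p)).sum())   # ~ cost of an optimal static structure
balanced = float(np.log2(n))               # cost of a distribution-oblivious one

print(f"entropy bound (distribution-aware): {entropy:.2f} comparisons/query")
print(f"balanced structure (oblivious):     {balanced:.2f} comparisons/query")
```

On heavily skewed distributions the entropy bound is a small constant while log n keeps growing, which is the gap a statically optimal, distribution-blind structure closes.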
Augmentation of nucleon-nucleus scattering by information entropy
Quantum information entropy is calculated from the nucleon-nucleus forward
scattering amplitudes. Using a representative set of nuclei, from He to Pb,
and a range of projectile energies in the GeV regime, we establish a linear
dependence of the quantum information entropy on the logarithm of the nuclear
mass and the logarithm of the projectile energy. Comment: 5 pages, 2 figures
Dissipation: The phase-space perspective
We show, through a refinement of the work theorem, that the average
dissipation, upon perturbing a Hamiltonian system arbitrarily far out of
equilibrium in a transition between two canonical equilibrium states, is
exactly given by $\langle W_{\rm diss} \rangle = kT\, D(\rho \| \tilde{\rho})$,
where $\rho$ and $\tilde{\rho}$ are the phase space densities of the system
measured at the same intermediate but otherwise arbitrary point in time, for
the forward and backward process, respectively. Here $D(\rho \| \tilde{\rho})$
is the relative entropy of $\rho$ versus $\tilde{\rho}$. This result also
implies general inequalities, which are significantly more accurate than the
second law and include, as a special case, the celebrated Landauer principle
on the dissipation involved in irreversible computations. Comment: 4 pages, 3 figures (4 figure files), accepted for PR
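On a discretized phase space, the relation ⟨W_diss⟩ = kT D(ρ‖ρ̃) reduces to an ordinary relative entropy between two probability vectors, which is nonnegative and vanishes only when forward and backward densities coincide. A sketch (the shifted-Gaussian toy densities are assumptions, not from the paper):

```python
import numpy as np

def relative_entropy(rho, rho_tilde):
    """D(rho || rho_tilde) = sum_i rho_i * ln(rho_i / rho_tilde_i), in nats,
    for strictly positive discretized phase-space densities."""
    rho, rho_tilde = np.asarray(rho, float), np.asarray(rho_tilde, float)
    return float(np.sum(rho * np.log(rho / rho_tilde)))

# Toy forward/backward densities: two shifted Gaussians on a grid (assumption).
x = np.linspace(-3, 3, 601)
fwd = np.exp(-0.5 * x ** 2)
bwd = np.exp(-0.5 * (x - 0.5) ** 2)
fwd /= fwd.sum()
bwd /= bwd.sum()

kT = 1.0                                    # work measured in units of kT
w_diss = kT * relative_entropy(fwd, bwd)    # <W_diss> = kT * D(rho || rho~)
print(f"<W_diss>/kT = {w_diss:.4f}")        # nonnegative, as the second law requires
print(relative_entropy(fwd, fwd))           # prints 0.0: no mismatch, no dissipation
```

The nonnegativity of relative entropy is exactly what makes the result a sharpening of the second law.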
Linear Complexity Lossy Compressor for Binary Redundant Memoryless Sources
A lossy compression algorithm for binary redundant memoryless sources is
presented. The proposed scheme is based on sparse graph codes. By introducing a
nonlinear function, redundant memoryless sequences can be compressed. We
propose a linear complexity compressor based on the extended belief
propagation, into which an inertia term is heuristically introduced, and show
that it has near-optimal performance for moderate block lengths. Comment: 4 pages, 1 figure
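The "near-optimal" benchmark for such a compressor is Shannon's rate-distortion function for a Bernoulli(p) source under Hamming distortion, R(D) = h2(p) − h2(D). A short reference implementation (the particular p and D values below are illustrative):

```python
import math

def h2(x):
    """Binary entropy in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rate_distortion_binary(p, d):
    """Shannon's R(D) = h2(p) - h2(D) for a Bernoulli(p) source under
    Hamming distortion, valid for 0 <= D < min(p, 1-p); zero beyond."""
    if d >= min(p, 1 - p):
        return 0.0
    return h2(p) - h2(d)

# Bits per source bit needed at distortion D = 0.02 for a p = 0.1 source:
print(round(rate_distortion_binary(0.1, 0.02), 4))
```

Any practical scheme's measured (rate, distortion) pair can be compared against this curve to quantify how close to optimal it operates.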
Parallel Recursive State Compression for Free
This paper focuses on reducing memory usage in enumerative model checking,
while maintaining the multi-core scalability obtained in earlier work. We
present a tree-based multi-core compression method, which works by leveraging
sharing among sub-vectors of state vectors.
An algorithmic analysis of both worst-case and optimal compression ratios
shows the potential to compress even large states to a small constant on
average (8 bytes). Our experiments demonstrate that this holds up in practice:
the median compression ratio of 279 measured experiments is within 17% of the
optimum for tree compression, and five times better than the median compression
ratio of SPIN's COLLAPSE compression.
Our algorithms are implemented in the LTSmin tool, and our experiments show
that for model checking, multi-core tree compression pays its own way: it comes
virtually without overhead compared to the fastest hash table-based methods. Comment: 19 pages
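The sharing of sub-vectors can be sketched as hash-consing: fold each state vector into a binary tree of pairs and intern every node in one shared table, so sub-trees common to many states are stored once. A minimal sketch (an illustration only, not the LTSmin implementation, which adds incremental updates and concurrent tables):

```python
# Shared node table: (level, left, right) -> small integer node id.
table = {}

def intern(level, left, right):
    key = (level, left, right)
    if key not in table:
        table[key] = len(table)
    return table[key]

def compress(vector):
    """Fold a state vector of length 2^k into a single root node id."""
    nodes, level = list(vector), 0
    while len(nodes) > 1:
        nodes = [intern(level, nodes[i], nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
        level += 1
    return nodes[0]

# Two 8-slot states that differ in one slot share most of their tree:
a = compress((0, 1, 2, 3, 4, 5, 6, 7))
b = compress((0, 1, 2, 3, 4, 5, 6, 9))
print(len(table))   # prints 10: 10 stored nodes instead of 2 * 7 = 14
```

Each extra similar state adds only the nodes along the path where it differs, which is how large state sets compress toward a small constant per state.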
Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data
Subsequence clustering of multivariate time series is a useful tool for
discovering repeated patterns in temporal data. Once these patterns have been
discovered, seemingly complicated datasets can be interpreted as a temporal
sequence of only a small number of states, or clusters. For example, raw sensor
data from a fitness-tracking application can be expressed as a timeline of a
select few actions (e.g., walking, sitting, running). However, discovering
these patterns is challenging because it requires simultaneous segmentation and
clustering of the time series. Furthermore, interpreting the resulting clusters
is difficult, especially when the data is high-dimensional. Here we propose a
new method of model-based clustering, which we call Toeplitz Inverse
Covariance-based Clustering (TICC). Each cluster in the TICC method is defined
by a correlation network, or Markov random field (MRF), characterizing the
interdependencies between different observations in a typical subsequence of
that cluster. Based on this graphical representation, TICC simultaneously
segments and clusters the time series data. We solve the TICC problem through
alternating minimization, using a variation of the expectation maximization
(EM) algorithm. We derive closed-form solutions to efficiently solve the two
resulting subproblems in a scalable way, through dynamic programming and the
alternating direction method of multipliers (ADMM), respectively. We validate
our approach by comparing TICC to several state-of-the-art baselines in a
series of synthetic experiments, and we then demonstrate on an automobile
sensor dataset how TICC can be used to learn interpretable clusters in
real-world scenarios. Comment: This revised version fixes two small typos in the published version.
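The cluster-assignment half of such an alternating scheme can be sketched as scoring each window under each cluster's zero-mean Gaussian with inverse covariance Θ (log-likelihood up to a constant: ½ log det Θ − ½ xᵀΘx). The sketch below omits TICC's temporal-switching penalty, which the paper handles with dynamic programming; the toy matrices and windows are assumptions, not from the paper:

```python
import numpy as np

def assign_windows(windows, thetas):
    """Assign each subsequence window x to the cluster whose zero-mean
    Gaussian, parameterized by inverse covariance Theta, scores highest:
    log-likelihood ~ 0.5 * logdet(Theta) - 0.5 * x' Theta x.
    (No temporal-consistency penalty in this sketch.)"""
    labels = []
    for x in windows:
        scores = [0.5 * np.linalg.slogdet(th)[1] - 0.5 * (x @ th @ x)
                  for th in thetas]
        labels.append(int(np.argmax(scores)))
    return labels

# Toy clusters (assumptions): low-variance vs high-variance behavior in 4 dims.
theta_tight = 4.00 * np.eye(4)    # inverse covariance -> variance 0.25
theta_wide = 0.25 * np.eye(4)     # inverse covariance -> variance 4
windows = [np.full(4, 0.3), np.full(4, 2.0)]
print(assign_windows(windows, [theta_tight, theta_wide]))   # -> [0, 1]
```

Alternating this assignment step with re-estimation of each cluster's inverse covariance is the EM-style loop the abstract describes; TICC's contribution is doing both steps with Toeplitz-structured MRFs, ADMM, and a switching penalty.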