Causal Dependence Tree Approximations of Joint Distributions for Multiple Random Processes
We investigate approximating joint distributions of random processes with
causal dependence tree distributions. Such distributions are particularly
useful in providing parsimonious representation when there exists causal
dynamics among processes. By extending the results by Chow and Liu on
dependence tree approximations, we show that the best causal dependence tree
approximation is the one which maximizes the sum of directed informations on
its edges, where best is defined in terms of minimizing the KL-divergence
between the original and the approximate distribution. Moreover, we describe a
low-complexity algorithm to efficiently pick this approximate distribution.
Comment: 9 pages, 15 figures
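The objective above can be made concrete with a toy sketch. Assuming pairwise directed-information estimates are already in hand (the matrix `di` below is hypothetical), the best causal dependence tree is the maximum-weight directed spanning tree (arborescence); this brute-force search merely states the objective, while the paper's low-complexity algorithm solves it efficiently.

```python
from itertools import product

def best_causal_tree(di, root=0):
    """Brute-force search for the maximum-weight directed spanning tree
    (arborescence) rooted at `root`, where di[i][j] estimates the directed
    information from process i to process j.  Exponential in the number of
    processes: it illustrates the objective only; the paper's algorithm
    finds the same tree with low complexity."""
    n = len(di)
    others = [v for v in range(n) if v != root]
    best_edges, best_w = None, float("-inf")
    for parents in product(range(n), repeat=len(others)):
        edges = [(p, c) for p, c in zip(parents, others) if p != c]
        if len(edges) != len(others):   # a self-loop was dropped
            continue
        par = {c: p for p, c in edges}
        # Every node must reach the root by following parents, cycle-free.
        def reaches_root(v):
            seen = set()
            while v != root:
                if v in seen:
                    return False
                seen.add(v)
                v = par[v]
            return True
        if not all(reaches_root(v) for v in others):
            continue
        w = sum(di[p][c] for p, c in edges)
        if w > best_w:
            best_w, best_edges = w, sorted(edges)
    return best_w, best_edges

# Hypothetical directed-information estimates favouring the chain 0 -> 1 -> 2.
di = [[0.0, 2.0, 0.1],
      [0.0, 0.0, 1.5],
      [0.3, 0.2, 0.0]]
weight, edges = best_causal_tree(di)
```

Here the chain 0 → 1 → 2 wins with total edge weight 3.5, since its two directed-information edges dominate every alternative parent assignment.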
Artificial Intelligence Approach to the Determination of Physical Properties of Eclipsing Binaries. I. The EBAI Project
Achieving maximum scientific results from the overwhelming volume of
astronomical data to be acquired over the next few decades will demand novel,
fully automatic methods of data analysis. Artificial intelligence approaches
hold great promise in contributing to this goal. Here we apply neural network
learning technology to the specific domain of eclipsing binary (EB) stars, of
which only some hundreds have been rigorously analyzed, but whose numbers will
reach millions in a decade. Well-analyzed EBs are a prime source of
astrophysical information whose growth rate is at present limited by the need
for human interaction with each EB data-set, principally in determining a
starting solution for subsequent rigorous analysis. We describe the artificial
neural network (ANN) approach which is able to surmount this human bottleneck
and permit EB-based astrophysical information to keep pace with future data
rates. The ANN, following training on a sample of 33,235 model light curves,
outputs a set of approximate model parameters (T2/T1, (R1+R2)/a, e sin(omega),
e cos(omega), and sin i) for each input light curve data-set. The whole sample
is processed in just a few seconds on a single 2GHz CPU. The obtained
parameters can then be readily passed to sophisticated modeling engines. We
also describe a novel method polyfit for pre-processing observational light
curves before inputting their data to the ANN and present the results and
analysis of testing the approach on synthetic data and on real data including
fifty binaries from the Catalog and Atlas of Eclipsing Binaries (CALEB)
database and 2580 light curves from OGLE survey data. [abridged]
Comment: 52 pages, accepted to Ap
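The regression step, a network mapping a sampled light curve to a handful of physical parameters, can be sketched in a few lines. This toy version uses synthetic random data and assumed layer sizes; it shows the shape of the approach, not the trained EBAI network or its physics.

```python
import numpy as np

# Minimal one-hidden-layer regression network in the spirit of the EBAI
# setup: map a light curve sampled at n_in phase points to n_out model
# parameters.  Data and sizes here are illustrative assumptions only.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out, n_samples = 32, 16, 5, 200
X = rng.normal(size=(n_samples, n_in))
Y = np.tanh(X @ (rng.normal(size=(n_in, n_out)) / np.sqrt(n_in)))

W1 = rng.normal(size=(n_in, n_hidden)) * 0.1; b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_out)) * 0.1; b2 = np.zeros(n_out)

losses, lr = [], 0.05
for _ in range(300):
    H = np.tanh(X @ W1 + b1)          # hidden layer
    P = H @ W2 + b2                   # predicted parameter vector
    err = P - Y
    losses.append(float(np.mean(err ** 2)))
    gP = 2 * err / n_samples          # backprop of the squared-error loss
    gH = gP @ W2.T * (1 - H ** 2)     # tanh derivative
    W2 -= lr * (H.T @ gP); b2 -= lr * gP.sum(axis=0)
    W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(axis=0)
```

Once trained, such a network evaluates in microseconds per light curve, which is what allows the whole sample to be processed in seconds before handing parameters to a rigorous modeling engine.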
Improving the tolerance of stochastic LDPC decoders to overclocking-induced timing errors: a tutorial and design example
Channel codes such as Low-Density Parity-Check (LDPC) codes may be employed in wireless communication schemes for correcting transmission errors. This tolerance to channel-induced transmission errors allows the communication schemes to achieve higher transmission throughputs, at the cost of requiring additional processing for performing LDPC decoding. However, this LDPC decoding operation is associated with a potentially inadequate processing throughput, which may constrain the attainable transmission throughput. In order to increase the processing throughput, the clock period may be reduced, albeit at the cost of potentially introducing timing errors. Previous research efforts have offered only a paucity of solutions for mitigating the occurrence of timing errors in channel decoders, relying on additional circuitry for detecting and correcting these overclocking-induced timing errors. Against this background, in this paper we demonstrate that stochastic LDPC decoders (LDPC-SDs) are capable of exploiting their inherent error correction capability for correcting not only transmission errors, but also timing errors, even without the requirement for additional circuitry. Motivated by this, we provide the first comprehensive tutorial on LDPC-SDs. We also propose a novel design flow for timing-error-tolerant LDPC decoders. We use this to develop a timing error model for LDPC-SDs and investigate how their overall error correction performance is affected by overclocking. Drawing upon our findings, we propose a modified LDPC-SD having an improved timing error tolerance. In a particular practical scenario, this modification eliminates the approximately 1 dB performance degradation suffered by an overclocked LDPC-SD without our modification, enabling the processing throughput to be increased by up to 69.4% without compromising the error correction capability or processing energy consumption of the LDPC-SD.
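Stochastic decoders represent each probability as a Bernoulli bitstream, so probability arithmetic reduces to simple logic gates; in particular, the parity-check update becomes a bitwise XOR of the incoming streams. A minimal sketch of that principle (this is the stochastic-computing idea underlying LDPC-SDs, not the paper's decoder or its timing-error model):

```python
import random

random.seed(1)

def bitstream(p, n):
    """Encode probability p as a length-n Bernoulli bitstream."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def check_node(a, b):
    """Stochastic check-node update: the bitwise XOR of two independent
    streams carries P(A xor B = 1) = pa*(1-pb) + pb*(1-pa)."""
    return [x ^ y for x, y in zip(a, b)]

n = 100_000
pa, pb = 0.8, 0.3
out = check_node(bitstream(pa, n), bitstream(pb, n))
estimate = sum(out) / n                    # stream estimate of the parity
exact = pa * (1 - pb) + pb * (1 - pa)      # 0.62
```

Because the information lives in long-run bit frequencies rather than in any single clock cycle, an occasional timing-induced bit error perturbs the stream only slightly, which is the intuition behind the inherent timing-error tolerance exploited in the paper.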
Hardness results for decoding the surface code with Pauli noise
Real quantum computers will be subject to complicated, qubit-dependent noise,
instead of simple noise such as depolarizing noise with the same strength for
all qubits. We can do quantum error correction more effectively if our decoding
algorithms take into account this prior information about the specific noise
present. This motivates us to consider the complexity of surface code decoding
where the input to the decoding problem is not only the syndrome-measurement
results, but also a noise model in the form of probabilities of single-qubit
Pauli errors for every qubit.
In this setting, we show that Maximum Probability Error (MPE) decoding and
Maximum Likelihood (ML) decoding for the surface code are NP-hard and #P-hard,
respectively. We reduce directly from SAT for MPE decoding, and from #SAT for
ML decoding, by showing how to transform a boolean formula into a
qubit-dependent Pauli noise model and set of syndromes that encode the
satisfiability properties of the formula. We also give hardness of
approximation results for MPE and ML decoding. These are worst-case hardness
results that do not contradict the empirical fact that many efficient surface
code decoders are correct in the average case (i.e., for most sets of syndromes
and for most reasonable noise models). These hardness results are nicely
analogous to the known hardness results for MPE and ML decoding of arbitrary
stabilizer codes with independent noise.
Comment: 37 pages, 18 figures (26 pages, 12 figures in main text)
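To fix intuition for the decoding problem whose hardness is shown above, here is a brute-force MPE decoder for a toy 3-qubit repetition code with bit-flip errors only (a deliberately tiny stand-in, not the surface code): given per-qubit flip probabilities and a syndrome, it returns the most probable consistent error.

```python
from itertools import product
from math import prod

def mpe_decode(syndrome, p, checks):
    """Brute-force Maximum Probability Error (MPE) decoding for a tiny
    binary code: among all bit-flip patterns consistent with `syndrome`,
    return the most probable one under qubit-dependent flip
    probabilities p.  Exponential-time by construction -- it illustrates
    the problem proved NP-hard above, not an efficient decoder."""
    best_e, best_pr = None, -1.0
    for e in product((0, 1), repeat=len(p)):
        if all(sum(e[q] for q in c) % 2 == s for c, s in zip(checks, syndrome)):
            pr = prod(p[q] if e[q] else 1 - p[q] for q in range(len(p)))
            if pr > best_pr:
                best_pr, best_e = pr, e
    return best_e, best_pr

# 3-qubit repetition code: checks Z1Z2 and Z2Z3 flag X (bit-flip) errors.
checks = [(0, 1), (1, 2)]
# With a noisy first qubit, syndrome (1, 0) is best explained by one flip...
e1, _ = mpe_decode((1, 0), [0.3, 0.01, 0.01], checks)
# ...but with noisy qubits 1 and 2, the same syndrome favours a double flip.
e2, _ = mpe_decode((1, 0), [0.01, 0.4, 0.4], checks)
```

The two calls show why the qubit-dependent noise model matters: the same syndrome decodes to different errors once the decoder is told which qubits are noisy, exactly the prior information the hardness reductions exploit.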
Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications
Communication systems to date primarily aim at reliably communicating bit
sequences. Such an approach provides efficient engineering designs that are
agnostic to the meanings of the messages or to the goal that the message
exchange aims to achieve. Next generation systems, however, can be potentially
enriched by folding message semantics and goals of communication into their
design. Further, these systems can be made cognizant of the context in which
communication exchange takes place, providing avenues for novel design
insights. This tutorial summarizes the efforts to date, starting from its early
adaptations, semantic-aware and task-oriented communications, covering the
foundations, algorithms and potential implementations. The focus is on
approaches that utilize information theory to provide the foundations, as well
as the significant role of learning in semantics and task-aware communications.
Comment: 28 pages, 14 figures
Simultaneous ranging and self-positioning in unsynchronized wireless acoustic sensor networks
Automatic ranging and self-positioning is a very
desirable property in wireless acoustic sensor networks (WASNs)
where nodes have at least one microphone and one loudspeaker.
However, due to environmental noise, interference and multipath
effects, audio-based ranging is a challenging task. This paper
presents a fast ranging and positioning strategy that makes use
of the correlation properties of pseudo-noise (PN) sequences for
estimating simultaneously relative time-of-arrivals (TOAs) from
multiple acoustic nodes. To this end, a proper test signal design
adapted to the acoustic node transducers is proposed. In addition,
a novel self-interference reduction method and a peak matching
algorithm are introduced, allowing for increased accuracy in
indoor environments. Synchronization issues are removed by
following a BeepBeep strategy, providing range estimates that
are converted to absolute node positions by means of multidimensional
scaling (MDS). The proposed approach is evaluated both
with simulated and real experiments under different acoustical
conditions. The results using a real network of smartphones and
laptops confirm the validity of the proposed approach, reaching
an average ranging accuracy below 1 centimeter.
This work was supported by the Spanish Ministry of Economy and Competitiveness under Grants TIN2015-70202-P and TEC2012-37945-C02-02, and by FEDER funds.
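The core TOA step, locating a pseudo-noise probe in a noisy recording via its cross-correlation peak, can be sketched as follows. This is a minimal illustration of the correlation principle with assumed signal lengths and noise level; the paper's full system adds the test-signal design, self-interference reduction, peak matching, BeepBeep synchronization removal, and MDS positioning.

```python
import numpy as np

rng = np.random.default_rng(3)

def pn_sequence(n, seed=0):
    """Pseudo-noise probe: a random +/-1 sequence.  A deployed system
    would use an m-sequence or Gold code for sharper autocorrelation."""
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=n)

def estimate_toa(received, probe):
    """Estimate the sample delay of `probe` within `received` from the
    peak of their cross-correlation."""
    return int(np.argmax(np.correlate(received, probe, mode="valid")))

probe = pn_sequence(511)
true_delay = 1234                                 # hypothetical delay, samples
rx = np.zeros(4096)
rx[true_delay:true_delay + probe.size] += probe   # delayed arrival
rx += 0.5 * rng.normal(size=rx.size)              # ambient noise
delay = estimate_toa(rx, probe)
```

The sharp autocorrelation of the PN sequence makes the peak stand far above the noise floor, which is why several nodes can be ranged simultaneously with distinct sequences.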
Smoothing of binary codes, uniform distributions, and applications
The action of a noise operator on a code transforms it into a distribution on
the respective space. Some common examples from information theory include
Bernoulli noise acting on a code in the Hamming space and Gaussian noise acting
on a lattice in the Euclidean space. We aim to characterize the cases when the
output distribution is close to the uniform distribution on the space, as
measured by the Rényi divergence of a given order. A version of
this question is known as the channel resolvability problem in information
theory, and it has implications for security guarantees in wiretap channels,
error correction, discrepancy, worst-to-average case complexity reductions, and
many other problems.
Our work quantifies the requirements for asymptotic uniformity (perfect
smoothing) and identifies explicit code families that achieve it under the
action of the Bernoulli and ball noise operators on the code. We derive
expressions for the minimum rate of codes required to attain asymptotically
perfect smoothing. In proving our results, we leverage recent results from
harmonic analysis of functions on the Hamming space. Another result pertains to
the use of code families in Wyner's transmission scheme on the binary wiretap
channel. We identify explicit families that guarantee strong secrecy when
applied in this scheme, showing that nested Reed-Muller codes can transmit
messages reliably and securely over a binary symmetric wiretap channel with a
positive rate. Finally, we establish a connection between smoothing and error
correction in the binary symmetric channel.
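The smoothing effect of the noise operator can be seen on a tiny example: take the length-3 repetition code, push a uniform codeword through a Bernoulli(p) bit-flip channel, and measure how far the output distribution sits from uniform. For concreteness this sketch uses the order-1 Rényi divergence (the KL divergence) as a stand-in for the general Rényi order studied in the paper.

```python
from itertools import product
from math import log2

def smoothed(code, n, p):
    """Distribution on {0,1}^n obtained by drawing a uniform codeword and
    passing it through a Bernoulli(p) bit-flip channel (the noise
    operator acting on the code)."""
    q = {}
    for y in product((0, 1), repeat=n):
        mass = sum(p ** sum(a != b for a, b in zip(y, c))
                   * (1 - p) ** sum(a == b for a, b in zip(y, c))
                   for c in code)
        q[y] = mass / len(code)
    return q

def kl_from_uniform(q, n):
    """Renyi divergence of order 1 (KL divergence) from the uniform law."""
    return sum(m * log2(m * 2.0 ** n) for m in q.values() if m > 0)

code = [(0, 0, 0), (1, 1, 1)]   # length-3 repetition code
d_weak = kl_from_uniform(smoothed(code, 3, 0.1), 3)    # mild noise
d_strong = kl_from_uniform(smoothed(code, 3, 0.4), 3)  # output near uniform
```

Stronger noise drives the divergence toward zero, and the paper's question is how small the code rate can be while still achieving this asymptotically perfect smoothing.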