Mutual Information and Minimum Mean-square Error in Gaussian Channels
This paper deals with arbitrarily distributed finite-power input signals
observed through an additive Gaussian noise channel. It shows a new formula
that connects the input-output mutual information and the minimum mean-square
error (MMSE) achievable by optimal estimation of the input given the output.
That is, the derivative of the mutual information (nats) with respect to the
signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input
statistics. This relationship holds for both scalar and vector signals, as well
as for discrete-time and continuous-time noncausal MMSE estimation. This
fundamental information-theoretic result has an unexpected consequence in
continuous-time nonlinear estimation: For any input signal with finite power,
the causal filtering MMSE achieved at SNR is equal to the average value of the
noncausal smoothing MMSE achieved with a channel whose signal-to-noise ratio is
chosen uniformly distributed between 0 and SNR.
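The derivative relationship is easy to check numerically in the one case with simple closed forms, a standard Gaussian input, for which I(snr) = (1/2) ln(1 + snr) nats and MMSE(snr) = 1/(1 + snr). The sketch below (an illustration, not code from the paper) compares a numerical derivative of the mutual information against half the MMSE:

```python
import numpy as np

# I-MMSE check for a standard Gaussian input, where closed forms are known:
#   I(snr)    = 0.5 * ln(1 + snr)   (nats)
#   MMSE(snr) = 1 / (1 + snr)

def mutual_info(snr):
    return 0.5 * np.log1p(snr)

def mmse(snr):
    return 1.0 / (1.0 + snr)

snr = 2.0
h = 1e-6
# Central-difference derivative of I with respect to snr
dI = (mutual_info(snr + h) - mutual_info(snr - h)) / (2 * h)
print(dI, 0.5 * mmse(snr))  # the two values should agree closely
```

The same numerical agreement holds for any finite-power input distribution; the Gaussian case is used here only because both sides are available in closed form.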
Asynchronous CDMA Systems with Random Spreading - Part II: Design Criteria
Totally asynchronous code-division multiple-access (CDMA) systems are
addressed. In Part I, the fundamental limits of asynchronous CDMA systems are
analyzed in terms of spectral efficiency and SINR at the output of the optimum
linear detector. The focus of Part II is the design of low-complexity
implementations of linear multiuser detectors in systems with many users that
admit a multistage representation, e.g., reduced-rank multistage Wiener
filters, polynomial expansion detectors, and weighted linear parallel
interference cancellers. The effects of excess bandwidth, chip-pulse shaping, and time delay
distribution on CDMA with suboptimum linear receiver structures are
investigated. Recursive expressions for universal weight design are given. The
performance in terms of SINR is derived in the large-system limit and the
performance improvement over synchronous systems is quantified. The
considerations distinguish between two ways of forming discrete-time
statistics: chip-matched filtering and oversampling.
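As a sketch of the multistage idea referenced above, the toy example below builds a polynomial expansion detector for a simplified synchronous random-spreading system (the paper treats the harder asynchronous case with chip-pulse shaping). It fits a low-order polynomial p(lam) ~ 1/(lam + sigma^2) on the eigenvalues of the user correlation matrix, so that p(R) applied to the matched-filter statistics approximates the LMMSE detector using only matrix-vector products. All parameter values are illustrative, and the least-squares weight fit is a simple stand-in for the universal large-system weights derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, sigma, m = 64, 16, 0.5, 4   # chips, users, noise std, polynomial order

S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)  # random spreading
b = rng.choice([-1.0, 1.0], size=K)                    # BPSK symbols
y = S @ b + sigma * rng.standard_normal(N)             # received chip vector
z = S.T @ y                                            # matched-filter statistics
R = S.T @ S                                            # user correlation matrix

# Fit a degree-m polynomial p(lam) ~ 1/(lam + sigma^2) over the eigenvalues
# of R, so that p(R) @ z approximates the LMMSE output (R + sigma^2 I)^(-1) z.
lam = np.linalg.eigvalsh(R)
V = np.vander(lam, m + 1, increasing=True)
w, *_ = np.linalg.lstsq(V, 1.0 / (lam + sigma**2), rcond=None)

# Multistage evaluation: only matrix-vector products with R are needed.
est = np.zeros(K)
p = z.copy()
for wi in w:
    est += wi * p
    p = R @ p

b_hat = np.sign(est)
print(int(np.sum(b_hat != b)))  # bit errors for this realization
```

The point of the multistage structure is complexity: each stage costs one matrix-vector product, avoiding the matrix inversion of the exact LMMSE detector.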
An Introduction to Data Analysis in Asteroseismology
A practical guide is presented to some of the main data analysis concepts and
techniques employed today in the asteroseismic study of stars
exhibiting solar-like oscillations. The subjects of digital signal processing
and spectral analysis are introduced first. These concern the acquisition of
continuous physical signals to be subsequently digitally analyzed. A number of
specific concepts and techniques relevant to asteroseismology are then
presented as we follow the typical workflow of the data analysis process,
namely, the extraction of global asteroseismic parameters and individual mode
parameters (also known as peak-bagging) from the oscillation spectrum.
Comment: Lecture presented at the IVth Azores International Advanced School in
Space Sciences on "Asteroseismology and Exoplanets: Listening to the Stars
and Searching for New Worlds" (arXiv:1709.00645), which took place in Horta,
Azores Islands, Portugal in July 201
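A minimal version of the spectral-analysis step in that workflow, recovering the frequency of a single oscillation mode from an evenly sampled light curve via the FFT power spectrum, can be sketched as follows. (Real asteroseismic time series are gapped and typically analyzed with a Lomb-Scargle periodogram; the cadence, duration, and mode frequency below are illustrative.)

```python
import numpy as np

dt = 60.0                          # cadence in seconds (assumed)
t = np.arange(0, 5 * 86400, dt)    # 5 days of observations
nu_true = 3.0e-3                   # mode frequency in Hz (~3 mHz, solar-like)
rng = np.random.default_rng(1)
flux = np.sin(2 * np.pi * nu_true * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum of the real-valued light curve
power = np.abs(np.fft.rfft(flux)) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)
nu_peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
print(nu_peak)
```

The frequency resolution is set by the total observation span (here 1/(5 days), about 2.3 microhertz), which is why long, uninterrupted time series are so valuable in asteroseismology.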
Synaptic Transmission: An Information-Theoretic Perspective
Here we analyze synaptic transmission from an information-theoretic
perspective. We derive closed-form expressions for the lower-bounds on the
capacity of a simple model of a cortical synapse under two explicit coding
paradigms. Under the "signal estimation" paradigm, we assume the signal to be
encoded in the mean firing rate of a Poisson neuron. The performance of an
optimal linear estimator of the signal then provides a lower bound on the
capacity for signal estimation. Under the "signal detection" paradigm, the
presence or absence of the signal has to be detected. Performance of the
optimal spike detector allows us to compute a lower bound on the capacity for
signal detection. We find that single synapses (for empirically measured
parameter values) transmit information poorly, but significant improvement can
be achieved with a small amount of redundancy.
Comment: 7 pages, 4 figures, NIPS97 proceedings: neuroscience. Originally
submitted to the neuro-sys archive, which was never publicly announced (was
9809002
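The signal-estimation bound can be illustrated with a toy simulation: a signal modulates the rate of a Poisson spike count, the best scalar linear estimator is computed from the counts, and the resulting mean-square error yields a lower bound of (1/2) log2(var/MSE) bits per sample. The base rate and modulation gain below are illustrative and are not the paper's empirically measured synaptic parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100_000                     # number of time bins
lam0, gain = 5.0, 2.0           # base rate and modulation gain (assumed)
s = rng.standard_normal(T)      # unit-variance Gaussian signal
# Signal encoded in the mean rate of a Poisson count (rate clipped at 0)
counts = rng.poisson(np.maximum(lam0 + gain * s, 0.0))

# Optimal scalar linear estimator: s_hat = a * (counts - mean(counts))
x = counts - counts.mean()
a = np.dot(s, x) / np.dot(x, x)
mse = np.mean((s - a * x) ** 2)

# Lower bound on information per bin for a Gaussian source with var(s) = 1
bits_per_bin = 0.5 * np.log2(1.0 / mse)
print(mse, bits_per_bin)
```

As in the paper's conclusion, the per-sample information is modest because Poisson variability (here, count variance of order lam0) dominates the modulation depth; averaging several redundant channels would shrink the effective noise.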
Oversampling Increases the Pre-Log of Noncoherent Rayleigh Fading Channels
We analyze the capacity of a continuous-time, time-selective, Rayleigh
block-fading channel in the high signal-to-noise ratio (SNR) regime. The fading
process is assumed to be stationary within each block and to change independently
from block to block; furthermore, its realizations are not known a priori to
the transmitter and the receiver (noncoherent setting). A common approach to
analyzing the capacity of this channel is to assume that the receiver performs
matched filtering followed by sampling at symbol rate (symbol matched
filtering). This yields a discrete-time channel in which each transmitted
symbol corresponds to one output sample. Liang & Veeravalli (2004) showed that
the capacity of this discrete-time channel grows logarithmically with the SNR,
with a capacity pre-log equal to 1 - Q/N. Here, N is the number of
symbols transmitted within one fading block, and Q is the rank of the
covariance matrix of the discrete-time channel gains within each fading block.
In this paper, we show that symbol matched filtering is not a
capacity-achieving strategy for the underlying continuous-time channel.
Specifically, we analyze the capacity pre-log of the discrete-time channel
obtained by oversampling the continuous-time channel output, i.e., by sampling
it faster than at symbol rate. We prove that by oversampling by a factor two
one gets a capacity pre-log that is at least as large as 1 - 1/N. Since the
capacity pre-log corresponding to symbol-rate sampling is 1 - Q/N, our result
indeed implies that symbol matched filtering is not capacity achieving at high
SNR.
Comment: To appear in the IEEE Transactions on Information Theory
LISA Data Analysis using MCMC methods
The Laser Interferometer Space Antenna (LISA) is expected to simultaneously
detect many thousands of low frequency gravitational wave signals. This
presents a data analysis challenge that is very different from the one
encountered in ground based gravitational wave astronomy. LISA data analysis
requires the identification of individual signals from a data stream containing
an unknown number of overlapping signals. Because of the signal overlaps, a
global fit to all the signals has to be performed in order to avoid biasing the
solution. However, performing such a global fit requires the exploration of an
enormous parameter space with a dimension upwards of 50,000. Markov Chain Monte
Carlo (MCMC) methods offer a very promising solution to the LISA data analysis
problem. MCMC algorithms are able to efficiently explore large parameter
spaces, simultaneously providing parameter estimates, error analyses and even
model selection. Here we present the first application of MCMC methods to
simulated LISA data and demonstrate the great potential of the MCMC approach.
Our implementation uses a generalized F-statistic to evaluate the likelihoods,
and simulated annealing to speed convergence of the Markov chains. As a final
step we super-cool the chains to extract maximum likelihood estimates, and
estimates of the Bayes factors for competing models. We find that the MCMC
approach is able to correctly identify the number of signals present, extract
the source parameters, and return error estimates consistent with Fisher
information matrix predictions.
Comment: 14 pages, 7 figures
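The Metropolis-plus-annealing strategy described above can be sketched on a one-parameter toy problem: recovering a single sinusoid's frequency from noisy data with a Metropolis-Hastings chain whose acceptance rule is tempered by a decreasing "heat". The likelihood, proposal mixture, and annealing schedule below are illustrative stand-ins for the paper's F-statistic likelihood and 50,000-dimensional search.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 1000)
f_true = 1.7
data = np.sin(2 * np.pi * f_true * t) + 0.5 * rng.standard_normal(t.size)

def log_like(f):
    # Gaussian log-likelihood with known noise variance 0.5**2
    resid = data - np.sin(2 * np.pi * f * t)
    return -0.5 * np.sum(resid**2) / 0.25

f = 1.0                                   # starting guess
best_f, best_ll = f, log_like(f)
for step in range(8000):
    heat = max(1.0, 10.0 * (1.0 - step / 6000))   # annealing temperature
    if rng.random() < 0.1:
        prop = rng.uniform(1.0, 2.5)              # occasional global jump
    else:
        prop = f + 0.01 * rng.standard_normal()   # local random-walk move
    # Tempered Metropolis acceptance rule
    if (log_like(prop) - log_like(f)) / heat > np.log(rng.random()):
        f = prop
    if log_like(f) > best_ll:                     # track the MAP-style best
        best_f, best_ll = f, log_like(f)
print(best_f)
```

The hot early phase lets the chain cross the flat, multimodal likelihood between frequency peaks; cooling toward heat = 1 concentrates the samples on the dominant mode, mirroring the "super-cooling" step used to extract maximum likelihood estimates.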
Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models
Two new linear reconstruction techniques are developed to improve the resolution of images collected by ground-based telescopes imaging through atmospheric turbulence. The classical approach involves the application of constrained least squares (CLS) to the deconvolution from wavefront sensing (DWFS) technique. The new algorithm incorporates blur and noise models to select the appropriate regularization constant automatically. In all cases examined, the Newton-Raphson minimization converged to a solution in less than 10 iterations. The non-iterative Bayesian approach involves the development of a new vector Wiener filter which is optimal with respect to mean square error (MSE) for a non-stationary object class degraded by atmospheric turbulence and measurement noise. This research involves the first extension of the Wiener filter to account properly for shot noise and an unknown, random optical transfer function (OTF). The vector Wiener filter provides superior reconstructions when compared to the traditional scalar Wiener filter for a non-stationary object class. In addition, the new filter can provide a superresolution capability when the object's Fourier domain statistics are known for spatial frequencies beyond the OTF cutoff. A generalized performance and robustness study of the vector Wiener filter showed that MSE performance is fundamentally limited by object signal-to-noise ratio (SNR) and correlation between object pixels
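The baseline against which the vector filter is compared, the traditional scalar Wiener filter, can be sketched in one dimension: restore a blurred, noisy signal in the Fourier domain with the filter H* S / (|H|^2 S + N). The Gaussian blur kernel and noise level below are illustrative rather than the dissertation's telescope OTF model, and the true object spectrum is used as an oracle prior for simplicity (in practice it would come from an ensemble model).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
x = np.zeros(n)
x[100:140] = 1.0                              # simple box "object"

h = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 3.0**2)
h /= h.sum()                                  # normalized Gaussian blur kernel
H = np.fft.fft(np.fft.ifftshift(h))           # transfer function of the blur

sigma = 0.05
y = np.fft.ifft(np.fft.fft(x) * H).real + sigma * rng.standard_normal(n)

# Scalar Wiener filter: W = H* S / (|H|^2 S + N), per frequency bin.
S = np.abs(np.fft.fft(x)) ** 2 / n            # object power spectrum (oracle)
Npow = sigma**2                               # noise power per frequency
W = np.conj(H) * S / (np.abs(H) ** 2 * S + Npow)
x_hat = np.fft.ifft(np.fft.fft(y) * W).real

mse_blur = np.mean((y - x) ** 2)
mse_wien = np.mean((x_hat - x) ** 2)
print(mse_blur, mse_wien)
```

Because this scalar filter applies one gain per spatial frequency, it implicitly assumes a stationary object class; the dissertation's vector formulation drops that assumption, which is what enables its gains on non-stationary objects and its superresolution behavior beyond the OTF cutoff.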