An Ensemble Kushner-Stratonovich (EnKS) Nonlinear Filter: Additive Particle Updates in Non-Iterative and Iterative Forms
Despite the cheap availability of computing resources enabling faster Monte
Carlo simulations, the potential benefits of particle filtering in revealing
accurate statistical information on the imprecisely known model parameters or
modeling errors of dynamical systems, based on limited time series data, have
not been quite realized. A major numerical bottleneck precipitating this
under-performance, especially for higher dimensional systems, is the
progressive particle impoverishment owing to weight collapse and the aim of the
current work is to address this problem by replacing weight-based updates
with additive ones. Thus, in the context of nonlinear filtering problems, a
novel additive particle update scheme, in its non-iterative and iterative
forms, is proposed based on manipulations of the innovation integral in the
governing Kushner-Stratonovich equation. Numerical evidence for the
identification of nonlinear and large dimensional dynamical systems indicates a
substantively superior performance of the non-iterative version of the EnKS
vis-à-vis most existing filters. The costlier iterative version, though
conceptually elegant, mostly appears to effect a marginal improvement in the
reconstruction accuracy over its non-iterative counterpart. Prominent in the
reported numerical comparisons are variants of the Ensemble Kalman Filter
(EnKF) that also use additive updates, albeit with many inherent limitations of
a Kalman filter.
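For readers unfamiliar with the additive-update idea, the stochastic EnKF analysis step (the baseline family the abstract compares against, not the proposed EnKS scheme itself) can be sketched as follows; all dimensions and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, H, y, R):
    """One stochastic EnKF analysis step: every particle receives an
    additive correction K @ (perturbed_obs - H @ particle) rather than
    a multiplicative weight update, which sidesteps weight collapse."""
    n, N = ensemble.shape                                # state dim, ensemble size
    X = ensemble - ensemble.mean(axis=1, keepdims=True)  # anomalies
    Pf = X @ X.T / (N - 1)                               # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)       # Kalman gain
    obs_pert = rng.multivariate_normal(y, R, size=N).T   # perturbed observations
    return ensemble + K @ (obs_pert - H @ ensemble)

# Toy usage: scalar state observed directly; prior ensemble centred at 5,
# observation at 0 with variance 0.5.
prior = rng.normal(5.0, 1.0, size=(1, 500))
posterior = enkf_update(prior, np.eye(1), np.array([0.0]), 0.5 * np.eye(1))
```

Because every particle is moved rather than reweighted, the effective ensemble size stays constant; this is the property an additive update aims to retain while, in the EnKS case, dropping the Kalman filter's Gaussian assumptions.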
On Gaussian Channels with Feedback under Expected Power Constraints and with Non-Vanishing Error Probabilities
In this paper, we consider single- and multi-user Gaussian channels with
feedback under expected power constraints and with non-vanishing error
probabilities. In the first of two contributions, we study asymptotic
expansions for the additive white Gaussian noise (AWGN) channel with feedback
under the average error probability formalism. By drawing ideas from Gallager
and Nakiboğlu's work for the direct part and the meta-converse for the
converse part, we establish the $\varepsilon$-capacity and show that it depends
on $\varepsilon$ in general, and so the strong converse fails to hold.
Furthermore, we provide bounds on the second-order term in the asymptotic
expansion. We show that for any positive integer $L$, the second-order term is
bounded between a term proportional to $\sqrt{n \ln_{(L)}(n)}$ (where
$\ln_{(L)}(\cdot)$ is the $L$-fold nested logarithm function) and a term
proportional to $\sqrt{n \ln(n)}$, where $n$ is the blocklength. The lower bound on the
second-order term shows that feedback does provide an improvement in the
maximal achievable rate over the case where no feedback is available. In our
second contribution, we establish the -capacity region for the
AWGN multiple access channel (MAC) with feedback under the expected power
constraint by combining ideas from hypothesis testing, information spectrum
analysis, Ozarow's coding scheme, and power control.
Comment: Submitted to the IEEE Transactions on Information Theory (revised in
September
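As background on how feedback is exploited over the AWGN channel, here is a minimal sketch of the classical Schalkwijk–Kailath scheme (a standard textbook construction, not the Gallager–Nakiboğlu or Ozarow schemes used in the paper); the power P, unit noise variance, and Gaussian prior on the message point theta are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def schalkwijk_kailath(theta, n, P, sigma2=1.0):
    """Transmit a message point theta over n uses of an AWGN channel with
    noiseless feedback: at each step the transmitter sends the receiver's
    current estimation error, scaled to power P, and the receiver applies
    an MMSE update."""
    est = 0.0
    var = 1.0  # prior variance, assuming theta ~ N(0, 1)
    for _ in range(n):
        x = np.sqrt(P / var) * (theta - est)        # scale error to power P
        y = x + rng.normal(0.0, np.sqrt(sigma2))    # AWGN channel use
        est += np.sqrt(P * var) / (P + sigma2) * y  # MMSE correction
        var *= sigma2 / (P + sigma2)                # error variance shrinks geometrically
    return est, var

est, var = schalkwijk_kailath(0.5, n=30, P=1.0)
```

The estimation-error variance contracts by a factor of sigma2/(P + sigma2) per channel use, which is the basic mechanism behind the rate improvements that feedback provides.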
Optimal Shrinkage of Singular Values Under Random Data Contamination
A low rank matrix X has been contaminated by uniformly distributed noise,
missing values, outliers and corrupt entries. Reconstruction of X from the
singular values and singular vectors of the contaminated matrix Y is a key
problem in machine learning, computer vision and data science. In this paper we
show that common contamination models (including arbitrary combinations of
uniform noise, missing values, outliers and corrupt entries) can be described
efficiently using a single framework. We develop an asymptotically optimal
algorithm that estimates X by manipulation of the singular values of Y, which
applies to any of the contamination models considered. Finally, we find an
explicit signal-to-noise cutoff, below which estimation of X from the singular
value decomposition of Y must fail, in a well-defined sense.
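To make the singular-value manipulation concrete, here is a minimal sketch using generic soft-thresholding of the singular values of Y; the threshold tau and the toy contamination model are illustrative assumptions, and this is not the paper's asymptotically optimal shrinker:

```python
import numpy as np

def shrink_reconstruct(Y, tau):
    """Estimate a low-rank X from a contaminated matrix Y by
    soft-thresholding the singular values of Y: components whose
    singular value falls below tau are discarded, the rest shrunk."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
u = rng.normal(size=(40, 1))
v = rng.normal(size=(1, 40))
X = 10.0 * (u / np.linalg.norm(u)) @ (v / np.linalg.norm(v))  # rank-1 signal
Y = X + 0.1 * rng.normal(size=(40, 40))                       # additive contamination
X_hat = shrink_reconstruct(Y, tau=1.5)
```

The point of the paper is precisely that the best shrinkage rule depends on the contamination model; the soft-threshold above is only the simplest member of that family.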
Speech Recognition Front End Without Information Loss
Speech representation and modelling in high-dimensional spaces of acoustic
waveforms, or a linear transformation thereof, is investigated with the aim of
improving the robustness of automatic speech recognition to additive noise. The
motivation behind this approach is twofold: (i) the information in acoustic
waveforms that is usually removed in the process of extracting low-dimensional
features might aid robust recognition by virtue of structured redundancy
analogous to channel coding, (ii) linear feature domains allow for exact noise
adaptation, as opposed to representations that involve non-linear processing
which makes noise adaptation challenging. Thus, we develop a generative
framework for phoneme modelling in high-dimensional linear feature domains, and
use it in phoneme classification and recognition tasks. Results show that
classification and recognition in this framework perform better than analogous
PLP and MFCC classifiers below 18 dB SNR. A combination of the high-dimensional
and MFCC features at the likelihood level performs uniformly better than either
of the individual representations across all noise levels.
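The "exact noise adaptation" property of linear feature domains comes down to a closure property of Gaussians under addition. A minimal sketch (function and variable names are illustrative, not from the paper's framework):

```python
import numpy as np

def gaussian_logpdf(y, mu, Sigma):
    """Log-density of y under N(mu, Sigma)."""
    d = mu.shape[0]
    diff = y - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(Sigma, diff))

def noise_adapted_logpdf(y, mu, Sigma, noise_cov):
    """Exact additive-noise adaptation in a linear feature domain:
    if clean features x ~ N(mu, Sigma) and independent noise
    n ~ N(0, noise_cov), then y = x + n ~ N(mu, Sigma + noise_cov)."""
    return gaussian_logpdf(y, mu, Sigma + noise_cov)
```

After a non-linear transform such as the log compression inside MFCC extraction, no such closed-form update of the class model exists, which is the motivation for staying in a linear domain.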
A tutorial on cue combination and Signal Detection Theory: Using changes in sensitivity to evaluate how observers integrate sensory information
Many sensory inputs contain multiple sources of information ("cues"), such as two sounds of different frequencies, or a voice heard in unison with moving lips. Often, each cue provides a separate estimate of the same physical attribute, such as the size or location of an object. An ideal observer can exploit such redundant sensory information to improve the accuracy of their perceptual judgments. For example, if each cue is modeled as an independent, Gaussian, random variable, then combining N cues should provide up to a √N improvement in detection/discrimination sensitivity. Alternatively, a less efficient observer may base their decision on only a subset of the available information, and so gain little or no benefit from having access to multiple sources of information. Here we use Signal Detection Theory to formulate and compare various models of cue combination, many of which are commonly used to explain empirical data. We alert the reader to the key assumptions inherent in each model, and provide formulas for deriving quantitative predictions. Code is also provided for simulating each model, allowing expected levels of measurement error to be quantified. Based on these results, it is shown that predicted sensitivity often differs surprisingly little between qualitatively distinct models of combination. This means that sensitivity alone is not sufficient for understanding decision efficiency, and the implications of this are discussed.
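The √N prediction for the ideal observer can be checked with a short Monte Carlo simulation (an illustrative sketch, not the code accompanying the tutorial); cues are assumed independent with equal reliability:

```python
import numpy as np

rng = np.random.default_rng(0)

def dprime(signal, noise):
    """Empirical sensitivity index d' from samples of a decision variable."""
    pooled_sd = np.sqrt(0.5 * (signal.var(ddof=1) + noise.var(ddof=1)))
    return (signal.mean() - noise.mean()) / pooled_sd

N, d_single, trials = 4, 1.0, 200_000
# Each cue: decision variable ~ N(d_single, 1) on signal trials, N(0, 1) on noise trials.
cues_sig = rng.normal(d_single, 1.0, size=(trials, N))
cues_noise = rng.normal(0.0, 1.0, size=(trials, N))
# The ideal observer averages (equivalently, sums) the independent cues,
# so the mean separation stays at d_single while the spread shrinks by 1/sqrt(N).
d_combined = dprime(cues_sig.mean(axis=1), cues_noise.mean(axis=1))
# Should come out near sqrt(N) * d_single = 2.0
```

Suboptimal models (e.g. choosing a single cue, or taking the max) can be compared by replacing the `mean(axis=1)` decision rule and re-measuring d'.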
Linear signal recovery from b-bit-quantized linear measurements: precise analysis of the trade-off between bit depth and number of measurements
We consider the problem of recovering a high-dimensional structured signal
from independent Gaussian linear measurements, each of which is quantized to
$b$ bits. Our interest is in linear approaches to signal recovery, where "linear"
means that non-linearity resulting from quantization is ignored and the
observations are treated as if they arose from a linear measurement model.
Specifically, the focus is on a generalization of a method for one-bit
observations due to Plan and Vershynin [\emph{IEEE~Trans. Inform. Theory,
\textbf{59} (2013), 482--494}]. At the heart of the present paper is a precise
characterization of the optimal trade-off between the number of measurements
and the bit depth $b$ per measurement given a total budget of $B$ bits when the goal is to minimize the $\ell_2$-error in estimating the
signal. It turns out that the choice $b = 1$ is optimal for estimating the unit
vector (direction) corresponding to the signal for any level of additive
Gaussian noise before quantization as well as for a specific model of
adversarial noise, while the choice $b = 2$ is optimal for estimating the
direction and the norm (scale) of the signal. Moreover, Lloyd-Max quantization
is shown to be an optimal quantization scheme w.r.t. the $\ell_2$-estimation error.
Our analysis is corroborated by numerical experiments showing nearly perfect
agreement with our theoretical predictions. The paper is complemented by an
empirical comparison to alternative methods of signal recovery taking the
non-linearity resulting from quantization into account. The results of that
comparison point to a regime change depending on the noise level: in a
low-noise setting, linear signal recovery falls short of more sophisticated
competitors while being competitive in moderate- and high-noise settings.
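For intuition, the linear approach can be sketched in the one-bit case: quantized observations are simply correlated with the measurement vectors, as if the model were linear. This is a sketch in the spirit of the Plan–Vershynin estimator discussed above; dimensions, seed, and normalization are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_recovery(A, y_q):
    """Linear estimator: correlate the (quantized, hence non-linear)
    observations with the measurement vectors, ignoring the quantizer."""
    m = A.shape[0]
    return A.T @ y_q / m

n, m = 50, 20_000
x = rng.normal(size=n)
x /= np.linalg.norm(x)              # unit-norm signal (direction only)
A = rng.normal(size=(m, n))         # independent Gaussian measurements
y = np.sign(A @ x)                  # one-bit quantization (b = 1)
x_hat = linear_recovery(A, y)
x_hat /= np.linalg.norm(x_hat)      # estimate of the direction
```

For standard Gaussian measurements, E[a·sign(a@x)] is proportional to x, so the normalized estimate aligns with the true direction as m grows; the norm (scale) of x is lost at b = 1, consistent with the b = 2 result quoted above.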
On color image quality assessment using natural image statistics
Color distortion can introduce significant damage in perceived visual
quality; however, most existing reduced-reference quality measures are
designed for grayscale images. In this paper, we consider a basic extension of
well-known image-statistics-based quality assessment measures to color images.
In order to evaluate the impact of color information on the measures'
efficiency, two color spaces are investigated: RGB and CIELAB. Results of an
extensive evaluation using the TID2013 benchmark demonstrate that significant
improvement can be achieved for a great number of distortion types when the
CIELAB color representation is used.
SADA: A General Framework to Support Robust Causation Discovery with Theoretical Guarantee
Causation discovery without manipulation is considered a crucial problem in a
variety of applications. The state-of-the-art solutions are applicable only
when large numbers of samples are available or the problem domain is
sufficiently small. Motivated by the observations of the local sparsity
properties on causal structures, we propose a general Split-and-Merge
framework, named SADA, to enhance the scalability of a wide class of causation
discovery algorithms. In SADA, the variables are partitioned into subsets by
finding a causal cut on the sparse causal structure over the variables. By
running mainstream causation discovery algorithms as basic causal solvers on
the subproblems, the complete causal structure can be reconstructed by combining
the partial results. SADA benefits from the recursive division technique, since
each small subproblem generates more accurate results under the same number of
samples. We theoretically prove that SADA always reduces the scale of the
problem without sacrificing accuracy, under the conditions of local causal
sparsity and reliable conditional independence tests. We also present a
sufficient condition for accuracy enhancement by SADA, even when the conditional
independence tests are vulnerable. Extensive experiments on both simulated and
real-world datasets verify the improvements in scalability and accuracy achieved
by applying SADA together with existing causation discovery algorithms.
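The split-and-merge recursion described above can be sketched as follows. Note that `solver` and `find_causal_cut` are hypothetical placeholders (any mainstream causation-discovery routine applied to a small subset, and any procedure returning a causal cut that separates the remaining variables), not APIs from the paper:

```python
def sada(variables, solver, find_causal_cut, max_size):
    """Recursive split-and-merge skeleton in the spirit of SADA:
    split via a causal cut until subproblems fit max_size, solve each
    with a basic causal solver, then merge the partial structures."""
    variables = frozenset(variables)
    if len(variables) <= max_size:
        return solver(variables)
    left, right, cut = find_causal_cut(variables)
    # Each side keeps the cut variables so edges through the cut survive.
    return (sada(left | cut, solver, find_causal_cut, max_size)
            | sada(right | cut, solver, find_causal_cut, max_size))
```

The benefit claimed in the abstract comes from the base case: the solver is only ever invoked on small variable subsets, where a fixed sample size yields more reliable conditional independence tests.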
Robust Cosparse Greedy Signal Reconstruction for Compressive Sensing with Multiplicative and Additive Noise
Greedy algorithms are popular in compressive sensing for their high
computational efficiency. However, the performance of current greedy algorithms
can be seriously degraded by noise (both multiplicative and additive). A robust
version of a cosparse greedy algorithm (greedy analysis pursuit) is presented
in this paper. Compared with previous methods, the proposed robust greedy
analysis pursuit algorithm is based on an optimization model that allows both
multiplicative and additive noise in the data-fitting constraint. In addition,
a new stopping criterion is derived. The new
algorithm is applied to compressive sensing of ECG signals. Numerical
experiments based on real-life ECG signals demonstrate the performance
improvement of the proposed greedy algorithms.
Comment: This paper has been withdrawn by the author due to errors (a missing
\gamma in the 2nd term on the right) in equations 10, 11, and 12, which lead
to further errors in Algorithm
Do scenario context and question order influence WTP? The application of a model of uncertain WTP to the CV of the morbidity impacts of air pollution
This paper presents a general framework for modelling responses to contingent valuation questions when respondents are uncertain about their "true" WTP. These
models are applied to a contingent valuation data set recording respondents' WTP to avoid episodes of ill-health. Two issues are addressed. First, whether the order in
which a respondent answers a series of contingent valuation questions influences their WTP. Second, whether the context in which a good is valued (in this case the information the respondent is given concerning the cause of the ill-health episode or the policy put into place to avoid that episode) influences respondents' WTP.
The results of the modelling exercise suggest that neither valuation order nor the context included in the valuation scenario impacts on the precision with which respondents answer the contingent valuation questions. Similarly, valuation order does not appear to influence the mean or median WTP of the sample. In contrast, it is shown that in some cases, the inclusion of richer context significantly shifts both the mean and median WTP of the sample. This result has implications for the application of benefits transfer. Since WTP to avoid an episode of ill-health cannot be shown to be independent of the context in which it is valued, the validity of transferring benefits of avoided ill-health episodes from one policy context to another must be called into question.