Robust Kalman tracking and smoothing with propagating and non-propagating outliers
A common situation in filtering where classical Kalman filtering does not
perform particularly well is tracking in the presence of propagating outliers.
This calls for robustness understood in a distributional sense, i.e., we
enlarge the distributional assumptions made in the ideal model by suitable
neighborhoods. Based on optimality results for distributionally robust Kalman
filtering from Ruckdeschel [01, 10], we propose new robust recursive filters and
smoothers designed for this purpose as well as specialized versions for
non-propagating outliers. We apply these procedures in the context of a GPS
problem arising in the car industry. To better understand these filters, we
study their behavior on stylized outlier patterns (for which they are not
designed) and compare them to other approaches for the tracking problem.
Finally, in a simulation study we discuss the efficiency of our procedures in
comparison to competitors.
Comment: 27 pages, 12 figures, 2 tables
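To make the clipping idea behind such distributionally robust filters concrete, here is a minimal Python sketch of one predict/correct step in which the classical Kalman correction is clipped in norm at a bound b. The function name, the radial clipping rule, and the unchanged covariance recursion are illustrative assumptions in the spirit of rLS-type filters, not the paper's exact procedures.

    import numpy as np

    def robust_kalman_step(x, P, y, F, Q, H, R, b):
        """One predict/correct step of a linear Kalman filter in which the
        correction term is clipped in norm at b, so a single outlying
        observation cannot move the state estimate arbitrarily far.

        Illustrative sketch only: the clipping rule and the tuning of b
        loosely follow the rLS-type construction, and the covariance
        recursion is kept classical for simplicity.
        """
        # Prediction step under the ideal linear state-space model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q

        # Classical Kalman gain and innovation.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        innovation = y - H @ x_pred

        # Clip the correction: leave it unchanged if its norm is below b,
        # otherwise shrink it radially onto the ball of radius b.
        correction = K @ innovation
        norm = np.linalg.norm(correction)
        if norm > b:
            correction *= b / norm

        x_new = x_pred + correction
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new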
Just Another Gibbs Additive Modeller: Interfacing JAGS and mgcv
The BUGS language offers a very flexible way of specifying complex
statistical models for the purposes of Gibbs sampling, while its JAGS variant
offers very convenient R integration via the rjags package. However, including
smoothers in JAGS models can involve some quite tedious coding, especially for
multivariate or adaptive smoothers. Further, if an additive smooth structure is
required then some care is needed, in order to centre smooths appropriately,
and to find appropriate starting values. R package mgcv implements a wide range
of smoothers, all in a manner appropriate for inclusion in JAGS code, and
automates centring and other smooth setup tasks. The purpose of this note is to
describe an interface between mgcv and JAGS, based around an R function,
'jagam', which takes a generalized additive model (GAM) as specified in mgcv
and automatically generates the JAGS model code and data required for inference
about the model via Gibbs sampling. Although the auto-generated JAGS code can
be run as is, the expectation is that the user would wish to modify it in order
to add complex stochastic model components readily specified in JAGS. A simple
interface is also provided for visualisation and further inference about the
estimated smooth components using standard mgcv functionality. The methods
described here will be unnecessarily inefficient if all that is required is
fully Bayesian inference about a standard GAM, rather than the full flexibility
of JAGS. In that case the BayesX package would be more efficient.
Comment: Submitted to the Journal of Statistical Software
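The construction jagam automates rests on the standard trick of treating a penalized smooth as a Bayesian linear model: the smooth's coefficients receive a Gaussian prior whose precision is a multiple of the penalty matrix, after which Gibbs sampling is routine. The toy Python sketch below illustrates that trick for a single smooth; the priors, names, and update order are our assumptions and do not reproduce the JAGS code jagam actually emits.

    import numpy as np

    def gibbs_gam(X, S, y, n_iter=2000, a=1e-3, b=1e-3):
        """Gibbs sampler for a toy Bayesian penalized regression
        y ~ N(X beta, 1/tau), beta ~ N(0, (lambda S)^{-1}), with Gamma
        hyperpriors on the error precision tau and the smoothing
        precision lambda. Illustrative assumptions throughout."""
        rng = np.random.default_rng(0)
        n, p = X.shape
        rank_S = np.linalg.matrix_rank(S)
        XtX, Xty = X.T @ X, X.T @ y
        beta, tau, lam = np.zeros(p), 1.0, 1.0
        draws = np.empty((n_iter, p))
        for it in range(n_iter):
            # beta | tau, lambda ~ N(mu, V) with V = (tau X'X + lambda S)^{-1}.
            prec = tau * XtX + lam * S
            L = np.linalg.cholesky(prec)
            mu = np.linalg.solve(prec, tau * Xty)
            beta = mu + np.linalg.solve(L.T, rng.standard_normal(p))
            # tau | beta ~ Gamma(a + n/2, rate = b + RSS/2).
            rss = np.sum((y - X @ beta) ** 2)
            tau = rng.gamma(a + n / 2, 1.0 / (b + rss / 2))
            # lambda | beta ~ Gamma(a + rank(S)/2, rate = b + beta'S beta/2).
            lam = rng.gamma(a + rank_S / 2, 1.0 / (b + beta @ S @ beta / 2))
            draws[it] = beta
        return draws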
On multi-view learning with additive models
In many scientific settings data can be naturally partitioned into variable
groupings called views. Common examples include environmental (1st view) and
genetic information (2nd view) in ecological applications, and chemical (1st
view) and biological (2nd view) data in drug discovery. Multi-view data also occur in
text analysis and proteomics applications where one view consists of a graph
with observations as the vertices and a weighted measure of pairwise similarity
between observations as the edges. Further, in several of these applications
the observations can be partitioned into two sets, one where the response is
observed (labeled) and the other where the response is not (unlabeled). The
problem of simultaneously addressing multi-view data and incorporating unlabeled
observations in training is referred to as multi-view transductive learning. In
this work we introduce and study a comprehensive generalized fixed point
additive modeling framework for multi-view transductive learning, where any
view is represented by a linear smoother. The problem of view selection is
discussed using a generalized Akaike Information Criterion, which provides an
approach for testing the contribution of each view. An efficient implementation
is provided for fitting these models with both backfitting and local-scoring
type algorithms adjusted to semi-supervised graph-based learning. The proposed
technique is assessed on both synthetic and real data sets and is shown to be
competitive to state-of-the-art co-training and graph-based techniques.
Comment: Published at http://dx.doi.org/10.1214/08-AOAS202 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
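Since every view enters the model as a linear smoother, fitting reduces to backfitting over smoother matrices. The Python sketch below shows generic backfitting for precomputed n x n smoother matrices S_v, one per view; it illustrates the loop structure only, not the paper's fixed-point formulation, view selection, or semi-supervised graph adjustments (all names here are ours).

    import numpy as np

    def backfit(smoothers, y, n_iter=50, tol=1e-8):
        """Minimal backfitting for an additive model y ~ sum_v f_v, where
        each view v is represented by a precomputed linear smoother
        matrix S_v (n x n). Generic sketch, not the paper's algorithm."""
        n = len(y)
        fits = [np.zeros(n) for _ in smoothers]
        for _ in range(n_iter):
            max_change = 0.0
            for v, S in enumerate(smoothers):
                # Partial residual: remove every component except view v.
                partial = y - sum(f for u, f in enumerate(fits) if u != v)
                new_fit = S @ partial
                new_fit -= new_fit.mean()  # centre for identifiability
                max_change = max(max_change, np.max(np.abs(new_fit - fits[v])))
                fits[v] = new_fit
            if max_change < tol:  # stop once all components stabilize
                break
        return fits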
Approximation of fuzzy numbers by convolution method
In this paper we consider how to use the convolution method to construct
approximations, consisting of sequences of fuzzy numbers with good properties,
for a general fuzzy number. We show that the convolution method can generate
differentiable approximations in finitely many steps for fuzzy numbers that
have finitely many non-differentiable points. In previous work, the convolution
method could only be used to construct differentiable approximations for
continuous fuzzy numbers whose only possible non-differentiable points are the
two endpoints of the 1-cut. The construction of smoothers is a key step in the
approximation process. We further point out that, if the smoothers are chosen
appropriately, the convolution method yields approximations that are
simultaneously differentiable, Lipschitz, and core-preserving.
Comment: Submitted to Fuzzy Sets and Systems, Sep 18 201
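For orientation, the convolution in question is the usual sup-min (Zadeh extension) addition of fuzzy numbers; the LaTeX sketch below states it together with the approximation scheme it induces. The symbols u and v_n, and the convergence claim, are our summary of the standard setup, not the paper's specific smoother construction.

    % Sup-min convolution (Zadeh extension of addition) of fuzzy numbers u, v:
    \[
      (u \oplus v)(x) \;=\; \sup_{s+t=x} \min\{\, u(s),\, v(t) \,\}.
    \]
    % Given a sequence of smooth fuzzy numbers (smoothers) v_n concentrating
    % at 0, the induced approximations of a general fuzzy number u are
    \[
      u_n \;=\; u \oplus v_n, \qquad n = 1, 2, \dots,
    \]
    % the idea being that, for suitably chosen v_n, each u_n is
    % differentiable (and Lipschitz, core-preserving) while u_n \to u.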