
    Optimization viewpoint on Kalman smoothing, with applications to robust and sparse estimation

    In this paper, we present the optimization formulation of the Kalman filtering and smoothing problems, and use this perspective to develop a variety of extensions and applications. We first formulate classic Kalman smoothing as a least squares problem, highlight its special structure, and show that the classic filtering and smoothing algorithms are equivalent to a particular algorithm for solving this problem. Once this equivalence is established, we present extensions of Kalman smoothing to systems with nonlinear process and measurement models, systems with linear and nonlinear inequality constraints, systems with outliers in the measurements or sudden changes in the state, and systems where the sparsity of the state sequence must be accounted for. All extensions preserve the computational efficiency of the classic algorithms, and most are illustrated with numerical examples, which are part of an open-source Kalman smoothing Matlab/Octave package. (46 pages, 11 figures)
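    The least-squares formulation referred to above can be stated concretely. In standard notation (illustrative here; the paper's symbols may differ), with process model x_k = G_k x_{k-1} + w_k and measurements z_k = H_k x_k + v_k, the classic smoother computes the MAP estimate

        \min_{x_1,\dots,x_N} \; \tfrac{1}{2}\sum_{k=1}^{N} \|z_k - H_k x_k\|^2_{R_k^{-1}}
                             + \tfrac{1}{2}\sum_{k=1}^{N} \|x_k - G_k x_{k-1}\|^2_{Q_k^{-1}},
        \qquad \|u\|^2_{W} = u^\top W u,

    where x_0 is the prior mean. The robust and sparse extensions mentioned in the abstract amount to replacing these quadratic terms with other convex losses (for example, Huber or l1 penalties) while retaining the problem's block-tridiagonal structure.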

    Fast Ensemble Smoothing

    Smoothing is essential to many oceanographic, meteorological, and hydrological applications. The fixed-interval smoothing problem updates all desired states within a time interval using all available observations, while the fixed-lag smoothing problem updates only a fixed number of states prior to the observation at the current time. Fixed-lag smoothing is generally thought to be computationally faster than fixed-interval smoothing, and can be an appropriate approximation for long-interval smoothing problems. In this paper, we take an ensemble-based approach to fixed-interval and fixed-lag smoothing and synthesize two algorithms. The first produces a linear-time solution to the fixed-interval smoothing problem with a fixed constant factor, and the second produces a fixed-lag solution whose cost is independent of the lag length. Identical-twin experiments conducted with the Lorenz-95 model show that, for lag lengths approximately equal to the error doubling time or for long intervals, the proposed methods can provide significant computational savings. These results suggest that ensemble methods yield both fixed-interval and fixed-lag smoothing solutions at little additional cost over filtering and model propagation: in practical ensemble applications the additional increment is a small fraction of either the filtering or the model-propagation cost. We also show that fixed-interval smoothing can perform as fast as fixed-lag smoothing and may be advantageous when memory is not an issue.
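    As a rough illustration of the ensemble idea (a minimal sketch, not the specific algorithms of this paper; all names are illustrative), the update below applies the same innovation weights to the current ensemble and to a stored lagged ensemble, which is the core of an ensemble fixed-lag smoother:

        import numpy as np

        def enks_update(X_cur, X_lag, H, R, y, rng):
            """One perturbed-observation analysis step of an ensemble Kalman smoother.
            X_cur, X_lag: (n, m) state ensembles at the current and a lagged time,
            H: (p, n) observation operator, R: (p, p) obs-error covariance, y: (p,) observation."""
            m = X_cur.shape[1]
            HX = H @ X_cur                                   # forecast ensemble in observation space
            A_obs = HX - HX.mean(axis=1, keepdims=True)      # observation-space anomalies
            Pyy = A_obs @ A_obs.T / (m - 1) + R              # innovation covariance
            D = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
            W = np.linalg.solve(Pyy, D - HX)                 # Pyy^{-1} (perturbed obs - forecasts)

            def analyse(X):                                  # the same weights update any stored state
                A = X - X.mean(axis=1, keepdims=True)
                return X + (A @ A_obs.T / (m - 1)) @ W       # X + Pxy Pyy^{-1} (D - HX)

            return analyse(X_cur), analyse(X_lag)

    A straightforward fixed-lag smoother keeps the most recent ensembles in memory and applies this analysis to each of them at every observation time.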

    FFTPL: An Analytic Placement Algorithm Using Fast Fourier Transform for Density Equalization

    We propose a flat nonlinear placement algorithm, FFTPL, that uses the fast Fourier transform for density equalization. The placement instance is modeled as an electrostatic system, with density cost playing the role of potential energy. A well-defined Poisson's equation is proposed for gradient and cost computation. Our placer outperforms state-of-the-art placers, achieving better solution quality and efficiency.
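    The FFT-based density step can be illustrated as follows (a minimal sketch assuming periodic boundary conditions and a unit-spaced grid; the paper's formulation and boundary handling may differ). The bin density acts as a charge distribution, and the resulting potential and field drive cells out of overfilled regions:

        import numpy as np

        def density_potential(rho):
            """Solve -laplacian(psi) = rho on a periodic grid via FFT; return the
            potential psi and the field -grad(psi) used to spread cells apart."""
            ny, nx = rho.shape
            kx = 2.0 * np.pi * np.fft.fftfreq(nx)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny)
            k2 = kx[None, :] ** 2 + ky[:, None] ** 2           # |k|^2 on the grid
            rho_hat = np.fft.fft2(rho)
            psi_hat = np.zeros_like(rho_hat)
            nonzero = k2 > 0
            psi_hat[nonzero] = rho_hat[nonzero] / k2[nonzero]  # psi_hat = rho_hat / |k|^2
            psi = np.real(np.fft.ifft2(psi_hat))
            ey, ex = np.gradient(-psi)                         # field components along y and x
            return psi, ex, ey

    Both transforms cost O(N log N) in the number of density bins. In an electrostatic analogy of this kind, the potential enters the density cost and the field supplies its gradient with respect to cell positions.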

    Penalized Clustering of Large Scale Functional Data with Multiple Covariates

    In this article, we propose a penalized clustering method for large-scale data with multiple covariates through a functional data approach. In the proposed method, responses and covariates are linked together through nonparametric multivariate functions (fixed effects), which offer great flexibility in modeling a variety of function features, such as jump points, branching, and periodicity. Functional ANOVA is employed to further decompose the multivariate functions in a reproducing kernel Hilbert space and to provide the associated notions of main effect and interaction. Parsimonious random effects are used to capture various correlation structures. The mixed-effects models are nested within a general mixture model, in which the heterogeneity of functional data is characterized. We propose a penalized Henderson's likelihood approach for model fitting and design a rejection-controlled EM algorithm for estimation. Our method selects smoothing parameters through generalized cross-validation. Furthermore, Bayesian confidence intervals are used to measure clustering uncertainty. Simulation studies and real-data examples are presented to investigate the empirical performance of the proposed method. Open-source code is available in the R package MFDA.
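    The generalized cross-validation step can be illustrated for a generic linear smoother (a minimal sketch, not the MFDA implementation; B, Omega, and the grid of candidate values are illustrative): with hat matrix S(lambda), GCV selects the lambda minimizing n * ||y - S(lambda) y||^2 / (n - tr S(lambda))^2.

        import numpy as np

        def gcv_score(y, B, Omega, lam):
            """GCV score for the penalized fit min ||y - B c||^2 + lam * c' Omega c."""
            n = len(y)
            S = B @ np.linalg.solve(B.T @ B + lam * Omega, B.T)   # hat matrix S(lam)
            resid = y - S @ y
            return n * np.sum(resid ** 2) / (n - np.trace(S)) ** 2

        def select_lambda(y, B, Omega, grid):
            """Pick the smoothing parameter with the smallest GCV score on a grid."""
            return min(grid, key=lambda lam: gcv_score(y, B, Omega, lam))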

    Parametrization and penalties in spline models with an application to survival analysis

    In this paper we show how a simple parametrization, built from the definition of cubic splines, can aid in the implementation and interpretation of penalized spline models, whatever configuration of knots we choose to use. We call this the value-first-derivative parametrization. We perform Bayesian inference by exploiting the natural link between quadratic penalties and Gaussian priors. However, a full Bayesian analysis seems feasible only for some penalty functionals; alternatives include empirical Bayes methods involving model-selection-type criteria. The proposed methodology is illustrated by an application to survival analysis in which the usual Cox model is extended to allow for time-varying regression coefficients.
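    The value-first-derivative parametrization can be made concrete on a single knot interval [x_i, x_{i+1}] of width h (this is the standard Hermite form of such a parametrization; the paper's notation may differ). With t = (x - x_i)/h, the cubic taking values f_i, f_{i+1} and first derivatives d_i, d_{i+1} at the endpoints is

        s(x) = (2t^3 - 3t^2 + 1)\,f_i + h\,(t^3 - 2t^2 + t)\,d_i
             + (3t^2 - 2t^3)\,f_{i+1} + h\,(t^3 - t^2)\,d_{i+1},

    so the spline is fully determined by its values and first derivatives at the knots, and a quadratic roughness penalty on these coefficients corresponds directly to a Gaussian prior, which is the link the abstract exploits.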