
    Blind adaptive constrained reduced-rank parameter estimation based on constant modulus design for CDMA interference suppression

    This paper proposes a multistage decomposition for blind adaptive parameter estimation in the Krylov subspace with the code-constrained constant modulus (CCM) design criterion. Based on constrained optimization of the constant modulus cost function and utilizing the Lanczos algorithm and Arnoldi-like iterations, a multistage decomposition is developed for blind parameter estimation. A family of computationally efficient blind adaptive reduced-rank stochastic gradient (SG) and recursive least squares (RLS) algorithms, along with an automatic rank selection procedure, is also devised and evaluated against existing methods. An analysis of the convergence properties of the method is carried out, and convergence conditions for the reduced-rank adaptive algorithms are established. Simulation results consider the application of the proposed techniques to the suppression of multiple-access and intersymbol interference in DS-CDMA systems.
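    As a rough illustration of the reduced-rank idea, the sketch below builds a rank-D Krylov basis from a sample covariance and runs a projected constant-modulus stochastic gradient update in the reduced space. It is a minimal stand-in, not the paper's algorithm: the basis is formed by plain QR rather than the Lanczos/Arnoldi recursions, and the names (krylov_basis, ccm_sg_reduced_rank, the step size mu) are hypothetical.

```python
import numpy as np

def krylov_basis(R, c, D):
    # Orthonormal basis for span{c, Rc, ..., R^(D-1) c}; a QR stand-in
    # for the Lanczos/Arnoldi constructions described in the paper.
    vs, v = [], c
    for _ in range(D):
        vs.append(v)
        v = R @ v
    Q, _ = np.linalg.qr(np.column_stack(vs))
    return Q  # M x D

def ccm_sg_reduced_rank(r_stream, c, D, mu=1e-3, n_cov=200):
    # Blind reduced-rank constant-modulus SG filter (hypothetical sketch).
    # r_stream: (N, M) array of received vectors; c: (M,) signature.
    r_stream = np.asarray(r_stream)
    # Sample covariance R = E[r r^H] from an initial block of snapshots
    R = r_stream[:n_cov].T @ r_stream[:n_cov].conj() / n_cov
    T = krylov_basis(R, c, D)                  # rank-D projection
    c_bar = T.conj().T @ c                     # reduced signature
    # Projection matrix keeping the code constraint w^H c_bar fixed
    P = np.eye(D) - np.outer(c_bar, c_bar.conj()) / (c_bar.conj() @ c_bar)
    w = c_bar / (c_bar.conj() @ c_bar)         # constrained initialization
    outputs = []
    for r in r_stream:
        rb = T.conj().T @ r                    # reduce dimension: M -> D
        z = w.conj() @ rb                      # filter output
        e = np.abs(z)**2 - 1.0                 # constant-modulus error
        # Projected SG step on the CM cost (|z|^2 - 1)^2
        w = w - mu * e * np.conj(z) * (P @ rb)
        outputs.append(z)
    return np.array(outputs), w
```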

    On dimension reduction in Gaussian filters

    A priori dimension reduction is a widely adopted technique for reducing the computational complexity of stationary inverse problems. In this setting, the solution of an inverse problem is parameterized by a low-dimensional basis that is often obtained from the truncated Karhunen-Loève expansion of the prior distribution. For high-dimensional inverse problems equipped with smoothing priors, this technique can lead to drastic reductions in parameter dimension and significant computational savings. In this paper, we extend the concept of a priori dimension reduction to non-stationary inverse problems, in which the goal is to sequentially infer the state of a dynamical system. Our approach proceeds in an offline-online fashion. We first identify a low-dimensional subspace in the state space before solving the inverse problem (the offline phase), using either the method of "snapshots" or regularized covariance estimation. This subspace is then used to reduce the computational complexity of various filtering algorithms, including the Kalman filter, extended Kalman filter, and ensemble Kalman filter, within a novel subspace-constrained Bayesian prediction-and-update procedure (the online phase). We demonstrate the performance of our new dimension reduction approach on various numerical examples. In some test cases, our approach reduces the dimensionality of the original problem by orders of magnitude and yields up to two orders of magnitude in computational savings.
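    A minimal sketch of the offline-online pattern described above, assuming a linear-Gaussian model x_{k+1} = A x_k + noise with observations y_k = H x_k + noise: the offline phase extracts a basis from snapshots via an SVD (the method of "snapshots"), and the online phase runs a standard Kalman filter in the reduced coordinates. The function names and details are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def offline_subspace(snapshots, r):
    # Method of "snapshots": r-dimensional basis from simulation data.
    # snapshots is (n, N): N state vectors of dimension n.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]                        # n x r orthonormal basis

def reduced_kalman_filter(U, A, H, Q, R, m0, P0, ys):
    # Subspace-constrained Kalman filter (hypothetical sketch):
    # the state is approximated as x ~ U a, so the prediction and
    # update steps run in r dimensions instead of n.
    Ar = U.T @ A @ U                       # reduced dynamics  (r x r)
    Qr = U.T @ Q @ U                       # reduced process noise
    Hr = H @ U                             # reduced observation operator
    a, Pa = U.T @ m0, U.T @ P0 @ U
    means = []
    for y in ys:
        # Predict in the subspace
        a, Pa = Ar @ a, Ar @ Pa @ Ar.T + Qr
        # Update with the measurement y = H x + noise
        S = Hr @ Pa @ Hr.T + R
        K = Pa @ Hr.T @ np.linalg.inv(S)
        a = a + K @ (y - Hr @ a)
        Pa = (np.eye(len(a)) - K @ Hr) @ Pa
        means.append(U @ a)                # lift back to the full space
    return np.array(means)
```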

    Linear theory for filtering nonlinear multiscale systems with model error

    We study the filtering of multiscale dynamical systems with model error arising from unresolved smaller-scale processes. The analysis assumes continuous-time noisy observations of all components of the slow variables alone. For a linear model with Gaussian noise, we prove the existence of a unique choice of parameters in a linear reduced model for the slow variables. The linear theory extends to a non-Gaussian, nonlinear test problem, where we assume we know the optimal stochastic parameterization and the correct observation model. We show that when the parameterization is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates, and vice versa. Given the correct parameterization, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that parameters estimated online, as part of the filtering procedure, produce accurate filtering and equilibrium statistical prediction. In contrast, an offline method based on linear regression, which fits the parameters to a given training data set independently of the filter, yields filter estimates that are worse than the observations or even divergent when the slow variables are not fully observed.
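    To make the online estimation idea concrete, the sketch below uses the common state-augmentation device: the unknown parameter is appended to the state and updated jointly by a perturbed-observation ensemble Kalman filter on a scalar toy model. This is a generic stand-in under stated assumptions, not the paper's two-layer Lorenz-96 experiment; the model, function name, and priors are hypothetical.

```python
import numpy as np
rng = np.random.default_rng(0)

def enkf_augmented(ys, dt, obs_var, n_ens=100):
    # Online parameter estimation by state augmentation (a generic
    # stand-in for simultaneous state/parameter filtering).
    # Assumed model: x_{k+1} = x_k + theta * x_k * dt + sqrt(dt) * xi,
    # with theta the unknown (negative) damping to be learned online.
    x = rng.normal(0.0, 1.0, n_ens)            # state ensemble
    theta = rng.normal(-1.0, 0.5, n_ens)       # parameter ensemble
    estimates = []
    for y in ys:
        # Forecast: propagate states; theta feeds back nonlinearly
        # through the theta * x term; parameters persist between steps.
        x = x + theta * x * dt + np.sqrt(dt) * rng.normal(size=n_ens)
        # Analysis: joint update of (x, theta) from the observation of x
        S = np.var(x, ddof=1) + obs_var        # innovation variance
        Kx = np.var(x, ddof=1) / S             # gain for the state
        Kt = np.cov(theta, x)[0, 1] / S        # gain for the parameter
        innov = y + rng.normal(0, np.sqrt(obs_var), n_ens) - x
        x = x + Kx * innov
        theta = theta + Kt * innov
        estimates.append((x.mean(), theta.mean()))
    return np.array(estimates)
```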