
    Robust Kalman tracking and smoothing with propagating and non-propagating outliers

    A common situation in which classical Kalman filtering does not perform well is tracking in the presence of propagating outliers. This calls for robustness understood in a distributional sense, i.e., we enlarge the distributional assumptions of the ideal model by suitable neighborhoods. Based on optimality results for distributionally robust Kalman filtering from Ruckdeschel [01, 10], we propose new robust recursive filters and smoothers designed for this purpose, as well as specialized versions for non-propagating outliers. We apply these procedures to a GPS problem arising in the car industry. To better understand these filters, we study their behavior under stylized outlier patterns (for which they are not designed) and compare them to other approaches to the tracking problem. Finally, in a simulation study we discuss the efficiency of our procedures in comparison to competitors.
    Comment: 27 pages, 12 figures, 2 tables
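    The core idea behind such robust recursive filters is to bound the influence of any single observation on the state update. A minimal sketch of one clipped-correction step is below; the function name, the clipping rule, and the tuning radius `b` are illustrative assumptions, not the paper's exact procedure.

    ```python
    import numpy as np

    def robust_kalman_step(x, P, y, F, Q, H, R, b):
        """One predict/correct step of a clipped-correction Kalman filter.

        The usual Kalman correction K @ innovation is shrunk onto a ball of
        radius b, so an outlying observation cannot move the state estimate
        arbitrarily far (a sketch of the robustification idea; b is a
        user-chosen tuning radius)."""
        # Prediction
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Correction
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        corr = K @ (y - H @ x_pred)
        # Clip: leave the correction unchanged if its norm is at most b,
        # otherwise rescale it to have norm exactly b.
        norm = np.linalg.norm(corr)
        if norm > b:
            corr = corr * (b / norm)
        x_new = x_pred + corr
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```

    With `b = np.inf` this reduces to the classical Kalman filter, which makes the two easy to compare on the same data.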

    An Online Parallel and Distributed Algorithm for Recursive Estimation of Sparse Signals

    In this paper, we consider a recursive estimation problem for linear regression where the signal to be estimated admits a sparse representation and measurement samples are only sequentially available. We propose a convergent parallel estimation scheme that consists in solving a sequence of ℓ1-regularized least-squares problems approximately. The proposed scheme is novel in three aspects: i) all elements of the unknown vector variable are updated in parallel at each time instance, and the convergence speed is much faster than state-of-the-art schemes that update the elements sequentially; ii) both the update direction and the stepsize of each element have simple closed-form expressions, so the algorithm is suitable for online (real-time) implementation; and iii) the stepsize is designed to accelerate convergence and does not suffer from the parameter tuning common in the literature. Both centralized and distributed implementation schemes are discussed. The attractive features of the proposed algorithm are also numerically consolidated.
    Comment: Part of this work has been presented at The Asilomar Conference on Signals, Systems, and Computers, Nov. 201
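    The flavor of such a scheme can be sketched as follows: maintain running sufficient statistics of the streaming data, compute every coordinate's closed-form soft-threshold best response in parallel (Jacobi style), and damp the move with a stepsize. The function names and the fixed stepsize `gamma` are our own simplifications; the paper's scheme uses a more refined, convergence-guaranteed stepsize rule.

    ```python
    import numpy as np

    def soft(z, lam):
        """Elementwise soft-thresholding operator."""
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def online_parallel_lasso(samples, n, lam, gamma=0.5):
        """Online parallel sparse estimation from streaming pairs (a_t, y_t).

        Keeps running statistics R = sum a a^T and r = sum a y; each
        coordinate's closed-form best response is a soft-threshold, computed
        for ALL coordinates simultaneously from the current iterate
        (illustrative sketch, not the paper's exact algorithm)."""
        R = np.zeros((n, n)); r = np.zeros(n); x = np.zeros(n)
        for a, y in samples:
            R += np.outer(a, a); r += a * y
            d = np.maximum(np.diag(R), 1e-12)
            # c_i = r_i - sum_{j != i} R_ij x_j  (residual correlation)
            c = r - R @ x + d * x
            best = soft(c, lam) / d        # closed-form per-coordinate minimizer
            x = x + gamma * (best - x)     # damped parallel (Jacobi) update
        return x
    ```

    Because every coordinate is updated from the same snapshot of `x`, the inner step is trivially parallelizable across cores or across nodes holding disjoint blocks of coordinates.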

    The Kalman Foundations of Adaptive Least Squares: Applications to Unemployment and Inflation

    Adaptive Least Squares (ALS), i.e. recursive regression with asymptotically constant gain, as proposed by Ljung (1992), Sargent (1993, 1999), and Evans and Honkapohja (2001), is an increasingly widely used method of estimating time-varying relationships and of proxying agents' time-evolving expectations. This paper provides theoretical foundations for ALS as a special case of the generalized Kalman solution of a Time Varying Parameter (TVP) model. This approach is in the spirit of that proposed by Ljung (1992) and Sargent (1999), but unlike theirs, nests the rigorous Kalman solution of the elementary Local Level Model, and employs a very simple, yet rigorous, initialization. Unlike other approaches, the proposed method allows the asymptotic gain to be estimated by maximum likelihood (ML). The ALS algorithm is illustrated with univariate time series models of U.S. unemployment and inflation. Because the null hypothesis that the coefficients are in fact constant lies on the boundary of the permissible parameter space, the usual regularity conditions for the chi-square limiting distribution of likelihood-based test statistics are not met. Consequently, critical values of the Likelihood Ratio test statistics are established by Monte Carlo means and used to test the constancy of the parameters in the estimated models.
    Keywords: Kalman Filter, Adaptive Learning, Adaptive Least Squares, Time Varying Parameter Model, Natural Unemployment Rate, Inflation Forecasting
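    A generic textbook sketch of constant-gain recursive least squares, the algorithm ALS is built on, is below. Replacing ordinary RLS's decreasing 1/t gain with a fixed gain discounts old data, which is what lets the estimate track time-varying coefficients; deriving the gain from a Kalman TVP model and estimating it by ML, as the paper does, is not shown here.

    ```python
    import numpy as np

    def constant_gain_rls(X, y, gain):
        """Constant-gain recursive least squares (adaptive least squares).

        Each observation (x_t, y_t) updates both the moment-matrix estimate R
        and the coefficient vector beta with the same fixed gain, so the
        estimator geometrically discounts the past (textbook sketch)."""
        n = X.shape[1]
        beta = np.zeros(n)
        R = np.eye(n)                       # running moment-matrix estimate
        for x_t, y_t in zip(X, y):
            R = R + gain * (np.outer(x_t, x_t) - R)
            # Gradient step premultiplied by R^{-1}, scaled by the gain
            beta = beta + gain * np.linalg.solve(R, x_t) * (y_t - x_t @ beta)
        return beta
    ```

    With `gain = 1/t` the recursion recovers ordinary recursive OLS, which makes the constant-gain variant easy to compare against a fixed-coefficient benchmark on the same series.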

    Generalized Stochastic Gradient Learning

    We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. We examine how the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least squares learning. SG algorithms are sensitive to units of measurement, and we show that there is a transformation of variables for which E-stability governs SG stability. GSG algorithms with constant gain have a deeper justification in terms of parameter drift, robustness and risk sensitivity.
    Keywords: adaptive learning, E-stability, recursive least squares, robust estimation
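    For contrast with recursive least squares, plain SG learning updates along the raw regression gradient without the moment-matrix inverse. A minimal sketch (our own illustrative names):

    ```python
    import numpy as np

    def sg_learning(X, y, gain):
        """Constant-gain stochastic gradient learning of a linear rule.

        Unlike recursive least squares, the update is NOT premultiplied by
        the inverse moment matrix, so rescaling a regressor rescales its
        effective gain -- the units-of-measurement sensitivity the abstract
        refers to (illustrative sketch only)."""
        beta = np.zeros(X.shape[1])
        for x_t, y_t in zip(X, y):
            beta = beta + gain * x_t * (y_t - x_t @ beta)
        return beta
    ```

    Rescaling a column of `X` (and the gain with it) changes the path of `beta`, which is why a suitable transformation of variables is needed before E-stability conditions carry over to SG stability.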

    Dynamic Assessment of Baroreflex Control of Heart Rate During Induction of Propofol Anesthesia Using a Point Process Method

    In this article, we present a point process method to assess dynamic baroreflex sensitivity (BRS) by estimating the baroreflex gain as the focal component of a simplified closed-loop model of the cardiovascular system. Specifically, an inverse Gaussian probability distribution is used to model the heartbeat interval, whereas the instantaneous mean is identified by linear and bilinear bivariate regressions on both the previous R-R intervals (RR) and beat-to-beat blood pressure (BP) measures. The instantaneous baroreflex gain is estimated as the feedback branch of the loop with a point-process filter, while the RR→BP feedforward transfer function representing heart contractility and vasculature effects is simultaneously estimated by a recursive least-squares filter. These two closed-loop gains provide a direct assessment of baroreflex control of heart rate (HR). In addition, the dynamic coherence, cross-bispectrum, and their power ratio can also be estimated. All statistical indices provide a valuable quantitative assessment of the interaction between heartbeat dynamics and hemodynamics. To illustrate the application, we have applied the proposed point process model to experimental recordings from 11 healthy subjects in order to monitor cardiovascular regulation under propofol anesthesia. We present quantitative results during transient periods, as well as statistical analyses on steady-state epochs before and after propofol administration. Our findings validate the ability of the algorithm to provide a reliable and fast-tracking assessment of BRS, and show a clear overall reduction in baroreflex gain from the baseline period to the start of propofol anesthesia, confirming that instantaneous evaluation of arterial baroreflex control of HR may have important implications in clinical practice, particularly during anesthesia and in postoperative care.
    Funding: National Institutes of Health (U.S.) (Grants R01-HL084502, K25-NS05758, DP2-OD006454, T32NS048005, R01-DA015644); Massachusetts General Hospital (Clinical Research Center, UL1 Grant RR025758)
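    The building blocks of such point-process heartbeat models are the inverse Gaussian interval density and a regression-defined instantaneous mean. The generic IG log-density and a linear bivariate mean are sketched below; the coefficient names (`theta`, `a`, `b`) are our own illustrative choices, and the bilinear terms and the point-process filter itself are omitted.

    ```python
    import numpy as np

    def ig_loglik(w, mu, lam):
        """Log-density of an inverse Gaussian waiting time w with mean mu
        and shape lam -- the heartbeat-interval distribution used in
        point-process HRV models (standard IG form)."""
        return (0.5 * np.log(lam / (2.0 * np.pi * w ** 3))
                - lam * (w - mu) ** 2 / (2.0 * mu ** 2 * w))

    def instantaneous_mean(rr_hist, bp_hist, theta, a, b):
        """Instantaneous mean interval as a linear bivariate regression on
        the previous R-R intervals and beat-to-beat blood pressure values
        (linear part of the model only; coefficient names are ours)."""
        return theta + np.dot(a, rr_hist) + np.dot(b, bp_hist)
    ```

    Maximizing the IG log-likelihood over the regression coefficients within a sliding window is one common way such models are fit beat by beat.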

    Econometric methods related to parameter instability, long memory and forecasting

    The dissertation consists of three chapters on econometric methods related to parameter instability, forecasting and long memory. The first chapter introduces a new frequentist approach to forecasting time series in the presence of in- and out-of-sample breaks in the parameters. We model the parameters as random level shift (RLS) processes and introduce two features to make the changes in the parameters forecastable. The first models the probability of shifts as a function of covariates. The second incorporates a built-in mean-reversion mechanism into the time path of the parameters. Our model can be cast into a non-linear, non-Gaussian state-space framework. We use particle filtering and Monte Carlo expectation-maximization algorithms to construct the estimates. We compare the forecasting performance with several alternative methods for different series. In all cases, our method yields substantial gains in forecasting accuracy. The second chapter extends the RLS model of Lu and Perron (2010) for the volatility of asset prices. The extensions are in two directions: a) we specify a time-varying probability of shifts as a function of large negative lagged returns; b) we incorporate a mean-reverting mechanism so that the sign and magnitude of the jump component change according to the deviations of past jumps from their long-run mean. We estimate the model using daily data on four major stock market indices. Compared to competing models, the modified RLS model yields the smallest mean square forecast errors overall. The third chapter proposes a method of inference about the mean or slope of a time trend that is robust to the unknown order of fractional integration of the errors. Our tests have the standard asymptotic normal distribution irrespective of the value of the long-memory parameter. Our procedure is based on applying quasi-differences to the data and regressors, based on a consistent estimate of the long-memory parameter obtained from the residuals of a least-squares regression. We use the exact local Whittle estimator proposed by Shimotsu (2010). Simulation results show that our procedure delivers tests with good finite-sample size and power, including cases with strong short-term correlations.
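    The quasi-differencing step in the third chapter's procedure applies the fractional difference filter (1 − L)^d to the data, with d estimated from the residuals. A textbook implementation of that filter via its binomial (pi-weight) expansion is below; estimating d by exact local Whittle is not shown.

    ```python
    import numpy as np

    def frac_diff(x, d):
        """Apply the fractional difference filter (1 - L)^d to a series x.

        The filter coefficients follow the recursion pi_0 = 1,
        pi_k = pi_{k-1} * (k - 1 - d) / k, the binomial expansion of
        (1 - L)^d; d = 1 recovers the ordinary first difference
        (with the first observation passed through unchanged)."""
        n = len(x)
        w = np.ones(n)
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k    # pi-weights of (1 - L)^d
        out = np.empty(n)
        for t in range(n):
            # Convolve the weights with the observations up to time t
            out[t] = np.dot(w[:t + 1], x[t::-1])
        return out
    ```

    Running the trend regression on the quasi-differenced data and regressors, with d plugged in from a consistent first-stage estimate, is what restores standard asymptotic normality regardless of the errors' memory.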