
    Weak in the NEES?: Auto-tuning Kalman Filters with Bayesian Optimization

    Kalman filters are routinely used for many data fusion applications, including navigation, tracking, and simultaneous localization and mapping problems. However, significant time and effort are frequently required to tune various Kalman filter model parameters, e.g. the process noise covariance, pre-whitening filter models for non-white noise, etc. Conventional optimization techniques for tuning can get stuck in poor local minima and can be expensive to apply with real sensor data. To address these issues, a new "black box" Bayesian optimization strategy is developed for automatically tuning Kalman filters. In this approach, performance is characterized by one of two stochastic objective functions: the normalized estimation error squared (NEES) when ground truth state models are available, or the normalized innovation error squared (NIS) when only sensor data are available. By intelligently sampling the parameter space to both learn and exploit a nonparametric Gaussian process surrogate function for the NEES/NIS costs, Bayesian optimization can efficiently identify multiple local minima and provide uncertainty quantification on its results. (Final version presented at the FUSION 2018 Conference, Cambridge, UK, July 2018; submitted June 1, 2018.)
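    The NEES/NIS consistency costs described above can be illustrated with a minimal scalar Kalman filter sketch. The grid over candidate process-noise values below is a crude stand-in for the paper's Gaussian-process-guided Bayesian search, and all numerical parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def run_kf_nees(q_tuned, q_true=0.5, r=1.0, steps=200, seed=0):
    """Run a scalar random-walk Kalman filter with process-noise guess
    q_tuned on data generated with true noise q_true. Return the average
    NEES (requires ground truth) and NIS (requires only measurements)."""
    rng = np.random.default_rng(seed)
    x, x_hat, P = 0.0, 0.0, 1.0
    nees, nis = [], []
    for _ in range(steps):
        # truth propagation and noisy measurement
        x = x + rng.normal(0.0, np.sqrt(q_true))
        z = x + rng.normal(0.0, np.sqrt(r))
        # predict step (identity dynamics)
        P = P + q_tuned
        # innovation and NIS term
        nu = z - x_hat
        S = P + r
        nis.append(nu**2 / S)
        # update step
        K = P / S
        x_hat = x_hat + K * nu
        P = (1.0 - K) * P
        # NEES term, using the posterior estimate and covariance
        nees.append((x - x_hat) ** 2 / P)
    return np.mean(nees), np.mean(nis)

# A consistent filter has average NEES/NIS near 1 (the state dimension);
# a mistuned q makes the filter over- or under-confident.
for q in [0.05, 0.5, 5.0]:
    n, s = run_kf_nees(q)
    print(f"q={q}: NEES={n:.2f}, NIS={s:.2f}")
```

    Running this shows the matched value (q = 0.5 here) yields averages near 1, while the undertuned filter is overconfident and its NEES blows up, which is the signal the Bayesian optimizer exploits.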

    Volatility Model Choice for Sub-Saharan Frontier Equity Markets - A Markov Regime Switching Bayesian Approach

    We adopt a granular approach to estimating the risk of equity returns in sub-Saharan African frontier equity markets under the assumption that returns are influenced by developments in the underlying economy. Four countries were studied: Botswana, Ghana, Kenya and Nigeria. We found heterogeneity in the evolution of volatility across these markets and, using the deviance information criterion, that two-regime switching volatility models best describe the heteroscedastic return-generating processes in these markets. We backtested the results to assess whether the models are a good fit for the data. We concluded that the selected models are the most suitable for predicting the volatility of future returns in the markets studied.
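    The two-regime switching volatility structure the paper selects can be sketched as a simulation: returns are conditionally normal, with a standard deviation governed by a two-state Markov chain. The transition probabilities and regime volatilities below are invented for illustration; the paper estimates such parameters per market with Bayesian methods.

```python
import numpy as np

def simulate_ms_vol(n=1000, p_stay=(0.95, 0.90), sigma=(0.01, 0.03), seed=1):
    """Simulate returns whose volatility switches between a calm regime
    (state 0) and a turbulent regime (state 1) according to a two-state
    Markov chain with stay-probabilities p_stay."""
    rng = np.random.default_rng(seed)
    state, states, returns = 0, [], []
    for _ in range(n):
        # switch regime with probability 1 - p_stay[state]
        if rng.random() > p_stay[state]:
            state = 1 - state
        states.append(state)
        # conditionally normal return given the current regime
        returns.append(rng.normal(0.0, sigma[state]))
    return np.array(returns), np.array(states)

r, s = simulate_ms_vol()
# Sample volatility conditional on each regime should roughly recover sigma.
print("calm vol:", r[s == 0].std(), "turbulent vol:", r[s == 1].std())
```

    Estimation then runs in the other direction: given only the returns, infer the regime path and the two volatilities, with model fit compared via the deviance information criterion as in the paper.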

    A Method for Automatic and Objective Scoring of Bradykinesia Using Orientation Sensors and Classification Algorithms

    Correct assessment of bradykinesia is a key element in the diagnosis and monitoring of Parkinson's disease. Its evaluation is based on a careful assessment of symptoms and it is quantified using rating scales, where the Movement Disorders Society-Sponsored Revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS) is the gold standard. Despite their importance, the bradykinesia-related items show low agreement between different evaluators. In this study, we design a practical tool that provides an objective quantification of bradykinesia and that evaluates all characteristics described in the MDS-UPDRS. Twenty-five patients with Parkinson's disease performed three of the five bradykinesia-related items of the MDS-UPDRS. Their movements were assessed by four evaluators and were recorded with a nine degrees-of-freedom sensor. Sensor fusion was employed to obtain a 3-D representation of movements. Based on the resulting signals, a set of features related to the characteristics described in the MDS-UPDRS was defined. Feature selection methods were employed to determine the most important features for quantifying bradykinesia. The selected features were used to train support vector machine classifiers to obtain an automatic score of the movements of each patient. The best results were obtained when seven features were included in the classifiers. The classification errors for finger tapping, diadochokinesis and toe tapping were 15-16.5%, 9.3-9.8%, and 18.2-20.2% smaller than the average interrater scoring error, respectively. The introduction of objective scoring in the assessment of bradykinesia might eliminate within-rater inconsistencies and interrater disagreements and might improve the monitoring of movement disorders.
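    The feature-extraction step can be sketched for a finger-tapping angle signal: the MDS-UPDRS asks raters to judge amplitude, speed, and decrement, and analogous quantities can be computed from an orientation trace. The feature names and the synthetic decaying-tap signal below are illustrative assumptions, not the paper's exact seven-feature set.

```python
import numpy as np

def tapping_features(angle, fs=100.0):
    """Extract bradykinesia-style features from a finger-tapping angle
    signal (degrees, sampled at fs Hz): mean amplitude, tap rate, and
    amplitude decrement (the fatigue effect raters look for)."""
    # taps = strict local maxima of the angle trace
    peaks = [i for i in range(1, len(angle) - 1)
             if angle[i] > angle[i - 1] and angle[i] > angle[i + 1]]
    amps = angle[peaks]
    rate = len(peaks) / (len(angle) / fs)                 # taps per second
    half = len(amps) // 2
    decrement = amps[:half].mean() - amps[half:].mean()   # early vs late taps
    return {"mean_amp": amps.mean(), "tap_rate": rate, "decrement": decrement}

# Synthetic 10 s tapping trace at 3 taps/s whose amplitude decays over time.
t = np.arange(0.0, 10.0, 1.0 / 100.0)
sig = (40.0 - 2.0 * t) * np.maximum(np.sin(2 * np.pi * 3 * t), 0.0)
f = tapping_features(sig)
print(f)
```

    Feature vectors like this one, computed per patient and per task, are what feature selection and the support vector machine classifiers would operate on.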

    Pruning a Multilayer Perceptron Using Analysis of the Variance of Parameter Sensitivity

    Determining the structure of a neural network for modeling a system remains the core of the problem. Within this framework, we propose a pruning algorithm for the network based on the analysis of the variance of the sensitivity of all the parameters of the network. This algorithm is tested on two simulation examples and its performance is compared with three other pruning algorithms from the literature. (6 pages.)
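    Sensitivity-based pruning can be sketched on a single linear layer: score each weight by the variance, over the data, of the output perturbation caused by removing it, then drop the least sensitive weights. This is a toy stand-in for the paper's analysis-of-variance criterion on a full multilayer perceptron; the keep fraction and data are invented for illustration.

```python
import numpy as np

def sensitivity_prune(W, X, keep_frac=0.5):
    """Rank each weight of a linear layer y = W @ x by the variance over
    the dataset X of the output change caused by zeroing it, and prune
    the lowest-ranked weights."""
    n_out, n_in = W.shape
    scores = np.empty_like(W)
    for i in range(n_out):
        for j in range(n_in):
            # zeroing weight (i, j) perturbs output i by W[i, j] * X[:, j]
            scores[i, j] = np.var(W[i, j] * X[:, j])
    k = int(keep_frac * W.size)
    thresh = np.sort(scores.ravel())[::-1][k - 1]  # k-th largest score
    mask = scores >= thresh
    return W * mask, mask

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))   # 200 samples, 8 inputs
W = rng.normal(size=(4, 8))     # 4 outputs
W_pruned, mask = sensitivity_prune(W, X)
print("weights kept:", int(mask.sum()), "of", W.size)
```

    A full version would apply this layer by layer, retrain after each pruning round, and use the variance analysis to decide how many parameters to remove rather than a fixed keep fraction.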
