Variational inference for robust sequential learning of multilayered perceptron neural network
We derive a new sequential learning algorithm for a Multilayered Perceptron (MLP) neural network that is robust to outliers. The presence of outliers in data causes the model to fail, especially when learning is performed on-line or in real time. The Extended Kalman filter robust to outliers (EKF-OR) is a probabilistic generative model in which the measurement noise covariance is modeled as a stochastic process over the set of symmetric positive-definite matrices, with an inverse Wishart prior. All expressions are derived from first principles within the Bayesian framework. The analytical intractability of the Bayes update step is resolved using Variational Inference (VI), in which the solution is sought within a family of distributions of suitable functional form. Experimental results obtained on real-world time series show that an MLP network trained with the proposed algorithm achieves lower error on the test set than the conventional EKF learning algorithm, with an average improvement of 7%.
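The mechanism described above can be sketched in a few lines. This is a hypothetical 1-D illustration of the idea, not the authors' EKF-OR algorithm: the measurement noise variance R gets a conjugate inverse-gamma prior (the one-dimensional analogue of the inverse Wishart), and a few variational-style fixed-point iterations alternate between the Kalman update and re-estimating E[R] from the residual. A large residual inflates E[R], which shrinks the Kalman gain, so outlying measurements are automatically down-weighted. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def robust_update(x, P, y, H, a0=3.0, b0=1.0, iters=10):
    """One outlier-robust measurement update for a scalar measurement y = Hx + noise.

    Illustrative sketch only: a0, b0 parameterize an inverse-gamma prior on the
    measurement noise variance R; E_R is its posterior mean, refined by fixed-point
    iteration in the spirit of variational inference.
    """
    E_R = b0 / (a0 - 1.0)                      # prior mean of the noise variance
    for _ in range(iters):
        S = (H @ P @ H.T).item() + E_R         # innovation variance with current E[R]
        K = P @ H.T / S                        # Kalman gain, shape (n, 1)
        x_new = x + K.flatten() * (y - H @ x)  # posterior state mean
        P_new = P - np.outer(K, H @ P)         # posterior state covariance
        # update the inverse-gamma posterior over R from the expected squared residual
        r2 = ((y - H @ x_new) ** 2).item() + (H @ P_new @ H.T).item()
        E_R = (b0 + 0.5 * r2) / (a0 + 0.5 - 1.0)
    return x_new, P_new

x = np.zeros(2)                     # prior state estimate
P = np.eye(2)                       # prior state covariance
H = np.array([[1.0, 0.0]])          # measurement matrix
x1, P1 = robust_update(x, P, np.array([0.5]), H)   # ordinary measurement
x2, P2 = robust_update(x, P, np.array([50.0]), H)  # gross outlier
```

After the two calls, the ordinary measurement pulls the estimate toward 0.5, while the gross outlier barely moves it, because its residual drives E[R] up and the gain down.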
Robust artificial neural networks and outlier detection. Technical report
Large outliers break down linear and nonlinear regression models. Robust
regression methods allow one to filter out the outliers when building a model.
By replacing the traditional least squares criterion with the least trimmed
squares criterion, in which half of the data is treated as potential outliers, one
can fit accurate regression models to strongly contaminated data.
High-breakdown methods have become very well established in linear regression,
but have only recently been applied to non-linear regression. In this
work, we examine the problem of fitting artificial neural networks to
contaminated data using the least trimmed squares criterion. We introduce a
penalized least trimmed squares criterion which prevents unnecessary removal of
valid data. Training of ANNs leads to a challenging non-smooth global
optimization problem. We compare the efficiency of several derivative-free
optimization methods in solving it, and show that our approach identifies the
outliers correctly when ANNs are used for nonlinear regression.
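The penalized least trimmed squares criterion described above can be sketched as follows. This is a minimal illustration of the loss itself, not the paper's training procedure: only the h smallest squared residuals enter the sum, and a per-trimmed-point penalty discourages removing valid data. The names h and lam are illustrative, not the paper's notation.

```python
import numpy as np

def penalized_lts_loss(residuals, h, lam=0.0):
    """Sum of the h smallest squared residuals, plus a penalty lam for each
    trimmed point (sketch of a penalized least trimmed squares criterion)."""
    r2 = np.sort(np.asarray(residuals, dtype=float) ** 2)  # order statistics of r^2
    n = r2.size
    return r2[:h].sum() + lam * (n - h)

r = np.array([0.1, -0.2, 0.05, 10.0])   # last residual comes from an outlier
full = penalized_lts_loss(r, h=4)        # ordinary least squares: dominated by 10.0
trimmed = penalized_lts_loss(r, h=3)     # outlier trimmed: only small residuals remain
```

With lam = 0 and h = n this reduces to ordinary least squares; the penalty term matters when h is also optimized, since each additional trimmed point must "pay" lam to leave the fit.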