Variational inference for robust sequential learning of multilayered perceptron neural network
We derive a new sequential learning algorithm for a Multilayered Perceptron (MLP) neural network that is robust to outliers. The presence of outliers in data causes the model to fail, especially if data processing is performed on-line or in real time. The Extended Kalman filter robust to outliers (EKF-OR) is a probabilistic generative model in which the measurement noise covariance is modeled as a stochastic process over the set of symmetric positive-definite matrices, with an inverse Wishart prior. The derivation of all expressions comes straight from first principles, within the Bayesian framework. The analytical intractability of the Bayes update step is resolved using Variational Inference (VI), in which the solution is sought within a family of distributions of a suitable functional form. Experimental results obtained using real-world time series show that an MLP network trained with the proposed algorithm achieves lower error on the test set, with an average improvement of 7% when compared directly to the conventional EKF learning algorithm.
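The abstract above does not spell out the update equations, but the conventional EKF baseline it compares against treats the MLP weights as the state vector and linearizes the network around the current weights at each step. A minimal sketch of that baseline (names such as `mlp_predict` and `ekf_step`, the weight-vector layout, and the numerical Jacobian are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def mlp_predict(w, x, n_hidden):
    """Single-hidden-layer MLP with all weights packed into one vector w
    (assumed layout: W1, b1, W2, b2)."""
    d = x.size
    W1 = w[:n_hidden * d].reshape(n_hidden, d)
    b1 = w[n_hidden * d:n_hidden * (d + 1)]
    W2 = w[n_hidden * (d + 1):n_hidden * (d + 2)]
    b2 = w[-1]
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def ekf_step(w, P, x, y, n_hidden, R=1.0, Q=1e-6, eps=1e-6):
    """One EKF weight update: linearize the network around w (numerical
    Jacobian H of the scalar output w.r.t. the weights), then apply the
    standard Kalman gain. EKF-OR would additionally infer R via VI."""
    f0 = mlp_predict(w, x, n_hidden)
    H = np.zeros_like(w)
    for i in range(w.size):
        wp = w.copy()
        wp[i] += eps
        H[i] = (mlp_predict(wp, x, n_hidden) - f0) / eps
    P = P + Q * np.eye(w.size)        # process noise on the weights
    S = H @ P @ H + R                 # innovation variance (scalar output)
    K = P @ H / S                     # Kalman gain
    w_new = w + K * (y - f0)          # weight (state) update
    P_new = P - np.outer(K, H @ P)    # covariance update
    return w_new, P_new
```

The robust variant described in the abstract replaces the fixed measurement noise R with a random covariance under an inverse Wishart prior and infers it variationally, which is what downweights outlying observations.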
Improved Heterogeneous Distance Functions
Instance-based learning techniques typically handle continuous and linear
input values well, but often do not handle nominal input attributes
appropriately. The Value Difference Metric (VDM) was designed to find
reasonable distance values between nominal attribute values, but it largely
ignores continuous attributes, requiring discretization to map continuous
values into nominal values. This paper proposes three new heterogeneous
distance functions, called the Heterogeneous Value Difference Metric (HVDM),
the Interpolated Value Difference Metric (IVDM), and the Windowed Value
Difference Metric (WVDM). These new distance functions are designed to handle
applications with nominal attributes, continuous attributes, or both. In
experiments on 48 applications the new distance metrics achieve higher
classification accuracy on average than three previous distance functions on
those datasets that have both nominal and continuous attributes. Comment: See http://www.jair.org/ for an online appendix and other files
accompanying this article.
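The HVDM described above combines two per-attribute distances: continuous attributes use the absolute difference scaled by four standard deviations, and nominal attributes use a normalized VDM over class-conditional probabilities. A minimal sketch of that definition (the function name and the dictionary-based inputs `sigma` and `cond_prob` are illustrative assumptions):

```python
import numpy as np

def hvdm_distance(x, y, cont_idx, nom_idx, sigma, cond_prob):
    """Heterogeneous Value Difference Metric (sketch).

    cont_idx / nom_idx : indices of continuous / nominal attributes
    sigma[a]           : std. dev. of continuous attribute a (4*sigma scaling)
    cond_prob[a][v]    : vector over classes of P(class = c | attribute a = v)
    """
    total = 0.0
    for a in cont_idx:
        # continuous: absolute difference scaled by 4 standard deviations
        total += (abs(x[a] - y[a]) / (4.0 * sigma[a])) ** 2
    for a in nom_idx:
        # nominal: normalized VDM over per-class conditional probabilities
        px, py = cond_prob[a][x[a]], cond_prob[a][y[a]]
        total += float(np.sum((px - py) ** 2))
    return np.sqrt(total)
```

In practice the conditional probabilities are estimated from attribute-value/class co-occurrence counts on the training set; the IVDM and WVDM variants instead interpolate or window these probabilities over continuous values, avoiding discretization.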
Data-driven Soft Sensors in the Process Industry
In the last two decades, Soft Sensors have established themselves as a valuable alternative to traditional means for the acquisition of critical process variables, process monitoring, and other tasks related to process control. This paper discusses characteristics of process industry data which are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, such as the chemical, bioprocess, and steel industries. The focus of this work is on data-driven Soft Sensors because of their growing popularity, demonstrated usefulness, and large, though not yet fully realised, potential. The main contributions of this work are a comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques, and a discussion of some open issues in Soft Sensor development and maintenance together with their possible solutions.
Robust artificial neural networks and outlier detection. Technical report
Large outliers break down linear and nonlinear regression models. Robust
regression methods allow one to filter out the outliers when building a model.
By replacing the traditional least squares criterion with the least trimmed
squares criterion, in which half of data is treated as potential outliers, one
can fit accurate regression models to strongly contaminated data.
High-breakdown methods have become very well established in linear regression,
but have started being applied for non-linear regression only recently. In this
work, we examine the problem of fitting artificial neural networks to
contaminated data using least trimmed squares criterion. We introduce a
penalized least trimmed squares criterion which prevents unnecessary removal of
valid data. Training of ANNs leads to a challenging non-smooth global
optimization problem. We compare the efficiency of several derivative-free
optimization methods in solving it, and show that our approach identifies the
outliers correctly when ANNs are used for nonlinear regression.
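The least trimmed squares criterion mentioned above keeps only the smallest squared residuals, so up to half the data can be discarded as outliers; the penalized variant adds a cost for trimming. A minimal sketch of both criteria (the penalty form `lam * (n - h)` is an illustrative assumption, the exact penalty in the report may differ):

```python
import numpy as np

def trimmed_squares(residuals, h):
    """Least trimmed squares criterion: sum of the h smallest squared
    residuals, so the n - h largest (potential outliers) are ignored."""
    r2 = np.sort(np.square(np.asarray(residuals, dtype=float)))
    return float(np.sum(r2[:h]))

def penalized_lts(residuals, h, lam):
    """Illustrative penalized variant: the LTS sum plus a per-trimmed-point
    penalty lam, which discourages removing points that fit the model."""
    n = np.asarray(residuals).size
    return trimmed_squares(residuals, h) + lam * (n - h)
```

Because the criterion depends on a sorted subset of residuals, it is non-smooth in the network weights, which is why the authors turn to derivative-free global optimization methods for training.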
A survey of outlier detection methodologies
Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences; it can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of Computer Science and Statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
Noise Reduction in Images: Some Recent Edge-Preserving Methods
We introduce some recent and very recent smoothing methods which focus on the preservation of boundaries, spikes, and canyons in the presence of noise. We try to point out the basic principles they have in common; the most important one is the robustness aspect. It is reflected by the use of 'cup functions' in the statistical loss functions instead of squares; such cup functions were introduced early in robust statistics to downweight outliers. Basically, they are variants of truncated squares. We discuss all the methods in the common framework of 'energy functions', i.e. we associate with (most of) the algorithms a 'loss function' in such a fashion that the output of the algorithm, the 'estimate', is a global or local minimum of this loss function. The third aspect we pursue is the correspondence between loss functions, their local minima, and nonlinear filters. We argue that the nonlinear filters can be interpreted as variants of gradient descent on the loss functions. This way we can show that some (robust) M-estimators and some nonlinear filters produce almost the same result.
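The 'cup functions' the abstract describes are, in their simplest form, truncated squares: quadratic for small residuals and flat beyond a cutoff, so large outliers contribute a bounded, constant cost. A minimal sketch (the function name `cup_loss` and the cutoff parameter `c` are illustrative):

```python
import numpy as np

def cup_loss(r, c):
    """Truncated square ('cup function'): quadratic for |r| <= c,
    flat at c**2 beyond the cutoff, so outliers stop pulling on the fit."""
    return np.minimum(np.square(r), c * c)
```

Summing `cup_loss` over the residuals of a smoother gives exactly the kind of 'energy function' the abstract refers to: minimizing it instead of a sum of plain squares is what makes the corresponding M-estimators and nonlinear filters edge-preserving.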