Constructive function approximation: theory and practice
In this paper we study the theoretical limits of finite constructive convex approximations of a given function in a Hilbert space using elements taken from a reduced subset. We also investigate the trade-off between the global error and the partial error during the iterations of the solution. These results are then specialized to constructive function approximation using sigmoidal neural networks. The emphasis then shifts to the implementation issues associated with achieving given approximation errors when using a finite number of nodes and a finite data set for training.
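The constructive step can be sketched as a greedy convex update: at iteration n, the current approximant f_n is blended with one dictionary element chosen to minimize the global error. The random sigmoidal dictionary, the 2/(n+1) step size, and the target function below are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch of greedy constructive convex approximation with a
# sigmoidal dictionary. All specific choices here (target, dictionary,
# step size) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
target = 0.5 * (1.0 + np.sin(3.0 * x))        # target function on [-1, 1]

# Random sigmoidal dictionary: g(x) = sigmoid(w * x + b)
W = rng.uniform(-10.0, 10.0, size=500)
B = rng.uniform(-10.0, 10.0, size=500)
dictionary = 1.0 / (1.0 + np.exp(-(np.outer(x, W) + B)))   # shape (200, 500)

f_n = np.zeros_like(x)                        # current convex approximant
for n in range(1, 31):
    alpha = 2.0 / (n + 1)                     # classical O(1/n)-type step size
    # Try blending f_n with every dictionary element and keep the best one
    candidates = (1.0 - alpha) * f_n[:, None] + alpha * dictionary
    errors = np.linalg.norm(candidates - target[:, None], axis=0)
    best = int(np.argmin(errors))
    f_n = candidates[:, best]
    print(f"n={n:2d}  L2 error={errors[best]:.4f}")
```

Each iterate stays a convex combination of finitely many sigmoids, so the number of nodes grows by at most one per step, which is the finite-node setting the abstract refers to.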
Neural networks in fault detection: a case study
We study the application of neural networks to fault detection in real vibrational data. The study is one of the first to include a large set of real vibrational data and to illustrate both the potential and the limitations of neural networks for fault detection.
A one-class classifier for identifying urban areas in remotely-sensed data
For many remote sensing applications, land cover can be determined using spectral information alone. Identifying urban areas, however, requires texture information, since these areas are not generally characterized by a unique spectral signature. We have designed a one-class classifier to discriminate between urban and non-urban data. The advantage of our classification technique is that principles of both statistical and adaptive pattern recognition are used simultaneously. This prevents new data that are completely dissimilar from the training data from being incorrectly classified, while still allowing the decision boundary to adapt and reduce classification error in overlap regions of the feature space. Results are illustrated using a LANDSAT scene of the city of Albuquerque.
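As a rough illustration of combining a statistical model with an adaptive boundary, the sketch below fits a Gaussian (Mahalanobis-distance) model to urban training features and then nudges its distance threshold on labelled overlap samples. The Gaussian model, the 95th-percentile initial threshold, and the update rule are assumptions for illustration; they are not the paper's classifier.

```python
# Sketch of a one-class classifier: a statistical distance model plus a
# simple adaptive threshold. The model and update rule are assumptions.
import numpy as np

class OneClassMahalanobis:
    def fit(self, X_urban):
        """Estimate urban-class statistics from training texture features."""
        self.mean = X_urban.mean(axis=0)
        cov = np.cov(X_urban, rowvar=False)
        self.cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        # Initial boundary: a fixed quantile of the training distances
        self.threshold = np.quantile(self._distance(X_urban), 0.95)
        return self

    def _distance(self, X):
        diff = X - self.mean
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.cov_inv, diff))

    def adapt(self, X, y, lr=0.05, epochs=20):
        """Nudge the threshold to reduce errors on labelled overlap samples
        (y = 1 for urban, 0 for non-urban)."""
        d = self._distance(X)
        for _ in range(epochs):
            pred = (d <= self.threshold).astype(int)
            # Grow on false negatives, shrink on false positives
            self.threshold += lr * np.mean(y - pred)
        return self

    def predict(self, X):
        """1 = urban, 0 = everything else, including dissimilar new data."""
        return (self._distance(X) <= self.threshold).astype(int)
```

Because the decision region is a bounded distance contour around the urban class, data far from the training distribution fall outside it by construction, while the threshold adaptation handles the overlap region.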
An adaptive algorithm for modifying hyperellipsoidal decision surfaces
The LVQ algorithm is a common method that allows a set of reference vectors for a distance classifier to adapt to a given training set. We have developed a similar learning algorithm, LVQ-MM, which manipulates hyperellipsoidal cluster boundaries rather than reference vectors. Regions of the input feature space are first enclosed by ellipsoidal decision boundaries, and these boundaries are then iteratively modified to reduce classification error. Results obtained by classifying the Iris data set are provided.
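The idea of adapting ellipsoidal boundaries in an LVQ-like loop can be sketched as follows: each class keeps a centre and per-feature radii, a sample is assigned to the nearest ellipsoid in normalised distance, and the winning ellipsoid is pulled toward correctly classified samples and pushed away (and shrunk) otherwise. The update rules and learning rate below are simplified stand-ins, not the LVQ-MM rules; only the use of the Iris data follows the abstract.

```python
# Sketch of LVQ-style adaptation of hyperellipsoidal boundaries on Iris.
# One ellipsoid per class (centre + per-feature radii); the updates are
# simplified illustrations, not the LVQ-MM algorithm.
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Initialise one ellipsoid per class from the class mean and spread
centres = np.array([X[y == k].mean(axis=0) for k in classes])
radii = np.array([X[y == k].std(axis=0) + 1e-3 for k in classes])

def ellipsoid_distance(x):
    """Normalised distance of x to every class ellipsoid."""
    return np.sqrt((((x - centres) / radii) ** 2).sum(axis=1))

lr = 0.02
rng = np.random.default_rng(0)
for epoch in range(50):
    for i in rng.permutation(len(X)):
        x, label = X[i], y[i]
        winner = int(np.argmin(ellipsoid_distance(x)))
        if winner == label:
            # Correct: pull the winning ellipsoid's centre toward the sample
            centres[winner] += lr * (x - centres[winner])
        else:
            # Incorrect: push the wrong ellipsoid away and shrink it slightly
            centres[winner] -= lr * (x - centres[winner])
            radii[winner] *= 1.0 - lr

pred = np.array([np.argmin(ellipsoid_distance(x)) for x in X])
print("training accuracy:", (pred == y).mean())
```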