1,710 research outputs found

    Function Approximation With Multilayered Perceptrons Using L1 Criterion

    The least squares error or L2 criterion approach has been commonly used for functional approximation and generalization in the error backpropagation algorithm. The purpose of this study is to present a least absolute error criterion for sigmoidal backpropagation rather than the usual least squares error criterion. We present the structure of the error function to be minimized and its derivatives with respect to the weights to be updated. The focus of the study is on the single-hidden-layer multilayer perceptron (MLP), but the implementation may be extended to models with two or more hidden layers.
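    The absolute error criterion changes only the output-layer delta of standard backpropagation: the derivative of |e| is sign(e) rather than the residual e itself, which is what the L2 criterion uses. A minimal sketch, assuming a single hidden layer of sigmoid units, a linear output, and no bias terms (all names are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l1_backprop_step(x, t, W1, W2, lr=0.01):
    # Forward pass: sigmoid hidden layer, linear output layer.
    h = sigmoid(W1 @ x)          # hidden activations
    y = W2 @ h                   # network output
    e = y - t                    # error vector

    # L1 loss sum(|e|): its derivative w.r.t. y is sign(e),
    # replacing the residual (y - t) used by the L2 criterion.
    delta_out = np.sign(e)

    # Backpropagate through the sigmoid hidden layer.
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

    # Gradient-descent weight updates.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2
```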

    Flood. An open source neural networks C++ library

    The multilayer perceptron is an important model of neural network, and much of the literature in the field refers to that model. The multilayer perceptron has found a wide range of applications, including function regression, pattern recognition, time series prediction, optimal control, optimal shape design, and inverse problems. All these problems can be formulated as variational problems. The network can learn either from databases or from mathematical models. Flood is a comprehensive class library which implements the multilayer perceptron in the C++ programming language. It has been developed following the theories of functional analysis and the calculus of variations. In this regard, this software tool can be used for the whole range of applications mentioned above. Flood also provides a workaround for the solution of function optimization problems.
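    Flood itself is a C++ library; the sketch below is not its API, but a minimal Python illustration of the variational formulation the abstract describes: the network defines a parametrized function y(x; w), and learning means minimizing an objective functional F[y] over the weights. The target function, network size, and optimizer here are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

def mlp(w, x, n_hidden=10):
    # Unpack a flat weight vector into one tanh hidden layer + linear output.
    W1 = w[:n_hidden].reshape(n_hidden, 1)
    b1 = w[n_hidden:2 * n_hidden]
    W2 = w[2 * n_hidden:3 * n_hidden]
    return np.tanh(x[:, None] @ W1.T + b1) @ W2

def objective_functional(w, x, t):
    # F[y] = integral of (y - t)^2 over the input domain,
    # approximated here by a mean over sample points.
    return np.mean((mlp(w, x) - t) ** 2)

x = np.linspace(-1.0, 1.0, 50)
t = np.sin(np.pi * x)                     # data the functional fits
w0 = np.random.randn(3 * 10) * 0.5        # random initial weights
res = minimize(objective_functional, w0, args=(x, t), method="BFGS")
```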

    Neural networks in geophysical applications

    Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with arbitrary precision. Hence, they may yield important contributions to finding solutions to a variety of geophysical applications. However, knowledge of many methods and techniques recently developed to increase the performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance, i.e., generalization, and the automatic estimation of network size and architecture.
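    Early stopping on a held-out validation set is one common example of the generalization techniques alluded to; the minimal, library-agnostic sketch below assumes caller-supplied train_step and val_error callables, which are not from the paper:

```python
import numpy as np

def train_with_early_stopping(train_step, val_error,
                              max_epochs=1000, patience=20):
    # Stop when validation error has not improved for `patience` epochs.
    best_err, best_epoch, waited = np.inf, 0, 0
    for epoch in range(max_epochs):
        train_step()              # one pass over the training set
        err = val_error()         # error on held-out validation data
        if err < best_err:
            best_err, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch, best_err
```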

    Analysis of Neural Networks in Terms of Domain Functions

    Despite their success story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but possibly even more as a mysterious "black box". Although much research has already been done to "open the box", there is a notable gap in the published work on the analysis of neural networks. So far, mainly sensitivity analysis and rule extraction methods have been used to analyze neural networks. However, these can only be applied in a limited subset of the problem domains where neural network solutions are encountered. In this paper we propose a more widely applicable method which, for a given problem domain, involves identifying basic functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those basic functions. This provides a comprehensible description of the neural network's function and, depending on the chosen base functions, it may also provide insight into the neural network's inner "reasoning". It could further be used to optimize neural network systems. An analysis in terms of base functions may even make clear how to (re)construct a superior system using those base functions, thus using the neural network as a construction advisor.
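    As a concrete illustration of the idea, a trained network's input-output map can be projected onto a set of familiar base functions by ordinary least squares; the coefficients then say how much of each base function is present in the network's behaviour. The network, sample points, and base functions below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def describe_in_base_functions(net, xs, bases):
    # Evaluate the network and each base function on sample inputs.
    y = np.array([net(x) for x in xs])
    Phi = np.column_stack([[b(x) for x in xs] for b in bases])
    # Least-squares coefficients of the network's map in the chosen basis.
    coeffs, residual, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coeffs, residual

# Example: a toy "network" described as mostly x plus a little x^2.
net = lambda x: 0.8 * x + 0.1 * x**2
xs = np.linspace(-2.0, 2.0, 100)
bases = [lambda x: x, lambda x: x**2, lambda x: np.sin(x)]
coeffs, _ = describe_in_base_functions(net, xs, bases)
```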

    Analysis of artificial neural networks in the diagnosing of breast cancer using fine needle aspirates

    This thesis examines how artificial neural networks can be used to classify samples from a fine needle aspirate dataset. The dataset is composed of various attributes, each of which is used to decide whether a sample is benign or malignant. To automate the process of analyzing the attributes and arriving at a correct prediction, a neural network was implemented. First, a feedforward neural network with one hidden layer and a sigmoid activation function was trained on the dataset using backpropagation. After training, 10-fold cross-validation was performed to determine which model had the lowest error scores and would perform best on the data. The data was passed through the model and the trained network classified the samples as either benign or malignant. Once classified, the overall accuracy, specificity, and sensitivity were analyzed to measure performance. Three other classifiers were compared with the feedforward network: a NEAT neural network, a support vector machine, and a radial basis function neural network.
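    A minimal sketch of the evaluation pipeline the thesis describes, using scikit-learn rather than the thesis's own code: a single-hidden-layer feedforward network with sigmoid ("logistic") activations, 10-fold cross-validation, and accuracy, sensitivity, and specificity computed from the confusion matrix. X and y stand in for the FNA features and benign(0)/malignant(1) labels:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

def evaluate(X, y):
    # One hidden layer of sigmoid units, trained by backpropagation.
    clf = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                        max_iter=2000)
    # 10-fold cross-validated predictions over the whole dataset.
    pred = cross_val_predict(clf, X, y, cv=10)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # malignant cases correctly flagged
    specificity = tn / (tn + fp)   # benign cases correctly cleared
    return accuracy, sensitivity, specificity
```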

    Training Methods for Shunting Inhibitory Artificial Neural Networks

    This project investigates a new class of high-order neural networks called shunting inhibitory artificial neural networks (SIANNs) and their training methods. SIANNs are biologically inspired neural networks whose dynamics are governed by a set of coupled nonlinear differential equations. The interactions among neurons are mediated via a nonlinear mechanism called shunting inhibition, which allows the neurons to operate as adaptive nonlinear filters. The project's main objective is to devise training methods, based on error-backpropagation-type algorithms, which would allow SIANNs to be trained to perform feature extraction for classification and nonlinear regression tasks. The training algorithms developed will simplify the task of designing complex, powerful neural networks for applications in pattern recognition, image processing, signal processing, machine vision, and control. The five training methods adapted in this project for SIANNs are error backpropagation based on gradient descent (GD), gradient descent with variable learning rate (GDV), gradient descent with momentum (GDM), gradient descent with direct solution step (GDD), and the ALOPEX algorithm. SIANNs and these training methods are implemented in MATLAB. Testing on several benchmarks, including the parity problems, classification of 2-D patterns, and function approximation, shows that SIANNs trained using these methods yield comparable or better performance than multilayer perceptrons (MLPs).
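    The shunting mechanism can be illustrated with the steady state of a neuron's differential equation: the excitatory drive is divided by an input-dependent inhibitory term, which is what lets each neuron act as an adaptive nonlinear filter rather than a plain weighted sum. A simplified sketch follows; the exact SIANN equations in the thesis may differ, and the parameter names here (a, b, w, c) are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def shunting_neuron(inputs, w, c, a, b):
    # Excitatory drive: ordinary weighted sum plus bias.
    excitation = w @ inputs + b
    # Shunting (divisive) inhibition; a > 0 and c >= 0 keep it positive.
    inhibition = a + c @ sigmoid(inputs)
    # Steady state of  dx/dt = excitation - inhibition * x.
    return excitation / inhibition
```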