54 research outputs found

    Distributed Fault Tolerance in Optimal Interpolative Nets

    Get PDF
    The recursive training algorithm for the optimal interpolative (OI) classification network is extended to include distributed fault tolerance. The conventional OI Net learning algorithm leads to network weights that are nonoptimally distributed (in the sense of fault tolerance). Fault tolerance is becoming an increasingly important factor in hardware implementations of neural networks, but it is often taken for granted rather than being explicitly accounted for in the architecture or learning algorithm. In addition, when fault tolerance is considered, it is often accounted for using an unrealistic fault model (e.g., neurons that are stuck on or off rather than small weight perturbations). Realistic fault tolerance can be achieved through a smooth distribution of weights, resulting in low weight salience and distributed computation. Results of trained OI Nets on the Iris classification problem show that fault tolerance can be increased with the algorithm presented in this paper.
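The small-weight-perturbation fault model the abstract argues for can be illustrated with a toy comparison (the vectors and the salience measure below are illustrative assumptions, not taken from the paper): two weight vectors that compute the same output can differ sharply in how much a single perturbed weight moves that output, which is what a smooth weight distribution with low salience buys.

```python
import numpy as np

# Toy input and two weight vectors with the SAME dot-product output (4.0).
# These values are illustrative, not from the paper.
x = np.ones(4)
w_concentrated = np.array([4.0, 0.0, 0.0, 0.0])  # all computation in one weight
w_distributed = np.array([1.0, 1.0, 1.0, 1.0])   # computation spread evenly

def worst_case_salience(w, x, eps=0.1):
    """Largest output change when any single weight is perturbed by eps*|w_i|.

    A simple stand-in for 'weight salience': under a small relative
    perturbation fault model, the output error contributed by weight i
    is eps * w_i * x_i, and the worst single fault is the max over i.
    """
    return np.max(np.abs(eps * w * x))

print(worst_case_salience(w_concentrated, x))  # 0.4: one fault moves the output a lot
print(worst_case_salience(w_distributed, x))   # 0.1: faults are distributed
```

Both networks compute the same function, but the distributed weights cut the worst-case single-fault error by 4x, matching the abstract's point that low salience, not identical outputs, is what makes a hardware implementation robust.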

    Fault Tolerant Training for Optimal Interpolative Nets

    Get PDF
    The optimal interpolative (OI) classification network is extended to include fault tolerance and make the network more robust to the loss of a neuron. The OI net has the characteristic that the training data are fit with no more neurons than necessary. Fault tolerance further reduces the number of neurons generated during the learning procedure while maintaining the generalization capabilities of the network. The learning algorithm for the fault-tolerant OI net is presented in a recursive format, allowing for relatively short training times. A simulated fault-tolerant OI net is tested on a navigation satellite selection problem.

    A Fault-Tolerant Optimal Interpolative Net

    Get PDF
    The optimal interpolative (OI) classification network is extended to include fault tolerance and make the network more robust to the loss of a neuron. The OI Net has the characteristic that the training data are fit with no more neurons than necessary. Fault tolerance further reduces the number of neurons generated during the learning procedure while maintaining the generalization capabilities of the network. The learning algorithm for the fault-tolerant OI Net is presented in a recursive format, allowing for relatively short training times. A simulated fault-tolerant OI Net is tested on a navigation satellite selection problem.

    Navigation Satellite Selection Using Neural Networks

    Get PDF
    The application of neural networks to optimal satellite subset selection for navigation use is discussed. The methods presented in this paper are general enough to be applicable regardless of how many satellite signals are being processed by the receiver. The optimal satellite subset is chosen by minimizing a quantity known as Geometric Dilution of Precision (GDOP), which is given by the trace of the inverse of the measurement matrix. An artificial neural network learns the functional relationships between the entries of a measurement matrix and the eigenvalues of its inverse, and thus generates GDOP without inverting a matrix. Simulation results are given, and the computational benefit of neural network-based satellite selection is discussed.
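The quantity the network is trained to estimate can be sketched directly (a minimal sketch, with illustrative satellite geometries that are not from the paper): in the usual GPS formulation, the measurement matrix H has one row per satellite, consisting of the unit line-of-sight vector plus a 1 for the receiver clock-bias term, and GDOP is conventionally the square root of the trace of (H^T H)^-1.

```python
import numpy as np

def gdop(los):
    """GDOP from unit line-of-sight vectors, one row per satellite.

    H's last column of ones models the receiver clock-bias unknown.
    GDOP = sqrt(trace((H^T H)^-1)) in the conventional formulation.
    """
    H = np.hstack([los, np.ones((len(los), 1))])
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

def unit_rows(v):
    """Normalize each row to a unit vector."""
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Illustrative geometries: four well-spread satellites vs. four clustered ones.
spread = unit_rows(np.array([[1., 0., 0.], [-1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]))
clustered = unit_rows(np.array([[0., 0., 1.], [.1, 0., 1.], [0., .1, 1.], [.07, .07, 1.]]))

print(gdop(spread), gdop(clustered))  # spread-out geometry gives a lower (better) GDOP
```

The matrix inversion here is exactly the per-subset cost the paper's neural network avoids: selecting the best subset by brute force requires one inversion per candidate subset, whereas the trained network maps matrix entries to GDOP directly.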

    Training Radial Basis Neural Networks with the Extended Kalman Filter

    Get PDF
    Radial basis function (RBF) neural networks provide attractive possibilities for solving signal processing and pattern classification problems. Several algorithms have been proposed for choosing the RBF prototypes and training the network. The selection of the RBF prototypes and the network weights can be viewed as a system identification problem. As such, this paper proposes the use of the extended Kalman filter for the learning procedure. After the user chooses how many prototypes to include in the network, the Kalman filter simultaneously solves for the prototype vectors and the weight matrix. A decoupled extended Kalman filter is then proposed in order to decrease the computational effort of the training algorithm. Simulation results are presented on reformulated radial basis neural networks as applied to the Iris classification problem. It is shown that the use of the Kalman filter results in better learning than conventional RBF networks and faster learning than gradient descent.
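The system-identification view can be sketched as follows (a minimal single-output sketch of the full, not the decoupled, EKF; the toy target function, network size, and all tuning values are assumptions, not the paper's): stack the prototype centers and output weights into one state vector, treat each training pair as a noisy measurement of the network output, and refine the state with standard EKF updates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_proto, dim = 2, 1  # illustrative network size

def rbf_out(theta, x):
    """Single-output RBF net: theta packs [centers (n_proto*dim), weights (n_proto)]."""
    c = theta[:n_proto * dim].reshape(n_proto, dim)
    w = theta[n_proto * dim:]
    phi = np.exp(-np.sum((x - c) ** 2, axis=1))  # Gaussian basis responses
    return w @ phi

def jac(theta, x, h=1e-6):
    """Numeric Jacobian of the output w.r.t. all parameters (EKF measurement matrix)."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h
        tm[i] -= h
        g[i] = (rbf_out(tp, x) - rbf_out(tm, x)) / (2 * h)
    return g

target = lambda x: np.exp(-np.sum((x - 1.0) ** 2))  # toy target (an RBF itself)
grid = np.linspace(-2, 3, 50).reshape(-1, 1)
mse = lambda th: np.mean([(rbf_out(th, g.reshape(1, -1)) - target(g.reshape(1, -1))) ** 2
                          for g in grid])

theta = rng.normal(scale=0.5, size=n_proto * dim + n_proto)  # random initial parameters
mse_before = mse(theta)
P = 10.0 * np.eye(theta.size)  # parameter covariance (large: uncertain initial guess)
R = 0.01                       # assumed measurement noise variance

for _ in range(300):
    x = rng.uniform(-2, 3, size=(1, dim))
    y = target(x)
    Hj = jac(theta, x)
    S = Hj @ P @ Hj + R               # innovation variance (scalar output)
    K = P @ Hj / S                    # Kalman gain
    theta = theta + K * (y - rbf_out(theta, x))
    P = P - np.outer(K, Hj @ P)       # covariance update

mse_after = mse(theta)
print(mse_before, mse_after)
```

The decoupled variant the paper proposes reduces cost by partitioning the state (e.g., per-prototype blocks) so each update inverts only small blocks of P instead of the full covariance.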

    Identification of control chart patterns using neural networks

    Get PDF
    To produce products with consistent quality, manufacturing processes need to be closely monitored for any deviations. Proper analysis of the control charts used to determine the state of a process requires not only a thorough knowledge and understanding of the underlying distribution theories associated with control charts, but also the experience of an expert in decision making. The present work proposes a modified backpropagation neural network methodology to identify and interpret the various patterns of variation that can occur in a manufacturing process. Control charts, primarily in the form of the X-bar chart, are widely used to identify situations when control actions are needed in manufacturing systems. Various types of patterns are observed in control charts, and identification of these control chart patterns (CCPs) can provide clues to potential quality problems in the manufacturing process. Each type of control chart pattern has its own geometric shape, and various related features can represent this shape. This project formulates Shewhart mean (X-bar) and range (R) control charts for diagnosis and interpretation by artificial neural networks. Neural networks are trained to discriminate between samples from probability distributions considered within control limits and those which have shifted in both location and variance. Neural networks are also trained to recognize samples and predict future points from processes which exhibit long-term or cyclical drift. The advantages and disadvantages of neural control charts compared to traditional statistical process control are discussed. In processes, the causes of variation may be categorized as chance (unassignable) causes and special (assignable) causes. Variations due to chance causes are inevitable and difficult to detect and identify. On the other hand, variations due to special causes prevent the process from being stable and predictable. Such variations should be detected effectively and eliminated from the process by taking the necessary corrective actions, to maintain the process in control and improve the quality of the products as well. In this study, a multilayered neural network trained with a backpropagation algorithm was applied to pattern recognition on control charts. The neural network was tested on a set of generated data.
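The generated training data the abstract mentions can be sketched as follows (a hedged sketch: the pattern classes match those named in the abstract — in-control, mean shift, trend/drift, cycle — but the window length, shift magnitude, slope, and cycle parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 30  # points per chart window (illustrative)

def in_control():
    """Chance-cause variation only: i.i.d. samples around the target mean."""
    return rng.normal(0, 1, N)

def mean_shift(d=2.0):
    """Special-cause shift in location midway through the window."""
    s = rng.normal(0, 1, N)
    s[N // 2:] += d
    return s

def trend(k=0.1):
    """Long-term drift: a linear slope added to the noise."""
    return rng.normal(0, 1, N) + k * np.arange(N)

def cycle(a=2.0, T=10):
    """Cyclical drift: a sinusoid of amplitude a and period T."""
    return rng.normal(0, 1, N) + a * np.sin(2 * np.pi * np.arange(N) / T)

# One labeled window per class, the raw form a backpropagation
# classifier would be trained on.
patterns = {0: in_control(), 1: mean_shift(), 2: trend(), 3: cycle()}
```

Feeding such labeled windows (or geometric features extracted from them) to a backpropagation network is the standard setup for CCP recognition experiments of the kind the abstract describes.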