38,147 research outputs found

    Combining diverse neural nets

    An appropriate use of neural computing techniques is to apply them to problems such as condition monitoring, fault diagnosis, control and sensing, where conventional solutions can be hard to obtain. However, when neural computing techniques are used, it is important that they are employed so as to maximise their performance and improve their reliability. Their performance is typically assessed in terms of their ability to generalise to a previously unseen test set, although unless the training set is very carefully chosen, 100% accuracy is rarely achieved. Improved performance can result when sets of neural nets are combined in ensembles, and ensembles can be viewed as an example of the reliability-through-redundancy approach that is recommended for conventional software and hardware in safety-critical or safety-related applications. Although there has been recent interest in the use of neural net ensembles, such techniques have yet to be applied to the tasks of condition monitoring and fault diagnosis. In this paper, we focus on the benefits of techniques which promote diversity amongst the members of an ensemble, so that there is a minimum number of coincident failures. The concept of ensemble diversity is considered in some detail, and a hierarchy of four levels of diversity is presented. This hierarchy is then used in the description of the application of ensemble-based techniques to the case study of fault diagnosis of a diesel engine.
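    As a minimal sketch of the coincident-failure count that motivates this notion of diversity (illustrative only; the member predictions and labels below are invented, and the paper itself works with diesel-engine fault data):

    import numpy as np

    def coincident_failures(predictions, y_true):
        """Histogram of coincident failures across an ensemble.

        predictions: (n_members, n_cases) array of predicted class labels
        y_true:      (n_cases,) array of true labels
        Returns h where h[k] = number of test cases mis-classified by exactly
        k members at once (h[0] counts cases every member gets right).
        """
        errors = predictions != y_true            # boolean error matrix
        failures_per_case = errors.sum(axis=0)    # how many members fail together
        n_members = predictions.shape[0]
        return np.bincount(failures_per_case, minlength=n_members + 1)

    # Three hypothetical diagnostic nets on five test cases
    preds = np.array([[0, 1, 1, 0, 1],
                      [0, 1, 0, 0, 1],
                      [1, 1, 1, 0, 0]])
    truth = np.array([0, 1, 1, 0, 1])
    print(coincident_failures(preds, truth))      # [2 3 0 0]: no case fails on more than one net

    A diverse ensemble pushes the mass of this histogram towards the left, so that a simple majority vote can still recover the correct diagnosis even when individual nets fail.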

    Using fuzzy logic to integrate neural networks and knowledge-based systems

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about the neural networks and defines its interaction with them through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between the low-level data-processing mechanisms of neural nets and artificial intelligence systems.
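    As a toy illustration of the integration idea (not the author's architecture; the rule set, membership functions, and thresholds below are assumptions), fuzzy rules can map a neural net's output confidence and the degree of support from symbolic rules onto a control decision:

    # Illustrative sketch only: fuzzy rules that interpret a neural net's output
    # confidence together with symbolic rule support and choose a control action.

    def mu_low(x):        # membership in "low confidence"
        return max(0.0, min(1.0, (0.5 - x) / 0.5))

    def mu_high(x):       # membership in "high confidence"
        return max(0.0, min(1.0, (x - 0.5) / 0.5))

    def integrate(net_confidence, rule_support):
        """Combine net confidence with symbolic rule support, using min as the
        fuzzy AND and max to aggregate rules that fire for the same action."""
        accept = min(mu_high(net_confidence), mu_high(rule_support))
        reject = min(mu_low(net_confidence),  mu_low(rule_support))
        defer  = max(min(mu_high(net_confidence), mu_low(rule_support)),
                     min(mu_low(net_confidence),  mu_high(rule_support)))
        return max(("accept", accept), ("defer", defer), ("reject", reject),
                   key=lambda pair: pair[1])

    print(integrate(0.9, 0.8))   # ('accept', 0.6)
    print(integrate(0.9, 0.2))   # ('defer', 0.6) -> net and rules disagree, ask for more data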

    Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks

    Effective training of deep neural networks suffers from two main issues. The first is that the parameter spaces of these models exhibit pathological curvature. Recent methods address this problem by using adaptive preconditioning for Stochastic Gradient Descent (SGD); these methods improve convergence by adapting to the local geometry of the parameter space. A second issue is overfitting, which is typically addressed by early stopping. However, recent work has demonstrated that Bayesian model averaging mitigates this problem. The posterior can be sampled using Stochastic Gradient Langevin Dynamics (SGLD), but the rapidly changing curvature renders default SGLD methods inefficient. Here, we propose combining adaptive preconditioners with SGLD. In support of this idea, we give theoretical properties on asymptotic convergence and predictive risk. We also provide empirical results for Logistic Regression, Feedforward Neural Nets, and Convolutional Neural Nets, demonstrating that our preconditioned SGLD method gives state-of-the-art performance on these models.
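    A rough sketch of one update of this kind, pairing an RMSprop-style diagonal preconditioner with Langevin noise (the paper's small correction term is dropped, and the step size, decay, and damping values are placeholders rather than recommended settings):

    import numpy as np

    def psgld_step(theta, grad_logpost, v, eps=1e-3, alpha=0.99, lam=1e-5, rng=None):
        """One preconditioned SGLD step with an RMSprop-style diagonal preconditioner.

        theta:        current parameters (1-D array)
        grad_logpost: minibatch estimate of the gradient of the log posterior
        v:            running average of squared gradients (same shape as theta)
        """
        if rng is None:
            rng = np.random.default_rng()
        v = alpha * v + (1.0 - alpha) * grad_logpost ** 2      # track local curvature
        precond = 1.0 / (lam + np.sqrt(v))                     # diagonal preconditioner G
        noise = rng.normal(size=theta.shape) * np.sqrt(eps * precond)
        theta = theta + 0.5 * eps * precond * grad_logpost + noise   # Langevin step, no Metropolis correction
        return theta, v

    Averaging predictions over the parameters visited after burn-in gives the Bayesian model average that the abstract appeals to as a guard against overfitting.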

    Public channel cryptography by synchronization of neural networks and chaotic maps

    Two different kinds of synchronization have been applied to cryptography: synchronization of chaotic maps by one common external signal, and synchronization of neural networks by mutual learning. By combining these two mechanisms, where the external signal to the chaotic maps is synchronized by the nets, we construct a hybrid network which allows the secure generation of secret encryption keys over a public channel. Security against the attacks recently proposed by Shamir et al. is increased by chaotic synchronization.
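    The neural half of such a scheme is usually mutual learning of two tree parity machines; the sketch below shows only that component (the chaotic-map layer is omitted, and K, N, L and the Hebbian update are standard choices rather than anything taken from this particular paper):

    import numpy as np

    K, N, L = 3, 10, 4                          # hidden units, inputs per unit, weight bound
    rng = np.random.default_rng()

    def tpm_output(w, x):
        sigma = np.sign((w * x).sum(axis=1))
        sigma[sigma == 0] = -1                  # convention: treat a zero field as -1
        return sigma, int(sigma.prod())

    def hebbian_update(w, x, sigma, tau):
        """Only hidden units that agreed with the total output move; weights stay in [-L, L]."""
        for k in range(K):
            if sigma[k] == tau:
                w[k] = np.clip(w[k] + tau * x[k], -L, L)

    # Each party starts from private random weights; only inputs and output bits are public.
    wA = rng.integers(-L, L + 1, size=(K, N))
    wB = rng.integers(-L, L + 1, size=(K, N))

    steps = 0
    while not np.array_equal(wA, wB) and steps < 100_000:
        x = rng.choice([-1, 1], size=(K, N))    # public random input
        sA, tauA = tpm_output(wA, x)
        sB, tauB = tpm_output(wB, x)
        if tauA == tauB:                        # parties learn only when their output bits agree
            hebbian_update(wA, x, sA, tauA)
            hebbian_update(wB, x, sB, tauB)
        steps += 1

    print("synchronized after", steps, "exchanged bits; shared weights form the key")

    In the hybrid construction described above, the synchronized nets would then drive the common external signal that keeps the two parties' chaotic maps in step.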