1,703 research outputs found

    Neural networks in geophysical applications

    Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function to arbitrary precision. Hence, they may contribute to solving a variety of geophysical problems. However, knowledge of many methods and techniques recently developed to increase the performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size and architecture.
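
    The universal-approximation property mentioned in this abstract can be illustrated with a very small example. The sketch below is not from the paper; the target function, layer width, and learning rate are illustrative assumptions. It fits a one-hidden-layer tanh network to a smooth 1-D function by plain gradient descent.

```python
# Minimal sketch (assumptions, not the paper's method): a one-hidden-layer
# network fit to a smooth 1-D function, illustrating universal approximation.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)                        # assumed target continuous function

n_hidden = 20
W1 = rng.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(5000):
    h = np.tanh(x @ W1 + b1)         # hidden activations
    y_hat = h @ W2 + b2              # linear output layer
    err = y_hat - y
    # backpropagation for mean-squared error
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1.0 - h ** 2)
    gW1 = x.T @ gh / len(x)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float((err ** 2).mean()))
```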

    Heuristic pattern correction scheme using adaptively trained generalized regression neural networks

    In many pattern classification problems, an intelligent neural system is required which can learn newly encountered but misclassified patterns incrementally, while keeping good classification performance on the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based on both a network growing mechanism and a dual-stage shrinking mechanism. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. The redundancy introduced in the growing phase is then removed in the dual-stage network shrinking. Both long- and short-term memory models, motivated by biological studies of the brain, are considered in the network shrinking. The learning capability of the proposed scheme is investigated through extensive simulation studies.
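
    As a rough illustration of the growing phase described above (a single pass only, not the paper's full iterative scheme), the sketch below uses a GRNN as a kernel-weighted classifier and inserts the misclassified patterns of a new batch into the pattern layer. The bandwidth sigma, the toy data, and the helper names are assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): GRNN classifier
# plus one network-growing step that stores only misclassified patterns.
import numpy as np

def grnn_predict(X_query, centers, targets, sigma=0.5):
    """Kernel-weighted average of stored one-hot targets."""
    d2 = ((X_query[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    probs = w @ targets / w.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1)

def grow(centers, targets, X_new, y_new, n_classes, sigma=0.5):
    """Add the currently misclassified patterns to the pattern layer."""
    pred = grnn_predict(X_new, centers, targets, sigma)
    wrong = pred != y_new
    one_hot = np.eye(n_classes)[y_new[wrong]]
    return np.vstack([centers, X_new[wrong]]), np.vstack([targets, one_hot])

# toy usage with two Gaussian blobs (assumed data)
rng = np.random.default_rng(1)
X0 = rng.normal([0, 0], 0.4, (30, 2)); X1 = rng.normal([2, 2], 0.4, (30, 2))
centers = np.vstack([X0[:5], X1[:5]])
targets = np.vstack([np.eye(2)[[0] * 5], np.eye(2)[[1] * 5]])
X_new = np.vstack([X0[5:], X1[5:]]); y_new = np.array([0] * 25 + [1] * 25)
centers, targets = grow(centers, targets, X_new, y_new, n_classes=2)
print("stored patterns:", len(centers))
```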

    Domain Adaptation Extreme Learning Machines for Drift Compensation in E-nose Systems

    This paper addresses sensor drift, an important issue that behaves as a nonlinear dynamic property in electronic noses (E-noses), from the viewpoint of machine learning. Traditional methods for drift compensation are laborious and costly because of the frequent acquisition and labeling required to recalibrate on gas samples. Extreme learning machines (ELMs) have been confirmed to be efficient and effective learning techniques for pattern recognition and regression. However, ELMs primarily focus on supervised, semi-supervised, and unsupervised learning problems in a single domain (i.e., the source domain). To the best of our knowledge, an ELM with cross-domain learning capability has never been studied. This paper proposes a unified framework, referred to as the Domain Adaptation Extreme Learning Machine (DAELM), which learns a robust classifier by leveraging a limited number of labeled samples from the target domain for drift compensation and gas recognition in E-nose systems, without losing the computational efficiency and learning ability of the traditional ELM. Within the unified framework, two algorithms, DAELM-S and DAELM-T, are proposed, and two remarks are provided to clarify the differences among ELM, DAELM-S, and DAELM-T. Experiments on a popular sensor drift dataset with multiple batches collected by an E-nose system clearly demonstrate that the proposed DAELM significantly outperforms existing drift compensation methods without cumbersome measures, and it also brings new perspectives for ELM. Comment: 11 pages, 9 figures; to appear in IEEE Transactions on Instrumentation and Measurement.
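
    For readers unfamiliar with ELMs, the sketch below shows the standard regularized ELM that DAELM-S/DAELM-T build on; it is not the DAELM algorithm itself, and the layer size, regularization constant, and toy data are assumptions. Hidden weights are random and fixed, and only the output weights are solved in closed form.

```python
# Minimal sketch (assumed parameters; base ELM only, not DAELM): random hidden
# layer + ridge-regularized least-squares output weights.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=100, C=1.0):
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # sigmoid hidden layer
    # beta = (H^T H + I/C)^-1 H^T Y
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta).argmax(axis=1)

# toy usage on random two-class data (assumed)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                                     # one-hot targets
W, b, beta = elm_fit(X, Y, n_hidden=50, C=10.0)
print("train accuracy:", (elm_predict(X, W, b, beta) == y).mean())
```

    DAELM replaces the plain ridge objective above with one that also penalizes errors on a small set of labeled target-domain samples, which is what gives it the cross-domain capability described in the abstract.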

    Incremental learning with respect to new incoming input attributes

    Neural networks are generally exposed to a dynamic environment where training patterns or input attributes (features) are likely to be introduced into the current domain incrementally. This paper considers the situation where a new set of input attributes must be considered and added into an existing neural network. The conventional method is to discard the existing network and redesign one from scratch; this approach discards old knowledge and wastes previous effort. In order to reduce computational time, improve generalization accuracy, and enhance the intelligence of the learned models, we present the ILIA algorithms (namely ILIA1, ILIA2, ILIA3, ILIA4 and ILIA5), capable of Incremental Learning in terms of Input Attributes. Using the ILIA algorithms, when new input attributes are introduced into the original problem, the existing neural network is retained and a new sub-network is constructed and trained incrementally. The new sub-network and the old one are then merged to form a new network for the changed problem. In addition, the ILIA algorithms can decide whether the new incoming input attributes are relevant to the output and consistent with the existing input attributes, and suggest accepting or rejecting them. Experimental results show that the ILIA algorithms are efficient and effective for both classification and regression problems.
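
    One very simple way to picture the retain-and-merge idea is sketched below. This is an assumption for illustration, not the ILIA algorithms: the existing model trained on the old attributes is kept frozen, a small sub-model is fit only on the newly arrived attributes to predict the residual error, and the merged model sums the two outputs.

```python
# Minimal sketch (an assumption, not ILIA1-ILIA5): frozen old model plus a
# residual sub-model on the new attributes, merged by summing outputs.
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    Xb = np.c_[X, np.ones(len(X))]                 # add bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(X, w):
    return np.c_[X, np.ones(len(X))] @ w

# toy regression target depending on both old and new attributes (assumed)
X_old = rng.normal(size=(300, 3))
X_new = rng.normal(size=(300, 2))
y = X_old @ [1.0, -2.0, 0.5] + X_new @ [3.0, 1.5] + 0.1 * rng.normal(size=300)

w_old = fit_linear(X_old, y)                       # existing model (kept frozen)
residual = y - predict_linear(X_old, w_old)
w_sub = fit_linear(X_new, residual)                # sub-model on new attributes

merged = predict_linear(X_old, w_old) + predict_linear(X_new, w_sub)
print("MSE old only:", float(((y - predict_linear(X_old, w_old)) ** 2).mean()))
print("MSE merged  :", float(((y - merged) ** 2).mean()))
```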

    Sequential RBF function estimator: memory regression network

    Neural-network training algorithms can be divided into two categories: (1) batch mode and (2) sequential mode. In this paper, a novel online RBF network called the "Memory Regression Network (MRN)" is proposed. Different from previous approaches [2, 11], MRN involves two types of memory, Experience and Neuron, which handle short- and long-term memory respectively. By simulating human learning behavior, a given function can be estimated without memorizing the whole training set. Two sets of function estimation experiments are examined to illustrate the performance of the proposed algorithm. The results show that MRN can effectively approximate the given function within a reasonable time and with an acceptable mean square error. © 2004 IEEE.
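
    In the same spirit of sequential-mode training, the sketch below shows a simplified resource-allocating RBF network; it is offered only as an assumption about the general idea, not as the MRN algorithm. Samples are processed one at a time: if the prediction error exceeds a threshold, a new RBF neuron is allocated at that sample, otherwise the nearest neuron's output weight is nudged toward the target. Width, threshold, and learning rate are illustrative.

```python
# Minimal sketch (simplified resource-allocating RBF, not the MRN): online
# function estimation from a sample stream without storing the whole set.
import numpy as np

rng = np.random.default_rng(0)
centers, weights = [], []
width, err_thresh, lr = 0.3, 0.1, 0.2

def rbf_predict(x):
    if not centers:
        return 0.0
    phi = np.exp(-((np.array(centers) - x) ** 2) / (2 * width ** 2))
    return float(phi @ np.array(weights))

# stream of (x, y) samples from an assumed target function
for _ in range(2000):
    x = rng.uniform(-np.pi, np.pi)
    y = np.sin(x)
    err = y - rbf_predict(x)
    if abs(err) > err_thresh or not centers:
        centers.append(x)               # allocate a new neuron (long-term memory)
        weights.append(err)
    else:
        i = int(np.argmin(np.abs(np.array(centers) - x)))
        weights[i] += lr * err          # refine the nearest existing neuron

test_x = np.linspace(-np.pi, np.pi, 100)
mse = np.mean([(np.sin(x) - rbf_predict(x)) ** 2 for x in test_x])
print(f"neurons: {len(centers)}, test MSE: {mse:.4f}")
```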

    Perpetual Learning Framework based on Type-2 Fuzzy Logic System for a Complex Manufacturing Process

    This paper introduces a perpetual type-2 Neuro-Fuzzy modelling structure for continuous learning and its application to the complex thermo-mechanical metal process of steel Friction Stir Welding (FSW). The 'perpetual' property refers to the capability of the proposed system to continuously learn from new process data in an incremental learning fashion. This is particularly important in industrial/manufacturing processes, as it eliminates the need to retrain the model in the presence of new data or in the case of any process drift. The proposed structure evolves through incremental, hybrid (supervised/unsupervised) learning and accommodates new sample data in a continuous fashion. The human-like information capture paradigm of granular computing is used along with an interval type-2 neural-fuzzy system to develop a modelling structure that is tolerant to the uncertainty in the manufacturing data (a common challenge in industrial/manufacturing data). The proposed method relies on the creation of new fuzzy rules, which are updated and optimised during the incremental learning process. An iterative pruning strategy is then employed to remove any redundant rules that result from the incremental learning process. The rule growing/pruning strategy guarantees that the proposed structure can be used in a perpetual learning mode. It is demonstrated that the proposed structure can effectively learn complex input-output dynamics in an adaptive way and maintain good predictive performance in the metal processing case study of steel FSW using real manufacturing data.
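
    The rule growing/pruning idea can be illustrated with a heavily simplified, type-1 evolving fuzzy system; the paper's interval type-2 membership functions and granular learning are not reproduced here, and the widths, thresholds, and toy process are assumptions. A new rule is created when no existing rule fires strongly for an incoming sample, and rarely activated rules are pruned periodically.

```python
# Minimal sketch (simplified type-1 evolving fuzzy system, not the paper's
# structure): incremental rule growing with periodic pruning of stale rules.
import numpy as np

rng = np.random.default_rng(0)
rules = []                      # each rule: dict(center, consequent, hits)
width, fire_thresh, prune_every = 0.4, 0.3, 200

def fire(rule, x):
    return np.exp(-np.sum((x - rule["center"]) ** 2) / (2 * width ** 2))

def predict(x):
    if not rules:
        return 0.0
    f = np.array([fire(r, x) for r in rules])
    c = np.array([r["consequent"] for r in rules])
    return float(f @ c / (f.sum() + 1e-12))

# stream of samples from an assumed 2-D process
for t in range(1, 3001):
    x = rng.uniform(-1, 1, size=2)
    y = np.sin(3 * x[0]) * np.cos(2 * x[1])
    strengths = [fire(r, x) for r in rules]
    if not rules or max(strengths) < fire_thresh:
        rules.append({"center": x, "consequent": y, "hits": 1})  # grow a rule
    else:
        i = int(np.argmax(strengths))
        rules[i]["consequent"] += 0.1 * (y - predict(x))         # local update
        rules[i]["hits"] += 1
    if t % prune_every == 0:                                     # prune stale rules
        rules = [r for r in rules if r["hits"] > 1]

print("rules after stream:", len(rules))
```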