    Fuzzy jump wavelet neural network based on rule induction for dynamic nonlinear system identification with real data applications

    Aim: The fuzzy wavelet neural network (FWNN) has proven to be a promising strategy for the identification of nonlinear systems. The network captures both global and local properties and copes with the imprecision present in sensory data, achieving the desired precision. In this paper, we propose a new FWNN model, named the "Fuzzy Jump Wavelet Neural Network" (FJWNN), for identifying dynamic nonlinear systems, especially in practical applications.

    Methods: The proposed FJWNN is a fuzzy neural network of the Takagi-Sugeno-Kang type in which the consequent part of each fuzzy rule is a linear combination of input regressors and dominant wavelet neurons, forming a sub-jump wavelet neural network (sub-JWNN). Each fuzzy rule can therefore locally model both the linear and the nonlinear properties of a system: the linear relationship between the inputs and the output is learned by neurons with linear activation functions, whereas the nonlinear relationship is locally modeled by wavelet neurons. The orthogonal least squares (OLS) method and a genetic algorithm (GA) are used, respectively, to select the dominant wavelets for each sub-JWNN. Fuzzy rule induction refines the structure of the proposed model, yielding fewer fuzzy rules, fewer inputs per rule, and fewer model parameters. Real-world gas furnace data and a real electromyographic (EMG) signal modeling problem are employed in our study; piecewise single-variable function approximation, nonlinear dynamic system modeling, and Mackey–Glass time series prediction further confirm the method's superiority. The proposed FJWNN is compared with state-of-the-art models on performance indices such as RMSE, RRSE, Rel ERR%, and VAF%.

    Results: The proposed FJWNN model yielded an RRSE (mean±std) of 10e-5±6e-5 for piecewise single-variable function approximation, an RMSE (mean±std) of 2.6e-4±2.6e-4 for the first nonlinear dynamic system modeling task, an RRSE (mean±std) of 1.59e-3±0.42e-3 for Mackey–Glass time series prediction, an RMSE of 0.3421 for gas furnace modeling, and a VAF% (mean±std) of 98.24±0.71 for EMG modeling across all trial signals, indicating a significant improvement over previous methods.

    Conclusions: The FJWNN demonstrated promising accuracy and generalization while keeping network complexity moderate. This improvement stems from combining the most useful wavelets with linear regressors and from applying fuzzy rule induction. Compared with state-of-the-art models, the proposed FJWNN yielded better performance and can therefore be considered a novel tool for nonlinear system identification.
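    The rule structure described above lends itself to a compact sketch. The following minimal Python example (not from the paper) shows a Takagi-Sugeno-Kang system whose rule consequents combine a linear regressor with wavelet neurons; the Mexican-hat mother wavelet, the product-combined Gaussian memberships, and all function and parameter names are illustrative assumptions, and the OLS/GA wavelet-selection and rule-induction steps are omitted.

```python
# Illustrative sketch of a TSK system with wavelet-augmented rule consequents,
# in the spirit of the FJWNN described above. The wavelet choice, membership
# functions, and all parameters are assumptions, not the authors' exact model.
import numpy as np

def mexican_hat(z):
    """Ricker (Mexican-hat) mother wavelet."""
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

def firing_strengths(x, centers, sigmas):
    """Gaussian memberships combined by product: one firing strength per rule."""
    return np.exp(-0.5 * ((x[None, :] - centers) / sigmas) ** 2).prod(axis=1)

def consequent(x, a, b, w, t, s):
    """Rule consequent: linear regressors plus dilated/translated wavelet neurons."""
    phi = mexican_hat((x[None, :] - t) / s).prod(axis=1)  # one value per wavelet neuron
    return a @ x + b + w @ phi

def fjwnn_like_output(x, rules):
    """Defuzzified output: normalized weighted average of the rule consequents."""
    strengths = firing_strengths(x, rules["centers"], rules["sigmas"])
    outputs = np.array([consequent(x, *p) for p in rules["consequents"]])
    return strengths @ outputs / (strengths.sum() + 1e-12)

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
d, n_rules, n_wav = 3, 2, 4
rules = {
    "centers": rng.normal(size=(n_rules, d)),
    "sigmas": np.ones((n_rules, d)),
    "consequents": [
        (rng.normal(size=d), 0.0, rng.normal(size=n_wav),
         rng.normal(size=(n_wav, d)), np.ones((n_wav, d)))
        for _ in range(n_rules)
    ],
}
print(fjwnn_like_output(rng.normal(size=d), rules))
```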

    Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies

    An algorithm that learns from a set of examples should ideally be able to exploit the available resources of (a) abundant computing power and (b) domain-specific knowledge to improve its ability to generalize. Connectionist theory-refinement systems, which use background knowledge to select a neural network's topology and initial weights, have proven effective at exploiting domain-specific knowledge; however, most do not exploit available computing power. This weakness arises because they cannot refine the topology of the neural networks they produce, which limits generalization, especially when they are given impoverished domain theories. We present the REGENT algorithm, which uses (a) domain-specific knowledge to help create an initial population of knowledge-based neural networks and (b) the genetic operators of crossover and mutation (specifically designed for knowledge-based networks) to continually search for better network topologies. Experiments on three real-world domains indicate that our new algorithm significantly increases generalization compared to a standard connectionist theory-refinement system, as well as to our previous algorithm for growing knowledge-based networks.
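    A toy sketch of the search loop REGENT performs appears below; it is not the paper's algorithm. Topologies are reduced to plain lists of hidden-layer widths, the crossover and mutation operators are generic rather than REGENT's knowledge-based-network-specific ones, and fitness is a stand-in for training each network and measuring validation accuracy.

```python
# Generic genetic search over network topologies (illustration only).
import random

random.seed(0)

def mutate(topology):
    """Add, drop, or resize a hidden layer."""
    t = topology[:]
    op = random.choice(["add", "drop", "resize"])
    if op == "add" or not t:
        t.insert(random.randint(0, len(t)), random.randint(2, 32))
    elif op == "drop" and len(t) > 1:
        t.pop(random.randrange(len(t)))
    else:
        i = random.randrange(len(t))
        t[i] = max(2, t[i] + random.choice([-4, 4]))
    return t

def crossover(a, b):
    """Splice a prefix of one parent onto a suffix of the other."""
    return (a[:random.randint(0, len(a))] + b[random.randint(0, len(b)):]) or [4]

def fitness(topology):
    # Stand-in for "train this network, return validation accuracy".
    # Here we simply prefer moderate total capacity and few layers.
    return -abs(sum(topology) - 40) - 2 * len(topology)

def evolve(seed_topologies, generations=30, pop_size=20):
    # Seed the population from (hypothetically) domain-theory-derived topologies.
    pop = [mutate(random.choice(seed_topologies)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [crossover(random.choice(survivors), random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + [mutate(c) for c in children]
    return max(pop, key=fitness)

print(evolve([[8, 8]]))
```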

    Dependency Grammar Induction with Neural Lexicalization and Big Training Data

    We study the impact of big models (in terms of the degree of lexicalization) and big data (in terms of the training corpus size) on dependency grammar induction. We experimented with L-DMV, a lexicalized version of the Dependency Model with Valence, and L-NDMV, our lexicalized extension of the Neural Dependency Model with Valence. We find that L-DMV benefits only from very small degrees of lexicalization and moderate training corpus sizes, while L-NDMV can benefit from big training data and greater degrees of lexicalization, especially when enhanced with good model initialization, achieving a result that is competitive with the current state of the art.
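    For readers unfamiliar with the underlying model, the sketch below illustrates the DMV generative story that both L-DMV and L-NDMV build on: a head decides whether to stop generating dependents in each direction, conditioned on whether it already has a dependent there (the valence bit), and then draws the dependent. The probability tables are made-up placeholders, not learned values; in the lexicalized variants the conditioning events would include word identities, not just categories.

```python
# Toy DMV-style generative probability (placeholder numbers, illustration only).
STOP = {("V", "right", False): 0.3, ("V", "right", True): 0.8,
        ("V", "left", False): 0.4, ("V", "left", True): 0.9}
CHOOSE = {("V", "right"): {"N": 0.7, "V": 0.3},
          ("V", "left"): {"N": 0.9, "V": 0.1}}

def p_dependents(head, direction, deps):
    """Probability that `head` generates the ordered dependents `deps` on one side."""
    p, has_child = 1.0, False
    for d in deps:
        p *= 1.0 - STOP[(head, direction, has_child)]  # decide to continue
        p *= CHOOSE[(head, direction)][d]              # attach this dependent
        has_child = True
    return p * STOP[(head, direction, has_child)]      # finally stop

# A verb taking one noun dependent on each side:
print(p_dependents("V", "left", ["N"]) * p_dependents("V", "right", ["N"]))
```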

    Comparative Experiments on Disambiguating Word Senses: An Illustration of the Role of Bias in Machine Learning

    This paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context. The algorithms tested include statistical, neural-network, decision-tree, rule-based, and case-based classification techniques. The specific task involves disambiguating six senses of the word "line" using the words in the current and preceding sentence as context. The statistical and neural-network methods perform best on this particular problem, and we discuss a potential reason for the observed difference. We also discuss the role of bias in machine learning and its importance in explaining the performance differences observed on specific problems.
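    The experimental setup lends itself to a small sketch: represent each occurrence of "line" by the words around it and compare learners under shared preprocessing. The tiny hand-made corpus and the scikit-learn models below are stand-ins for illustration, not the paper's data or implementations.

```python
# Compare two learners on a toy word-sense-disambiguation task (illustration only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

contexts = [
    "the phone line was busy all morning",    # sense: phone connection
    "she waited in line at the bank",         # sense: queue
    "a new product line launches this fall",  # sense: product
    "the line went dead during the call",     # phone connection
    "a long line formed outside the store",   # queue
    "the clothing line sold out quickly",     # product
] * 5  # repeat the toy examples so 3-fold cross-validation has data per class
senses = ["phone", "queue", "product", "phone", "queue", "product"] * 5

X = CountVectorizer().fit_transform(contexts)  # bag-of-words context features
for name, model in [("naive Bayes", MultinomialNB()),
                    ("decision tree", DecisionTreeClassifier(random_state=0))]:
    print(name, cross_val_score(model, X, senses, cv=3).mean())
```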