
    An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams

    Existing FNNs are mostly developed under a shallow network configuration, which offers lower generalization power than deep structures. This paper proposes a novel self-organizing deep FNN, namely DEVFNN. Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects covariate drift (variations of the input space) but also accurately identifies real drift (dynamic changes of both the feature space and the target space). DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely gClass, drives the hidden layer. It is equipped with an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of the dimensionality of the input space due to the nature of the feature augmentation approach in building a deep network structure. DEVFNN works in a sample-wise fashion and is compatible with data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using seven datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, against which DEVFNN demonstrates improved classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance. Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
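
    The abstract describes a prequential test-then-train loop in which a drift detector decides when to deepen the network. Below is a minimal, hypothetical sketch of that control flow: the ShallowFNNLayer class, the error-rate-based drift check and all thresholds are illustrative assumptions, not the DEVFNN/gClass implementation or its feature-augmentation mechanics.

```python
import numpy as np

class ShallowFNNLayer:
    """Hypothetical stand-in for one gClass-style hidden layer (a linear softmax learner)."""
    def __init__(self, n_features, n_classes, rng):
        self.w = rng.normal(scale=0.01, size=(n_features, n_classes))

    def predict(self, x):
        return int(np.argmax(x @ self.w))

    def update(self, x, y, lr=0.1):
        target = np.zeros(self.w.shape[1]); target[y] = 1.0
        scores = x @ self.w
        probs = np.exp(scores - scores.max()); probs /= probs.sum()
        self.w += lr * np.outer(x, target - probs)     # one SGD step on cross-entropy

def prequential_run(stream, n_features, n_classes, drift_window=200, drift_factor=1.5):
    """Test-then-train loop; a jump in the recent error rate triggers stacking a new layer."""
    rng = np.random.default_rng(0)
    layers = [ShallowFNNLayer(n_features, n_classes, rng)]
    errors, baseline = [], None
    for x, y in stream:
        y_hat = layers[-1].predict(x)                  # test first ...
        errors.append(int(y_hat != y))
        for layer in layers:                           # ... then train
            layer.update(x, y)
        if len(errors) >= drift_window:
            recent = float(np.mean(errors[-drift_window:]))
            baseline = recent if baseline is None else baseline
            if recent > drift_factor * baseline:       # crude "real drift" signal
                layers.append(ShallowFNNLayer(n_features, n_classes, rng))
                baseline = recent                      # reset reference after deepening
    return layers, float(np.mean(errors))

# toy usage: a stream whose decision rule flips halfway, which may trigger deepening
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = np.where(X[:, 0] > 0, 1, 0)
y[1000:] = np.where(X[1000:, 1] > 0, 1, 0)
layers, err = prequential_run(zip(X, y), n_features=5, n_classes=2)
print(len(layers), err)
```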

    Multi-view Fuzzy Representation Learning with Rules based Model

    Unsupervised multi-view representation learning has been extensively studied for mining multi-view data. However, some critical challenges remain. On the one hand, existing methods cannot explore multi-view data comprehensively, since they usually learn only a common representation between views, whereas multi-view data contains both common information shared across views and specific information within each view. On the other hand, to mine nonlinear relationships in the data, kernel or neural network methods are commonly used for multi-view representation learning; however, these methods lack interpretability. To this end, this paper proposes a new multi-view fuzzy representation learning method based on the interpretable Takagi-Sugeno-Kang (TSK) fuzzy system (MVRL_FS). The method realizes multi-view representation learning from two aspects. First, multi-view data are transformed into a high-dimensional fuzzy feature space, while the common information between views and the specific information of each view are explored simultaneously. Second, a new regularization method based on L_(2,1)-norm regression is proposed to mine the consistency information between views, while the geometric structure of the data is preserved through the Laplacian graph. Finally, extensive experiments on many benchmark multi-view datasets are conducted to validate the superiority of the proposed method. Comment: This work has been accepted by IEEE Transactions on Knowledge and Data Engineering.
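
    To make the two regularizers named in the abstract concrete, here is a small, self-contained illustration of an L_(2,1) row-sparsity norm and a graph-Laplacian smoothness term. The toy data, the k-NN graph construction and the random projection are assumptions for the demo, not the MVRL_FS objective itself.

```python
import numpy as np

def l21_norm(W):
    """Sum of the Euclidean norms of the rows of W (encourages row sparsity)."""
    return np.sum(np.linalg.norm(W, axis=1))

def knn_graph_laplacian(X, k=5):
    """Unnormalized Laplacian L = D - A of a symmetrized k-NN adjacency graph."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    A = np.zeros_like(dists)
    for i in range(X.shape[0]):
        nn = np.argsort(dists[i])[1:k + 1]   # skip the sample itself at index 0
        A[i, nn] = 1.0
    A = np.maximum(A, A.T)                   # symmetrize the adjacency
    return np.diag(A.sum(axis=1)) - A

def laplacian_smoothness(Z, L):
    """tr(Z^T L Z): small when neighbouring samples get nearby representations."""
    return float(np.trace(Z.T @ L @ Z))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))                 # toy single-view data
W = rng.normal(size=(8, 4))                  # toy projection matrix
Z = X @ W                                    # learned representation
L = knn_graph_laplacian(X, k=5)
print(l21_norm(W), laplacian_smoothness(Z, L))
```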

    A Robust Multilabel Method Integrating Rule-based Transparent Model, Soft Label Correlation Learning and Label Noise Resistance

    Model transparency, label correlation learning and robustness to label noise are crucial for multilabel learning. However, few existing methods study these three characteristics simultaneously. To address this challenge, we propose the robust multilabel Takagi-Sugeno-Kang fuzzy system (R-MLTSK-FS) with three mechanisms. First, we design a soft label learning mechanism to reduce the effect of label noise by explicitly measuring the interactions between labels, which is also the basis of the other two mechanisms. Second, the rule-based TSK FS is used as the base model to efficiently model the inference relationship between features and soft labels in a more transparent way than many existing multilabel models. Third, to further improve the performance of multilabel learning, we build a correlation enhancement learning mechanism based on the soft label space and the fuzzy feature space. Extensive experiments are conducted to demonstrate the superiority of the proposed method. Comment: This paper has been accepted by IEEE Transactions on Fuzzy Systems.
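
    Since the base model is a rule-based TSK fuzzy system, a short sketch of a first-order TSK forward pass may help make that building block concrete. Gaussian memberships, the rule count and the random parameters below are illustrative choices, not the R-MLTSK-FS formulation with soft labels.

```python
import numpy as np

def tsk_predict(x, centers, sigmas, consequents):
    """
    First-order TSK inference for one sample.
    x           : (d,)        input sample
    centers     : (R, d)      antecedent centers, one row per rule
    sigmas      : (R, d)      antecedent widths
    consequents : (R, d + 1)  linear consequent [bias, w_1..w_d] per rule
    Output: y = sum_r f_r(x) * (p_r0 + p_r^T x) / sum_r f_r(x), with Gaussian firing f_r.
    """
    firing = np.exp(-((x - centers) ** 2 / (2 * sigmas ** 2)).sum(axis=1))  # (R,)
    xe = np.concatenate(([1.0], x))                                         # (d+1,)
    rule_outputs = consequents @ xe                                         # (R,)
    return float(firing @ rule_outputs / (firing.sum() + 1e-12))

rng = np.random.default_rng(0)
d, R = 3, 4
print(tsk_predict(rng.normal(size=d),
                  rng.normal(size=(R, d)),
                  np.full((R, d), 1.0),
                  rng.normal(size=(R, d + 1))))
```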

    Fuzzy Rule-Based Domain Adaptation in Homogeneous and Heterogeneous Spaces

    Domain adaptation aims to leverage knowledge acquired from a related domain (called the source domain) to improve the efficiency of completing a prediction task (classification or regression) in the current domain (called the target domain), which has a different probability distribution from the source domain. Although domain adaptation has been widely studied, most existing research has focused on homogeneous domain adaptation, where both domains have identical feature spaces. Recently, a new challenge proposed in this area is heterogeneous domain adaptation, where both the probability distributions and the feature spaces differ. Moreover, in both homogeneous and heterogeneous domain adaptation, the greatest efforts and major achievements have been made on classification tasks, while successful solutions for tackling regression problems are limited. This paper proposes two innovative fuzzy rule-based methods to deal with regression problems. The first method, called fuzzy homogeneous domain adaptation, handles homogeneous spaces, while the second method, called fuzzy heterogeneous domain adaptation, handles heterogeneous spaces. Fuzzy rules are first generated from the source domain through a learning process; these rules, also known as knowledge, are then transferred to the target domain by establishing a latent feature space that minimizes the gap between the feature spaces of the two domains. Through experiments on synthetic datasets, we demonstrate the effectiveness of both methods and discuss the impact of some of the significant parameters that affect performance. Experiments on real-world datasets also show that the proposed methods improve the performance of the target model over an existing source model or a model built using a small amount of target data.
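
    As a rough illustration of rule transfer, the sketch below learns crude Takagi-Sugeno-style rules on source data and reuses them on target data after a simple mean/variance alignment of the target features. The alignment step, the rule-learning routine and the toy regression task are assumptions for the homogeneous case only; they do not reproduce the paper's latent-space construction for heterogeneous spaces.

```python
import numpy as np

def learn_rules(X, y, n_rules=3, seed=0):
    """Crude TSK-style rules: randomly chosen centers + per-rule weighted ridge consequents."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_rules, replace=False)]
    consequents = []
    for c in centers:
        w = np.exp(-np.linalg.norm(X - c, axis=1) ** 2)          # firing strength per sample
        Xe = np.hstack([np.ones((len(X), 1)), X])                # [1, x] for linear consequents
        A = Xe.T @ (Xe * w[:, None]) + 1e-3 * np.eye(Xe.shape[1])
        consequents.append(np.linalg.solve(A, Xe.T @ (w * y)))
    return centers, np.array(consequents)

def predict(X, centers, consequents):
    """Firing-strength-weighted average of the rule consequents."""
    Xe = np.hstack([np.ones((len(X), 1)), X])
    W = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2)   # (n, R)
    return (W * (Xe @ consequents.T)).sum(axis=1) / (W.sum(axis=1) + 1e-12)

rng = np.random.default_rng(1)
Xs = rng.normal(size=(200, 2))
ys = Xs @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=200)     # source regression task
Xt = 2.0 * rng.normal(size=(50, 2)) + 1.0                        # shifted/scaled target inputs
Xt_aligned = (Xt - Xt.mean(0)) / Xt.std(0) * Xs.std(0) + Xs.mean(0)  # naive feature alignment
centers, cons = learn_rules(Xs, ys, n_rules=3)
print(predict(Xt_aligned, centers, cons)[:5])                    # source rules reused on target
```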