
    An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams

    Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration, which has lower generalization power than deep structures. This paper proposes a novel self-organizing deep FNN, namely DEVFNN. Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects covariate drift (variations of the input space) but also accurately identifies real drift (dynamic changes of both feature space and target space). DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely gClass, drives the hidden layer. It is equipped with an automatic feature selection method which controls the activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of the input-space dimensionality caused by the feature augmentation approach to building a deep network structure. DEVFNN works in a sample-wise fashion and is compatible with data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using seven datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, over which DEVFNN demonstrates improved classification accuracy. Moreover, the concept drift detection method is shown to be an effective tool for controlling the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the complexity of a deep network with negligible compromise of generalization performance.
    Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
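    To make the prequential test-then-train protocol concrete, the sketch below runs a single pass over a stream in which each sample is first used for testing and then for training, with a drift check deciding when to stack a new layer. This is a minimal illustration under stated assumptions only: StackedClassifier, drift_detected, the window size, and the margin are hypothetical placeholders, not the actual DEVFNN, gClass, or the paper's drift detector.

```python
import numpy as np

class StackedClassifier:
    """Stand-in for a deep stacked classifier whose depth grows on demand."""
    def __init__(self):
        self.depth = 1

    def predict(self, x):
        return 0                        # placeholder prediction

    def learn(self, x, y):
        pass                            # placeholder per-sample update

    def add_layer(self):
        self.depth += 1                 # deepen the stack after real drift

def drift_detected(recent, overall, margin=0.1):
    # Toy drift criterion (hypothetical): flag drift when the recent
    # error rate rises well above the long-run error rate.
    if len(recent) < 50 or len(overall) < 100:
        return False
    return np.mean(recent) - np.mean(overall) > margin

rng = np.random.default_rng(1)
stream = [(rng.standard_normal(3), int(rng.integers(2))) for _ in range(500)]

model, history = StackedClassifier(), []
for x, y in stream:                     # single pass over the stream
    y_hat = model.predict(x)            # test first ...
    history.append(int(y_hat != y))
    model.learn(x, y)                   # ... then train on the same sample
    if drift_detected(history[-50:], history[:-50]):
        model.add_layer()               # stack a new hidden layer
        history.clear()                 # restart error statistics after drift
```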

    Recurrent error-based ridge polynomial neural networks for time series forecasting

    Time series forecasting has attracted much attention due to its impact on many practical applications. Neural networks (NNs) have been attracting widespread interest as a promising tool for time series forecasting. The majority of NNs employ only autoregressive (AR) inputs (i.e., lagged time series values) when forecasting time series. Moving-average (MA) inputs (i.e., errors), however, have not been adequately considered. The use of MA inputs alongside AR inputs, achieved by feeding back forecasting errors as extra network inputs, helps to produce more accurate forecasts. Among the many existing NN architectures, higher order neural networks (HONNs), which have a single layer of learnable weights, were considered in this research work because they have demonstrated an ability to deal with time series forecasting and have a simple architecture. Based on two HONN models, namely the feedforward ridge polynomial neural network (RPNN) and the recurrent dynamic ridge polynomial neural network (DRPNN), two recurrent error-based models were proposed: the ridge polynomial neural network with error feedback (RPNN-EF) and the ridge polynomial neural network with error-output feedbacks (RPNN-EOF). Extensive simulations covering ten time series were performed. Besides RPNN and DRPNN, a pi-sigma neural network and a Jordan pi-sigma neural network were used for comparison. Simulation results showed that introducing error feedback to the models led to significant forecasting performance improvements. Furthermore, it was found that the proposed models outperformed many state-of-the-art models. It was concluded that the proposed models have the capability to efficiently forecast time series and that practitioners could benefit from using these forecasting models.
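    The error-feedback idea, i.e., feeding previous forecast errors back as extra network inputs alongside the lagged values, can be sketched as below. This is a minimal single pi-sigma block trained with one-step gradient updates, assuming illustrative lag orders and learning rate; it is a sketch in the spirit of RPNN-EF, not the papers' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)

p, q = 4, 2                        # AR lags and fed-back error lags (illustrative)
order = 2                          # sigma (linear) units multiplied together
dim = p + q + 1                    # AR inputs + error inputs + bias
W = 0.1 * rng.standard_normal((order, dim))
lr = 0.01
errors = [0.0] * q                 # error feedback starts at zero

for t in range(p, len(series)):
    ar = series[t - p:t]                   # lagged values (AR part)
    ma = np.array(errors[-q:])             # lagged forecast errors (MA part)
    x = np.concatenate([ar, ma, [1.0]])    # errors join the input vector
    units = W @ x                          # sigma stage: linear sums
    y_hat = np.tanh(np.prod(units))        # pi stage: product, then squashing
    err = series[t] - y_hat                # one-step-ahead forecast error
    delta = err * (1.0 - y_hat**2)         # gradient through the tanh output
    for i in range(order):                 # update each sigma unit's weights
        others = np.prod(np.delete(units, i))
        W[i] += lr * delta * others * x
    errors.append(err)                     # feed the error back next step
```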