    Learning from Ontology Streams with Semantic Concept Drift

    Data stream learning has been widely studied for extracting knowledge structures from continuous and rapid data records. In the Semantic Web, data is interpreted using ontologies, and an ordered sequence of such data is represented as an ontology stream. Our work exploits the semantics of such streams to tackle the problem of concept drift, i.e., unexpected changes in the data distribution that cause most models to become less accurate as time passes. To this end, we revisit (i) semantic inference in the context of supervised stream learning, and (ii) models with semantic embeddings. The experiments show accurate prediction with data from Dublin and Beijing.
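The accuracy decay this abstract attributes to concept drift can be shown with a toy example (an assumed illustration, not the paper's Dublin/Beijing setup): a classifier fitted to one labeling rule loses its accuracy entirely once the rule flips.

```python
import random

# Toy concept drift: the labeling rule flips between the two phases.
random.seed(0)
before = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]
after = [(x, int(x <= 0.5)) for x in (random.random() for _ in range(200))]

def predict(x):
    return int(x > 0.5)  # model fitted to the old concept

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)
```

On `before` the learned rule is perfect; on `after` it is wrong on every point, which is exactly the degradation that drift-aware stream learners are built to detect and repair.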

    E-STRSAGA: an ensemble learning method to handle concept drift

    We present E-STRSAGA, an ensemble learning algorithm that can efficiently maintain a model over a stream of data points and recover from any type of drift in the underlying distribution. The algorithm adopts the new distribution by efficiently adding new experts after detecting a change in its model's performance, and forgets the previous distribution by efficiently dropping old experts and data points from the old distribution. Experimental results are provided for a variety of drift rates and types (abrupt, gradual, and multiple abrupt drifts). The results confirm that E-STRSAGA is competitive with a streaming algorithm that knows exactly when drift happens and can restart its model and train it only on the new distribution.
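The add-new-experts / drop-old-experts idea in this abstract can be sketched as follows. This is an illustrative simplification, not the actual E-STRSAGA algorithm: the expert model, window size, and error threshold are all assumptions.

```python
import statistics

class MajorityExpert:
    """Trivial stand-in expert: predicts the most frequent label it has seen."""
    def __init__(self):
        self.counts = {}

    def partial_fit(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, x):
        return max(self.counts, key=self.counts.get) if self.counts else 0

class DriftAdaptiveEnsemble:
    """Sketch of an add/drop-experts ensemble: a new expert is added when the
    recent error rate rises, and the oldest expert is dropped once a size cap
    is exceeded."""
    def __init__(self, make_expert, max_experts=5, window=10, threshold=0.5):
        self.make_expert = make_expert
        self.max_experts = max_experts
        self.window = window          # sliding window of recent errors
        self.threshold = threshold    # error rate that signals drift
        self.experts = [make_expert()]
        self.recent_errors = []

    def predict(self, x):
        return statistics.mode(e.predict(x) for e in self.experts)

    def update(self, x, y):
        self.recent_errors.append(int(self.predict(x) != y))
        self.recent_errors = self.recent_errors[-self.window:]
        for expert in self.experts:
            expert.partial_fit(x, y)
        if (len(self.recent_errors) == self.window
                and sum(self.recent_errors) / self.window > self.threshold):
            self.experts.append(self.make_expert())  # adopt new distribution
            if len(self.experts) > self.max_experts:
                self.experts.pop(0)                  # forget old distribution
            self.recent_errors = []
```

After a label flip in the stream, the rising error rate triggers new experts that quickly outvote the stale ones.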

    Concept Drift Detection in Data Stream Mining: The Review of Contemporary Literature

    Mining processes such as classification and clustering of progressive or dynamic data are a critical objective of information retrieval and knowledge discovery; they are particularly sensitive in data stream mining models because the type and dimensionality of the data may change significantly over time. The influence of these changes on the mining process is termed concept drift. Concept drift, which appears frequently in streaming data, causes unstable performance in the mining models adopted. It is therefore essential to strengthen mining models to predict and analyse concept drift so that performance remains at its best. The contemporary literature contains significant contributions to handling concept drift, which fall into supervised learning, unsupervised learning, and statistical assessment approaches. This manuscript contributes a detailed review of the concept-drift detection models described in recent literature. Its contributions include a nomenclature of concept drift models and their impact on imbalanced data tuples.

    Adaptive Online Sequential ELM for Concept Drift Tackling

    A machine learning method needs to adapt to changes in its environment over time; such changes are known as concept drift. In this paper, we propose a concept drift tackling method as an enhancement of the Online Sequential Extreme Learning Machine (OS-ELM) and Constructive Enhancement OS-ELM (CEOS-ELM), adding adaptive capability for classification and regression problems. The scheme, named adaptive OS-ELM (AOS-ELM), is a single-classifier scheme that handles real drift, virtual drift, and hybrid drift well. AOS-ELM also works well for sudden drift and recurrent context change types. The scheme is a simple unified method implemented in a few lines of code. We evaluated AOS-ELM on regression and classification problems using public concept drift data sets (SEA and STAGGER) and other public data sets such as MNIST, USPS, and IDS. Experiments show that our method gives a higher kappa value than a multi-classifier ELM ensemble. Even though AOS-ELM in practice does not need the hidden nodes to increase, we address some issues related to increasing the hidden nodes, such as error conditions and rank values. We propose taking the rank of the pseudoinverse matrix as an indicator parameter to detect the underfitting condition.
    Published in Computational Intelligence and Neuroscience, Volume 2016 (2016), Article ID 8091267, 17 pages (Hindawi). Received 29 January 2016, accepted 17 May 2016. Special issue on "Advances in Neural Networks and Hybrid-Metaheuristics: Theory, Algorithms, and Novel Engineering Applications"; academic editor: Stefan Hauf
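The rank-based underfitting indicator mentioned at the end of this abstract can be sketched roughly as follows. This is a loose illustration of the general idea, not the authors' code: the function name, the random hidden layer, and the rank-deficiency criterion are assumptions.

```python
import numpy as np

def hidden_layer_rank_check(X, n_hidden, rng=None):
    """Build an ELM-style random hidden layer for data X and report the rank
    of its output matrix H; rank deficiency suggests the hidden representation
    cannot separate the data well (a possible underfitting signal)."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden outputs
    rank = np.linalg.matrix_rank(H)
    return rank, rank < min(H.shape)                 # True => rank deficient
```

Degenerate inputs make the signal obvious: an all-zero data matrix produces identical rows in H, so its rank collapses to 1 and the check fires.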

    PWIDB: A framework for learning to classify imbalanced data streams with incremental data re-balancing technique

    The performance of classification algorithms on highly imbalanced streaming data depends upon an efficient balancing strategy. Some balancing techniques have been applied to static batch data to resolve the class imbalance problem, but they are difficult to apply to massive data streams. In this paper, a new Piece-Wise Incremental Data re-Balancing (PWIDB) framework is proposed. The PWIDB framework combines automated balancing using the Racing Algorithm (RA) with an incremental re-balancing technique. RA is an active learning approach capable of classifying imbalanced data and provides a way to select an appropriate re-balancing technique for imbalanced data. In this paper, we extend the capability of RA to handle imbalanced data streams in the proposed PWIDB framework. PWIDB accumulates previous knowledge with increments of re-balanced data and captures the concept of the imbalanced instances. PWIDB is an incremental streaming batch framework suitable for learning from imbalanced streaming data. We compared the performance of PWIDB with the well-known FLORA technique. Experimental results show that the PWIDB framework exhibits improved, stable performance compared to FLORA and accumulative re-balancing techniques.
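The piece-wise incremental re-balancing described above can be sketched in simplified form. This is a hypothetical simplification, not the PWIDB/RA implementation: it uses plain random oversampling per batch and carries the minority-class instances forward to the next batch.

```python
import random

def rebalance_batch(batch, carry_minority, seed=0):
    """Re-balance one streaming batch of (x, label) pairs by randomly
    oversampling smaller classes up to the majority-class count, and return
    the minority instances so the caller can carry them into the next batch."""
    rnd = random.Random(seed)
    by_class = {}
    for x, y in batch + carry_minority:
        by_class.setdefault(y, []).append((x, y))
    majority = max(len(items) for items in by_class.values())
    balanced = []
    for label, items in by_class.items():
        balanced.extend(items)
        # Duplicate random instances until this class matches the majority.
        balanced.extend(rnd.choices(items, k=majority - len(items)))
    minority_label = min(by_class, key=lambda label: len(by_class[label]))
    return balanced, by_class[minority_label]
```

Feeding each batch's returned minority list into the next call is what makes the re-balancing incremental: rare-class knowledge accumulates across batches instead of being lost.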

    Resample-based Ensemble Framework for Drifting Imbalanced Data Streams

    Machine learning in real-world scenarios is often challenged by concept drift and class imbalance. This paper proposes a Resample-based Ensemble Framework for Drifting Imbalanced Streams (RE-DI). The ensemble consists of a long-term static classifier to handle gradual concept drift and multiple dynamic classifiers to handle sudden concept drift. The weights of the ensemble classifier are adjusted in two ways. First, a time-decayed strategy decreases the weights of the dynamic classifiers so that the ensemble focuses more on the new concept in the data stream. Second, a novel reinforcement mechanism increases the weights of the base classifiers that perform better on the minority class and decreases the weights of those that perform worse. A resampling buffer stores instances of the minority class to balance the imbalanced distribution over time. In our experiments, we compare the proposed method with other state-of-the-art algorithms on both real-world and synthetic data streams. The results show that the proposed method achieves the best performance in terms of both prequential AUC and accuracy.
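The two weight adjustments described above (time decay plus minority-class reinforcement) can be sketched as a single update step. The exact formula here is an assumption for illustration, not the RE-DI update rule: decay is a multiplicative factor and reinforcement scales each weight around a 0.5 minority-recall baseline.

```python
def update_weights(weights, decay, minority_recall):
    """One ensemble weight update: apply time decay to every classifier's
    weight, then reinforce classifiers with high minority-class recall and
    penalize those with low recall; renormalize so the weights sum to 1."""
    updated = []
    for w, recall in zip(weights, minority_recall):
        w *= decay                    # time decay favours newer concepts
        w *= 1.0 + (recall - 0.5)     # reinforcement around a 0.5 baseline
        updated.append(max(w, 0.0))
    total = sum(updated)
    return [w / total for w in updated] if total else updated
```

A classifier that recalls 90% of the minority class gains weight relative to one that recalls only 10%, while the shared decay factor cancels out after normalization.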