A survey on feature drift adaptation: Definition, benchmark, challenges and future directions
Data stream mining is a fast-growing research topic due to the ubiquity of data in several real-world problems. Given their ephemeral nature, data stream sources are expected to undergo changes in data distribution, a phenomenon called concept drift. This paper focuses on one specific type of drift that has not yet been thoroughly studied, namely feature drift. Feature drift occurs whenever a subset of features becomes, or ceases to be, relevant to the learning task; thus, learners must detect and adapt to these changes accordingly. We survey existing work on feature drift adaptation with both explicit and implicit approaches. Additionally, we benchmark several algorithms and a naive feature drift detection approach using synthetic and real-world datasets. The results from our experiments indicate the need for further research in this area, as even naive approaches produced gains in accuracy while reducing resource usage. Finally, we state current research topics, challenges, and future directions for feature drift adaptation.
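The abstract does not detail the naive detection approach it benchmarks, but the core idea of feature drift, features gaining or losing relevance, can be sketched as a window-based relevance monitor. The class name, the correlation-based relevance proxy, and the threshold below are illustrative assumptions, not the paper's method:

    # Hypothetical sketch of a naive feature drift detector: track a simple
    # per-feature relevance score over a sliding window and flag features
    # whose relevance status changes in either direction.
    from collections import deque
    import numpy as np

    class NaiveFeatureDriftDetector:
        def __init__(self, n_features, window=500, threshold=0.1):
            self.window = deque(maxlen=window)   # recent (x, y) pairs
            self.threshold = threshold
            self.relevant = np.zeros(n_features, dtype=bool)

        def _relevance(self, X, y):
            # Absolute Pearson correlation as a crude relevance proxy.
            Xc = X - X.mean(axis=0)
            yc = y - y.mean()
            denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
            return np.abs(Xc.T @ yc) / denom

        def update(self, x, y):
            """Add one labelled example; return indices of features whose
            relevance status flipped (feature drift signals)."""
            self.window.append((np.asarray(x), y))
            if len(self.window) < self.window.maxlen:
                return []
            # Recomputed every step for simplicity; a real detector would
            # update the statistics incrementally.
            X = np.array([w[0] for w in self.window])
            t = np.array([w[1] for w in self.window])
            now_relevant = self._relevance(X, t) > self.threshold
            drifted = np.flatnonzero(now_relevant != self.relevant)
            self.relevant = now_relevant
            return drifted.tolist()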
Handling Concept Drift for Predictions in Business Process Mining
Predictive services nowadays play an important role across all business sectors. However, deployed machine learning models are challenged by data streams that change over time, a phenomenon described as concept drift, which can strongly degrade the prediction quality of models. Therefore, concept drift is usually handled by retraining the model. However, current research lacks a recommendation on which data should be selected for retraining the machine learning model. We therefore systematically analyze different data selection strategies in this work. Subsequently, we instantiate our findings on a use case in process mining that is strongly affected by concept drift. We show that concept drift handling improves accuracy from 0.5400 to 0.7010. Furthermore, we depict the effects of the different data selection strategies.
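The abstract does not name the data selection strategies compared, so the sketch below illustrates three common candidates for choosing retraining data after a detected drift point (full history, a fixed recent window, and post-drift data only); the function and parameter names are hypothetical:

    # Hypothetical data selection strategies for retraining after a drift
    # point; common candidates, not necessarily the authors' exact set.
    def select_training_data(X, y, drift_index, strategy="from_drift", window=1000):
        """Return the subset of the stream used to retrain the model."""
        if strategy == "all":          # full history: ignore the drift
            return X, y
        if strategy == "window":       # fixed-size window of most recent data
            return X[-window:], y[-window:]
        if strategy == "from_drift":   # only data observed after the drift point
            return X[drift_index:], y[drift_index:]
        raise ValueError(f"unknown strategy: {strategy}")

    # Usage: retrain on post-drift data only.
    # X_sel, y_sel = select_training_data(X_stream, y_stream, drift_index=4200)
    # model.fit(X_sel, y_sel)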
Adaptive Online Sequential ELM for Concept Drift Tackling
A machine learning method needs to adapt to changes in the environment over time. Such changes are known as concept drift. In this paper, we propose a concept drift tackling method as an enhancement of the Online Sequential Extreme Learning Machine (OS-ELM) and Constructive Enhancement OS-ELM (CEOS-ELM), adding adaptive capability for classification and regression problems. The scheme is named adaptive OS-ELM (AOS-ELM). It is a single-classifier scheme that works well to handle real drift, virtual drift, and hybrid drift. AOS-ELM also works well for sudden drift and recurrent context changes. The scheme is a simple, unified method that can be implemented in a few lines of code. We evaluated AOS-ELM on regression and classification problems using public concept drift datasets (SEA and STAGGER) and other public datasets such as MNIST, USPS, and IDS. Experiments show that our method gives a higher kappa value compared to a multi-classifier ELM ensemble. Even though AOS-ELM in practice does not need an increase in hidden nodes, we address some issues related to increasing the hidden nodes, such as error conditions and rank values. We propose taking the rank of the pseudoinverse matrix as an indicator parameter to detect an underfitting condition.
Comment: Published by Hindawi in Computational Intelligence and Neuroscience, Volume 2016 (2016), Article ID 8091267, 17 pages. Received 29 January 2016, accepted 17 May 2016. Special Issue on "Advances in Neural Networks and Hybrid-Metaheuristics: Theory, Algorithms, and Novel Engineering Applications". Academic Editor: Stefan Hauf.
Autonomous Deep Learning: Continual Learning Approach for Dynamic Environments
The feasibility of deep neural networks (DNNs) for data stream problems still requires intensive study because of the static and offline nature of conventional deep learning approaches. In this paper we propose a deep continual learning algorithm, namely autonomous deep learning (ADL). Unlike traditional deep learning methods, ADL features a flexible structure: its network can be constructed from scratch, in the absence of any initial network structure, via a self-constructing mechanism. ADL specifically addresses catastrophic forgetting through a different-depth structure that achieves a trade-off between plasticity and stability. A network significance (NS) formula is proposed to drive the growing and pruning of hidden nodes, and a drift detection scenario (DDS) is put forward to signal distributional changes in data streams that induce the creation of a new hidden layer. The maximum information compression index (MICI) method plays an important role as a complexity reduction module, eliminating redundant layers. The efficacy of ADL is numerically validated under the prequential test-then-train procedure in lifelong environments using nine popular data stream problems. The numerical results demonstrate that ADL consistently outperforms recent continual learning methods while constructing its network structure automatically.
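ADL is evaluated prequentially; the generic test-then-train loop below shows that protocol, in which every example is used for testing before it is used for training. The predict/partial_fit interface is an assumption for illustration, not ADL's API:

    # Generic prequential test-then-train evaluation loop.
    def prequential_evaluate(model, stream):
        """stream yields (x, y); each example is tested before training."""
        correct = total = 0
        for x, y in stream:
            correct += int(model.predict([x])[0] == y)   # test first
            total += 1
            model.partial_fit([x], [y])                  # then train
        return correct / max(total, 1)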
Dynamic Adaptation on Non-Stationary Visual Domains
Domain adaptation aims to learn models on a supervised source domain that perform well on an unsupervised target domain. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e., static datasets. However, with large-scale or dynamic data sources, data from a given domain is not usually available all at once; in a streaming data scenario, for instance, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts applicable to large-scale and streaming data scenarios. The model is adapted sequentially over incoming batches of unsupervised streaming data, enabling improvements over several batches without the need for any additionally annotated data. To demonstrate the effectiveness of our proposed framework, we modify associative domain adaptation to work well on source and target batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy not only for the currently adapted batch, but also when applied to future stream batches. Furthermore, we show the applicability of our associative learning modifications to semantic segmentation, where we achieve competitive results.
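The sequential batch adaptation described above can be sketched as a loop in which the model adapted on one target batch is carried over to the next; adapt_on_batch below stands in for the modified associative domain adaptation step, whose internals the abstract does not spell out:

    # High-level sketch of sequential adaptation over a non-stationary
    # target stream: the model adapted on batch t is the starting point
    # for batch t+1, so gains compound over future batches.
    def streaming_adaptation(model, source_loader, target_batches, adapt_on_batch):
        all_preds = []
        for batch in target_batches:   # unlabeled target batches
            # Unsupervised adaptation on the incoming batch, using labeled
            # source data; the adapted model carries over to future batches.
            model = adapt_on_batch(model, source_loader, batch)
            all_preds.append(model.predict(batch))
        return all_preds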