980 research outputs found
COMPOSE: Compacted object sample extraction, a framework for semi-supervised learning in nonstationary environments
An increasing number of real-world applications involve streaming data drawn from drifting, nonstationary distributions. These applications demand algorithms that can learn and adapt to such changes, known as concept drift. Properly characterizing such data with existing approaches typically requires a substantial number of labeled instances, which may be difficult, expensive, or even impractical to obtain. This thesis introduces compacted object sample extraction (COMPOSE), a computational geometry-based framework for learning from nonstationary streaming data in which labels are unavailable (or presented only sporadically) after initialization. The feasibility and performance of the algorithm are evaluated on several synthetic and real-world data sets representing a variety of initially labeled streaming environments. On carefully designed synthetic data sets, we also compare COMPOSE against the optimal Bayes classifier and the arbitrary subpopulation tracker algorithm, which addresses a similar setting referred to as extreme verification latency. Furthermore, using the real-world National Oceanic and Atmospheric Administration weather data set, we demonstrate that COMPOSE is competitive even with a well-established, fully supervised nonstationary learning algorithm that receives labeled data in every batch.
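The iterate-and-compact idea the abstract describes (classify the new unlabeled batch with the current labeled set, then keep only "core" samples to seed the next time step) can be sketched as follows. This is a simplified illustration, not the thesis's implementation: the alpha-shape geometric compaction is replaced with a centroid-distance rule, the semi-supervised learner with 1-NN, and all function names are hypothetical.

```python
import numpy as np

def extract_core_samples(X, y, shrink=0.5):
    """Toy stand-in for COMPOSE's geometric compaction: for each class,
    keep only the points closest to the class centroid. The real method
    uses alpha-shape compaction; this centroid rule is a simplification."""
    keep_X, keep_y = [], []
    for label in np.unique(y):
        pts = X[y == label]
        dist = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
        cutoff = np.quantile(dist, shrink)   # keep the inner `shrink` fraction
        core = pts[dist <= cutoff]
        keep_X.append(core)
        keep_y.append(np.full(len(core), label))
    return np.vstack(keep_X), np.concatenate(keep_y)

def compose_step(X_labeled, y_labeled, X_new):
    """One COMPOSE-style iteration: label the incoming unlabeled batch
    (here with a simple 1-NN rule standing in for the semi-supervised
    learner), then compact the result so only core samples carry the
    class information forward to the next time step."""
    d = np.linalg.norm(X_new[:, None, :] - X_labeled[None, :, :], axis=2)
    y_new = y_labeled[d.argmin(axis=1)]      # 1-NN prediction per new point
    return extract_core_samples(X_new, y_new)
```

Under slow drift, the compacted core of each class at time t overlaps the class's region at time t+1, which is what lets the scheme keep tracking classes without fresh labels.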
Accumulating regional density dissimilarity for concept drift detection in data streams
© 2017 Elsevier Ltd
In a nonstationary environment, newly received data may exhibit different knowledge patterns from the data used to train learning models. As time passes, a learning model's performance may become increasingly unreliable. This problem, known as concept drift, is a common issue in real-world domains, and its detection has attracted increasing attention in recent years. However, very few existing methods pay attention to small regional drifts, and their accuracy may vary due to differing statistical significance tests. This paper presents a novel concept drift detection method based on regional-density estimation, named nearest neighbor-based density variation identification (NN-DVI). It consists of three components. The first is a k-nearest neighbor-based space-partitioning schema (NNPS), which transforms unmeasurable discrete data instances into a set of shared subspaces for density estimation. The second is a distance function that accumulates the density discrepancies in these subspaces and quantifies the overall difference. The third is a tailored statistical significance test by which the confidence interval of a concept drift can be accurately determined. The distance applied in NN-DVI is sensitive to regional drift and has been proven to follow a normal distribution; as a result, NN-DVI's accuracy and false-alarm rate are statistically guaranteed. Several benchmarks, including both synthetic and real-world datasets, have been used to evaluate the method. The overall results show that NN-DVI performs better at addressing problems related to concept drift detection.
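The accumulated regional-density dissimilarity at the heart of NN-DVI can be sketched roughly as below. This is an illustrative simplification, not the paper's exact formulation: the k-NN neighborhood of each pooled point stands in for the shared subspaces, and the normalization is a guess.

```python
import numpy as np

def regional_density_dissimilarity(S1, S2, k=5):
    """Sketch of an NN-DVI-style distance: pool both samples, treat each
    point's k-NN neighborhood as a shared subspace, and accumulate the
    discrepancy between how densely S1 vs. S2 populate each subspace.
    A neighborhood dominated by one sample signals a regional drift."""
    pooled = np.vstack([S1, S2])
    n1, n2 = len(S1), len(S2)
    # pairwise distances within the pooled sample
    d = np.linalg.norm(pooled[:, None, :] - pooled[None, :, :], axis=2)
    # indices of each point's k nearest neighbors (self included)
    nn = np.argsort(d, axis=1)[:, :k]
    total = 0.0
    for neigh in nn:
        c1 = np.sum(neigh < n1)          # neighborhood members from S1
        c2 = k - c1                      # neighborhood members from S2
        total += abs(c1 / n1 - c2 / n2)  # per-region density discrepancy
    return total / len(pooled)           # average over all regions
```

Because the discrepancy is summed region by region, a drift confined to a small area still contributes, which is the property the paper targets with its tailored significance test.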
OEBench: Investigating Open Environment Challenges in Real-World Relational Data Streams
How to obtain insights from relational data streams in a timely manner is a hot research topic. This type of data stream presents unique challenges, such as distribution drifts, outliers, emerging classes, and changing features, which have recently been described as open environment challenges for machine learning. While existing studies have examined incremental learning for data streams, their evaluations are mostly conducted with manually partitioned datasets. A natural question is therefore what those open environment challenges look like in real-world relational data streams and how existing incremental learning algorithms perform on real datasets. To fill this gap, we develop an Open Environment Benchmark named OEBench to evaluate open environment challenges in relational data streams. Specifically, we investigate 55 real-world relational data streams and establish that open environment scenarios are indeed widespread in real-world datasets, which presents significant challenges for stream learning algorithms. Through benchmarks with existing incremental learning algorithms, we find that increased data quantity does not consistently enhance model accuracy in open environment scenarios, where machine learning models can be significantly compromised by missing values, distribution shifts, or anomalies in real-world data streams. Current techniques are insufficient to effectively mitigate the challenges posed by open environments, and more research is needed to address them. All datasets and code are open-sourced at https://github.com/sjtudyq/OEBench
Scalable Teacher Forcing Network for Semi-Supervised Large Scale Data Streams
The large-scale data stream problem refers to high-speed information flows that cannot be processed in a scalable manner on a traditional computing platform. The problem also imposes expensive labelling costs, making the deployment of fully supervised algorithms unfeasible. At the same time, the semi-supervised large-scale data stream problem remains little explored in the literature, because most existing works are designed for traditional single-node computing environments and are fully supervised. This paper proposes the Weakly Supervised Scalable Teacher Forcing Network (WeScatterNet) to cope with the scarcity of labelled samples and large-scale data streams simultaneously. WeScatterNet is built on the Apache Spark distributed computing platform with a data-free model fusion strategy for model compression after the parallel computing stage. It features an open network structure to address the global and local drift problems, while integrating a data augmentation, annotation and auto-correction method for handling partially labelled data streams. The performance of WeScatterNet is numerically evaluated on six large-scale data stream problems using only a fraction of the labels. It shows highly competitive performance even when compared with fully supervised learners that receive all labels.
Comment: This paper has been accepted for publication in Information Sciences.
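The partially-labelled-stream idea behind the annotation and auto-correction step can be illustrated with a minimal pseudo-labelling sketch: classify unlabelled points, but accept only the confident predictions. This is an illustrative stand-in, not WeScatterNet's actual network; the nearest-centroid classifier, the relative-margin confidence rule, and the threshold are all assumptions.

```python
import numpy as np

def pseudo_label_batch(X_lab, y_lab, X_unlab, margin=0.2):
    """Pseudo-label an unlabelled stream batch: predict with a
    nearest-centroid rule and keep only points whose relative margin
    between the two closest class centroids exceeds `margin`.
    Ambiguous points are left unlabelled rather than risk propagating
    wrong labels into later training."""
    classes = np.unique(y_lab)
    centroids = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_unlab[:, None, :] - centroids[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    idx = np.arange(len(X_unlab))
    best, second = d[idx, order[:, 0]], d[idx, order[:, 1]]
    confident = (second - best) / (second + 1e-12) > margin
    return X_unlab[confident], classes[order[confident, 0]]
```

The confident pseudo-labels can then be merged with the scarce true labels for the next update; an auto-correction step would additionally revisit earlier pseudo-labels as the model improves.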
DynED: Dynamic Ensemble Diversification in Data Stream Classification
Ensemble methods are commonly used in classification due to their remarkable
performance. Achieving high accuracy in a data stream environment is a
challenging task considering disruptive changes in the data distribution, also
known as concept drift. A greater diversity of ensemble components is known to
enhance prediction accuracy in such settings. Despite the diversity of
components within an ensemble, not all contribute as expected to its overall
performance. This necessitates a method for selecting components that exhibit
high performance and diversity. We present a novel ensemble construction and
maintenance approach based on MMR (Maximal Marginal Relevance) that dynamically
combines the diversity and prediction accuracy of components during the process
of structuring an ensemble. Experimental results on four real and 11
synthetic datasets demonstrate that the proposed approach (DynED) achieves
higher average accuracy than five state-of-the-art baselines.
Comment: Proceedings of the 32nd ACM International Conference on Information
and Knowledge Management (CIKM '23), October 21-25, 2023, Birmingham, United
Kingdom.
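The MMR-based selection the DynED abstract describes can be sketched as a greedy trade-off between a component's accuracy and its redundancy with components already chosen. The scoring form below follows the classic Maximal Marginal Relevance pattern; the specific weighting, the diversity matrix, and the parameter names are illustrative assumptions, not DynED's exact procedure.

```python
import numpy as np

def mmr_select(accuracies, pairwise_div, k, lam=0.7):
    """Greedy MMR-style ensemble pruning: start from the most accurate
    component, then repeatedly add the component maximizing
    lam * accuracy - (1 - lam) * redundancy, where redundancy is the
    worst-case similarity (1 - diversity) to any already-selected
    component. Returns the indices of the k chosen components."""
    n = len(accuracies)
    selected = [int(np.argmax(accuracies))]
    while len(selected) < k:
        best_i, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            redundancy = max(1.0 - pairwise_div[i][j] for j in selected)
            score = lam * accuracies[i] - (1.0 - lam) * redundancy
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected
```

With lam near 1 the selection degenerates into top-k accuracy; lowering lam increasingly favors components that err differently from the current pool, which is the diversity effect the paper exploits under concept drift.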