8 research outputs found

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying the sources of apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from one repetition of the experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience, yet the nature of such trial-to-trial dynamics remains largely elusive to current data analysis approaches. A range of strategies have been proposed for modalities such as electroencephalography, but gaining fundamental insight into the latent sources of trial-to-trial variability in neural recordings is still a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics, founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic based on the behaviour of trajectories in classification settings, the trajectory coherence, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation); this canonical system serves as a ubiquitous model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in an empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked using simulated data and two different real datasets which have been shown to exhibit attractor dynamics. As an illustrative example, we focus on the analysis of rat frontal cortex ensemble dynamics during a decision-making task. Results suggest that, in line with recent hypotheses, it is a deterministic trend, rather than internal noise, that most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed in this study potentially allows us to infer the source of variability in in vivo neural recordings.
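The forced Duffing oscillator used above as a benchmark system can be simulated in a few lines. The sketch below uses a standard fourth-order Runge-Kutta integrator and illustrative parameter values (the paper's actual settings are not given here) to generate one trajectory in the (x, x') state space.

```python
import numpy as np

def duffing_rhs(t, state, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    """Right-hand side of the forced Duffing oscillator:
    x'' + delta*x' + alpha*x + beta*x**3 = gamma*cos(omega*t)."""
    x, v = state
    return np.array([v, -delta * v - alpha * x - beta * x**3
                     + gamma * np.cos(omega * t)])

def rk4_step(f, state, t, dt):
    """One classical Runge-Kutta (RK4) integration step."""
    k1 = f(t, state)
    k2 = f(t + dt / 2, state + dt / 2 * k1)
    k3 = f(t + dt / 2, state + dt / 2 * k2)
    k4 = f(t + dt, state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Generate one "trial": a trajectory in the (x, x') state space.
dt, n_steps = 0.01, 5000
state = np.array([0.1, 0.0])
trajectory = np.empty((n_steps, 2))
for i in range(n_steps):
    trajectory[i] = state
    state = rk4_step(duffing_rhs, state, i * dt, dt)

print(trajectory.shape)  # (5000, 2)
```

Varying gamma or omega across simulated trials would mimic the non-autonomous (deterministic) source of trial-to-trial variability the paper studies, as opposed to adding stochastic noise.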

    Active Learning for Data Streams under Concept Drift and Concept Evolution

    Data stream classification is an important problem, but it poses many challenges. Since the length of the data is theoretically infinite, it is impractical to store and process all the historical data. Data streams also experience changes in their underlying distribution (concept drift), so the classifier must adapt. Another challenge of data stream classification is the possible emergence and disappearance of classes, known as the concept evolution problem. On top of these challenges, acquiring labels for such large data is expensive. In this paper, we propose a stream-based active learning (AL) strategy (SAL) that handles the aforementioned challenges. SAL queries the labels of the samples whose labelling is expected to minimise the future error. It handles concept drift and concept evolution by adapting to changes in the stream. Furthermore, as part of the error reduction process, SAL addresses the sampling bias problem and queries the samples that caused the change, i.e., drifted samples or samples coming from new classes. To tackle the lack of prior knowledge about the streaming data, non-parametric Bayesian modelling is adopted, namely two representations of the Dirichlet process: Dirichlet mixture models and the stick-breaking process. Empirical results obtained on real-world benchmarks show the high performance of the proposed SAL method compared to state-of-the-art methods.
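As a minimal illustration of the stick-breaking representation of the Dirichlet process that SAL adopts, the sketch below draws a truncated set of mixture weights; the concentration parameter and truncation level are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_weights(alpha, n_atoms):
    """Truncated stick-breaking construction of Dirichlet process weights:
    pi_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha).
    Each v_k breaks off a fraction of the stick that remains."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining

weights = stick_breaking_weights(alpha=2.0, n_atoms=50)
print(weights.sum())  # close to 1 at this truncation depth
```

The open-ended number of atoms is what lets a Dirichlet process mixture accommodate newly emerging classes (concept evolution) without fixing the class count in advance.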

    A Bi-Criteria Active Learning Algorithm for Dynamic Data Streams

    Active learning (AL) is a promising way to efficiently build up training sets with minimal supervision. A learner deliberately queries specific instances to tune the classifier's model using as few labels as possible. The challenge for streaming is that the data distribution may evolve over time, and therefore the model must adapt. Another challenge is sampling bias, where the sampled training set does not reflect the underlying data distribution. In the presence of concept drift, sampling bias is more likely to occur, as the training set needs to represent the whole evolving data. To tackle these challenges, we propose a novel bi-criteria AL approach (BAL) that relies on two selection criteria, namely a label-uncertainty criterion and a density-based criterion. While the first criterion selects instances that are the most uncertain in terms of class membership, the latter dynamically curbs the sampling bias by weighting the samples to reflect the true underlying distribution. To design and implement these two criteria for learning from streams, BAL adopts a Bayesian online learning approach and combines online classification and online clustering through the use of online logistic regression and online growing Gaussian mixture models, respectively. Empirical results obtained on standard synthetic and real-world benchmarks show the high performance of the proposed BAL method compared to state-of-the-art AL methods.
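The two criteria can be sketched as a combined acquisition score: label uncertainty (here, the entropy of a binary class probability) weighted by a density term that counteracts sampling bias. The numbers and the multiplicative combination are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bi_criteria_score(p_pos, density_weight):
    """Combine label uncertainty (binary entropy of P(y=1)) with a
    density weight so that queries also reflect where the data live."""
    p = np.clip(p_pos, 1e-12, 1 - 1e-12)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return entropy * density_weight

# Hypothetical stream snapshot: predicted P(y=1) and density weights.
p_pos = np.array([0.92, 0.51, 0.30])
density = np.array([0.2, 1.0, 0.7])
scores = bi_criteria_score(p_pos, density)
query_index = int(np.argmax(scores))  # instance selected for labelling
print(query_index)  # 1: uncertain AND in a dense region
```

Note how the first instance, despite lying in a sparse region, is never queried: it is too confidently classified, while the second is both ambiguous and representative.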

    A Survey on Concept Drift Adaptation

    Concept drift primarily refers to an online supervised learning scenario in which the relation between the input data and the target variable changes over time. Assuming a general knowledge of supervised learning, in this paper we characterise the adaptive learning process, categorise existing strategies for handling concept drift, present the most representative, distinct and popular techniques and algorithms, discuss the evaluation methodology of adaptive algorithms, and present a set of illustrative applications. This introduction to concept drift adaptation presents the state-of-the-art techniques and a collection of benchmarks for researchers, industry analysts and practitioners. The survey aims to cover the different facets of concept drift in an integrated way that reflects the existing scattered state of the art.

    Active Online Learning for Social Media Analysis to Support Crisis Management

    People use social media (SM) to describe and discuss the different situations they are involved in, such as crises. It is therefore worthwhile to exploit SM content to support crisis management, in particular by revealing useful and unknown information about crises in real time. Hence, we propose a novel active online multiple-prototype classifier, called AOMPC, which identifies data relevant to a crisis. AOMPC is an online learning algorithm that operates on data streams and is equipped with active learning mechanisms to query the labels of ambiguous unlabelled data. The number of queries is controlled by a fixed-budget strategy. Typically, AOMPC accommodates partly labelled data streams. AOMPC was evaluated using two types of data: (1) synthetic data and (2) SM data from Twitter related to two crises, the Colorado Floods and the Australia Bushfires. To provide a thorough evaluation, a whole set of known metrics was used to study the quality of the results, and a sensitivity analysis was conducted to show the effect of AOMPC's parameters on the accuracy of the results. A comparative study of AOMPC against other available online learning algorithms was also performed. The experiments showed the very good behaviour of AOMPC in dealing with evolving, partly labelled data streams.
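A fixed-budget querying strategy of the kind described above can be sketched as follows; the ambiguity margin, threshold and budget values are hypothetical, since the paper's exact rule is not given here.

```python
class BudgetedQuerying:
    """Fixed-budget label querying: query ambiguous samples only while
    the fraction of queried labels stays below a budget."""

    def __init__(self, budget, ambiguity_threshold=0.2):
        self.budget = budget          # max fraction of stream to label
        self.threshold = ambiguity_threshold
        self.seen = 0
        self.queried = 0

    def should_query(self, margin):
        """margin: distance between the two best prototype matches;
        a small margin means the sample is ambiguous."""
        self.seen += 1
        spent = self.queried / self.seen
        if margin < self.threshold and spent < self.budget:
            self.queried += 1
            return True
        return False

al = BudgetedQuerying(budget=0.5)
decisions = [al.should_query(m) for m in [0.05, 0.5, 0.01, 0.3, 0.02]]
print(decisions)  # [True, False, True, False, True]
```

Only the three ambiguous samples (small margins) trigger queries, and further queries would stop as soon as the labelled fraction reached the budget.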

    Active Learning for Classifying Data Streams with Unknown Number of Classes

    The classification of data streams is an interesting but challenging problem. A data stream may grow infinitely, making it impractical to store prior to processing and classification. Due to its dynamic nature, the underlying distribution of the data stream may change over time, resulting in the so-called concept drift, or classes may emerge and fade, a phenomenon known as concept evolution. In addition, acquiring labels for data samples in a stream is admittedly expensive, if not infeasible. In this paper, we propose a novel stream-based active learning algorithm (SAL) which is capable of coping with both concept drift and concept evolution by adapting the classification model to the dynamic changes in the stream. SAL is the first AL algorithm in the literature to explicitly take account of these phenomena. Moreover, SAL queries only the labels of samples that are expected to reduce the expected future error. This is done while tackling the sampling bias problem, so that the samples that induce the change (i.e., drifting samples or samples coming from new classes) are queried. To implement SAL efficiently, the paper proposes the application of non-parametric Bayesian models to cope with the lack of prior knowledge about the data stream. In particular, Dirichlet mixture models and the stick-breaking process are adopted and adapted to meet the requirements of online learning. The empirical results obtained on real-world benchmarks demonstrate the superiority of SAL over state-of-the-art methods in terms of classification performance, measured by average accuracy and average class accuracy.

    Activity Recognition with Evolving Data Streams: A Review

    Activity recognition aims to provide accurate and timely information on people's activities by leveraging the sensory data available in today's sensor-rich environments. Activity recognition has become an emerging field in the areas of pervasive and ubiquitous computing. A typical activity recognition technique processes data streams that originate from sensing platforms such as mobile sensors, on-body sensors, and/or ambient sensors. This paper surveys the two overlapping research areas of activity recognition and data stream mining, reviewing the adaptation capabilities of activity recognition techniques in streaming environments. Categories of techniques are identified based on different features of both data streams and activity recognition; the pros and cons of the algorithms in each category are analysed, and possible directions for future research are indicated.

    Detecting Students At-Risk Using Learning Analytics

    The issue of supporting struggling tertiary students has been a long-standing concern in academia. Universities are increasingly devoting resources to supporting underperforming students, both to enhance each student's ability to achieve better academic performance and to boost retention rates. However, identifying such students represents a heavy workload for educators, given the significant increases in tertiary student numbers over the past decade. Learning analytics can help to address this problem by analysing diverse student characteristics in order to identify underperforming students. Automated, early detection of students at risk of failing or dropping out of academic courses enhances lecturers' capacity to supply timely and proactive interventions with minimal effort, and thereby ultimately improves university outcomes. This thesis focuses on the early detection of struggling students in blended learning settings, based on their online learning activities. Online learning data were used to extract a wide range of online learning characteristics using diverse quantitative, social and qualitative analysis approaches, including an automated mechanism to weight the sentiments expressed in post messages using combinations of adverbs and strengths. The extracted variables are used to predict academic performance in a timely manner. The particular interest of this thesis is in providing accurate, early predictions of students' academic risk. Hence, we proposed a novel Grey Zone design to enhance the quality of binary predictive instruments; the experimental results illustrate its positive overall impact on the predictive models' performances, and indicate that utilising the Grey Zone design improves prediction accuracy by up to 25 percent compared with other commonly used prediction strategies.
    Furthermore, this thesis develops an exemplar multi-course early-warning framework that identifies academically at-risk students on a weekly basis. The predictive framework relies on online learning characteristics to detect struggling students and builds on the Grey Zone design. The multi-course framework was evaluated using a set of unseen datasets drawn from four diverse courses (N = 319) to determine its performance in a real-life situation, alongside identifying the optimal time to start student interventions. The experimental results show the framework's ability to provide early, quality predictions, achieving over 0.92 AUC across most of the evaluated courses. The framework's predictivity analysis indicates that week 3 is the optimal week to begin support interventions. Moreover, within this thesis, an adaptive framework and algorithms were developed to allow the underlying predictive instrument to cope with dynamic changes in the prediction concept. The adaptive framework and algorithms are designed to be applied to the predictive instrument developed for the multi-course framework. The adaptive strategy was evaluated in two scenarios, with and without a forgetting mechanism for historical instances. The results show the ability of the proposed adaptive strategy to enhance the performance of updated predictive instruments compared with a static, non-updated baseline model; utilising a forgetting mechanism for historical data instances led the system to achieve significantly faster and better adaptation outcomes.
    Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 201
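A Grey Zone design for a binary risk predictor can be read as a three-way decision: confident "at-risk" / "not-at-risk" calls outside an intermediate probability band, with judgements inside the band deferred for further evidence. The sketch below is an assumption about this general idea; the band edges are illustrative, not the thesis's values.

```python
def grey_zone_label(p_risk, low=0.4, high=0.6):
    """Three-way decision for a binary risk predictor: confident labels
    outside the [low, high] band, a deferred 'grey-zone' label inside it.
    The band edges are hypothetical, for illustration only."""
    if p_risk >= high:
        return "at-risk"
    if p_risk <= low:
        return "not-at-risk"
    return "grey-zone"

labels = [grey_zone_label(p) for p in (0.9, 0.5, 0.1)]
print(labels)  # ['at-risk', 'grey-zone', 'not-at-risk']
```

Deferring borderline cases is one plausible way such a design could trade coverage for precision, so that the interventions triggered by confident "at-risk" calls are better targeted.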