Autoencoders and Generative Adversarial Networks for Imbalanced Sequence Classification
Generative Adversarial Networks (GANs) have been used in many different
applications to generate realistic synthetic data. We introduce a novel GAN
with Autoencoder (GAN-AE) architecture to generate synthetic samples for
variable length, multi-feature sequence datasets. In this model, we develop a
GAN architecture with an additional autoencoder component, where recurrent
neural networks (RNNs) are used for each component of the model in order to
generate synthetic data to improve classification accuracy for a highly
imbalanced medical device dataset. In addition to the medical device dataset,
we also evaluate the GAN-AE performance on two additional datasets and
demonstrate the application of GAN-AE to a sequence-to-sequence task where both
synthetic sequence inputs and sequence outputs must be generated. To evaluate
the quality of the synthetic data, we train encoder-decoder models both with
and without the synthetic data and compare the classification model
performance. We show that a model trained with GAN-AE-generated synthetic data
outperforms models trained with synthetic data generated both by standard
oversampling techniques such as SMOTE and by autoencoders, as well as by
state-of-the-art GAN-based models.
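As a point of reference for the oversampling baselines mentioned above, classic SMOTE generates a synthetic sample by interpolating a minority-class point with one of its k nearest minority-class neighbours. The following is a minimal pure-Python sketch for fixed-length numeric vectors (unlike the variable-length sequences GAN-AE targets); all names and data are illustrative:

```python
import random

def smote(minority, k=3, n_new=4, seed=0):
    """Generate synthetic minority samples by interpolating each sample
    with one of its k nearest neighbours (classic SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance (excluding x)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote(minority)
print(len(new_points))  # 4 synthetic samples
```

Each synthetic point lies on a segment between two real minority samples, which is precisely the limitation (no genuinely new variation) that generative models such as GAN-AE aim to overcome.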
Verifying baselines for crisis event information classification on Twitter
Social media are rich information sources during and in the aftermath of crisis events such as earthquakes and terrorist attacks. Despite myriad challenges, with the right tools, significant insight can be gained which can assist emergency responders and related applications. However, most extant approaches are incomparable, using bespoke definitions, models, datasets and even evaluation metrics. Furthermore, it is rare that code, trained models, or exhaustive parametrisation details are made openly available. Thus, even confirmation of self-reported performance is problematic; authoritatively determining the state of the art (SOTA) is essentially impossible. Consequently, to begin addressing such endemic ambiguity, this paper seeks to make 3 contributions: 1) the replication and results confirmation of a leading (and generalisable) technique; 2) testing straightforward modifications of the technique likely to improve performance; and 3) the extension of the technique to a novel and complementary type of crisis-relevant information to demonstrate its generalisability.
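One reason self-reported results are hard to confirm is that papers compute evaluation metrics differently. A minimal sketch of per-class precision, recall, and F1 over parallel label lists illustrates the kind of shared metric code that makes baselines directly comparable (the class names and labels are illustrative):

```python
def prf1(gold, pred, positive="crisis"):
    """Precision, recall and F1 for one class, from parallel label lists."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["crisis", "crisis", "other", "crisis", "other"]
pred = ["crisis", "other", "other", "crisis", "crisis"]
print(prf1(gold, pred))  # precision, recall and F1 are each 2/3 here
```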
Active learning in annotating micro-blogs dealing with e-reputation
Elections unleash strong political views on Twitter, but what do people
really think about politics? Opinion and trend mining on micro blogs dealing
with politics has recently attracted researchers in several fields including
Information Retrieval and Machine Learning (ML). Since the performance of ML
and Natural Language Processing (NLP) approaches are limited by the amount and
quality of data available, one promising alternative for some tasks is the
automatic propagation of expert annotations. This paper intends to develop a
so-called active learning process for automatically annotating French language
tweets that deal with the image (i.e., representation, web reputation) of
politicians. Our main focus is on the methodology followed to build an original
annotated dataset expressing opinion from two French politicians over time. We
therefore review state of the art NLP-based ML algorithms to automatically
annotate tweets, using a manual initiation step as a bootstrap. This paper focuses
on key issues in active learning when building a large annotated dataset in the
presence of noise, which is introduced by human annotators, the abundance of data,
and the label distribution across data and entities. In turn, we show that Twitter
characteristics such as the author's name or hashtags can be considered as the
bearing point to not only improve automatic systems for Opinion Mining (OM) and
Topic Classification but also to reduce noise in human annotations. However, a
later thorough analysis shows that reducing noise might induce the loss of
crucial information. Comment: Journal of Interdisciplinary Methodologies and Issues in Science -
Vol 3 - Contextualisation digitale - 201
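The active-learning process described above can be sketched with uncertainty sampling: a model scores unlabeled tweets, and those it is least sure about are routed to human annotators, while confident labels are propagated automatically. The keyword scorer below is a deliberately crude stand-in for a trained NLP classifier, and all cue words and tweets are illustrative:

```python
def score(text, pos_words, neg_words):
    """Toy opinion score in [0, 1] from hand-picked cue words (a real
    system would use a trained classifier)."""
    toks = text.lower().split()
    pos = sum(t in pos_words for t in toks)
    neg = sum(t in neg_words for t in toks)
    total = pos + neg
    return 0.5 if total == 0 else pos / total

def most_uncertain(unlabeled, pos_words, neg_words, batch=2):
    """Uncertainty sampling: select the tweets whose score is closest to
    0.5 for human annotation; the rest get propagated labels."""
    return sorted(
        unlabeled,
        key=lambda t: abs(score(t, pos_words, neg_words) - 0.5),
    )[:batch]

pos, neg = {"great", "strong"}, {"weak", "scandal"}
tweets = [
    "great speech strong leadership",
    "scandal again weak response",
    "press conference at noon",          # no cues: maximally uncertain
    "great reform but scandal looming",  # mixed cues: also uncertain
]
print(most_uncertain(tweets, pos, neg))
```

The two cue-free or mixed-cue tweets are selected for annotation, mirroring how the bootstrap step concentrates scarce expert effort on the examples the automatic system cannot resolve.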
Theoretical and methodological advances in semi-supervised learning and the class-imbalance problem.
201 p. This work focuses on the theoretical and practical generalisation of two challenging, well-known situations from the field of machine learning to classification problems in which the assumption of having a single binary class does not hold. Semi-supervised learning is a technique that uses large amounts of unlabeled data to improve the performance of supervised learning when the labeled dataset is very limited. Specifically, this work contributes powerful and computationally efficient methodologies for learning, in a semi-supervised way, classifiers for multiple class variables. The fundamental limits of semi-supervised learning in multi-class problems are also investigated theoretically. The class-imbalance problem appears when the target variables present a probability distribution imbalanced enough to distort the solutions proposed by traditional supervised learning algorithms. In this project, a theoretical framework is proposed to separate the distortion produced by class imbalance from other factors that affect classifier accuracy. This framework is mainly used to recommend classifier evaluation metrics for this situation. Finally, a measure of the degree of class imbalance in a dataset, correlated with the resulting loss of accuracy, is also proposed. Intelligent Systems Group
Theoretical and Methodological Advances in Semi-supervised Learning and the Class-Imbalance Problem
This paper focuses on the theoretical and practical generalization of two known and challenging situations from the field of machine learning to classification problems in which the assumption of having a single binary class is not fulfilled. Semi-supervised learning is a technique that uses large amounts of unlabeled data to improve the performance of supervised learning when the labeled dataset is very limited. Specifically, this work contributes powerful and computationally efficient methodologies for learning, in a semi-supervised way, classifiers for multiple class variables. The fundamental limits of semi-supervised learning in multi-class problems are also investigated theoretically. The class-imbalance problem appears when the target variables present a probability distribution imbalanced enough to distort the solutions proposed by traditional supervised learning algorithms. In this project, a theoretical framework is proposed to separate the distortion produced by class imbalance from other factors that affect the accuracy of classifiers. This framework is mainly used to recommend classifier assessment metrics for this situation. Finally, a measure of the degree of class imbalance in a dataset, correlated with the resulting loss of accuracy, is also proposed.
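As an illustration of what a scalar class-imbalance measure can look like, the sketch below uses one minus the normalised Shannon entropy of the label distribution: 0 for a perfectly balanced dataset, approaching 1 as the distribution degenerates to a single class. This is a generic, illustrative stand-in, not the measure proposed in the thesis:

```python
from math import log

def imbalance_degree(labels):
    """1 minus normalised Shannon entropy of the label distribution:
    0.0 when perfectly balanced, near 1.0 when one class dominates.
    An illustrative stand-in, not the thesis's proposed measure."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n, k = len(labels), len(counts)
    if k < 2:
        return 1.0  # a single class is maximally imbalanced
    entropy = -sum((c / n) * log(c / n) for c in counts.values())
    return 1.0 - entropy / log(k)

print(imbalance_degree(["a"] * 50 + ["b"] * 50))  # 0.0 (balanced)
print(round(imbalance_degree(["a"] * 98 + ["b"] * 2), 2))  # close to 1
```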
Predicting the Need for Urgent Instructor Intervention in MOOC Environments
In recent years, massive open online courses (MOOCs) have become universal knowledge resources and arguably one of the most exciting innovations in e-learning environments. MOOC platforms comprise numerous courses covering a wide range of subjects and domains. Thousands of learners around the world enrol on these online platforms to satisfy their learning needs (mostly) free of charge. However, the retention rates of MOOC courses (i.e., those who successfully complete a course of study) are low (around 10% on average); dropout rates tend to be very high (around 90%). The principal channel via which MOOC learners can communicate their difficulties with the learning content and ask for assistance from instructors is by posting in a dedicated MOOC forum. Importantly, in the case of learners who are suffering from burnout or stress, some of these posts require urgent intervention.
Given the above, urgent instructor intervention in response to learner requests for assistance via posts made on MOOC forums has become an important research topic. Timely intervention by MOOC instructors may mitigate dropout issues and make the difference between a learner dropping out or staying on a course. However, due to the typically extremely high learner-to-instructor ratio in MOOCs and the often huge numbers of posts on forums, while truly urgent posts are rare, managing them can be very challenging, if not sometimes impossible. Instructors can find it challenging to monitor all existing posts and identify which require immediate intervention to help learners, encourage retention, and reduce the current high dropout rates.
The main objective of this research project, therefore, was thus to mine and analyse learners’ MOOC posts as a fundamental step towards understanding their need for instructor intervention. To achieve this, the researcher proposed and built comprehensive classification models to predict the need for instructor intervention. The ultimate goal is to help instructors by guiding them to posts, topics, and learners that require immediate interventions.
Given the above research aim, the researcher conducted different experiments to fill the gap in the literature, based on datasets from two platforms (the FutureLearn platform and the Stanford MOOCPosts dataset). For the former, three MOOC corpora were prepared: two of them gold-standard MOOC corpora for identifying urgent posts, annotated by selected experts in the field; the third a corpus detailing learner dropout. Based on these datasets, different architectures and classification models, drawing on traditional machine learning and deep learning approaches, were proposed.
In this thesis, the task of determining the need for instructor intervention was tackled from three perspectives: (i) identifying relevant posts, (ii) identifying relevant topics, and (iii) identifying relevant learners. Posts written by learners were classified into two categories: (i) urgent (requiring intervention) and (ii) non-urgent. Learners were likewise classified into: (i) requiring instructor intervention (at risk of dropout) and (ii) not requiring instructor intervention (completers).
In identifying posts, two experiments contributed to this field. The first is a novel classifier based on a deep learning model that integrates novel MOOC post dimensions, such as numerical data, in addition to textual data; this represents a novel contribution to the literature, as all available models at the time of writing were text-only. The results demonstrate that the combined, multidimensional features model proposed in this project is more effective than the text-only model. The second contribution relates to creating various simple and hybrid deep learning models by applying plug-and-play techniques with different types of inputs (word-based or word-character-based) and different ways of representing target input words as vectors. According to the experimental findings, employing Bidirectional Encoder Representations from Transformers (BERT) for word embedding is more effective at the intervention task than word2vec across all models. Interestingly, adding word-character inputs with BERT does not improve performance as it does for word2vec.
Additionally, on the task of identifying topics, this is the first time in the literature that specific language terms identifying the need for urgent intervention in MOOCs were obtained. This was achieved by analysing learner MOOC posts using latent Dirichlet allocation (LDA), and it offers a visualisation tool that may assist instructors or learners and improve instructor intervention. In addition, this thesis contributes to the literature by creating mechanisms for identifying MOOC learners who may need instructor intervention in a new context, i.e., by using their historical online forum posts as a multi-input approach for deep learning architectures and Transformer models. The findings demonstrate that the Transformer model is more effective at identifying MOOC learners who require instructor intervention.
Next, the thesis sought to expand its methodology to identify posts that relate to learner behaviour, which is also a novel contribution, by proposing a novel priority model that identifies the urgency of intervention based on learner histories. This model can classify learners into three groups: low risk, mid risk, and high risk. The results show that the completion rates of high-risk learners are very low, which confirms the importance of this model. Next, as MOOC data tend to be highly imbalanced with respect to urgent posts, the thesis contributes by examining various data-balancing methods to spot situations in which MOOC posts urgently require instructor assistance. This included developing learner and instructor models to assist instructors in responding to urgent MOOC posts. The results show that models with undersampling can predict the most urgent cases; 3x augmentation + undersampling usually attains the best performance. Finally, for the first time, this thesis contributes to the literature by applying text classification explainability (eXplainable Artificial Intelligence, XAI) to an instructor intervention model, demonstrating how a reliable predictor combined with XAI and colour-coded visualisation could assist instructors in deciding when posts require urgent intervention, as well as support annotators in creating high-quality, gold-standard datasets identifying cases where urgent intervention is required.
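The "3x augmentation + undersampling" recipe reported as strongest can be sketched as follows. The token-shuffle augmentation, the label names, and the example posts are deliberately simplistic stand-ins for the thesis's actual methods, shown only to make the rebalancing mechanics concrete:

```python
import random

def balance(posts, labels, aug_factor=3, seed=0):
    """Rebalance an urgent/non-urgent post set: triplicate the minority
    (urgent) class via naive token-shuffle augmentation, then undersample
    the majority class down to the augmented minority size."""
    rng = random.Random(seed)
    minority = [p for p, y in zip(posts, labels) if y == "urgent"]
    majority = [p for p, y in zip(posts, labels) if y != "urgent"]
    augmented = list(minority)
    for post in minority:
        for _ in range(aug_factor - 1):
            toks = post.split()
            rng.shuffle(toks)  # crude augmentation; real work would paraphrase
            augmented.append(" ".join(toks))
    kept = rng.sample(majority, min(len(majority), len(augmented)))
    return ([(p, "urgent") for p in augmented]
            + [(p, "non-urgent") for p in kept])

posts = ["help I am lost"] * 2 + ["nice course thanks"] * 10
labels = ["urgent"] * 2 + ["non-urgent"] * 10
data = balance(posts, labels)
print(sum(1 for _, y in data if y == "urgent"),
      sum(1 for _, y in data if y != "urgent"))  # 6 6
```

Starting from a 2:10 split, augmentation lifts the urgent class to 6 samples and undersampling trims the majority to match, yielding the balanced set the classifier is then trained on.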
Can We Use SE-specific Sentiment Analysis Tools in a Cross-Platform Setting?
In this paper, we address the problem of using sentiment analysis tools
'off-the-shelf,' that is when a gold standard is not available for retraining.
We evaluate the performance of four SE-specific tools in a cross-platform
setting, i.e., on a test set collected from data sources different from the one
used for training. We find that (i) the lexicon-based tools outperform the
supervised approaches retrained in a cross-platform setting and (ii) retraining
can be beneficial in within-platform settings in the presence of robust gold
standard datasets, even using a minimal training set. Based on our empirical
findings, we derive guidelines for reliable use of sentiment analysis tools in
software engineering. Comment: 12 pages
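A lexicon-based approach of the kind that performed best off-the-shelf can be reduced to a few lines: sum word valences from a polarity lexicon and flip the sign under negation. This is an illustrative sketch, not the algorithm of any of the four evaluated tools, and the lexicon entries are made up:

```python
def lexicon_sentiment(text, lexicon):
    """Minimal lexicon-based polarity scorer: sum word valences from the
    lexicon, negating the valence of a word preceded by a negation cue."""
    score, negate = 0, False
    for tok in text.lower().split():
        if tok in {"not", "never", "no"}:
            negate = True
            continue
        if tok in lexicon:
            score += -lexicon[tok] if negate else lexicon[tok]
        negate = False  # negation only affects the next token
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

lexicon = {"great": 1, "works": 1, "broken": -1, "crash": -1}
print(lexicon_sentiment("the fix works great", lexicon))       # positive
print(lexicon_sentiment("build not broken anymore", lexicon))  # positive
print(lexicon_sentiment("app crash on startup", lexicon))      # negative
```

Because such a scorer has no trained parameters, there is nothing to overfit to a source platform, which is consistent with the paper's finding that lexicon-based tools transfer better across platforms than retrained supervised models.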