2 research outputs found

    Automatic Detection of Emotions and Distress in Textual Data

    Online data can be analyzed for many purposes, including stock market prediction and business and political planning. Online data can also be used to develop systems for the automatic emotion detection and mental health assessment of users. These systems can serve as complementary measures in monitoring online forums by detecting users who are in need of attention. In this thesis, we first present a new approach for contextual emotion detection, i.e., emotion detection in short conversations. The approach is based on a neural feature extractor, composed of a recurrent neural network with an attention mechanism, followed by a final classifier, which can be neural or SVM-based. The results from our experiments showed that, by providing higher and more robust performance, an SVM can act as a better final classifier than a feed-forward neural network. We then extended our model for emotion detection and created an ensemble approach for the task of distress detection from online data. This extended approach utilizes several attention-based neural sub-models to extract features and predict class probabilities, which are later used as input features to a Support Vector Machine (SVM) that makes the final classification. Our experiments show that an ensemble approach which makes use of different sub-models accessing diverse sources of information can improve classification in the absence of a large annotated dataset. The extended model was evaluated on two shared tasks, CLPsych and eRisk 2019, which aim at suicide risk assessment and early risk detection of anorexia, respectively. The model ranked first in tasks A and C of CLPsych 2019 (with macro-average F1 scores of 0.481 and 0.268, respectively), and ranked first in the first task of eRisk 2019 in terms of F1 and latency-weighted F1 scores (0.71 and 0.69, respectively).
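The stacking scheme the abstract describes (sub-models emit class probabilities, which an SVM then combines) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the attention-based neural sub-models are stood in for by a synthetic probability generator, and all sizes and the random seed are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_samples, n_classes, n_submodels = 200, 3, 4
y = rng.integers(0, n_classes, size=n_samples)

def submodel_probs(labels, noise=1.0):
    """Stand-in for one attention-based sub-model: noisy class
    probabilities correlated with the true label."""
    logits = rng.normal(scale=noise, size=(len(labels), n_classes))
    logits[np.arange(len(labels)), labels] += 2.0  # signal on true class
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Concatenate each sub-model's probability vector into one feature
# vector per sample; the SVM makes the final classification.
X = np.hstack([submodel_probs(y) for _ in range(n_submodels)])

clf = SVC(kernel="rbf").fit(X[:150], y[:150])
accuracy = clf.score(X[150:], y[150:])
print(f"held-out accuracy: {accuracy:.2f}")
```

The design point being illustrated: each sub-model contributes a low-dimensional, already-normalized view of the input, so the final SVM needs far fewer training examples than a classifier trained on raw features would.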

    Tune your brown clustering, please

    Brown clustering, an unsupervised hierarchical clustering technique based on n-gram mutual information, has proven useful in many NLP applications. However, most uses of Brown clustering employ the same default configuration; the appropriateness of this configuration has gone predominantly unexplored. Accordingly, we present information for practitioners on the behaviour of Brown clustering in order to assist hyper-parameter tuning, in the form of a theoretical model of Brown clustering utility. This model is then evaluated empirically in two sequence labelling tasks over two text types. We explore the dynamic between the input corpus size, the chosen number of classes, and the quality of the resulting clusters, which has implications for any approach using Brown clustering. In every scenario that we examine, our results reveal that the values most commonly used for the clustering are sub-optimal.
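The hyper-parameter the abstract centres on is the number of classes. A toy sketch of the core greedy loop (each word starts in its own class; the pair of classes whose merge best preserves the average mutual information of class bigrams is merged until the target count is reached) makes that knob concrete. The corpus is illustrative, and this brute-force version omits the windowed optimizations real Brown clustering implementations use.

```python
from collections import Counter
from itertools import combinations
from math import log

corpus = "the cat sat on the mat the dog sat on the rug".split()

def ami(assign):
    """Average mutual information of class bigrams under a word->class map."""
    uni = Counter(assign[w] for w in corpus)
    bi = Counter(zip([assign[w] for w in corpus[:-1]],
                     [assign[w] for w in corpus[1:]]))
    n_uni, n_bi = sum(uni.values()), sum(bi.values())
    return sum((n / n_bi) * log((n / n_bi) /
               ((uni[c1] / n_uni) * (uni[c2] / n_uni)))
               for (c1, c2), n in bi.items())

def brown_cluster(num_classes):
    assign = {w: w for w in set(corpus)}  # each word is its own class
    while len(set(assign.values())) > num_classes:
        classes = sorted(set(assign.values()))
        # Greedily merge the pair that keeps AMI highest.
        a, b = max(combinations(classes, 2),
                   key=lambda pair: ami({w: (pair[0] if c == pair[1] else c)
                                         for w, c in assign.items()}))
        assign = {w: (a if c == b else c) for w, c in assign.items()}
    return assign

clusters = brown_cluster(3)
print(clusters)
```

Varying the `num_classes` argument on a fixed corpus is exactly the corpus-size/class-count trade-off the paper studies: too few classes conflate distributionally distinct words, while too many leave too little evidence per class.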