5,586 research outputs found

    Detecting expressions of blame or praise in text

    The growth of social networking platforms has drawn a lot of attention to the need for social computing. Social computing utilises human insights for computational tasks as well as for the design of systems that support social behaviours and interactions. One of the key aspects of social computing is the ability to attribute responsibility, such as blame or praise, to social events. This ability helps an intelligent entity account for and understand other intelligent entities’ social behaviours, and it enriches both the social functionalities and the cognitive aspects of intelligent agents. In this paper, we present an approach with a model for blame and praise detection in text. We build our model on various theories of blame and include features that humans use when forming such judgments, such as moral agent causality, foreknowledge, intentionality and coercion. An annotated corpus has been created for the task of blame and praise detection from text. The experimental results show that, while our model gives results similar to those of supervised classifiers when classifying text as blame, praise or other, it outperforms supervised classifiers on the finer-grained task of determining the direction of blame and praise, i.e., self-blame, blame-others, self-praise or praise-others, despite not using labelled training data.
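
    As an illustration only, the sketch below mimics the kind of feature-based judgement the abstract describes, using the features it names (moral agent causality, foreknowledge, intentionality, coercion). The rule ordering, the labels and the EventFeatures container are assumptions made for this example, not the paper’s actual model.

```python
# Hypothetical sketch only: a rule-based decision over the features named in the
# abstract. The ordering of the checks and the output labels are assumptions.
from dataclasses import dataclass

@dataclass
class EventFeatures:
    agent_causality: bool   # did the agent cause the outcome?
    foreknowledge: bool     # did the agent foresee the outcome?
    intentionality: bool    # was the outcome intended?
    coercion: bool          # was the agent coerced?
    outcome_negative: bool  # negative outcome -> blame, positive -> praise

def judge(f: EventFeatures) -> str:
    if not f.agent_causality:
        return "other"      # without agent causality, no responsibility judgement
    if f.coercion:
        return "other"      # coercion mitigates blame or praise
    if f.intentionality or f.foreknowledge:
        return "blame" if f.outcome_negative else "praise"
    return "other"

print(judge(EventFeatures(True, True, True, False, True)))  # -> 'blame'
```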

    Implicit emotion detection in text

    In text, emotion can be expressed explicitly, using emotion-bearing words (e.g. happy, guilty), or implicitly, without emotion-bearing words. Existing approaches focus on the detection of explicitly expressed emotion in text. However, there are various ways to express and convey emotions without the use of these emotion-bearing words. For example, given the two sentences “The outcome of my exam makes me happy” and “I passed my exam”, both express happiness, the first explicitly and the second implicitly. In this thesis, we investigate implicit emotion detection in text. We propose a rule-based approach for implicit emotion detection, which can be used without labelled corpora for training. Our results show that our approach consistently outperforms the lexicon-matching method and gives competitive performance in comparison to supervised classifiers. Given that emotions such as guilt and admiration often require the identification of blameworthiness and praiseworthiness, we also propose an approach for the detection of blame and praise in text, using an adapted psychology model, the Path model to blame. The lack of a benchmark dataset led us to construct a corpus containing comments on individuals’ emotional experiences annotated as blame, praise or other. Since implicit emotion detection might be useful for conflict-of-interest (CoI) detection in Wikipedia articles, we built a CoI corpus and explored various features including linguistic and stylometric, presentation, bias and emotion features. Our results show that emotion features are important when using Naïve Bayes, but the best performance is obtained with an SVM using linguistic and stylometric features only. Overall, we show that a rule-based approach can be used to detect implicit emotion in the absence of labelled data, that it is feasible to adopt the psychology Path model to blame for blame/praise detection from text, and that implicit emotion detection is beneficial for CoI detection in Wikipedia articles.
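
    As a purely illustrative aside, the sketch below shows a lexicon-matching baseline of the kind the thesis compares against, applied to the abstract’s own example sentences. The tiny lexicon and the lexicon_match helper are invented for this sketch and are not taken from the thesis.

```python
# Illustrative lexicon-matching baseline: a sentence is tagged with an emotion only
# if it contains an emotion-bearing word, so implicitly expressed emotion is missed.
# The lexicon below is a made-up placeholder for resources such as emotion word lists.
from typing import Optional

EMOTION_LEXICON = {
    "happy": "joy", "glad": "joy", "delighted": "joy",
    "guilty": "guilt", "ashamed": "guilt",
    "angry": "anger", "furious": "anger",
}

def lexicon_match(sentence: str) -> Optional[str]:
    """Return the first matched emotion, or None when no emotion-bearing word appears."""
    for token in sentence.lower().split():
        word = token.strip(".,!?")
        if word in EMOTION_LEXICON:
            return EMOTION_LEXICON[word]
    return None

print(lexicon_match("The outcome of my exam makes me happy"))  # 'joy'  (explicit)
print(lexicon_match("I passed my exam"))                       # None   (implicit, missed)
```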

    Content-Based Conflict-of-Interest Detection on Wikipedia

    Wikipedia is one of the most visited websites in the world. On Wikipedia, Conflict-of-Interest (CoI) editing happens when an editor uses Wikipedia to advance their own interests or relationships; this includes, for example, paid editing done by organisations for public-relations purposes. CoI detection is highly subjective and, although closely related to vandalism and bias detection, it is a more difficult problem. In this paper, we frame CoI detection as a binary classification problem and explore various features which can be used to train supervised classifiers for CoI detection on Wikipedia articles. Our experimental results show that the best F-measure achieved is 0.67, obtained by training an SVM on a combination of features including stylometric, bias and emotion features. As we cannot be certain that our non-CoI set contains no CoI articles, we have also explored the use of one-class classification for CoI detection. The results show that using stylometric features outperforms other types of features, or combinations of them, and gives an F-measure of 0.63. Also, while the binary classifiers give higher recall (0.81–0.94), the one-class classifier attains higher precision (0.69–0.74).
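
    Purely as an illustration of the two setups contrasted in the abstract, the sketch below trains a binary SVM on both classes and fits a one-class SVM on the non-CoI set only, treating outliers as CoI candidates. The random feature vectors are placeholders standing in for the stylometric, bias and emotion features mentioned; nothing here reproduces the paper’s data or results.

```python
# Sketch, with placeholder data: binary SVM (CoI vs. non-CoI) versus one-class SVM
# fitted on the presumed-clean non-CoI set only.
import numpy as np
from sklearn.svm import SVC, OneClassSVM

rng = np.random.default_rng(0)
X_coi = rng.normal(1.0, 1.0, size=(50, 10))    # placeholder CoI feature vectors
X_non = rng.normal(-1.0, 1.0, size=(50, 10))   # placeholder non-CoI feature vectors

# Binary classification: train on both classes.
X = np.vstack([X_coi, X_non])
y = np.array([1] * 50 + [0] * 50)
binary_clf = SVC(kernel="linear").fit(X, y)

# One-class classification: fit on non-CoI articles only;
# predictions of -1 (outliers) are then treated as CoI candidates.
occ = OneClassSVM(nu=0.1).fit(X_non)

print(binary_clf.predict(X_coi[:3]))  # binary decisions for three CoI vectors
print(occ.predict(X_coi[:3]))         # +1 = inlier (non-CoI-like), -1 = outlier
```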

    A Computational Study of Speech Acts in Social Media

    Speech acts are utterances in daily human communication that perform an action (e.g. requesting, suggesting, promising, apologizing). Modeling speech acts is important for improving natural language understanding (i.e. human-computer interaction through computers’ comprehension of human language) and for developing other natural language processing (NLP) tasks such as question answering and machine translation. Analyzing speech acts on a large scale using computational methods could help linguists and social scientists gain insights into human language and behavior. Speech acts such as suggesting, questioning and irony have attracted great attention in previous NLP research. However, two common speech acts, complaining and bragging, have remained under-explored. Complaints are used to express a mismatch between reality and expectations towards an entity or event. Previous research has focused only on binary complaint identification (i.e. whether a social media post contains a complaint or not) using traditional machine learning models with feature engineering. Bragging is one of the most common forms of self-presentation, which aims to create a favorable image by disclosing positive statements about the speaker or their in-group. Previous studies on bragging have been limited to manual analyses of small data sets, e.g. fewer than 300 posts. The main aim of this thesis is to enrich the study of speech acts in computational linguistics. First, we introduce the task of classifying complaint severity levels and propose a method for injecting external linguistic information into pretrained neural language models (e.g. BERT). We show that incorporating linguistic features is beneficial to complaint severity classification. We also improve the performance of binary complaint prediction with the help of complaint severity information in a multi-task learning setting (i.e. by jointly modeling the two tasks). Second, we introduce the task of identifying bragging and classifying its types, together with a new annotated data set. We analyze the linguistic patterns of bragging and its types and present an error analysis to identify model limitations. Finally, we examine the relationship between online bragging and a range of common socio-demographic factors including gender, age, education, income and popularity.
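
    As a rough, assumption-laden illustration of the multi-task setting described, the sketch below jointly trains a binary complaint head and a severity head on top of a shared encoder, summing the two losses so both tasks update the shared parameters. The EmbeddingBag encoder, the dimensions and the dummy batch are placeholders standing in for the pretrained language models (e.g. BERT) used in the thesis.

```python
# Minimal multi-task sketch: shared encoder, two classification heads, summed losses.
# All components and sizes are illustrative placeholders, not the thesis architecture.
import torch
import torch.nn as nn

class MultiTaskComplaintModel(nn.Module):
    def __init__(self, vocab_size=5000, dim=128, n_severity=4):
        super().__init__()
        self.encoder = nn.EmbeddingBag(vocab_size, dim)   # stand-in for a BERT encoder
        self.complaint_head = nn.Linear(dim, 2)           # complaint / not complaint
        self.severity_head = nn.Linear(dim, n_severity)   # severity levels

    def forward(self, token_ids, offsets):
        h = self.encoder(token_ids, offsets)
        return self.complaint_head(h), self.severity_head(h)

model = MultiTaskComplaintModel()
loss_fn = nn.CrossEntropyLoss()

# Dummy batch of two "posts" given as flattened token ids plus bag offsets.
tokens = torch.tensor([1, 2, 3, 4, 5, 6])
offsets = torch.tensor([0, 3])
complaint_y = torch.tensor([1, 0])
severity_y = torch.tensor([2, 0])

complaint_logits, severity_logits = model(tokens, offsets)
loss = loss_fn(complaint_logits, complaint_y) + loss_fn(severity_logits, severity_y)
loss.backward()  # gradients from both tasks flow into the shared encoder
```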

    Responsibility in a World of Causes

    A familiar chain of reasoning goes like this: if everything is caused, then no one is genuinely free; if no one is genuinely free, then no one can be morally responsible for anything; so if everything is caused, then no one can be morally responsible for anything. This paper will challenge the part of this reasoning that concerns moral responsibility. What is at stake for us when we ascribe moral responsibility to ourselves and others? This paper will argue that we can reconcile the idea of moral responsibility with a broadly scientific worldview.

    v. 83, issue 11, February 11, 2016


    Blaming the Refugees? Experimental Evidence On Responsibility Attribution

    Do people blame refugees for negative events? We propose a novel experimental paradigm to measure discrimination in responsibility attribution towards Arabic refugees. Participants in the laboratory experience a positive or negative income shock, which is, with equal probability, caused either by a random draw or by another participant’s performance in a real-effort task. Responsibility attribution is measured by beliefs about whether the shock is due to the other participant’s performance or to the random draw. We find evidence for reverse discrimination: natives attribute responsibility more favorably to refugees than to other natives. In particular, refugees are less often held responsible for negative income shocks. Moreover, natives with negative implicit associations towards Arabic names attribute responsibility less favorably to refugees than natives with positive associations. Since neither actual performance differences nor beliefs about natives’ and refugees’ performance can explain our finding of reverse discrimination, we rule out statistical discrimination as the driving force. We discuss explanations based on theories of self-image and identity concerns.

    Narratives and Performance - The Case of Stock-broking

    The performance of individual stockbrokers differs. This paper aims at explaining these differences, or at least at making some sense of them. In a study of fourteen stockbrokers, the high-performing brokers described their working life in a systematically different way compared to the low-performing brokers. The high- and low-performing brokers gave fundamentally different accounts of what, from an outsider’s viewpoint, seemed to be very similar work and working conditions. The brokers’ different accounts are interpreted and reconstructed into two opposing narratives of the stockbrokers’ world of work. In an ideal-typical sense, these two narratives explain, or at least make sense of, the stockbrokers’ different levels of performance.
    Keywords: narratives; performance; stock-broking; phenomenography; competence; work; interaction; alienation