
    Empirical Evaluation of Automated Sentiment Analysis as a Decision Aid

    Research has consistently shown that online word-of-mouth (WOM) plays an important role in shaping customer attitudes and behaviors. Yet, despite their documented utility, explicit user scores, such as star ratings, have limitations in certain contexts. Automatic sentiment analysis (SA), an analytics technique that assesses the “tone” of a text, has been proposed as a way to address these shortcomings. While extant research on SA has focused on issues surrounding algorithm design and output accuracy, this research-in-progress examines the behavioral and interface-design issues that arise when SA scores reach their intended users. Specifically, in an online context, we experimentally investigate the role of product characteristics (product category) and review characteristics (review extremity) in influencing the perceived usefulness of SA scores. We also investigate whether variations in how the SA scores are presented to the user, and in the nature of the scores themselves, further affect user perceptions.
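    The abstract does not specify which SA algorithm produces the scores; as a hedged illustration only, the sketch below shows one common lexicon-based way of turning review text into a single sentiment score. The word lists, function name, and example review are hypothetical and not drawn from the study.

        # Minimal lexicon-based sentiment scorer (illustrative sketch; the
        # study's actual SA technique is not described in the abstract).
        POSITIVE = {"great", "excellent", "love", "reliable", "fast"}  # hypothetical lexicon
        NEGATIVE = {"poor", "terrible", "hate", "unreliable", "slow"}  # hypothetical lexicon

        def sentiment_score(review: str) -> float:
            """Return a score in [-1, 1]: -1 fully negative, +1 fully positive."""
            tokens = [t.strip(".,!?").lower() for t in review.split()]
            pos = sum(t in POSITIVE for t in tokens)
            neg = sum(t in NEGATIVE for t in tokens)
            total = pos + neg
            return 0.0 if total == 0 else (pos - neg) / total

        # Example: two positive hits, one negative hit -> score of about 0.33.
        print(sentiment_score("Great camera, love the design, but slow delivery."))

    A numeric score of this kind could then be displayed alongside or instead of star ratings, which is the kind of presentation variation the study manipulates.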

    Recognizing cited facts and principles in legal judgements

    In common law jurisdictions, legal professionals cite facts and legal principles from precedent cases to support their arguments before the court for their intended outcome in a current case. This practice stems from the doctrine of stare decisis, under which cases with similar facts should receive similar decisions with respect to the principles. It is essential for legal professionals to identify such facts and principles in precedent cases, though this is a highly time-intensive task. In this paper, we present studies demonstrating that human annotators can achieve reasonable agreement on which sentences in legal judgements contain cited facts and principles (κ=0.65 and κ=0.95 for inter- and intra-annotator agreement, respectively). We further demonstrate that it is feasible to automatically annotate sentences containing such legal facts and principles in a supervised machine learning framework based on linguistic features, reporting per-category precision and recall figures between 0.79 and 0.89 for classifying sentences in legal judgements as cited facts, principles, or neither using a Bayesian classifier, with an overall κ of 0.72 against the human-annotated gold standard.
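    The abstract names neither the exact linguistic features nor the specific Bayesian model; the sketch below is a minimal stand-in, assuming bag-of-words features and a multinomial naive Bayes over the three reported labels (cited fact, principle, neither). The training sentences are invented placeholders, not data from the paper.

        # Illustrative three-way sentence classifier for legal judgements,
        # assuming bag-of-words features and multinomial naive Bayes as a
        # stand-in for the paper's linguistic-feature Bayesian classifier.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.metrics import cohen_kappa_score
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Hypothetical annotated sentences; the real gold standard is the
        # paper's human-annotated corpus.
        sentences = [
            "The defendant delivered the goods three weeks after the agreed date.",
            "A party in fundamental breach cannot rely on an exclusion clause.",
            "Counsel for the appellant made further submissions on costs.",
        ]
        labels = ["fact", "principle", "neither"]

        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(sentences, labels)

        # Agreement with the gold labels, measured as Cohen's kappa, mirrors
        # the paper's overall-κ evaluation (trivially 1.0 here, since we
        # predict on the training sentences themselves).
        predictions = model.predict(sentences)
        print(cohen_kappa_score(labels, predictions))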