
    Off-The-Shelf Artificial Intelligence Technologies for Sentiment and Emotion Analysis: A Tutorial on Using IBM Natural Language Processing

    Artificial intelligence (AI) rests on the premise that machines can behave in a human-like way and potentially solve complex analytics problems. In recent years, we have seen several off-the-shelf AI technologies that claim to be ready to use. In this paper, we illustrate how one such technology, IBM Natural Language Understanding (NLU), can be used to solve a data-analytics problem. First, we provide a detailed step-by-step tutorial on how to use NLU. Next, we introduce our case study, in which we investigated the implications of Starbucks’ pledge to hire refugees. In this context, we used NLU to assign sentiment and emotion scores to social media posts related to Starbucks made before and after the pledge. We found that consumers’ sentiment towards Starbucks became more positive after the pledge, whereas investors’ sentiment became more negative. Interestingly, we found no significant relationship between consumers’ and investors’ sentiments. With help from NLU, we also found that consumers’ sentiments lacked consensus, in that their social media posts contained a great deal of mixed emotions. As part of our case study, we found that NLU correctly classified the polarity of sentiments 72.64 percent of the time, an accuracy much higher than the 49.77 percent achieved by the traditional bag-of-words approach. Besides illustrating how practitioners and researchers can use off-the-shelf AI technologies in practice, we believe the results from our case study provide value to organizations interested in implementing corporate social responsibility policies.
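    The step-by-step tutorial is in the paper itself; as a rough illustration of the kind of call involved, the sketch below uses the ibm-watson Python SDK to request document-level sentiment and emotion scores for a single post. The credentials, version date, and example text are placeholders, not values from the paper.

```python
# Minimal sketch of scoring one post with IBM Watson Natural Language Understanding.
# Assumes the ibm-watson Python SDK; credentials and the example post are placeholders.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, SentimentOptions, EmotionOptions,
)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Authenticate against an IBM Cloud NLU instance (placeholder credentials).
authenticator = IAMAuthenticator("YOUR_API_KEY")
nlu = NaturalLanguageUnderstandingV1(version="2022-04-07", authenticator=authenticator)
nlu.set_service_url("YOUR_SERVICE_URL")

post = "Proud to see Starbucks standing by its hiring pledge."  # hypothetical post

# Request document-level sentiment and emotion scores in a single call.
response = nlu.analyze(
    text=post,
    features=Features(sentiment=SentimentOptions(), emotion=EmotionOptions()),
).get_result()

sentiment = response["sentiment"]["document"]          # label plus score in [-1, 1]
emotions = response["emotion"]["document"]["emotion"]  # joy, anger, sadness, fear, disgust
print(sentiment["label"], sentiment["score"], emotions)
```

    Looping such a call over a corpus of pre- and post-pledge posts would yield the per-post sentiment and emotion scores that analyses like the one in the paper aggregate.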

    How many crowdsourced workers should a requester hire?

    Recent years have seen an increased interest in crowdsourcing as a way of obtaining information from a potentially large group of workers at a reduced cost. The crowdsourcing process, as we consider it in this paper, is as follows: a requester hires a number of workers to work on a set of similar tasks. After completing the tasks, each worker reports back outputs. The requester then aggregates the reported outputs to obtain aggregate outputs. A crucial question that arises during this process is: how many crowd workers should a requester hire? In this paper, we investigate from an empirical perspective the optimal number of workers a requester should hire when crowdsourcing tasks, with a particular focus on the crowdsourcing platform Amazon Mechanical Turk. Specifically, we report the results of three studies involving different tasks and payment schemes. We find that both the expected error in the aggregate outputs and the risk of a poor combination of workers decrease as the number of workers increases. Surprisingly, we find that the optimal number of workers a requester should hire for each task is around 10 to 11, regardless of the underlying task and payment scheme. To derive this result, we employ a principled analysis based on bootstrapping and segmented linear regression. Beyond this result, we also find that, overall, top-performing workers are more consistent across multiple tasks than other workers. Our results thus contribute to a better understanding of, and provide new insights into, how to design more effective crowdsourcing processes.
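    The analysis rests on bootstrapping worker subsets and fitting a segmented (piecewise-linear) regression to the error-versus-workers curve. The sketch below illustrates that general idea under simplifying assumptions: synthetic binary-labeling data, majority-vote aggregation, and a brute-force single-breakpoint fit. It is not the authors' actual pipeline, tasks, or payment schemes.

```python
# Sketch of the bootstrap-and-segmented-regression idea from the abstract.
# Synthetic data and majority-vote aggregation are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_workers, accuracy = 200, 40, 0.7
truth = rng.integers(0, 2, size=n_tasks)                       # hidden correct labels
# Each worker answers each task correctly with probability `accuracy`.
reports = np.where(rng.random((n_workers, n_tasks)) < accuracy, truth, 1 - truth)

def bootstrap_error(k, n_boot=500):
    """Expected error of majority vote when k workers are resampled with replacement."""
    errs = []
    for _ in range(n_boot):
        sample = reports[rng.integers(0, n_workers, size=k)]    # k workers, with replacement
        votes = (sample.mean(axis=0) > 0.5).astype(int)         # majority vote; ties go to 0
        errs.append(np.mean(votes != truth))
    return float(np.mean(errs))

ks = np.arange(1, 26)
errors = np.array([bootstrap_error(k) for k in ks])

def fit_segmented(x, y):
    """Brute-force two-segment linear fit; returns the breakpoint with the lowest SSE."""
    best_bp, best_sse = None, np.inf
    for bp in x[2:-2]:                                          # keep several points per segment
        sse = 0.0
        for mask in (x <= bp, x > bp):
            coef = np.polyfit(x[mask], y[mask], 1)
            sse += np.sum((np.polyval(coef, x[mask]) - y[mask]) ** 2)
        if sse < best_sse:
            best_bp, best_sse = bp, sse
    return best_bp

print("estimated breakpoint (diminishing returns) at k =", fit_segmented(ks, errors))
```

    The breakpoint of the fitted segmented regression marks where adding workers stops paying off; in the paper's studies that point falls around 10 to 11 workers.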