
    Reputation Agent: Prompting Fair Reviews in Gig Markets

    Our study presents a new tool, Reputation Agent, to promote fairer reviews from requesters (employers or customers) in gig markets. Unfair reviews, created when requesters consider factors outside a worker's control, are known to plague gig workers and can result in lost job opportunities and even termination from the marketplace. Our tool leverages machine learning to implement an intelligent interface that: (1) uses deep learning to automatically detect when an individual has included unfair factors in her review (factors outside the worker's control per the policies of the market); and (2) prompts the individual to reconsider her review if she has incorporated unfair factors. To study the effectiveness of Reputation Agent, we conducted a controlled experiment over different gig markets. Our experiment illustrates that across markets, Reputation Agent, in contrast with traditional approaches, motivates requesters to review gig workers' performance more fairly. We discuss how tools that bring more transparency to employers about the policies of a gig market can help build empathy, resulting in reasoned discussions around potential injustices towards workers generated by these interfaces. Our vision is that with tools that promote truth and transparency we can bring fairer treatment to gig workers. Comment: 12 pages, 5 figures, The Web Conference 2020, ACM WWW 2020

    Studying and Modeling the Connection between People's Preferences and Content Sharing

    People regularly share items using online social media. However, people's decisions around sharing---who shares what to whom and why---are not well understood. We present a user study involving 87 pairs of Facebook users to understand how people make their sharing decisions. We find that even when sharing to a specific individual, people's own preference for an item (individuation) dominates over the recipient's preferences (altruism). People's open-ended responses about how they share, however, indicate that they do try to personalize shares based on the recipient. To explain these contrasting results, we propose a novel process model of sharing that takes into account people's preferences and the salience of an item. We also present encouraging results for a sharing prediction model that incorporates both the senders' and the recipients' preferences. These results suggest improvements both to algorithms that support sharing in social media and to information diffusion models. Comment: CSCW 201

    Crowdsourcing a Word-Emotion Association Lexicon

    Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word-emotion and word-polarity association lexicon quickly and inexpensively. We enumerate the challenges of emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotion-annotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.
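    The inter-annotator agreement comparison described in this abstract is typically quantified with a chance-corrected statistic such as Fleiss' kappa. A minimal sketch follows; the `fleiss_kappa` function and its input format are illustrative, not taken from the paper:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Chance-corrected agreement for a fixed-size rater panel.

    ratings: one Counter per annotated term, mapping a category
    (e.g. 'associated' / 'not associated') to the number of
    annotators who chose it; every term has the same rater count.
    """
    n = sum(ratings[0].values())   # raters per term
    N = len(ratings)               # number of terms
    # Mean per-term agreement: probability two raters agree on a term
    P_bar = sum(
        (sum(c * c for c in item.values()) - n) / (n * (n - 1))
        for item in ratings
    ) / N
    # Chance agreement from overall category proportions
    categories = set().union(*ratings)
    p = {j: sum(item.get(j, 0) for item in ratings) / (N * n) for j in categories}
    P_e = sum(v * v for v in p.values())
    return (P_bar - P_e) / (1 - P_e)

# Five annotators judging two terms, perfect agreement -> kappa = 1.0
rounds = [Counter({"associated": 5}), Counter({"not associated": 5})]
print(fleiss_kappa(rounds))  # 1.0
```

    Kappa near 1 indicates agreement well beyond chance, so two question formulations can be compared directly by the kappa each yields.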

    Augmenting the performance of image similarity search through crowdsourcing

    Crowdsourcing is defined as “outsourcing a task that is traditionally performed by an employee to a large group of people in the form of an open call” (Howe 2006). Many platforms have been designed to support several types of crowdsourcing, and studies have shown that results produced by crowds on these platforms are generally accurate and reliable. Crowdsourcing can provide a fast and efficient way to use the power of human computation to solve problems that are difficult for machines to perform. From the several microtasking crowdsourcing platforms available, we chose Amazon Mechanical Turk for our study. In the context of our research we studied the effect of user interface design and its corresponding cognitive load on the performance of crowd-produced results. Our results highlighted the importance of a well-designed user interface for crowdsourcing performance. Using crowdsourcing platforms such as Amazon Mechanical Turk, we can utilize humans to solve problems that are difficult for computers, such as image similarity search. However, in tasks like image similarity search, it is more efficient to design a hybrid human–machine system. In the context of our research, we studied the effect of involving the crowd on the performance of an image similarity search system and proposed a hybrid human–machine image similarity search system. Our proposed system uses machine power to perform heavy computations and to search for similar images within the image dataset, and uses crowdsourcing to refine results. We designed our content-based image retrieval (CBIR) system using SIFT, SURF, SURF128 and ORB feature detector/descriptors and compared the performance of the system using each feature detector/descriptor. Our experiment confirmed that crowdsourcing can dramatically improve the CBIR system performance.
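    The hybrid pipeline this abstract describes — a machine stage that shortlists candidates, then a crowd stage that refines the ranking — can be sketched roughly as follows. The toy feature vectors, function names, and vote format are illustrative stand-ins; the actual system computes SIFT/SURF/ORB descriptors over real images:

```python
import math

def machine_rank(query, candidates, top_k=3):
    """Machine stage: shortlist candidates by Euclidean distance
    between feature vectors (toy stand-ins for real descriptors)."""
    return sorted(candidates, key=lambda c: math.dist(query, c["features"]))[:top_k]

def crowd_refine(shortlist, votes):
    """Crowd stage: re-rank the machine shortlist by how many
    workers judged each image similar to the query."""
    return sorted(shortlist, key=lambda c: votes.get(c["id"], 0), reverse=True)

catalog = [
    {"id": "img1", "features": [0.9, 0.1]},
    {"id": "img2", "features": [0.2, 0.8]},
    {"id": "img3", "features": [0.85, 0.2]},
    {"id": "img4", "features": [0.1, 0.9]},
]
shortlist = machine_rank([1.0, 0.0], catalog, top_k=2)   # machine keeps img1, img3
ranked = crowd_refine(shortlist, votes={"img3": 4, "img1": 2})
print([c["id"] for c in ranked])  # ['img3', 'img1']
```

    The division of labor is the point of the design: the cheap machine pass prunes the full dataset so the expensive crowd pass only sees a small shortlist.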

    Loud and Trendy: Crowdsourcing Impressions of Social Ambiance in Popular Indoor Urban Places

    New research cutting across architecture, urban studies, and psychology is contextualizing the understanding of urban spaces according to the perceptions of their inhabitants. One fundamental construct that relates place and experience is ambiance, which is defined as "the mood or feeling associated with a particular place". We posit that the systematic study of ambiance dimensions in cities is a new domain for which multimedia research can make pivotal contributions. We present a study examining how images collected from social media can be used for the crowdsourced characterization of indoor ambiance impressions in popular urban places. We design a crowdsourcing framework to understand the suitability of social images as a data source to convey place ambiance, to examine what types of images are most suitable to describe ambiance, and to assess how people perceive places socially from the perspective of ambiance along 13 dimensions. Our study is based on 50,000 Foursquare images collected from 300 popular places across six cities worldwide. The results show that reliable estimates of ambiance can be obtained for several of the dimensions. Furthermore, we found that most aggregate impressions of ambiance are similar across popular places in all studied cities. We conclude by presenting a multidisciplinary research agenda for future research in this domain.

    The Use of Online Panel Data in Management Research: A Review and Recommendations

    Management scholars have long depended on convenience samples to conduct research involving human participants. However, the past decade has seen the emergence of a new convenience sample: online panels and online panel participants. The data these participants provide—online panel data (OPD)—has been embraced by many management scholars owing to the numerous benefits it provides over “traditional” convenience samples. Despite those advantages, OPD has not been warmly received by all. Currently, there is a divide in the field over the appropriateness of OPD in management scholarship. Our review takes aim at the divide with the goal of providing a common understanding of OPD and its utility, and providing recommendations regarding when and how to use OPD and how and where to publish it. To accomplish these goals, we inventoried and reviewed OPD use across 13 management journals spanning 2006 to 2017. Our search resulted in 804 OPD-based studies across 439 articles. Notably, our search also identified 26 online panel platforms (“brokers”) used to connect researchers with online panel participants. Importantly, we offer specific guidance to authors, reviewers, and editors, with implications for both micro and macro management scholars.

    Dynamic Estimation of Rater Reliability using Multi-Armed Bandits

    One of the critical success factors for supervised machine learning is the quality of target values, or predictions, associated with training instances. Predictions can be discrete labels (such as a binary variable specifying whether a blog post is positive or negative) or continuous ratings (for instance, how boring a video is on a 10-point scale). In some areas, predictions are readily available, while in others, the effort of human workers has to be involved. For instance, in the task of emotion recognition from speech, a large corpus of speech recordings is usually available, and humans denote which emotions are present in which recordings.
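    The bandit framing in this paper's title can be illustrated with a simple epsilon-greedy sketch, in which each rater is an arm and the reward is agreement with a gold or consensus label. The `stats` layout and function names here are illustrative assumptions, not the paper's actual method:

```python
import random

def pick_rater(stats, epsilon=0.1, rng=random):
    """Epsilon-greedy arm selection: with probability epsilon try a
    random rater (explore); otherwise pick the rater whose estimated
    reliability -- mean reward so far -- is highest (exploit)."""
    if rng.random() < epsilon:
        return rng.choice(list(stats))
    return max(stats, key=lambda r: stats[r]["reward"] / max(stats[r]["pulls"], 1))

def record(stats, rater, agreed):
    """Reward is 1 when the rater's label agreed with the reference label."""
    stats[rater]["pulls"] += 1
    stats[rater]["reward"] += int(agreed)

stats = {
    "rater_a": {"pulls": 10, "reward": 9},  # ~90% estimated reliability
    "rater_b": {"pulls": 10, "reward": 3},  # ~30% estimated reliability
}
print(pick_rater(stats, epsilon=0.0))  # rater_a
```

    Exploration keeps the reliability estimates of rarely used raters from going stale, while exploitation routes most instances to the raters currently estimated to be most reliable.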

    A Cultural Comparison of the Facial Inference Process

    The purpose of this study was to compare emotion and personality trait attributions to facial expressions between American and Asian Indian samples. Data were collected using Amazon.com’s Mechanical Turk (MTurk). Participants in this study were asked to infer the emotions and personality traits shown in three facial expressions (scowling, frowning, and smiling) of young white females and males in six photographs. Each picture was randomly presented for 10 seconds followed by four randomized questions about the individual in the picture. The first question asked participants to identify the emotion shown from a list of six emotions (anger, disgust, fear, happiness, sadness, surprise). The next three questions consisted of condensed sets of the Big Five personality adjective markers (Minimarkers) (Saucier, 1994), the three Self-Assessment Manikin dimensions (SAM) (Bradley & Lang, 1994), and items related to attractiveness, perceived motivation, and morality inferences. In this study, the “Halo” and “Horns” effects were hypothesized to occur for both cultures, with some cultural differences. Smiling facial expressions (male and female) were hypothesized and found to have higher emotion judgment accuracy (happiness) and more inferred positive personality traits for both cultures (attractive, not threatening, agreeable, extroverted, pleasing to look at, positive, conscientious, and open-minded). Scowling facial expressions were hypothesized to have the following attributions: anger, unattractive, threatening, excitable, close-minded, not pleasing to look at, bad, negative, dominant, disagreeable, and unconscientious. Frowning facial expressions were hypothesized to be perceived as: sad, unattractive, good, submissive, not threatening, not pleasing to look at, positive, and calm. The results for the smiling and frowning facial expressions showed high mean answer choice accuracy for both cultures regardless of gender in the photograph. 
Greater accuracy in emotion and trait attributions was hypothesized for U.S. participants because collectivist cultures (India) have trouble expressing and identifying negative emotions since they disturb the harmony of the social group (Matsumoto, 1989, 1992a; Schimmack, 1996). However, results showed that both cultures made the correct emotional inferences and personality trait attributions to the six facial expressions for all four questions, except for the Indians on the scowling female facial expression across each of the four questions.