    Hot Streaks on Social Media

    Measuring the impact and success of human performance is common in various disciplines, including art, science, and sports. Quantifying impact also plays a key role on social media, where impact is usually defined as the reach of a user's content as captured by metrics such as the number of views, likes, retweets, or shares. In this paper, we study entire careers of Twitter users to understand properties of impact. We show that user impact tends to have certain characteristics: First, impact is clustered in time, such that the most impactful tweets of a user appear close to each other. Second, users commonly have 'hot streaks' of impact, i.e., extended periods of high-impact tweets. Third, impact tends to gradually build up before, and fall off after, a user's most impactful tweet. We attempt to explain these characteristics using various properties measured on social media, including the user's network, content, activity, and experience, and find that changes in impact are associated with significant changes in these properties. Our findings open interesting avenues for future research on virality and influence on social media. Comment: Accepted as a full paper at ICWSM 2019. Please cite the ICWSM version.
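
    As an illustration of the temporal-clustering claim, the sketch below (not the authors' code; the toy data and the span-based measure are assumptions) compares how tightly a user's top-impact tweets sit in their posting timeline against a random-placement baseline.

    ```python
    import random

    def temporal_clustering(impacts, k=3, trials=1000):
        """Span of the top-k tweets' positions in the career vs. a random baseline.

        A smaller observed span than the baseline suggests that the most
        impactful tweets cluster in time, as described in the abstract."""
        n = len(impacts)
        top_idx = sorted(range(n), key=lambda i: impacts[i], reverse=True)[:k]
        observed = max(top_idx) - min(top_idx)
        baseline = sum(
            max(s) - min(s) for s in (random.sample(range(n), k) for _ in range(trials))
        ) / trials
        return observed, baseline

    # Toy career: per-tweet impact (e.g., retweet counts) in posting order.
    impacts = [2, 1, 3, 2, 40, 55, 38, 4, 1, 2, 3, 1]
    print(temporal_clustering(impacts))
    ```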

    Reverse-Engineering Satire, or "Paper on Computational Humor Accepted Despite Making Serious Advances"

    Humor is an essential human trait. Efforts to understand humor have called out links between humor and the foundations of cognition, as well as the importance of humor in social engagement. As such, it is a promising and important subject of study, with relevance for artificial intelligence and human-computer interaction. Previous computational work on humor has mostly operated at a coarse level of granularity, e.g., predicting whether an entire sentence, paragraph, document, etc., is humorous. As a step toward deep understanding of humor, we seek fine-grained models of attributes that make a given text humorous. Starting from the observation that satirical news headlines tend to resemble serious news headlines, we build and analyze a corpus of satirical headlines paired with nearly identical but serious headlines. The corpus is constructed via Unfun.me, an online game that incentivizes players to make minimal edits to satirical headlines with the goal of making other players believe the results are serious headlines. The edit operations used to successfully remove humor pinpoint the words and concepts that play a key role in making the original, satirical headline funny. Our analysis reveals that the humor tends to reside toward the end of headlines, and primarily in noun phrases, and that most satirical headlines follow a certain logical pattern, which we term false analogy. Overall, this paper deepens our understanding of the syntactic and semantic structure of satirical news headlines and provides insights for building humor-producing systems. Comment: Proceedings of the 33rd AAAI Conference on Artificial Intelligence, 2019.
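
    A minimal sketch of the kind of analysis this setup enables (the headline pair is hypothetical, not drawn from the real Unfun.me corpus): diff a satirical headline against its serious counterpart to pinpoint the words that carry the humor.

    ```python
    import difflib

    def humor_edits(satirical, serious):
        """Return the word-level edit operations that turn the satirical headline
        into its serious counterpart; the touched words are the humor carriers."""
        sat_tokens = satirical.lower().split()
        ser_tokens = serious.lower().split()
        matcher = difflib.SequenceMatcher(None, sat_tokens, ser_tokens)
        edits = []
        for op, i1, i2, j1, j2 in matcher.get_opcodes():
            if op != "equal":
                edits.append((op, sat_tokens[i1:i2], ser_tokens[j1:j2]))
        return edits

    # Hypothetical pair in the spirit of Unfun.me (not from the actual corpus).
    satirical = "Mayor unveils bold plan to do absolutely nothing"
    serious = "Mayor unveils bold plan to cut city spending"
    print(humor_edits(satirical, serious))
    # -> the edit falls on the final noun phrase, where the humor resides
    ```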

    When Sheep Shop: Measuring Herding Effects in Product Ratings with Natural Experiments

    As online shopping becomes ever more prevalent, customers rely increasingly on product rating websites for making purchase decisions. The reliability of online ratings, however, is potentially compromised by the so-called herding effect: when rating a product, customers may be biased to follow other customers' previous ratings of the same product. This is problematic because it skews long-term customer perception through haphazard early ratings. The study of herding poses methodological challenges. In particular, observational studies are impeded by the lack of counterfactuals: simply correlating early with subsequent ratings is insufficient because we cannot know what the subsequent ratings would have looked like had the first ratings been different. The methodology introduced here exploits a setting that comes close to an experiment, although it is purely observational: a natural experiment. Our key methodological device consists in studying the same product on two separate rating sites, focusing on products that received a high first rating on one site, and a low first rating on the other. This largely controls for confounds such as a product's inherent quality, advertising, and producer identity, and lets us isolate the effect of the first rating on subsequent ratings. In a case study, we focus on beers as products and jointly study two beer rating sites, but our method applies to any pair of sites across which products can be matched. We find clear evidence of herding in beer ratings. For instance, if a beer receives a very high first rating, its second rating is on average half a standard deviation higher, compared to a situation where the identical beer receives a very low first rating. Moreover, herding effects tend to last a long time and are noticeable even after 20 or more ratings. Our results have important implications for the design of better rating systems. Comment: Submitted to WWW 2018, April 2018 (10 pages, 6 figures, 6 tables); added acknowledgement.
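
    The sketch below illustrates the natural-experiment contrast on toy data (the schema and numbers are assumptions, not the paper's dataset): for each product matched across two sites, compare the second rating on the site where the first rating was high with the second rating on the site where it was low.

    ```python
    import pandas as pd

    # Hypothetical schema: one row per rating, with the position of the rating
    # in the product's history on that site (1 = first rating, 2 = second, ...).
    ratings = pd.DataFrame({
        "product":  ["A", "A", "A", "A", "B", "B", "B", "B"],
        "site":     [1, 1, 2, 2, 1, 1, 2, 2],
        "position": [1, 2, 1, 2, 1, 2, 1, 2],
        "score":    [4.8, 4.5, 1.5, 3.9, 4.9, 4.6, 1.2, 3.8],
    })

    first = ratings[ratings.position == 1].set_index(["product", "site"]).score
    second = ratings[ratings.position == 2].set_index(["product", "site"]).score
    df = pd.DataFrame({"first": first, "second": second}).reset_index()

    # Contrast, per product, the second rating on the high-first-rating site
    # with the second rating on the low-first-rating site.
    high = df.loc[df.groupby("product")["first"].idxmax(), ["product", "second"]]
    low = df.loc[df.groupby("product")["first"].idxmin(), ["product", "second"]]
    gap = high.set_index("product")["second"] - low.set_index("product")["second"]
    print("mean herding gap (rating points):", gap.mean())
    # Dividing by the std of second ratings would express the effect in
    # standard-deviation units, as reported in the abstract.
    ```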

    Positive and Negative Congruency Effects in Masked Priming: A Neuro-computational Model Based on Representation Strength and Attention

    Positive priming effects have been found with a short time between the prime and the target, while negative priming effects (i.e., a congruent prime causes longer reaction times) have been found with a long time between the prime and the target. In the current study, positive and negative priming effects were found using stimuli that have strong and weak representations, respectively, without changing the time between prime and target. A model was developed that fits our results. The model also fits a wide range of previous results in this area. In contrast to other approaches, our model depends on attentional neuro-modulation rather than motor self-inhibition.

    Quootstrap: Scalable Unsupervised Extraction of Quotation-Speaker Pairs from Large News Corpora via Bootstrapping

    We propose Quootstrap, a method for extracting quotations, as well as the names of the speakers who uttered them, from large news corpora. Whereas prior work has addressed this problem primarily with supervised machine learning, our approach follows a fully unsupervised bootstrapping paradigm. It leverages the redundancy present in large news corpora, more precisely, the fact that the same quotation often appears across multiple news articles in slightly different contexts. Starting from a few seed patterns, such as ["Q", said S.], our method extracts a set of quotation-speaker pairs (Q, S), which are in turn used for discovering new patterns expressing the same quotations; the process is then repeated with the larger pattern set. Our algorithm is highly scalable, which we demonstrate by running it on the large ICWSM 2011 Spinn3r corpus. Validating our results against a crowdsourced ground truth, we obtain 90% precision at 40% recall using a single seed pattern, with significantly higher recall values for more frequently reported (and thus likely more interesting) quotations. Finally, we showcase the usefulness of our algorithm's output for computational social science by analyzing the sentiment expressed in our extracted quotations. Comment: Accepted at the 12th International Conference on Web and Social Media (ICWSM), 2018.
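
    A toy sketch of one bootstrapping iteration as described above (the in-memory corpus, seed regex, and pattern templating are simplified assumptions; the real system runs at corpus scale and scores candidate patterns):

    ```python
    import re

    corpus = [
        '"We will win", said Jane Doe.',
        'Jane Doe told reporters: "We will win".',
        '"Taxes must fall", said John Roe.',
    ]

    # Seed pattern corresponding to ["Q", said S.].
    seed_patterns = [re.compile(r'"([^"]+)", said ([A-Z][a-z]+ [A-Z][a-z]+)')]

    # Step 1: extract (quotation, speaker) pairs with the seed patterns.
    pairs = set()
    for doc in corpus:
        for pat in seed_patterns:
            for quote, speaker in pat.findall(doc):
                pairs.add((quote, speaker))

    # Step 2: find new contexts of known pairs and turn them into new patterns
    # (templates identical to a seed would be deduplicated in a real run).
    new_patterns = set()
    for quote, speaker in pairs:
        for doc in corpus:
            if quote in doc and speaker in doc:
                new_patterns.add(doc.replace(quote, "$Q").replace(speaker, "$S"))

    print(sorted(pairs))
    print(sorted(new_patterns))  # e.g. '$S told reporters: "$Q".'
    ```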

    Increasing EHR Use for Quality Improvement in Community Health Centers: The Role of Networks

    Describes how five community health center networks helped implement electronic health records to improve chronic and preventive care, as well as the obstacles they faced, including limited software capabilities, funding, and the ability to share resources.

    How Constraints Affect Content: The Case of Twitter's Switch from 140 to 280 Characters

    It is often said that constraints affect creative production, both in terms of form and quality. Online social media platforms frequently impose constraints on the content that users can produce, limiting the range of possible contributions. Do these restrictions tend to push creators towards producing more or less successful content? How do creators adapt their contributions to fit the limits imposed by social media platforms? To answer these questions, we conduct an observational study of a recent event: on November 7, 2017, Twitter changed the maximum allowable length of a tweet from 140 to 280 characters, thereby significantly altering its signature constraint. In the first study of this switch, we compare tweets with nearly or exactly 140 characters before the change to tweets of the same length posted after the change. This setup enables us to characterize how users alter their tweets to fit the constraint and how this affects their tweets' success. We find that in response to a length constraint, users write more tersely, use more abbreviations and contracted forms, and use fewer definite articles. Also, although in general tweet success increases with length, we find initial evidence that tweets made to fit the 140-character constraint tend to be more successful than similar-length tweets written when the constraint was removed, suggesting that the length constraint improved tweet quality. Comment: To appear in the Proceedings of AAAI ICWSM 2018.
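
    The linguistic comparison described here could be approximated as in the sketch below (the feature definitions and the tiny abbreviation list are illustrative assumptions, not the paper's feature set): compute terseness markers for near-limit tweets posted before and after the switch and compare group averages.

    ```python
    # Illustrative list only; a real study would use a curated lexicon.
    ABBREVIATIONS = {"u", "r", "ur", "b4", "thx", "pls", "w/", "&"}

    def terseness_features(tweet):
        """Count markers the abstract associates with writing under a tight limit."""
        tokens = tweet.lower().split()
        return {
            "length": len(tweet),
            "abbreviations": sum(t in ABBREVIATIONS for t in tokens),
            "definite_articles": sum(t == "the" for t in tokens),
        }

    # Hypothetical comparison groups: near-140-character tweets posted before
    # the switch vs. same-length tweets posted after it.
    before = ["thx 4 support, u r amazing & we will keep going"]
    after = ["Thanks for all of the support, you are amazing and we will keep going"]

    for name, group in [("before", before), ("after", after)]:
        feats = [terseness_features(t) for t in group]
        avg = {k: sum(f[k] for f in feats) / len(feats) for k in feats[0]}
        print(name, avg)
    ```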

    Leveling the Field: Talking Levels in Cognitive Science

    Talk of levels is everywhere in cognitive science. Whether it is in terms of adjudicating longstanding debates or motivating foundational concepts, one cannot go far without hearing about the need to talk at different ‘levels’. Yet in spite of its widespread application and use, the concept of levels has received little sustained attention within cognitive science. This paper provides an analysis of the various ways the notion of levels has been deployed within cognitive science. The paper begins by introducing and motivating discussion via four representative accounts of levels. It then turns to outlining and relating the four accounts using two dimensions of comparison. The result is the creation of a conceptual framework that maps the logical space of levels talk, which offers an important step toward making sense of levels talk within cognitive science.

    Characterising the ‘Txt2Stop’ Smoking Cessation Text Messaging Intervention in Terms of Behaviour Change Techniques

    The ‘Txt2Stop’ SMS messaging programme has been found to double smokers’ chances of stopping. It is important to characterise the content of these messages in terms of specific behaviour change techniques (BCTs) to inform future development. This study aimed to (i) extend a proven system for coding BCTs to text messaging and (ii) characterise Txt2Stop using this system. A taxonomy previously used to specify BCTs in face-to-face behavioural support for smoking cessation was adapted for the Txt2Stop messages and inter-rater reliability for the adapted system assessed. The system was then applied to all the messages in the Txt2Stop programme to determine its profile in terms of BCTs used. The text message taxonomy comprised 34 BCTs. Inter-rater reliability was moderate, reaching a ceiling of 61% for the core programme messages, with all discrepancies readily resolved. Of 899 texts delivering BCTs (a single text could deliver more than one BCT), 218 aimed to maintain motivation to remain abstinent, 870 to enhance self-regulatory capacity or skills, 39 to promote use of adjuvant behaviours such as using stop-smoking medication, 552 to maintain engagement with the intervention and 24 were general communication techniques. The content of Txt2Stop focuses on helping smokers with self-regulation and maintaining engagement with the intervention. The intervention focuses to a lesser extent on boosting motivation to remain abstinent; little attention is given to promoting effective use of adjuvant behaviours such as use of nicotine replacement therapy. As new interventions of this kind are developed it will be possible to compare their effectiveness and relate this to standardised descriptions of their content using this system.
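
    For the inter-rater reliability step, the sketch below shows two standard agreement measures on hypothetical single-label codes (a simplification: in the actual coding a message can deliver several BCTs, and the abstract does not specify which statistic the 61% refers to).

    ```python
    from collections import Counter

    def percent_agreement(coder_a, coder_b):
        """Share of messages given the same code by both coders."""
        return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

    def cohens_kappa(coder_a, coder_b):
        """Chance-corrected agreement for the same two label sequences."""
        n = len(coder_a)
        po = percent_agreement(coder_a, coder_b)
        freq_a, freq_b = Counter(coder_a), Counter(coder_b)
        pe = sum(freq_a[c] * freq_b[c] for c in set(coder_a) | set(coder_b)) / n**2
        return (po - pe) / (1 - pe)

    # Hypothetical codes assigned by two raters to five messages.
    coder_a = ["motivation", "self-regulation", "adjuvant", "engagement", "motivation"]
    coder_b = ["motivation", "self-regulation", "engagement", "engagement", "motivation"]
    print(percent_agreement(coder_a, coder_b), round(cohens_kappa(coder_a, coder_b), 2))
    ```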

    Structuring Wikipedia Articles with Section Recommendations

    Sections are the building blocks of Wikipedia articles. They enhance readability and can be used as a structured entry point for creating and expanding articles. Structuring a new or already existing Wikipedia article with sections is a hard task for humans, especially for newcomers or less experienced editors, as it requires significant knowledge of what a well-written article looks like for each possible topic. Inspired by this need, the present paper defines the problem of section recommendation for Wikipedia articles and proposes several approaches for tackling it. Our systems can help editors by recommending what sections to add to already existing or newly created Wikipedia articles. Our basic paradigm is to generate recommendations by sourcing sections from articles that are similar to the input article. We explore several ways of defining similarity for this purpose (based on topic modeling, collaborative filtering, and Wikipedia's category system). We use both automatic and human evaluation approaches for assessing the performance of our recommendation system, concluding that the category-based approach works best, achieving precision@10 of about 80% in the human evaluation. Comment: SIGIR '18 camera-ready.
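
    A toy sketch of the category-based variant that performed best (the article data, scoring, and precision@k computation are illustrative assumptions): pool sections from articles sharing a category with the input article, rank them by frequency, and evaluate with precision@10.

    ```python
    from collections import Counter

    # Hypothetical toy data: each article has a set of categories and sections.
    articles = {
        "Aspirin": {"categories": {"Drugs"}, "sections": ["History", "Medical uses", "Side effects"]},
        "Ibuprofen": {"categories": {"Drugs"}, "sections": ["Medical uses", "Side effects", "Chemistry"]},
        "Paris": {"categories": {"Cities"}, "sections": ["History", "Geography", "Culture"]},
    }

    def recommend_sections(input_categories, k=10):
        """Rank sections of articles that share a category with the input article."""
        counts = Counter()
        for art in articles.values():
            if art["categories"] & input_categories:
                counts.update(art["sections"])
        return [section for section, _ in counts.most_common(k)]

    def precision_at_k(recommended, relevant, k=10):
        """Precision over the returned top-k list."""
        top = recommended[:k]
        return sum(section in relevant for section in top) / max(len(top), 1)

    recs = recommend_sections({"Drugs"}, k=10)
    print(recs, precision_at_k(recs, {"History", "Medical uses", "Pharmacology"}))
    ```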