3 research outputs found

    Characterizing Collective Attention via Descriptor Context: A Case Study of Public Discussions of Crisis Events

    Social media datasets make it possible to rapidly quantify collective attention to emerging topics and breaking news, such as crisis events. Collective attention is typically measured by aggregate counts, such as the number of posts that mention a name or hashtag. But according to rationalist models of natural language communication, the collective salience of each entity will be expressed not only in how often it is mentioned, but in the form that those mentions take. This is because natural language communication is premised on (and customized to) the expectations that speakers and writers have about how their messages will be interpreted by the intended audience. We test this idea by conducting a large-scale analysis of public online discussions of breaking news events on Facebook and Twitter, focusing on five recent crisis events. We examine how people refer to locations, focusing specifically on contextual descriptors, such as "San Juan" versus "San Juan, Puerto Rico." Rationalist accounts of natural language communication predict that such descriptors will be unnecessary (and therefore omitted) when the named entity is expected to have high prior salience to the reader. We find that the use of contextual descriptors is indeed associated with proxies for social and informational expectations, including macro-level factors like the location's global salience and micro-level factors like audience engagement. We also find a consistent decrease in descriptor context use over the lifespan of each crisis event. These findings provide evidence about how social media users communicate with their audiences, and point towards more fine-grained models of collective attention that may help researchers and crisis response organizations to better understand public perception of unfolding crisis events.
    Comment: ICWSM 2020
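    The measurement idea behind this result can be sketched compactly. Below is a minimal, illustrative Python example contrasting the two quantities the abstract discusses: raw mention counts versus the per-day rate of descriptor-marked mentions over an event's lifespan. The toy posts, the single entity, and the comma-based descriptor pattern are assumptions for illustration, not the paper's actual pipeline.

        import re
        from collections import defaultdict

        # Hypothetical toy data: (days since event onset, post text).
        POSTS = [
            (0, "Praying for San Juan, Puerto Rico tonight"),
            (1, "San Juan, Puerto Rico still has no power"),
            (4, "Volunteers arriving in San Juan today"),
            (7, "San Juan slowly recovering"),
        ]

        ENTITY = "San Juan"
        # Treat a mention as descriptor-marked if the entity is followed by a
        # comma and a capitalized containing region ("San Juan, Puerto Rico").
        DESCRIPTOR_RE = re.compile(re.escape(ENTITY) + r",\s+[A-Z][\w .]+")

        def descriptor_rate_by_day(posts):
            """Per-day fraction of entity mentions carrying a contextual descriptor."""
            marked, total = defaultdict(int), defaultdict(int)
            for day, text in posts:
                if ENTITY in text:
                    total[day] += 1
                    if DESCRIPTOR_RE.search(text):
                        marked[day] += 1
            return {day: marked[day] / total[day] for day in sorted(total)}

        print(descriptor_rate_by_day(POSTS))
        # {0: 1.0, 1: 1.0, 4: 0.0, 7: 0.0} -- a declining descriptor rate over
        # the event lifespan, the pattern the paper reports at scale.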

    Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events

    This paper investigates bias in coverage between Western and Arab media on Twitter after the November 2015 Beirut and Paris terror attacks. Using two Twitter datasets, one for each attack, we investigate how Western and Arab media differed in coverage bias, sympathy bias, and resulting information propagation. We crowdsourced sympathy and sentiment labels for 2,390 tweets across four languages (English, Arabic, French, German), built a regression model to characterize sympathy, and then trained a deep convolutional neural network to predict sympathy. Key findings show that: (a) both events were disproportionately covered; (b) Western media exhibited less sympathy overall, and each region's media was more sympathetic toward the country affected in its own region; (c) sympathy predictions supported the ground-truth analysis that Western media was less sympathetic than Arab media; and (d) sympathetic tweets did not spread any further than other tweets. We discuss our results in light of global news flow, Twitter affordances, and impact on public perception.
    Comment: In Proc. CHI 2018 Papers program. Please cite: El Ali, A., Stratmann, T., Park, S., Schöning, J., Heuten, W., & Boll, S. (2018). Measuring, Understanding, and Classifying News Media Sympathy on Twitter after Crisis Events. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA. DOI: https://doi.org/10.1145/3173574.317413
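    As a rough illustration of the final modeling step, the following is a minimal 1D-convolutional text classifier in Keras that predicts a binary sympathy label from tokenized tweets. All hyperparameters, the binary framing, and the random stand-in data are assumptions, not the authors' actual architecture or label scheme.

        import numpy as np
        import tensorflow as tf
        from tensorflow.keras import layers

        VOCAB_SIZE = 20_000  # assumed vocabulary size
        MAX_LEN = 50         # assumed maximum tweet length in tokens

        # Embeddings -> n-gram-like convolution filters -> max pooling
        # -> sigmoid output interpreted as P(sympathetic).
        model = tf.keras.Sequential([
            layers.Embedding(VOCAB_SIZE, 128),
            layers.Conv1D(128, 5, activation="relu"),
            layers.GlobalMaxPooling1D(),
            layers.Dense(64, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])

        # Random stand-in for integer-encoded, padded tweets and labels.
        x = np.random.randint(0, VOCAB_SIZE, size=(256, MAX_LEN))
        y = np.random.randint(0, 2, size=(256,))
        model.fit(x, y, epochs=1, batch_size=32)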

    The laws of "LOL": Computational approaches to sociolinguistic variation in online discussions

    When speaking or writing, a person often chooses one form of language over another based on social constraints, including expectations in a conversation, participation in a global change, or expression of underlying attitudes. Sociolinguistic variation (e.g. choosing "going" versus "goin'") can reveal consistent social differences such as dialects and consistent social motivations such as audience design. While traditional sociolinguistics studies variation in spoken communication, computational sociolinguistics investigates written communication on social media. The structured nature of online discussions and the diversity of language patterns allow computational sociolinguists to test highly specific hypotheses about communication, such as different configurations of the listener "audience." Studying communication choices in online discussions sheds light on long-standing sociolinguistic questions that are otherwise hard to tackle, and helps social media platforms anticipate their members' complicated patterns of participation in conversations. To that end, this thesis explores open questions in sociolinguistic research by quantifying language variation patterns in online discussions. I leverage the "bird's-eye" view of social media to focus on three major questions in sociolinguistics research relating to authors' participation in online discussions. First, I test the role of conversation expectations in the context of content bans and crisis events, and I show that authors vary their language to adjust to audience expectations in line with community standards and shared knowledge. Next, I investigate language change in online discussions and show that language structure, more than social context, explains word adoption. Lastly, I investigate the expression of social attitudes among multilingual speakers, and I find that such attitudes can explain language choice when the attitudes have a clear social meaning based on the discussion context. This thesis demonstrates the rich opportunities that social media provides for addressing sociolinguistic questions and provides insight into how people adapt to the communication affordances of online platforms.
    Comment: Ph.D. thesis
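    The kind of variation measurement the abstract describes reduces, at its simplest, to comparing variant rates across audience configurations. A minimal Python sketch, with hypothetical audience labels and toy observations standing in for real discussion data:

        from collections import Counter

        # Hypothetical (audience, realized form) observations of the
        # sociolinguistic variable (going ~ goin').
        OBSERVATIONS = [
            ("in-group", "goin'"), ("in-group", "goin'"), ("in-group", "going"),
            ("broadcast", "going"), ("broadcast", "going"), ("broadcast", "goin'"),
        ]

        def variant_rates(observations, variant="goin'"):
            """Per-audience rate of the nonstandard variant among all realizations."""
            counts = Counter(observations)  # (audience, form) -> count
            rates = {}
            for audience in sorted({aud for aud, _ in counts}):
                total = sum(c for (aud, _), c in counts.items() if aud == audience)
                rates[audience] = counts[(audience, variant)] / total
            return rates

        print(variant_rates(OBSERVATIONS))
        # broadcast ~0.33, in-group ~0.67: higher nonstandard-variant use with an
        # in-group audience is the audience-design pattern at issue.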