3 research outputs found

    #ArsonEmergency and Australia's "Black Summer": Polarisation and misinformation on social media

    Full text link
    During the summer of 2019–20, while Australia suffered unprecedented bushfires across the country, false narratives regarding arson and limited backburning spread quickly on Twitter, particularly using the hashtag #ArsonEmergency. Misinformation and bot- and troll-like behaviour were detected and reported by social media researchers, and the news soon reached mainstream media. This paper examines the communication and behaviour of two polarised online communities before and after news of the misinformation became public knowledge. Specifically, the Supporter community actively engaged with others to spread the hashtag, using a variety of news sources pushing the arson narrative, while the Opposer community engaged less, retweeted more, and focused its use of URLs on linking to mainstream sources, debunking the narratives and exposing the anomalous behaviour. This influenced the content of the broader discussion. Bot analysis revealed the active accounts were predominantly human, but behavioural and content analysis suggests Supporters engaged in trolling, though both communities used aggressive language.

    Comment: 16 pages, 8 images, presented at the 2nd Multidisciplinary International Symposium on Disinformation in Open Online Media (MISDOOM 2020), Leiden, The Netherlands. Published in: van Duijn M., Preuss M., Spaiser V., Takes F., Verberne S. (eds) Disinformation in Open Online Media. MISDOOM 2020. Lecture Notes in Computer Science, vol 12259. Springer, Cham. https://doi.org/10.1007/978-3-030-61841-4_1

    Promoting and countering misinformation during Australia’s 2019–2020 bushfires: a case study of polarisation

    Get PDF
    During Australia’s unprecedented bushfires in 2019–2020, misinformation blaming arson surfaced on Twitter using #ArsonEmergency. The extent to which bots and trolls were responsible for disseminating and amplifying this misinformation has received media scrutiny and academic research. Here, we study Twitter communities spreading this misinformation during the newsworthy event, and investigate the role of online communities using a natural experiment approach—before and after reporting of bots promoting the hashtag was broadcast by the mainstream media. Few bots were found, but the most bot-like accounts were social bots, which present as genuine humans, and trolling behaviour was evident. Further, we distilled meaningful quantitative differences between two polarised communities in the Twitter discussion, resulting in the following insights. First, Supporters of the arson narrative promoted misinformation by engaging others directly with replies and mentions using hashtags and links to external sources. In response, Opposers retweeted fact-based articles and official information. Second, Supporters were embedded throughout their interaction networks, but Opposers obtained high centrality more efficiently despite their peripheral positions. By the last phase, Opposers and unaffiliated accounts appeared to coordinate, potentially reaching a broader audience. Finally, the introduction of the bot report changed the discussion dynamic: Opposers responded only immediately, while Supporters countered strongly for days, but new unaffiliated accounts drawn into the discussion shifted the dominant narrative from arson misinformation to factual and official information. This foiled Supporters’ efforts, highlighting the value of exposing misinformation. We speculate that the communication strategies observed here could inform counter-strategies in other misinformation-related discussions.

    Derek Weber, Lucia Falzon, Lewis Mitchell, Mehwish Nasim

    On commenting behavior of Facebook users

    No full text