3 research outputs found

    Automatic Distinction Between Twitter Bots and Humans

    Weak artificial intelligence uses encoded rules to process information. This kind of intelligence is competent but lacks consciousness, and therefore cannot comprehend what it is doing. In contrast, strong artificial intelligence has a mind of its own that resembles a human mind. Many of the bots on Twitter simply follow a set of encoded rules. Previous studies have created machine learning algorithms to determine whether a Twitter account is run by a human or a bot. Twitter bots are improving, and some even fool humans. Creating a machine learning algorithm that differentiates a bot from a human becomes harder as bots behave more like humans. The present study focuses on the interaction between humans and computers, and on how some Twitter bots seem to trick people into thinking they are human. This thesis evaluates and compares decision trees, random forests of decision trees, and naïve Bayes classifiers to determine which of these machine learning approaches yields the best performance. Each algorithm uses features available to users on Twitter. Sentiment analysis of a tweet is used to determine whether or not bots display any emotion, and this is included as a feature in each algorithm. Results show that some of the most useful features for classifying bots versus humans are the total number of favorites a tweet received, the total number of tweets from the account, and whether or not the tweet contained a link. Random forests are able to detect bots 100% of the time on one of the datasets used in this work. Random forests gave the best overall performance of the machine learning approaches used; user-based features were identified as the most important; and bots did not display high intensities of emotion.
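    The following is a minimal sketch of the kind of classifier comparison described in this abstract, assuming a labelled dataset containing the user-based and tweet-based features the abstract names (favorite count, account tweet count, link presence, sentiment score). The column names and CSV path are hypothetical; the scikit-learn calls are standard, but this is an illustration rather than the thesis's actual pipeline.

    ```python
    # Sketch: compare decision tree, random forest, and naive Bayes on bot-vs-human labels.
    # Feature/column names and the input file are assumptions, not taken from the thesis.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    df = pd.read_csv("labelled_accounts.csv")  # hypothetical file: one row per tweet/account
    features = ["favorite_count", "statuses_count", "contains_link", "sentiment_score"]
    X, y = df[features], df["is_bot"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "decision tree": DecisionTreeClassifier(random_state=0),
        "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "naive Bayes": GaussianNB(),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, accuracy_score(y_test, model.predict(X_test)))

    # Feature importances from the random forest indicate which features matter most,
    # mirroring the abstract's finding that user-based features dominate.
    rf = models["random forest"]
    print(dict(zip(features, rf.feature_importances_)))
    ```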

    Promoting and countering misinformation during Australia’s 2019–2020 bushfires: a case study of polarisation

    During Australia’s unprecedented bushfires in 2019–2020, misinformation blaming arson surfaced on Twitter using #ArsonEmergency. The extent to which bots and trolls were responsible for disseminating and amplifying this misinformation has received media scrutiny and academic attention. Here, we study the Twitter communities spreading this misinformation during the newsworthy event, and investigate the role of online communities using a natural experiment approach: before and after mainstream media reported that bots were promoting the hashtag. Few bots were found, but the most bot-like accounts were social bots, which present as genuine humans, and trolling behaviour was evident. Further, we distilled meaningful quantitative differences between two polarised communities in the Twitter discussion, resulting in the following insights. First, Supporters of the arson narrative promoted misinformation by engaging others directly with replies and mentions, using hashtags and links to external sources. In response, Opposers retweeted fact-based articles and official information. Second, Supporters were embedded throughout their interaction networks, whereas Opposers obtained high centrality more efficiently despite their peripheral positions. By the last phase, Opposers and unaffiliated accounts appeared to coordinate, potentially reaching a broader audience. Finally, the introduction of the bot report changed the discussion dynamic: Opposers responded only immediately, while Supporters countered strongly for days, but new unaffiliated accounts drawn into the discussion shifted the dominant narrative from arson misinformation to factual and official information. This foiled Supporters’ efforts, highlighting the value of exposing misinformation. We speculate that the communication strategies observed here could inform counter-strategies in other misinformation-related discussions.
    Derek Weber, Lucia Falzon, Lewis Mitchell, Mehwish Nasim
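    Below is a hedged sketch of the kind of interaction-network centrality comparison the abstract refers to (Supporters versus Opposers). The edge list, community labels, and the choice of betweenness centrality are illustrative assumptions and not the authors' exact method; the networkx calls are standard.

    ```python
    # Sketch: compare how central accounts from each community are in a reply/mention network.
    # All account names and edges below are hypothetical.
    import networkx as nx

    # Hypothetical directed interactions: (source account, target account)
    interactions = [
        ("supporter_1", "unaffiliated_1"),
        ("supporter_1", "opposer_1"),
        ("supporter_2", "supporter_1"),
        ("opposer_1", "news_outlet"),      # Opposers amplifying fact-based sources
        ("unaffiliated_1", "news_outlet"),
    ]
    community = {
        "supporter_1": "Supporter", "supporter_2": "Supporter",
        "opposer_1": "Opposer",
        "unaffiliated_1": "Unaffiliated", "news_outlet": "Unaffiliated",
    }

    G = nx.DiGraph()
    G.add_edges_from(interactions)

    # Betweenness centrality as one possible measure of how efficiently a community
    # occupies central positions in the interaction network.
    centrality = nx.betweenness_centrality(G)
    for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{community.get(node, 'Unknown'):12s} {node:16s} {score:.3f}")
    ```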