SOCIAL MEDIA USE, TRUST AND TECHNOLOGY ACCEPTANCE: INVESTIGATING THE EFFECTIVENESS OF A CO-CREATED BROWSER PLUGIN IN MITIGATING THE SPREAD OF MISINFORMATION ON SOCIAL MEDIA

Abstract

Social media have become online spaces where misinformation abounds and spreads virally in the absence of professional gatekeeping. This information landscape effectively requires everyday citizens, who rely on these technologies to access information, to cede control over the information they consume. This work examined whether that control can be regained by humans with the support of a co-created browser plugin, which integrated credibility labels and nudges and was informed by artificial intelligence models and rule engines. Given the literature on the complexity of information evaluation on social media, we investigated the role of technological, situational and individual characteristics in "liking" or "sharing" misinformation. We adopted a mixed-methods research design with 80 participants from four European sites, who viewed a curated timeline of credible and non-credible posts on Twitter, with (n = 40) or without (n = 40) the presence of the plugin. The role of the technological intervention was important: the absence of the plugin strongly correlated with misinformation endorsement (via "liking"). Trust in the technology and technology acceptance were correlated and emerged as important situational characteristics, with participants with higher trust profiles being less likely to share misinformation. Findings on individual characteristics indicated that only social media use was a significant predictor of trust in the plugin. This work extends ongoing research on deterring the spread of misinformation by situating the findings in an authentic social media environment using a co-created technological intervention. It holds implications for how to support a misinformation-resilient citizenry with the use of artificial intelligence-driven tools.
