130 research outputs found
Exploiting traffic data to improve asset management and citizen quality of life
The main goal of this project was to demonstrate how large data sources such as Google Maps can be used to inform transportation-related asset management decisions. Specifically, we investigated how the interdependence between infrastructures and assets can be studied using transportation data and heat maps. This involves linking the effect of disruptions in lower-order assets to travel accessibility to private and public infrastructure. To demonstrate the viability of our approach, we conducted 5 case studies, 3 public and 2 private. On the public side, we collaborated with two county councils in the United Kingdom, Cambridgeshire and Hertfordshire, and offered solutions to existing infrastructure-related problems proposed by them. For Cambridgeshire, we analysed the accessibility of Cambridge University's new research centres and the criticality of roads leading to Addenbrooke's Hospital in Cambridge. Similarly for Hertfordshire, the accessibility of different critical assets in the county was examined with the aim of supporting planning decisions. In addition, to highlight how our approach can bring benefits to private citizens, we solved two examples of commuting-related problems posed by students at the Institute for Manufacturing (IfM). We conclude that heat maps generated using the Google Maps API are powerful and efficient tools for use in infrastructure asset management. Our approach appears to be more cost-efficient and offers a higher quality of visualisation and presentation than other available tools. Furthermore, there exists the potential for a commercial spin-off: our approach can be employed in local, regional and national administrations to inform infrastructure-related decision-making, and can be used by commercial parties to improve employees' commutes, parking, et cetera.
Funder: Centre for Digital Built Britain
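The kind of accessibility analysis described above can be sketched as follows. This is an illustrative toy, not the project's actual pipeline: it assumes travel times from a grid of origin points to a single destination (for example, a hospital) have already been obtained from a routing service such as the Google Maps API, and simply buckets each grid point into a heat-map intensity band.

```python
# Illustrative sketch only: bucket grid points into heat-map bands by
# travel time to a destination. The (lat, lon) points and times below
# are made-up sample values.

def accessibility_band(travel_time_min):
    """Map a travel time in minutes to a coarse heat-map band."""
    if travel_time_min <= 15:
        return "high"    # easily accessible
    if travel_time_min <= 30:
        return "medium"
    return "low"         # poorly accessible

def heat_map(grid):
    """grid: dict mapping (lat, lon) -> travel time in minutes."""
    return {point: accessibility_band(t) for point, t in grid.items()}

sample = {(52.17, 0.14): 8, (52.30, 0.05): 22, (51.90, 0.40): 55}
print(heat_map(sample))
```

Rerunning the same grid under a simulated road closure (inflated travel times) and comparing the two band maps is one simple way to express the criticality of a disrupted asset.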
Inoculating Against Fake News About COVID-19
The outbreak of the SARS-CoV-2 novel coronavirus (COVID-19) has been accompanied by a large amount of misleading and false information about the virus, especially on social media. In this article, we explore the coronavirus "infodemic" and how behavioral scientists may seek to address this problem. We detail the scope of the problem and discuss the negative influence that COVID-19 misinformation can have on the widespread adoption of health protective behaviors in the population. In response, we explore how insights from the behavioral sciences can be leveraged to manage an effective societal response to curb the spread of misinformation about the virus. In particular, we discuss the theory of psychological inoculation (or prebunking) as an efficient vehicle for conferring large-scale psychological resistance against fake news
Good News about Bad News: Gamified Inoculation Boosts Confidence and Cognitive Immunity Against Fake News.
Recent research has explored the possibility of building attitudinal resistance against online misinformation through psychological inoculation. The inoculation metaphor relies on a medical analogy: by pre-emptively exposing people to weakened doses of misinformation, cognitive immunity can be conferred. A recent example is the Bad News game, an online fake news game in which players learn about six common misinformation techniques. We present a replication and extension examining the effectiveness of Bad News as an anti-misinformation intervention. We address three shortcomings identified in the original study: the lack of a control group, the relatively low number of test items, and the absence of attitudinal certainty measurements. Using a 2 (treatment vs. control) × 2 (pre vs. post) mixed design (N = 196) we measure participants' ability to spot misinformation techniques in 18 fake headlines before and after playing Bad News. We find that playing Bad News significantly improves people's ability to spot misinformation techniques compared to a gamified control group, and crucially, also increases people's level of confidence in their own judgments. Importantly, this confidence boost only occurred for those who updated their reliability assessments in the correct direction. This study offers further evidence for the effectiveness of psychological inoculation against not only specific instances of fake news, but the very strategies used in its production. Implications are discussed for inoculation theory and cognitive science research on fake news
Fake news game confers psychological resistance against online misinformation
Abstract: The spread of online misinformation poses serious challenges to societies worldwide. In a novel attempt to address this issue, we designed a psychological intervention in the form of an online browser game. In the game, players take on the role of a fake news producer and learn to master six documented techniques commonly used in the production of misinformation: polarisation, invoking emotions, spreading conspiracy theories, trolling people online, deflecting blame, and impersonating fake accounts. The game draws on an inoculation metaphor, where preemptively exposing, warning, and familiarising people with the strategies used in the production of fake news helps confer cognitive immunity when exposed to real misinformation. We conducted a large-scale evaluation of the game with N = 15,000 participants in a pre-post gameplay design. We provide initial evidence that people's ability to spot and resist misinformation improves after gameplay, irrespective of education, age, political ideology, and cognitive style
Generative Language Models Exhibit Social Identity Biases
The surge in popularity of large language models has given rise to concerns about biases that these models could learn from humans. In this study, we investigate whether ingroup solidarity and outgroup hostility, fundamental social biases known from social science, are present in 51 large language models. We find that almost all foundational language models and some instruction fine-tuned models exhibit clear ingroup-positive and outgroup-negative biases when prompted to complete sentences (e.g., "We are..."). A comparison of LLM-generated sentences with human-written sentences on the internet reveals that these models exhibit similar, if not greater, levels of bias than human text. To investigate where these biases stem from, we experimentally varied the amount of ingroup-positive or outgroup-negative sentences the model was exposed to during fine-tuning in the context of the United States Democrat-Republican divide. Doing so resulted in the models exhibiting a marked increase in ingroup solidarity and an even greater increase in outgroup hostility. Furthermore, removing either ingroup-positive or outgroup-negative sentences (or both) from the fine-tuning data leads to a significant reduction in both ingroup solidarity and outgroup hostility, suggesting that biases can be reduced by removing biased training data. Our findings suggest that modern language models exhibit fundamental social identity biases and that such biases can be mitigated by curating training data. Our results have practical implications for creating less biased large language models and further underscore the need for more research into user interactions with LLMs to prevent potential bias reinforcement in humans.
Comment: supplementary material, data, and code see https://osf.io/9ht32/?view_only=f0ab4b23325f4c31ad3e12a7353b55f
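The core measurement idea, scoring the tone of "We are..." (ingroup) versus "They are..." (outgroup) completions, can be illustrated with a toy sketch. This is not the authors' code: the completions are hypothetical and the word-list scorer stands in for the proper sentiment classifiers a real study would use.

```python
# Toy illustration of ingroup/outgroup tone scoring. The sentences and
# word lists below are invented for demonstration purposes.

POSITIVE = {"proud", "strong", "united", "great"}
NEGATIVE = {"dangerous", "wrong", "corrupt", "dishonest"}

def tone_score(sentence):
    """+1 per positive word, -1 per negative word."""
    words = sentence.lower().rstrip(".").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

ingroup = ["We are proud and united.", "We are strong."]
outgroup = ["They are corrupt.", "They are dangerous and wrong."]

ingroup_mean = sum(map(tone_score, ingroup)) / len(ingroup)
outgroup_mean = sum(map(tone_score, outgroup)) / len(outgroup)
print(ingroup_mean, outgroup_mean)  # ingroup positive, outgroup negative
```

A gap between the two means, positive for ingroup completions and negative for outgroup ones, is the pattern the study reports across models.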
Misinformation interventions decay rapidly without an immediate posttest
In recent years, many kinds of interventions have been developed that seek to reduce susceptibility to misinformation. In two preregistered longitudinal studies (N1 = 503, N2 = 673), we leverage two previously validated "inoculation" interventions (a video and a game) to address two important questions in misinformation interventions research: (1) whether displaying additional stimuli (such as videos unrelated to misinformation) alongside an intervention interferes with its effectiveness, and (2) whether administering an immediate posttest (in the form of a social media post evaluation task after the intervention) plays a role in the longevity of the intervention. We find no evidence that other stimuli interfere with intervention efficacy, but strong evidence that immediate posttests strengthen the learnings from the intervention. In study 1, we find that 48 h after watching a video, participants who received an immediate posttest continued to be significantly better at discerning untrustworthy social media posts from neutral ones than the control group (d = 0.416, p = .007), whereas participants who only received a posttest 48 h later showed no differences with a control (d = 0.010, p = .854). In study 2, we observe highly similar results for a gamified intervention, and provide evidence for a causal mechanism: immediate posttests help strengthen people's memory of the lessons learned in the intervention. We argue that the active rehearsal and application of relevant information are therefore requirements for the longevity of learning-based misinformation interventions, which has substantial implications for their scalability
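The effect sizes reported above (d = 0.416, d = 0.010) are Cohen's d values. As a reminder of what that statistic measures, here is a minimal sketch for two independent samples using the pooled standard deviation; the sample scores below are made up for illustration.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference between two independent samples."""
    na, nb = len(group_a), len(group_b)
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_var = ((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

treatment = [4, 5, 5, 6, 7]  # hypothetical discernment scores
control = [3, 4, 4, 5, 5]
print(round(cohens_d(treatment, control), 2))  # -> 1.2
```

By convention, d around 0.2 is a small effect and around 0.5 a medium one, which puts the immediate-posttest effect (d = 0.416) in the small-to-medium range and the delayed-posttest effect (d = 0.010) at essentially zero.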
Active inoculation boosts attitudinal resistance against extremist persuasion techniques: a novel approach towards the prevention of violent extremism
The Internet is gaining relevance as a platform where extremist organizations seek to recruit new members. For this preregistered study, we developed and tested a novel online game, Radicalise, which aims to combat the effectiveness of online recruitment strategies used by extremist organizations, based on the principles of active psychological inoculation. The game "inoculates" players by exposing them to severely weakened doses of the key techniques and methods used to recruit and radicalize individuals via social media platforms: identifying vulnerable individuals, gaining their trust, isolating them from their community and pressuring them into committing a criminal act in the name of the extremist organization. To test the game's effectiveness, we conducted a preregistered 2 × 2 mixed (pre-post) randomized controlled experiment (n = 291) with two outcome measures. The first measured participants' ability and confidence in assessing the manipulativeness of fictitious WhatsApp messages making use of an extremist manipulation technique before and after playing. The second measured participants' ability to identify what factors make an individual vulnerable to extremist recruitment using 10 profile vignettes, also before and after playing. We find that playing Radicalise significantly improves participants' ability and confidence in spotting manipulative messages and the characteristics associated with vulnerability
How Accurate Are Accuracy-Nudge Interventions? A Preregistered Direct Replication of Pennycook et al. (2020).
Funder: Defense Advanced Research Projects Agency (FundRef: https://doi.org/10.13039/100000185); Winton Centre for Risk & Evidence Communication; David & Claudia Harding Foundation
As part of the Systematizing Confidence in Open Research and Evidence (SCORE) program, the present study consisted of a two-stage replication test of a central finding by Pennycook et al. (2020), namely that asking people to think about the accuracy of a single headline improves "truth discernment" of intentions to share news headlines about COVID-19. The first stage of the replication test (n = 701) was unsuccessful (p = .67). After collecting a second round of data (additional n = 882, pooled N = 1,583), we found a small but significant interaction between treatment condition and truth discernment (uncorrected p = .017; treatment: d = 0.14, control: d = 0.10). As in the target study, perceived headline accuracy correlated with treatment impact, so that treatment-group participants were less willing to share headlines that were perceived as less accurate. We discuss potential explanations for these findings and an unreported change in the hypothesis (but not the analysis plan) from the preregistration in the original study
Disentangling Item and Testing Effects in Inoculation Research on Online Misinformation: Solomon Revisited
Online misinformation is a pervasive global problem. In response, psychologists have recently explored the theory of psychological inoculation: If people are preemptively exposed to a weakened version of a misinformation technique, they can build up cognitive resistance. This study addresses two unanswered methodological questions about a widely adopted online "fake news" inoculation game, Bad News. First, research in this area has often looked at pre- and post-intervention difference scores for the same items, which may imply that any observed effects are specific to the survey items themselves (item effects). Second, it is possible that using a pretest influences the outcome variable of interest, or that the pretest may interact with the intervention (testing effects). We investigate both item and testing effects in two online studies (total N = 2,159) using the Bad News game. For the item effect, we examine if inoculation effects are still observed when different items are used in the pre- and posttest. To examine the testing effect, we use a Solomon's Three Group Design. We find that inoculation interventions are somewhat influenced by item effects, and not by testing effects. We show that inoculation interventions are effective at improving people's ability to spot misinformation techniques and that the Bad News game does not make people more skeptical of real news. We discuss the larger relevance of these findings for evaluating real-world psychological interventions