31 research outputs found

    When does an individual accept misinformation? An extended investigation through cognitive modeling


    The structure of social influence in recommender networks


    Accelerating dynamics of collective attention

    The impacts of technological development on the social sphere lack a strong empirical foundation. Here, the authors present a quantitative analysis of the phenomenon of social acceleration across a range of digital datasets and find that interest appears in bursts that dissipate on decreasing timescales and occur with increasing frequency.

    Time pressure reduces misinformation discrimination ability but does not alter response bias

    Many parts of our social lives are speeding up, a process known as social acceleration. How social acceleration impacts people’s ability to judge the veracity of online news, and ultimately the spread of misinformation, is largely unknown. We examined the effects of accelerated online dynamics, operationalised as time pressure, on online misinformation evaluation. Participants judged the veracity of true and false news headlines with or without time pressure. We used signal detection theory to disentangle the effects of time pressure on discrimination ability and response bias, as well as on four key determinants of misinformation susceptibility: analytical thinking, ideological congruency, motivated reflection, and familiarity. Time pressure reduced participants’ ability to accurately distinguish true from false news (discrimination ability) but did not alter their tendency to classify an item as true or false (response bias). Key drivers of misinformation susceptibility, such as ideological congruency and familiarity, remained influential under time pressure. Our results highlight the dangers of social acceleration online: People are less able to accurately judge the veracity of news online, while prominent drivers of misinformation susceptibility remain present. Interventions aimed at increasing deliberation may thus be fruitful avenues to combat online misinformation.
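
    The abstract's key quantities come from signal detection theory: discrimination ability (d') and response bias (criterion c) are computed from the hit and false-alarm rates on the "true" response. The sketch below is a minimal illustration of that decomposition; the function and the example rates are assumptions for illustration, not values from the study.

    ```python
    from statistics import NormalDist

    def sdt_measures(hit_rate, false_alarm_rate):
        """Signal detection measures for a true/false news task.

        hit_rate:         proportion of true headlines rated "true"
        false_alarm_rate: proportion of false headlines rated "true"
        Returns (d_prime, criterion) = (discrimination ability, response bias).
        """
        z = NormalDist().inv_cdf  # probit (z) transform
        d_prime = z(hit_rate) - z(false_alarm_rate)
        criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))  # <0 leans "true", >0 leans "false"
        return d_prime, criterion

    # Illustrative rates only (not taken from the paper):
    print(sdt_measures(0.80, 0.30))  # e.g. without time pressure
    print(sdt_measures(0.72, 0.37))  # e.g. under time pressure: d' drops, c barely moves
    ```

    In the study's terms, time pressure lowered d' while leaving the criterion essentially unchanged, i.e. speed degraded discrimination rather than shifting people toward calling more items true or false.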

    A simple self-reflection intervention boosts the detection of targeted advertising

    Online platforms’ data give advertisers the ability to “microtarget” recipients’ personal vulnerabilities by tailoring different messages for the same thing, such as a product or political candidate. One possible response is to raise awareness of and resilience against such manipulative strategies through psychological inoculation. Two online experiments (total N = 828) demonstrated that a short, simple intervention prompting participants to reflect on an attribute of their own personality—by completing a short personality questionnaire—boosted their ability to accurately identify ads that were targeted at them by up to 26 percentage points. Accuracy increased even without personalized feedback, but merely providing a description of the targeted personality dimension did not improve accuracy. We argue that such a “boosting approach,” which here aims to improve people’s competence to detect manipulative strategies themselves, should be part of a policy mix aiming to increase platforms’ transparency and user autonomy.

    Papers please: Predictive factors of national and international attitudes toward immunity and vaccination passports. Online representative surveys

    BACKGROUND: In response to the COVID-19 pandemic, countries are introducing digital passports that allow citizens to return to normal activities if they were previously infected with (immunity passport) or vaccinated against (vaccination passport) SARS-CoV-2. To be effective, policy decision-makers must know whether these passports will be widely accepted by the public and under what conditions. This study focuses on immunity passports, as these may prove useful in countries both with and without an existing COVID-19 vaccination program; however, our general findings also extend to vaccination passports. OBJECTIVE: We aimed to assess attitudes toward the introduction of immunity passports in six countries, and to determine what social, personal, and contextual factors predicted their support. METHODS: We recruited 13,678 participants through online representative sampling across six countries (Australia, Japan, Taiwan, Germany, Spain, and the United Kingdom) in April and May 2020, during the COVID-19 pandemic, and assessed attitudes and support for the introduction of immunity passports. RESULTS: Support for immunity passports was moderate to low: it was highest in Germany (775/1507 participants, 51.43%) and the United Kingdom (759/1484, 51.15%); followed by Taiwan (2841/5989, 47.44%), Spain (693/1491, 46.48%), and Australia (963/2086, 46.16%); and lowest in Japan (241/1081, 22.94%). Bayesian generalized linear mixed effects modeling was used to assess predictive factors for immunity passport support across countries. International results showed that neoliberal worldviews (odds ratio [OR] 1.17, 95% CI 1.13-1.22), personal concern (OR 1.07, 95% CI 1.00-1.16), perceived virus severity (OR 1.07, 95% CI 1.01-1.14), the fairness of immunity passports (OR 2.51, 95% CI 2.36-2.66), liking immunity passports (OR 2.77, 95% CI 2.61-2.94), and a willingness to become infected to gain an immunity passport (OR 1.60, 95% CI 1.51-1.68) were all predictive of immunity passport support. By contrast, gender (woman; OR 0.90, 95% CI 0.82-0.98), immunity passport concern (OR 0.61, 95% CI 0.57-0.65), and risk of harm to society (OR 0.71, 95% CI 0.67-0.76) predicted a decrease in support for immunity passports. Minor differences in predictive factors were found between countries, and results were modeled separately to provide national accounts of these data. CONCLUSIONS: Our research suggests that support for immunity passports is predicted by the personal benefits and societal risks they confer. These findings generalized across six countries and may also prove informative for the introduction of vaccination passports, helping policymakers to introduce effective COVID-19 passport policies in these six countries and around the world.
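
    The odds ratios above are exponentiated coefficients from the log-odds scale of the logistic mixed model. As a small reading aid, the sketch below back-calculates the reported fairness effect (OR 2.51, 95% CI 2.36-2.66) and converts it with a normal-approximation interval; it illustrates the OR arithmetic only and is not code or data handling from the study.

    ```python
    import math

    def odds_ratio(beta, se, z=1.96):
        """Convert a log-odds coefficient and standard error into an
        odds ratio with an approximate 95% interval."""
        return (math.exp(beta),
                math.exp(beta - z * se),
                math.exp(beta + z * se))

    # Back-calculated from the reported fairness effect (OR 2.51, 95% CI 2.36-2.66):
    beta_fairness = math.log(2.51)                                # ~0.92 on the log-odds scale
    se_fairness = (math.log(2.66) - math.log(2.36)) / (2 * 1.96)  # ~0.03
    print(odds_ratio(beta_fairness, se_fairness))
    # -> approximately (2.51, 2.36, 2.66): each unit of the fairness measure
    #    (on whatever scale it was administered) multiplies the odds of
    #    supporting immunity passports by about 2.5.
    ```

    The same conversion applies to the protective factors: an OR below 1, such as 0.61 for immunity passport concern, corresponds to a negative log-odds coefficient (log 0.61 ≈ -0.49) and thus lower odds of support.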

    Resolving content moderation dilemmas between free speech and harmful misinformation

    When moderating content online, two key values may come into conflict: protecting freedom of expression and preventing harm. Robust rules based in part on how citizens think about these moral dilemmas are necessary to deal with the unprecedented scale and urgency of this conflict in a principled way. Yet little is known about people’s judgments and preferences around content moderation. We examined such moral dilemmas in a conjoint survey experiment where respondents (N=2,564) indicated whether they would remove problematic social media posts on election denial, anti-vaccination, Holocaust denial, and climate change denial and whether they would take punitive action against the accounts. Respondents were shown key information about the user and their post, as well as the consequences of the misinformation. The majority preferred quashing harmful misinformation over protecting free speech. Respondents were more likely to remove posts and suspend accounts if the consequences were severe and if it was a repeated offence. Features related to the account itself (the person behind the account, their partisanship, and the number of followers) had little to no effect on respondents’ decisions. Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or Independents to delete posts or penalize the accounts that posted them. Our results can inform the design of transparent rules of content moderation for human and algorithmic moderators.
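
    For readers unfamiliar with conjoint survey experiments: because the attributes of each post and account are randomized, the effect of every feature on removal decisions can be estimated by regressing the decision on the attribute levels, with standard errors clustered by respondent. The sketch below is a generic version of that analysis; the file and column names are hypothetical, and the paper's exact specification may differ.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per evaluated post, with the
    # randomized attributes as columns (names are illustrative).
    df = pd.read_csv("conjoint_responses.csv")

    # Linear probability model of the "remove the post" decision on the
    # randomized attributes; clustering on respondent accounts for each
    # person judging several posts.
    fit = smf.ols(
        "remove_post ~ C(topic) + C(consequence_severity) + C(repeat_offender)"
        " + C(account_partisanship) + C(follower_count)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})

    print(fit.summary())  # coefficients approximate average marginal component effects
    ```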