1,370 research outputs found

    Reactive Public Relations Strategies for Managing Fake News in the Online Environment

    The aim of this conceptual paper is to discuss the issue of managing fake news in the online environment from an organizational perspective, using reactive PR strategies. First, we critically discuss the most important definitions of the umbrella term "fake news" in the so-called post-truth era, in order to emphasize the different challenges of conceptualizing this elusive social phenomenon. Second, drawing on valuable contributions from the literature, we present and illustrate with vivid examples 10 categories of fake news. Each type of fake news is discussed in the context of organizational communication. Based on the existing literature, we propose a 3D conceptual model of fake news in an organizational context. Furthermore, we consider that PR managers can use either reactive PR strategies to counteract online fake news regarding an organization, or communication stratagems to temporarily transform the organization served into a potential source of fake news. The existing typology of reactive public relations strategies in the literature allows us to discuss the challenge of using them to counteract online fake news. Each reactive PR strategy can be a potential solution for responding to different types of online fake news. Although these possibilities seem extensive, in some cases PR managers may find them ineffective. In our view, this cluster of reactive PR strategies is not a panacea for managing fake news in the online environment, and different strategic approaches may be needed, such as communication stratagems. In this context, communication stratagems consist in using the organization as a source or as a vector for the strategic creation and dissemination of online fake news, for the benefit of the organization. We conclude that, within the online environment, PR managers can employ a variety of reactive PR strategies to counteract fake news, or different communication stratagems to achieve organizational goals.

    Is COVID-19 an ‘ordinary flu’ that benefits politicians? Perception of pandemic disinformation in Latvia

    This study examines society’s susceptibility to COVID-19-related disinformation in Latvia, linking it to self-evaluation of the perceived COVID-19 health risks. The main research questions are: “How do Latvians experience disinformation about COVID-19?” and “How does this experience relate to different degrees of perceived disease risks?”. A nationally representative survey was conducted in September 2020, reaching 1,013 Latvian residents aged 18 to 75. More than half of the respondents (54%) had encountered misleading or false information; 30% thought that “the COVID-19-related chaos is beneficial to politicians”, while 17% believed that “COVID-19 is like flu”. Respondents with a higher level of education and more active media usage habits are more likely to recognise disinformation about COVID-19. Moreover, this skill is linked to a higher degree of perceived threat of the disease. Yet those who rate their risk of disease as very high, alongside those who rate their risk of disease as low or unreal, are ‘infodemically’ vulnerable – more susceptible to disinformation, false news, and conspiracy theories. Recommendations to communicators about curbing the diffusion of disinformation and diminishing its impact are provided.

    Misinformation in Encounters: A Qualitative Study of Misinformation as a Social Phenomenon

    Current research tends to see misinformation as a negative type of information in online environments, and fact-checking and improved information literacy are seen as solutions to the problem of misinformation. Considering misinformation only from this viewpoint is problematic because it does not consider misinformation as a type of information in our everyday information environment. The aim of this thesis is to broaden the understanding of misinformation as a nuanced concept and as a social and situated phenomenon affected by different factors. Encounters are used as a means of clarifying misinformation. New knowledge of misinformation is needed to better address it, and the problems it causes, in different contexts and situations. This thesis adopts the definition of misinformation as inaccurate, incomplete, vague, or ambiguous information that is affected by social, cultural, historical, contextual, and situational factors. It studies the misinformation people encounter in their everyday lives, what factors affect it (specifically, what role encounters play in this process), how misinformation can be studied, and how to manage misinformation more efficiently. These questions were studied in the context of support with information (i.e. holistic ways to help people access, use, and understand information) and, more specifically, in two contexts where such support is given: asylum seekers supported by volunteers and youth supported by youth services. In these contexts, misinformation may be extremely challenging, but simply providing accurate information without considering the factors surrounding misinformation is inadequate, and suitable ways of providing and discussing information should be developed. Misinformation was studied indirectly through interviews with people who provided support with information (i.e. volunteers and youth service workers). 
The analysis of the interview discussions contributed to the qualitative methodological approaches to studying misinformation. Both direct questions and indirect discussions on misinformation were found to be important for eliciting rich data. The empirical findings revealed different types of misinformation connected with authorities and official structures (outdated, incomplete, or conflicting information, and perceived intimidation). Different strategies can be used when giving support with information to make misinformation less challenging, the most important of which is to encounter all people with respect and as human beings when supporting their access to and understanding of information. The research findings highlighted the importance of encounters. The framework for the caring encounter was used for analysing the social factors that influence misinformation. Caring encounters mitigate misinformation, whereas uncaring encounters, or a complete lack of encounters, make it challenging for people to access, understand, and use information. The research findings can be used to improve information support and services by addressing the factors surrounding misinformation. Misinformation is, thus, a social construct that should be placed in the wider context of information and seen as an unavoidable part of our information environment.

    Online misinformation about climate change

    Policymakers, scholars, and practitioners have all called attention to the issue of misinformation in the climate change debate. But what is climate change misinformation, who is involved, how does it spread, why does it matter, and what can be done about it? Climate change misinformation is closely linked to climate change skepticism, denial, and contrarianism. A network of actors is involved in financing, producing, and amplifying misinformation. Once in the public domain, characteristics of online social networks, such as homophily, polarization, and echo chambers—characteristics also found in the climate change debate—provide fertile ground for misinformation to spread. Underlying belief systems and social norms, as well as psychological heuristics such as confirmation bias, are further factors which contribute to the spread of misinformation. A variety of ways to understand and address misinformation, drawn from a diversity of disciplines, are discussed. These include educational, technological, regulatory, and psychological approaches. No single approach addresses all concerns about misinformation, and all have limitations, necessitating an interdisciplinary approach to tackle this multifaceted issue. Key research gaps include understanding the diffusion of climate change misinformation on social media, and examining whether misinformation extends to climate alarmism as well as climate denial. This article explores the concepts of misinformation and disinformation and defines disinformation as a subset of misinformation. A diversity of disciplinary and interdisciplinary literature is reviewed to fully interrogate the concept of misinformation (and, within this, disinformation), particularly as it pertains to climate change. This article is categorized under: Perceptions, Behavior, and Communication of Climate Change > Communication.

    USING TRANSACTION COST ECONOMICS SAFEGUARDING TO REDUCE THE DIFFUSION OF DISINFORMATION ON SOCIAL MEDIA

    Human users contribute to the spread of disinformation on Social Media. To reduce this spread, we apply Transaction Cost Economics (TCE) Safeguarding, which penalises the sharing of disinformation. The economic theory of TCE positions Social Media platforms as free markets, in which actors are motivated to protect their assets and peer reputation. We conducted a study exploring TCE Safeguarding as a market-correction mechanism to change the disinformation diffusion behaviour of users. Our findings show that users are less likely to post a comment and more likely to correct their previous disinformation diffusion actions when TCE Safeguarding is applied. Focusing on Social Media as a market, rather than on its individual components, may provide a mechanism to address the fake news phenomenon.

    Resilience of Society to Recognize Disinformation: Human and/or Machine Intelligence

    The paper conceptualizes the societal impacts of disinformation with the aim of developing a computational approach that can identify disinformation in order to strengthen social resilience. An innovative approach that considers the sociotechnical interaction phenomena of social media is utilized to address and combat disinformation campaigns. Based on theoretical inquiries, this study proposes conducting experiments that capture subjective and objective measures and datasets while adopting machine learning to model how disinformation can be identified computationally. The study will focus particularly on understanding communicative social actions as human intelligence when developing machine intelligence to learn about disinformation that is deliberately misleading, as well as on the ways people judge the credibility and truthfulness of information. Previous experiments support the viability of a sociotechnical approach, i.e., connecting subtle language-action cues and linguistic features from human communication with hidden intentions, thus leading to deception detection in online communication. The study intends to derive a baseline dataset and a predictive model, and thereby to create an information system artefact with the capability to differentiate disinformation.
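    The "language-action cue" idea described in this abstract can be illustrated with a minimal sketch. Everything below — the cue word lists, the scoring rule, and the example messages — is a hypothetical illustration of feature-based deception detection, not the study's actual model, features, or data.

    ```python
    # Toy sketch of cue-based deception scoring (illustrative assumptions only).
    from collections import Counter

    # Hypothetical cue lexicons: hedging vs. overconfident language.
    HEDGE_CUES = {"maybe", "perhaps", "allegedly", "reportedly", "some"}
    CERTAINTY_CUES = {"definitely", "proven", "undeniable", "always", "never"}

    def cue_features(text):
        """Count hedging and overconfidence cues in a message."""
        tokens = [t.strip(".,!?").lower() for t in text.split()]
        counts = Counter(tokens)
        return {
            "hedges": sum(counts[c] for c in HEDGE_CUES),
            "certainty": sum(counts[c] for c in CERTAINTY_CUES),
            "length": len(tokens),
        }

    def score_suspicion(text):
        """Toy scoring rule: short, overconfident, cue-dense messages score higher."""
        f = cue_features(text)
        return (2 * f["certainty"] + f["hedges"]) / max(f["length"], 1)

    print(score_suspicion("Vaccines are definitely proven to always cause harm"))
    ```

    In a real system these hand-written cue counts would be replaced by learned linguistic features feeding a trained classifier, as the abstract proposes; the sketch only shows how communicative cues can be turned into computable signals.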