
    Topics (Disinformation)

    The topic variable is used in research on disinformation to analyze thematic differences in the content of false news, rumors, conspiracies, etc. Topics are frequently drawn from national news agendas, i.e. producers of disinformation address current national or world events (e.g. elections, immigration) (Humprecht, 2019).

    Field of application/theoretical foundation: Topics are a central yet under-researched aspect of research on online disinformation (Freelon & Wells, 2020). The research interest is to find out which topics are taken up and spread by disinformation producers. The focus of this research is both on specific key topics for which sub-themes are identified (e.g. elections, climate change, Covid-19) and, more generally, on the question of which misleading content is disseminated (mostly on social media). Methodologically, the identification of topics is often a first step followed by further analysis of the content (Ferrara, 2017). Thus, the analysis of topics is linked to the detection of disinformation, which represents a methodological challenge. Topics can be identified inductively or deductively. Inductive analyses often start from a data corpus, for example social media data, and identify topics using techniques such as topic modelling (e.g. Boberg et al., 2020). Deductive analyses frequently use topic lists to classify content. Topic lists are initially created based on the literature on the respective topic or with the help of databases, e.g. those of fact-checkers.

    References/combination with other methods of data collection: Studies on topics of disinformation use manual content analysis, automated content analysis, or combinations of both to investigate the occurrence of different topics in texts (Boberg et al., 2020; Bradshaw, Howard, Kollanyi, & Neudert, 2020).
    Inductive and deductive approaches have been combined with qualitative text analyses to identify topic categories which are subsequently coded (Humprecht, 2019; Marchal, Kollanyi, Neudert, & Howard, 2019).

    Example studies: Ferrara (2017); Humprecht (2019); Marchal et al. (2019)

    Table 1. Summary of selected studies

    Ferrara (2017)
    - Content type: tweets
    - Sampling period: April 27, 2017 to May 7, 2017
    - Sample size: 16.65 million tweets
    - Sampling: list of 23 keywords and top 20 hashtags
    - Keywords: France2017, Marine2017, AuNomDuPeuple, FrenchElection, FrenchElections, Macron, LePen, MarineLePen, FrenchPresidentialElection, JeChoisisMarine, JeVoteMarine, JeVoteMacron, JeVote, Presidentielle2017, ElectionFracaise, JamaisMacron, Macron2017, EnMarche, MacronPresident
    - Hashtags: #Macron, #Presidentielle2017, #fn, #JeVote, #LePen, #France, #2017LeDebat, #MacronLeaks, #Marine2017, #debat2017, #2017LeDébat, #MacronGate, #MarineLePen, #Whirlpool, #EnMarche, #JeVoteMacron, #MacronPresident, #JamaisMacron, #FrenchElection
    - Reliability: -

    Humprecht (2019)
    - Content type: fact checks
    - Outlet/country: 2 fact-checkers per country (AT, DE, UK, US)
    - Sampling period: June 1, 2016 to September 30, 2017
    - Sample size: N = 651
    - Unit of analysis: story/fact check
    - No. of topics coded: main topic per fact check
    - Level of analysis: fact checks and fact-checker
    - Values: conspiracy theory, education, election campaign, environment, government/public administration (at the time when the story was published), health, immigration/integration, justice/crime, labor/employment, macroeconomics/economic regulation, media/journalism, science/technology, war/terror, others
    - Reliability: Krippendorff's alpha = 0.71

    Marchal et al. (2019)
    - Content type: tweets related to the 2019 European elections
    - Sampling: hashtags in English, Catalan, French, German, Italian, Polish, Spanish, Swedish
    - Sampling criteria: tweets that (1) contained at least one of the relevant hashtags; (2) contained the hashtag in the URL shared or the title of its webpage; (3) were a retweet of a message that contained a relevant hashtag or mention in the original message; or (4) were a quoted tweet referring to a tweet with a relevant hashtag or mention
    - Sampling period: April 5 to April 20, 2019
    - Sample size: 584,062 tweets from 187,743 unique users
    - Values: Religion Islam (Muslim, Islam, Hijab, Halal, Muslima, Minaret); Religion Christianity (Christianity, Church, Priest); Immigration (Asylum Seeker, Refugee, Migrants, Child Migrant, Dual Citizenship, Social Integration); Terrorism (ISIS, Djihad, Terrorism, Terrorist Attack); Political Figures/Parties (Vladimir Putin, Enrico Mezzetti, Emmanuel Macron, ANPI, Arnold van Doorn, Islamic Party for Unity, Nordic Resistance Movement); Celebrities (Lara Trump, Alba Parietti); Crime (Vandalism, Rape, Sexual Assault, Fraud, Murder, Honour Killing); Notre-Dame Fire (Notre-Dame Fire, Reconstruction); Political Ideology (Anti-Fascism, Fascism, Nationalism); Social Issues (Abortion, Bullying, Birth Rate)
    - Reliability: -

    References
    Boberg, S., Quandt, T., Schatto-Eckrodt, T., & Frischlich, L. (2020). Pandemic populism: Facebook pages of alternative news media and the corona crisis -- A computational content analysis. Retrieved from http://arxiv.org/abs/2004.02566
    Bradshaw, S., Howard, P. N., Kollanyi, B., & Neudert, L.-M. (2020). Sourcing and automation of political news and information over social media in the United States, 2016-2018. Political Communication, 37(2), 173–193. https://doi.org/10.1080/10584609.2019.1663322
    Ferrara, E. (2017). Disinformation and social bot operations in the run up to the 2017 French presidential election. First Monday, 22(8). https://doi.org/10.5210/FM.V22I8.8005
    Freelon, D., & Wells, C. (2020). Disinformation as political communication. Political Communication, 37(2), 145–156. https://doi.org/10.1080/10584609.2020.1723755
    Humprecht, E. (2019). Where 'fake news' flourishes: A comparison across four Western democracies. Information, Communication & Society, 22(13), 1973–1988. https://doi.org/10.1080/1369118X.2018.1474241
    Marchal, N., Kollanyi, B., Neudert, L.-M., & Howard, P. N. (2019). Junk news during the EU parliamentary elections: Lessons from a seven-language study of Twitter and Facebook. Oxford, UK. Retrieved from https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/05/EU-Data-Memo.pd
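    The four tweet-inclusion criteria reported for Marchal et al. (2019) above can be sketched as a filter function. The record layout (keys "text", "urls", "retweeted_text", "quoted_text") and the hashtag list are assumptions for illustration, not the authors' actual pipeline.

```python
# Sketch of hashtag-based tweet sampling in the spirit of Marchal et al. (2019).
# The dict layout and the hashtag list are invented for illustration.
RELEVANT = {"#ep2019", "#euelections2019"}  # placeholder hashtags, lowercase

def contains_relevant(text: str) -> bool:
    return any(tag in text.lower() for tag in RELEVANT)

def matches(tweet: dict) -> bool:
    if contains_relevant(tweet.get("text", "")):        # (1) hashtag in tweet text
        return True
    if any(contains_relevant(u) for u in tweet.get("urls", [])):  # (2) hashtag in shared URL or its page title
        return True
    if contains_relevant(tweet.get("retweeted_text", "")):  # (3) retweet of a matching message
        return True
    if contains_relevant(tweet.get("quoted_text", "")):     # (4) quote of a matching tweet
        return True
    return False

tweets = [
    {"text": "Time to vote! #EP2019", "urls": []},
    {"text": "nothing election-related here", "urls": []},
]
sample = [t for t in tweets if matches(t)]  # keeps only the first tweet
```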

    Types (Disinformation)

    Disinformation can appear in various forms. Firstly, different formats can be manipulated, such as texts, images, and videos. Secondly, the amount and degree of falseness can vary, from completely fabricated content to decontextualized information to satire that intentionally misleads recipients. Thus, disinformation varies in form and format and cannot be reduced to the supposedly clear-cut categories of "true" and "false".

    Field of application/theoretical foundation: Studies on types of disinformation are conducted in various fields, e.g. political communication, journalism studies, and media effects research. Among other things, studies identify the most common types of mis- or disinformation during certain events (Brennen, Simon, Howard, & Nielsen, 2020), analyze and categorize the behavior of different types of Twitter accounts (Linvill & Warren, 2020), and investigate the existence of several types of "junk news" in different national media landscapes (Bradshaw, Howard, Kollanyi, & Neudert, 2020; Neudert, Howard, & Kollanyi, 2019).

    References/combination with other methods of data collection: Only relatively few studies use combinations of methods. Some studies identify different types of disinformation via qualitative and quantitative content analyses (Bradshaw et al., 2020; Brennen et al., 2020; Linvill & Warren, 2020; Neudert et al., 2019). Others use surveys to analyze respondents' concerns about, as well as exposure to, different types of mis- and disinformation (Fletcher, 2018).

    Example studies: Brennen et al. (2020); Bradshaw et al. (2020); Linvill and Warren (2020)

    Information on example studies: Types of disinformation are defined by the presentation and contextualization of content and sometimes additionally by details (e.g. professionalism) about the communicator.
    Studies either deductively identify different types of disinformation by applying the theoretical framework of Wardle (2019) (Brennen et al., 2020), or additionally inductively identify and build categories based on content analyses (Bradshaw et al., 2020; Linvill & Warren, 2020).

    Table 1. Types of mis-/disinformation by Brennen et al. (2020)

    - Satire or parody: -
    - False connection: Headlines, visuals, or captions don't support the content.
    - Misleading content: Misleading use of information to frame an issue or individual; facts/information are misrepresented or skewed.
    - False context: Genuine content is shared with false contextual information, e.g. real images that have been taken out of context.
    - Imposter content: Genuine sources, e.g. news outlets or government agencies, are impersonated.
    - Fabricated content: Content is made up and 100% false; designed to deceive and do harm.
    - Manipulated content: Genuine information or imagery is manipulated to deceive, e.g. deepfakes or other kinds of manipulation of audio and/or visuals.

    Note. The categories are adapted from the theoretical framework by Wardle (2019). The coding instruction was: "To the best of your ability, what type of misinformation is it? (Select one that fits best.)" (Brennen et al., 2020, p. 12). The coders reached an intercoder reliability of Cohen's kappa = 0.82.

    Table 2. Criteria for the "junk news" label by Bradshaw et al. (2020)

    Professionalism (refers to the information about authors and the organization): "Sources do not employ the standards and best practices of professional journalism, including information about real authors, editors, and owners" (pp. 174-175). "Distinct from other forms of user-generated content and citizen journalism, junk news domains satisfy the professionalism criterion because they purposefully refrain from providing clear information about real authors, editors, publishers, and owners, and they do not publish corrections of debunked information" (p. 176).
    Procedure:
    - Systematically checked the about pages of domains: contact information, information about ownership and editors, and other information relating to professional standards
    - Reviewed whether the sources appeared in third-party fact-checking reports
    - Checked whether sources published corrections of fact-checked reporting
    Examples: zerohedge.com, conservative-fighters.org, deepstatenation.news

    Counterfeit (refers to the layout and design of the domain itself): "(…) [S]ources mimic established news reporting by using certain fonts, having branding, and employing content strategies. (…) Junk news is stylistically disguised as professional news by the inclusion of references to news agencies and credible sources as well as headlines written in a news tone with date, time, and location stamps. In the most extreme cases, outlets will copy logos and counterfeit entire domains" (p. 176).
    Procedure:
    - Systematically reviewed organizational information about the owner and headquarters by checking sources like Wikipedia, the WHOIS database, and third-party fact-checkers (like Politico or MediaBiasFactCheck)
    - Consulted country-specific expert knowledge of the media landscape in the US to identify counterfeiting websites
    Examples: politicoinfo.com, NBC.com.co

    Style (refers to the content of the domain as a whole): "(…) [S]tyle is concerned with the literary devices and language used throughout news reporting. (…) Designed to systematically manipulate users for political purposes, junk news sources deploy propaganda techniques to persuade users at an emotional, rather than cognitive, level and employ techniques that include using emotionally driven language with emotive expressions and symbolism, ad hominem attacks, misleading headlines, exaggeration, excessive capitalization, unsafe generalizations, logical fallacies, moving images and lots of pictures or mobilizing memes, and innuendo (Bernays, 1928; Jowette & O'Donnell, 2012; Taylor, 2003). (…) Stylistically, problematic sources will employ propaganda and clickbait techniques to varying degrees. As a result, determining style can be highly complex and context dependent" (p. 177).
    Procedure:
    - Examined at least five stories on the front page of each news source in depth during the US presidential campaign in 2016 and the SOTU address in 2018
    - Checked the headlines of the stories and the content of the articles for literary and visual propaganda devices
    - Considered a source stylistically problematic if three of the five stories systematically exhibited elements of propaganda
    Examples: 100percentfedup.com, barenakedislam.com, theconservativetribune.com, dangerandplay.com

    Credibility (refers to the content of the domain as a whole): "(…) [S]ources rely on false information or conspiracy theories and do not post corrections" (p. 175). "[They] typically report on unsubstantiated claims and rely on conspiratorial and dubious sources. (…) Junk news sources that satisfy the credibility criterion frequently fail to vet their sources, do not consult multiple sources, and do not fact-check" (p. 178).
    Procedure:
    - Examined at least five front-page stories and reviewed the sources that were cited
    - Reviewed pages to see if they included known conspiracy theories on issues such as climate change, vaccination, and "Pizzagate"
    - Checked third-party fact-checkers for evidence of debunked stories and conspiracy theories
    Examples: infowars.com, endingthefed.com, thegatewaypundit.com, newspunch.com

    Bias (refers to the content of the domain as a whole): "(…) [H]yper-partisan media websites and blogs (…) are highly biased, ideologically skewed, and publish opinion pieces as news. Basing their stories on the same events, these sources manage to convey strikingly different impressions of what actually transpired. It is such systematic differences in the mapping from facts to news reports that we call bias. (…) Bias exists on both sides of the political spectrum. Like determining style, determining bias can be highly complex and context dependent" (pp. 177-178).
    Procedure:
    - Checked third-party sources that systematically evaluate media bias
    - If the domain was not evaluated by a third party, examined the ideological leaning of the sources used to support stories appearing on the domain
    - Evaluated the labeling of politicians (are there differences between the left and the right?)
    - Identified bias created through the omission of unfavorable facts, or through writing that is falsely presented as being objective
    Examples on the right: breitbart.com, dailycaller.com, infowars.com, truthfeed.com
    Examples on the left: occupydemocrats.com, addictinginfo.com, bipartisanreport.com

    Note. The coders reached an intercoder reliability of Krippendorff's alpha = 0.89. The "junk news" label requires that at least three of the five criteria are fulfilled; it refers to sources that deliberately publish misleading, deceptive, or incorrect information packaged as real news.

    Table 3. Types of IRA-associated Twitter accounts identified by Linvill and Warren (2020)

    Right troll: "Twitter-handles broadcast nativist and right-leaning populist messages. These handles' themes were distinct from mainstream Republicanism. (…) They rarely broadcast traditionally important Republican themes, such as taxes, abortion, and regulation, but often sent divisive messages about mainstream and moderate Republicans. (…) The overwhelming majority of handles, however, had limited identifying information, with profile pictures typically of attractive, young women" (p. 5). Frequently used hashtags: #MAGA (i.e., "Make America Great Again"), #tcot (i.e., "Top Conservative on Twitter"), #AmericaFirst, and #IslamKills

    Left troll: "These handles sent socially liberal messages, with an overwhelming focus on cultural identity. (…) They discussed gender and sexual identity (e.g., #LGBTQ) and religious identity (e.g., #MuslimBan), but primarily focused on racial identity. Just as the Right Troll handles attacked mainstream Republican politicians, Left Troll handles attacked mainstream Democratic politicians, particularly Hillary Clinton. (…) It is worth noting that this account type also included a substantial portion of messages which had no clear political motivation" (p. 6). Frequently used hashtags: #BlackLivesMatter, #PoliceBrutality, and #BlackSkinIsNotACrime

    Newsfeed: "These handles overwhelmingly presented themselves as U.S. local news aggregators and had descriptive names (…). These accounts linked to legitimate regional news sources and tweeted about issues of local interest (…). A small number of these handles, (…) tweeted about global issues, often with a pro-Russia perspective" (p. 6). Frequently used hashtags: #news, #sports, and #local

    Hashtag gamer: "These handles are dedicated almost entirely to playing hashtag games, a popular word game played on Twitter. Users add a hashtag to a tweet (e.g., #ThingsILearnedFromCartoons) and then answer the implied question. These handles also posted tweets that seemed organizational regarding these games (…). Like some tweets from Left Trolls, it is possible such tweets were employed as a form of camouflage, as a means of accruing followers, or both. Other tweets, however, often using the same hashtag as mundane tweets, were socially divisive (…)" (p. 7). Frequently used hashtags: #ToDoListBeforeChristmas, #ThingsYouCantIgnore, #MustBeBanned, and #2016In4Words

    Fearmonger: "These accounts spread disinformation regarding fabricated crisis events, both in the U.S. and abroad. Such events included non-existent outbreaks of Ebola in Atlanta and Salmonella in New York, an explosion at the Columbian Chemicals plant in Louisiana, a phosphorus leak in Idaho, as well as nuclear plant accidents and war crimes perpetrated in Ukraine. (…) These accounts typically tweeted a great deal of innocent, often frivolous content (i.e. song lyrics or lines of poetry) which were potentially automated. With this content these accounts often added popular hashtags such as #love (…) and #rap (…). These accounts changed behavior sporadically to tweet disinformation, and that output was produced using a different Twitter client than the one used to produce the frivolous content. (…) The Fearmonger category was the only category where we observed some inconsistency in account activity. A small number of handles tweeted briefly in a manner consistent with the Right Troll category but switched to tweeting as a Fearmonger or vice-versa" (p. 7). Frequently used hashtags: #Fukushima2015 and #ColumbianChemicals

    Note. The categories were identified by qualitatively analyzing the content produced and were then refined and explored in more detail via a quantitative analysis. The coders reached a Krippendorff's alpha intercoder reliability of 0.92.

    References
    Bradshaw, S., Howard, P. N., Kollanyi, B., & Neudert, L.-M. (2020). Sourcing and automation of political news and information over social media in the United States, 2016-2018. Political Communication, 37(2), 173–193.
    Brennen, J. S., Simon, F. M., Howard, P. N., & Nielsen, R. K. (2020). Types, sources, and claims of COVID-19 misinformation. Reuters Institute. Retrieved from http://www.primaonline.it/wp-content/uploads/2020/04/COVID-19_reuters.pdf
    Fletcher, R. (2018). Misinformation and disinformation unpacked. Reuters Institute. Retrieved from http://www.digitalnewsreport.org/survey/2018/misinformation-and-disinformation-unpacked/
    Linvill, D. L., & Warren, P. L. (2020). Troll factories: Manufacturing specialized disinformation on Twitter. Political Communication, 1–21.
    Neudert, L.-M., Howard, P., & Kollanyi, B. (2019). Sourcing and automation of political news and information during three European elections. Social Media + Society, 5(3). https://doi.org/10.1177/2056305119863147
    Wardle, C. (2019). First Draft's essential guide to understanding information disorder. UK: First Draft News. Retrieved from https://firstdraftnews.org/wp-content/uploads/2019/10/Information_Disorder_Digital_AW.pdf?x7670
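    The decision rule behind the junk-news label in Table 2 (a source must satisfy at least three of the five criteria) reduces to a simple count. The boolean codings below are hypothetical examples, not data from Bradshaw et al. (2020).

```python
# "Junk news" label per Bradshaw et al. (2020): at least three of the
# five criteria must be met. The example coding is invented.
CRITERIA = ("professionalism", "counterfeit", "style", "credibility", "bias")

def is_junk_news(coding: dict) -> bool:
    # coding maps each criterion name to True if the source satisfies it
    return sum(bool(coding.get(c)) for c in CRITERIA) >= 3

source = {"professionalism": True, "style": True, "credibility": True,
          "counterfeit": False, "bias": False}
print(is_junk_news(source))  # three criteria met -> labeled junk news
```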

    Publishers/sources (Disinformation)

    Recent research has mainly used two approaches to identify publishers or sources of disinformation: first, alternative media are identified as potential publishers of disinformation; second, potential publishers of disinformation are identified via fact-checking websites. Samples created using these approaches can partly overlap. However, the two approaches differ in terms of the validity and comprehensiveness of the identified population. Sampling of alternative media outlets is theory-driven and allows for cross-national comparison. However, researchers face the challenge of identifying misinforming content published by alternative media outlets. In contrast, fact-checked content facilitates the identification of a given disinformation population; however, fact-checkers often have a publication bias, focusing on a small range of (elite) actors or sources (e.g. individual blogs, hyper-partisan news outlets, or politicians). In both approaches it is important to describe, compare and, if possible, assign the outlets to already existing categories in order to enable temporal and spatial comparison.

    Approaches to identify sources/publishers: Besides the operationalization of specific variables analyzed in the field of disinformation, the sampling procedure is a crucial element in operationalizing disinformation itself. Following the approach of detecting disinformation through its potential sources or publishers (Li, 2020), research analyzes alternative media (Bachl, 2018; Boberg, Quandt, Schatto-Eckrodt, & Frischlich, 2020; Heft et al., 2020) or identifies a varied range of actors or domains via fact-checking sites (Allcott & Gentzkow, 2017; Grinberg et al., 2019; Guess, Nyhan, & Reifler, 2018). These two approaches are explained in the following.
    Alternative media as sources/publishers: The following procedure summarizes the approaches used in current research for the identification of relevant alternative media outlets (following Bachl, 2018; Boberg et al., 2020; Heft et al., 2020). Snowball sampling to define the universe of alternative media outlets may consist of the following steps:

    1. Start from a sample of outlets identified in previous research.
    2. Consult search engines and news articles.
    3. Departing from a potential prototype outlet, consult websites that provide digital metrics (Alexa.com or Similarweb.com). For example, Similarweb.com shows three relevant lists per outlet: "Top Referring Sites" (which websites are sending traffic to this site), "Also visited websites" (overlap with users of other websites), and "Competitors & Similar Sites" (similarity defined by the company).

    Definition of alternative media outlets:

    - Journalistic outlets (for example, excluding blogs and forums) with current, non-fictional and regularly published content
    - Self-description of the outlet in an "about us" section or mission statement that underlines the relational perspective of being an alternative to the mainstream media. Such descriptions may include keywords such as alternative, independent, unbiased, or critical, or statements like "presenting the real/true views/facts" or "covering what the mainstream media hides/leaves out."
    - Use of predefined dimensions and categories of alternative media (Frischlich, Klapproth, & Brinkschulte, 2020; Holt, Ustad Figenschou, & Frischlich, 2019)

    Sources/publishers via fact-checking sites: Following previous research in the U.S., Guess et al. (2018) identified "fake news domains" (focusing on pro-Trump and pro-Clinton content) which published two or more articles that were coded as "fake news" by fact-checkers (derived from Allcott & Gentzkow, 2017). Grinberg et al.
    (2019) identified three classes of "fake news sources" differentiated by the severity and frequency of false content (see Table 1). These three classes are part of a total of six website labels; the researchers additionally coded sites as reasonable journalism, low-quality journalism, satire, or not applicable. The coders reached a percentage agreement of 60% for the labeling of the six categories and 80% for the distinction between fake and non-fake categories.

    Table 1. Three classes of "fake news sources" by Grinberg et al. (2019)

    Black domains
    - Specification: Based on previous studies: these domains published at least two articles that were declared "fake news" by fact-checking sites.
    - Identification: Based on preexisting lists constructed by fact-checkers, journalists, and academics (Allcott & Gentzkow, 2017; Guess et al., 2018)
    - Definition: Almost exclusively fabricated stories

    Red domains
    - Specification: Major or frequent falsehoods that are in line with the site's political agenda. Prejudiced: the site presents falsehoods that focus on one group with regard to race/religion/ethnicity/sexual orientation. Major or frequent falsehoods with little regard for the truth, but not necessarily to advance a certain political agenda.
    - Identification: Flagged by the fact-checker snopes.com as sources of questionable claims; then manually differentiated from orange domains
    - Definition: Falsehoods that clearly reflected a flawed editorial process

    Orange domains
    - Specification: Moderate or occasional falsehoods to advance a political agenda. Sensationalism: exaggerations to the extent that the article becomes misleading and inaccurate. Occasionally prejudiced articles: the site at times presents individual articles that contain falsehoods regarding race/religion/ethnicity/sexual orientation. Openly states that the site may be inaccurate, fake news, or cannot be trusted to provide factual news. Moderate or frequent falsehoods with little regard for the truth, but not necessarily to advance a certain political agenda. Conspiratorial: explanations of events that involve unwarranted suspicion of government cover-ups or supernatural agents.
    - Identification: Flagged by the fact-checker snopes.com as sources of questionable claims; then manually differentiated from red domains
    - Definition: Negligent and deceptive information, but less systemically flawed

    Supplementary materials: https://science.sciencemag.org/content/sci/suppl/2019/01/23/363.6425.374.DC1/aau2706_Grinberg_SM.pdf (S5 and S6)
    Coding scheme and source labels: https://zenodo.org/record/2651401#.XxGtJJgzaUl (LazerLab-twitter-fake-news-replication-2c941b8\domains\domain_coding\data)

    References
    Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.
    Bachl, M. (2018). (Alternative) media sources in AfD-centered Facebook discussions. Studies in Communication | Media, 7(2), 256–270.
    Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions. Digital Journalism, 6(2), 154–175.
    Boberg, S., Quandt, T., Schatto-Eckrodt, T., & Frischlich, L. (2020). Pandemic populism: Facebook pages of alternative news media and the corona crisis -- A computational content analysis. Retrieved from http://arxiv.org/pdf/2004.02566v3
    Farkas, J., Schou, J., & Neumayer, C. (2018). Cloaked Facebook pages: Exploring fake Islamist propaganda in social media. New Media & Society, 20(5), 1850–1867.
    Frischlich, L., Klapproth, J., & Brinkschulte, F. (2020). Between mainstream and alternative – Co-orientation in right-wing populist alternative news media. In C. Grimme, M. Preuss, F. W. Takes, & A. Waldherr (Eds.), Lecture Notes in Computer Science: Disinformation in open online media (Vol. 12021, pp. 150–167). Cham: Springer International Publishing.
    Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science, 363(6425), 374–378.
    Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1). https://doi.org/10.1126/sciadv.aau4586
    Guess, A., Nyhan, B., & Reifler, J. (2018). Selective exposure to misinformation: Evidence from the consumption of fake news during the 2016 US presidential campaign. European Research Council, 9(3), 1–14.
    Heft, A., Mayerhöffer, E., Reinhardt, S., & Knüpfer, C. (2020). Beyond Breitbart: Comparing right-wing digital news infrastructures in six Western democracies. Policy & Internet, 12(1), 20–45.
    Holt, K., Ustad Figenschou, T., & Frischlich, L. (2019). Key dimensions of alternative news media. Digital Journalism, 7(7), 860–869.
    Nelson, J. L., & Taneja, H. (2018). The small, disloyal fake news audience: The role of audience availability in fake news consumption. New Media & Society, 20(10), 3720–3737.
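    The percentage-agreement figures reported for the Grinberg et al. (2019) labeling (60% across the six categories, 80% for the binary fake/non-fake distinction) correspond to simple pairwise agreement, sketched below. The label sequences are invented illustrations, not the study's data.

```python
# Simple pairwise percentage agreement between two coders.
# The label sequences are invented illustrations.
def percent_agreement(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    hits = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * hits / len(coder_a)

coder_a = ["black", "red", "orange", "satire", "low_quality"]
coder_b = ["black", "red", "red", "satire", "reasonable"]
print(percent_agreement(coder_a, coder_b))  # 3 of 5 labels match -> 60.0
```

    Note that raw agreement does not correct for chance; that is why studies cited elsewhere in this section also report chance-corrected coefficients such as Krippendorff's alpha or Cohen's kappa.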

    Gastric intramucosal pH-guided therapy in patients after elective repair of infrarenal abdominal aneurysms: is it beneficial?

    Objective: To determine whether gastric intramucosal pH (pHi)-guided therapy reduces the number of complications and the length of stay in the intensive care unit (ICU) or the hospital after elective repair of infrarenal abdominal aortic aneurysms. Design: Prospective, randomized study. Setting: Surgical intensive care unit (SICU) of a university hospital. Patients: Fifty-five consecutive patients randomized to group 1 (pHi-guided therapy) or to group 2 (control). Interventions: Patients of group 1 with a pHi lower than 7.32 were treated by means of a prospective protocol in order to increase their pHi to 7.32 or more. Measurements and results: pHi was determined in both groups on admission to the SICU and thereafter at 6-h intervals. In group 2, the treating physicians were blinded to the pHi values. Complications, APACHE II scores, duration of endotracheal intubation, fluid and vasoactive drug treatment, length of stay in the SICU and in the hospital, and hospital mortality were recorded. There were no differences between groups in the incidence of complications. We found no differences in APACHE II scores on admission, the duration of intubation, SICU or hospital stay, or hospital mortality. The incidence of pHi values lower than 7.32 on admission to the SICU was comparable in the two groups (41% and 42% in groups 1 and 2, respectively). Patients with pHi lower than 7.32 had more major complications during the SICU stay (p<0.05), and periods of more than 10 h of persistently low pHi values (<7.32) were associated with a higher incidence of SICU complications (p<0.01). Conclusions: Low pHi values (<7.32) and their persistence are predictors of major complications. Treatment to elevate low pHi values does not improve postoperative outcome. Based on these data, we cannot recommend the routine use of gastric tonometers for pHi-guided therapy in these patients.
Further studies are warranted to determine adequate treatment of low pHi values that results in beneficial effects on the patient's postoperative course and outcome.

    Identifying the Drivers Behind the Dissemination of Online Misinformation: A Study on Political Attitudes and Individual Characteristics in the Context of Engaging With Misinformation on Social Media

    The increasing dissemination of online misinformation in recent years has raised the questions of which individuals interact with this kind of information and what role attitudinal congruence plays in this context. To answer these questions, we conduct surveys in six countries (BE, CH, DE, FR, UK, and US) and investigate the drivers of the dissemination of misinformation on three non-country-specific topics (immigration, climate change, and COVID-19). Our results show that besides issue attitudes and issue salience, political orientation, personality traits, and heavy social media use increase the willingness to disseminate misinformation online. We conclude that future research should not only consider individuals' beliefs but also focus on specific user groups that are particularly susceptible to misinformation and possibly caught in social media "fringe bubbles."

    Systemic Momentum and Attempts at Political Steering: The Example of Cardiological/Cardiac Surgical Care

    "Cardiological/cardiac surgical care is a highly technology-intensive sector of the health care system characterized by a markedly expansive dynamic. In the course of this development, heart centres have emerged as specialized care facilities. Sixty-one such centres have been founded to date; further foundings are imminent. The number of office-based cardiologists has also risen. This growth in supply capacity has been accompanied by a diversification of services and, in some areas, an immense increase in the number of diagnostic and therapeutic procedures. With PTCA, an invasive cardiological treatment technique, a new procedure has gained acceptance across the board. In many cases, PTCA can be regarded as an alternative to bypass surgery. Its substitution effects have nevertheless fallen far short of many expectations; despite a rapid increase in PTCA volume, the number of operations has also risen continuously. In 1985, the Conference of the Länder Health Ministers set a target of 400 heart operations with heart-lung machine per 1 million inhabitants. Only a few years later, raising the target to 500, and at most 700, operations was recommended. Meanwhile, the national average (old Länder) is 795 operations per 1 million inhabitants. The expansion of surgical capacity makes a further increase of up to 950 operations possible. The aim of our contribution is to analyse the functioning and internal dynamics of this segment of the health care system (the interplay of expansion and differentiation factors). It appears that hardly any limiting factors operate within the medical field of action itself; the expansion is dampened and conditioned primarily by political interventions.
We seek to reconstruct the interaction between attempts at political steering (capacity planning by the Länder, the Health Care Structure Act) and systemic momentum. In doing so, we draw on approaches from differentiation theory and steering theory that attempt to relate structural features and actor features to one another and permit multi-level modelling of social contexts. Interim results of an empirical research project on the care of cardiac patients in two regions of the Federal Republic are drawn on for the presentation and analysis of care structures." (author's abstract)

    Physiological stress of intracellular Shigella flexneri visualized with a metabolic sensor fused to a surface-reporter system

    When its N-terminal signal-reception domain is deleted, the broad-host-range σ54-dependent transcriptional regulator XylR, along with its cognate promoter Pu, becomes a sensor of the metabolic stress of the carrier bacteria. We have employed a surface reporter system to visualize the physiological status of intracellular Shigella flexneri during infection of Henle 407 cells in culture. To this end, the xylRΔA gene was engineered adjacent to a bicistronic transcriptional fusion of Pu to a lamB variant tagged with a short viral sequence (cor) and β-galactosidase (lacZ). The accessibility of the cor epitope to the external medium and the expression of Pu in the bacterial population were confirmed, respectively, with immunomagnetic beads and by sorting Escherichia coli cells treated with a fluorescent antibody. Intracellular Shigella cells expressed the Pu–lamB/cor–lacZ reporter at high levels, suggesting that infectious cells endure a considerable metabolic constraint during the invasion process.

    A Prospective, Randomized, Double-blind, Vehicle-controlled, Multi-centre Clinical Trial of Efficacy, Safety and Local Tolerability

    This study was a prospective, parallel-group, randomized, double-blind, vehicle-controlled, multi-centre clinical trial to compare the efficacy of topical sertaconazole 2% cream with vehicle in reducing chronic pruritus in subjects with atopic dermatitis, and to assess its safety and local tolerability. A total of 70 subjects applied one of the two treatments twice daily for a period of 4 weeks to affected, itchy skin areas. Treatment efficacy was evaluated primarily using itch intensity rated on a 5-point verbal rating scale. Insomnia, state of atopic dermatitis (Scoring Atopic Dermatitis; SCORAD), quality of life, and therapy benefit were also assessed. No significant difference between active treatment and vehicle was found at any of the time-points for any of the investigated parameters. Under the experimental conditions of the study, sertaconazole 2% cream did not exert anti-pruritic effects better than vehicle in subjects with atopic dermatitis who had chronic pruritus. Trial registration: ClinicalTrials.gov #NCT01792713.

    Immunohistochemical Detection of Kappa-Opioid Receptors in Human Skin

    An imbalance between mu- and kappa-opioid receptors in the skin or the central nervous system is currently considered one of the causes of chronic pruritus. A number of studies have demonstrated a positive effect of systemic kappa-opioid receptor agonists in the treatment of uremic pruritus, prurigo nodularis, and paraneoplastic and cholestatic pruritus. This work demonstrates the expression of kappa-opioid receptors in human skin (basal keratinocytes, dendritic cells, epidermal melanocytes, and fibroblasts of the upper dermis), detected using various immunohistochemical methods.

    National critical incident reporting systems relevant to anaesthesia: a European survey

    Background Critical incident reporting is a key tool in the promotion of patient safety in anaesthesia. Methods We surveyed representatives of national incident reporting systems in six European countries, inviting information on scope and organization, and intelligence on the factors determining success and failure. Results Some systems are government-run and nationally conceived; others started out as small, specialty-focused initiatives that have since acquired national reach. However, both national co-ordination and specialty enthusiasts seem to be necessary for an optimally functioning system. The roles of reporting culture, definitional issues, and dissemination are discussed. Conclusions We make recommendations for others intending to start new systems and speculate on the prospects for sharing patient safety lessons relevant to anaesthesia at the European level.