
    Fact Checking in Community Forums

    Community Question Answering (cQA) forums are very popular nowadays, as they represent effective means for communities around particular topics to share information. Unfortunately, this information is not always factual. Thus, here we explore a new dimension in the context of cQA, which has been ignored so far: checking the veracity of answers to particular questions in cQA forums. As this is a new problem, we create a specialized dataset for it. We further propose a novel multi-faceted model, which captures information from the answer content (what is said and how), from the author profile (who says it), from the rest of the community forum (where it is said), and from external authoritative sources of information (external support). Evaluation results show a MAP value of 86.54, which is 21 points absolute above the baseline.
    Comment: AAAI-2018; Fact-Checking; Veracity; Community Question Answering; Neural Networks; Distributed Representation
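    The model described is a neural network; purely to illustrate the idea of combining the four feature groups named above (answer content, author profile, forum context, external support) and scoring ranked answers, here is a minimal sketch in which every feature value, dimension, and label is synthetic:

```python
# Illustrative sketch only, not the paper's architecture: concatenate the
# four feature groups described in the abstract, train a simple classifier,
# and compute average precision over the ranked answers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
content_feats  = rng.random((100, 8))   # what is said and how
author_feats   = rng.random((100, 4))   # who says it
forum_feats    = rng.random((100, 4))   # where it is said
external_feats = rng.random((100, 4))   # external support
y = rng.integers(0, 2, 100)             # 1 = factual answer (synthetic labels)

X = np.hstack([content_feats, author_feats, forum_feats, external_feats])
clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)[:, 1]
print("AP over the answer pool:", average_precision_score(y, scores))
```

    The paper reports MAP, i.e. average precision averaged over questions; the sketch computes a single AP over one pooled set of answers for brevity.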

    Health Misinformation in Search and Social Media

    People increasingly rely on the Internet to search for and share health-related information. Indeed, searching for and sharing information about medical treatments are among the most frequent uses of online data. While this is a convenient and fast way to collect information, online sources may contain incorrect information that has the potential to cause harm, especially if people believe what they read without further research or professional medical advice. The goal of this thesis is to address the misinformation problem in two of the most commonly used online services: search engines and social media platforms. We examined how people use these platforms to search for and share health information. To achieve this, we designed controlled laboratory user studies and employed large-scale social media data analysis tools. The solutions proposed in this thesis can be used to build systems that better support people's health-related decisions.
    First, with respect to search engines, we aimed to determine the extent to which people can be influenced by search engine results when trying to learn about the efficacy of various medical treatments. We conducted a controlled laboratory study in which we biased the search results towards either correct or incorrect information and then asked participants to determine the efficacy of different medical treatments. Results showed that people were significantly influenced, both positively and negatively, by search results bias. More importantly, when participants were exposed to incorrect information, they made more incorrect decisions than when they had no interaction with the search results. We then extended the study to gain insight into the strategies people use during this decision-making process, via the think-aloud method. We found that, even with verbalization, people were strongly influenced by the search results bias. We also noted that people paid attention to what the majority states, authoritativeness, and content quality when evaluating online content. Understanding the effects of cognitive biases that can arise during online search is a complex undertaking because of unconscious biases (such as the search results ranking) that the think-aloud method fails to reveal.
    Moving to social media, we first proposed a solution to detect and track misinformation. Using Zika as a case study, we developed a tool for tracking misinformation on Twitter. We collected 13 million tweets regarding the Zika outbreak and tracked rumors outlined by the World Health Organization and the Snopes fact-checking website. We incorporated health professionals, crowdsourcing, and machine learning to capture health-related rumors as well as clarification communications. In this way, we illustrated the insights the proposed tools provide into potentially harmful information on social media, allowing public health researchers and practitioners to respond with targeted and timely action.
    Building on the identification of rumor-bearing tweets, we examined individuals on social media who post questionable health-related information, in particular those promoting cancer treatments that have been shown to be ineffective. Specifically, we studied 4,212 Twitter users who had posted about one of 139 ineffective "treatments" and compared them to a baseline of users generally interested in cancer. Considering features that capture user attributes, writing style, and sentiment, we built a classifier that is able to identify users prone to propagating such misinformation. This classifier achieved an accuracy of over 90%, providing a potential tool for public health officials to identify such individuals for preventive intervention.
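    As a hedged sketch of that final step (not the thesis's actual feature set or model), the following trains a classifier to flag users prone to sharing ineffective-treatment content from a handful of assumed user-attribute, writing-style, and sentiment features:

```python
# Minimal sketch with invented features and synthetic labels; the thesis's
# real pipeline uses richer user-attribute, writing-style, and sentiment
# features extracted from Twitter data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_users = 500
X = np.column_stack([
    rng.integers(0, 10_000, n_users),   # follower count (user attribute)
    rng.normal(15, 5, n_users),         # mean words per tweet (writing style)
    rng.uniform(-1, 1, n_users),        # mean sentiment score
])
y = rng.integers(0, 2, n_users)         # 1 = posted about an ineffective "treatment"

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```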

    Parental Style, Dating Violence and Gender

    The relationship between parenting styles and teen dating violence has become a relevant research topic in recent years, especially in relation to violence inflicted online. To more fully understand this relationship, the objective of the present study was to examine which parenting style (authoritarian, indulgent, authoritative, or neglectful) best protects against dating violence in adolescent relationships. A total of 1132 adolescents of both sexes participated in this study (46.4% boys and 53.6% girls), with ages between 14 and 18 years old (M = 15.6, SD = 1.3). A multivariate factorial design was applied (MANOVA, 4 × 2), using the parenting style, the parents’ gender, and the adolescents’ gender as independent variables, and the dating violence dimensions (online and offline) as dependent variables. As the results show, the lowest scores on all the dating violence dimensions examined were obtained by adolescents from indulgent families. In addition, three interaction effects were observed between the mother’s parenting style and the adolescent’s gender on online violence (e-violence and control), and the father’s parenting style on offline violence (verbal-emotional). Thus, adolescents with authoritarian mothers obtained the highest scores on violence and control inflicted online, respectively, and adolescent girls with authoritarian fathers obtained the highest scores on verbal-emotional violence. These findings suggest that the indulgent style is the parenting style that protects against violence in teen dating relationships, and they also highlight the risks of the authoritarian style as a family child-rearing model.
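    For readers who want to reproduce this kind of 4 × 2 multivariate design in code (this is not the authors' own analysis pipeline, and all column names and data below are invented), a sketch with statsmodels could look like:

```python
# Sketch of a 4 x 2 MANOVA (parenting style x adolescent gender) on two
# dating-violence dimensions, using simulated data and made-up column names.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 1132
df = pd.DataFrame({
    "style": rng.choice(["authoritarian", "indulgent", "authoritative", "neglectful"], n),
    "gender": rng.choice(["boy", "girl"], n),
    "online_violence": rng.random(n),
    "offline_violence": rng.random(n),
})

fit = MANOVA.from_formula("online_violence + offline_violence ~ style * gender", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each effect
```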

    Study on the Influencing Factors of Health Information Sharing Behavior of the Elderly under the Background of Normalization of Pandemic Situation

    This study aims to address the problem of unwise judgments, decisions, and the correspondingly dangerous behaviors that erroneous health information causes among the elderly. Based on the MOA model and self-determination theory, this paper constructs a health information sharing model for the elderly and analyzes it with a structural equation model in Amos. The study finds that media richness, health information literacy, perceived benefits, and negative emotions about the coronavirus epidemic positively influence health information sharing behavior. In contrast, perceived risks have a significant negative impact on health information sharing behavior. At the same time, media richness positively affects health information literacy, perceived benefits, and negative emotions about the coronavirus epidemic, but has no significant impact on perceived risks. Health literacy positively affects perceived benefits but does not significantly affect perceived risks or negative emotions about the coronavirus epidemic. This study aims to help governments and online social platforms take relevant measures under the normalization of the pandemic situation, control the spread of erroneous health information among the elderly, and guide the elderly to share health information better.
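    The hypothesized paths are estimated in the paper with a structural equation model in Amos; a very rough Python analogue that checks individual paths with ordinary regressions rather than a full SEM (all variable names and data below are assumptions) might look like:

```python
# Rough analogue of the hypothesized paths, tested as separate regressions
# rather than a full structural equation model; variable names and data
# are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "media_richness": rng.random(n),
    "health_literacy": rng.random(n),
    "perceived_benefit": rng.random(n),
    "perceived_risk": rng.random(n),
    "negative_emotion": rng.random(n),
    "sharing_behavior": rng.random(n),
})

# Antecedents -> health information sharing behavior
m1 = smf.ols("sharing_behavior ~ media_richness + health_literacy + "
             "perceived_benefit + perceived_risk + negative_emotion", data=df).fit()
# Media richness -> health information literacy (one regression per path)
m2 = smf.ols("health_literacy ~ media_richness", data=df).fit()
print(m1.params, m2.params, sep="\n")
```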

    Rumor Detection on Social Media: Datasets, Methods and Opportunities

    Social media platforms have been used for information and news gathering, and they are very valuable in many applications. However, they also lead to the spreading of rumors and fake news. Many efforts have been made to detect and debunk rumors on social media by analyzing their content and social context using machine learning techniques. This paper gives an overview of recent studies in the rumor detection field. It provides a comprehensive list of datasets used for rumor detection and reviews the important studies based on what types of information they exploit and the approaches they take. More importantly, we also present several new directions for future research.
    Comment: 10 pages

    Who Are More Active and Influential on Twitter? An Investigation of the Ukraine’s Conflict Episode

    Twitter is an emerging form of news media with a wide spectrum of participants involved in news dissemination. Owing to the platform's open and interactive nature, individuals and non-media, non-commercial participants may play a greater role, and Twitter is thus often seen as disrupting conventional media structures and introducing new ways for information to flow. While this may be true in certain aspects of news dissemination, such as allowing a broader range of participants, the authors' analysis of the involvement and influence of the different participant types, based on a large dataset of tweets collected during the Ukraine's conflict event (2013-2014), paints a different picture. Specifically, the results reveal that while non-commercial participants were the most “involved” in generating tweets about the news event, the retweets they attracted, a common measure of influence, were among the lowest. In contrast, mass media and sources related to journalists, professional associations, and commercial organizations garnered the highest retweets.
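    The core comparison described, tweet volume versus retweets attracted per participant type, reduces to a simple aggregation; a sketch with placeholder labels and counts (not the authors' data) is:

```python
# Sketch: compare how much each participant type tweets (involvement) with
# the retweets it attracts (influence). Labels and counts are placeholders.
import pandas as pd

tweets = pd.DataFrame({
    "participant_type": ["individual", "mass_media", "journalist", "individual", "ngo"],
    "retweet_count": [2, 150, 80, 0, 5],
})
summary = tweets.groupby("participant_type").agg(
    tweet_volume=("retweet_count", "size"),
    mean_retweets=("retweet_count", "mean"),
).sort_values("mean_retweets", ascending=False)
print(summary)
```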

    Images Connect Us Together: Navigating a COVID-19 Local Outbreak in China Through Social Media Images

    Social media images, curated or casual, have become a crucial component of communicating situational information and emotions during health crises. Despite their prevalence and significance for information dissemination and emotional connection, a comprehensive understanding of visual crisis communication in the aftermath of a pandemic, which is characterized by uncertain local situations and emotional fatigue, is still lacking. To fill this gap, this work collected 345,423 crisis-related posts and 65,376 original images during the Xi'an COVID-19 local outbreak in China, and adopted a mixed-methods approach to understanding the themes, goals, and strategies of crisis imagery. Image clustering captured the diversity of visual themes during the outbreak, such as text images embedding authoritative guidelines and "visual diaries" recording and sharing quarantine life. Through text classification of the posts the visuals were situated in, we found that the different visual themes correlated highly with the informational and emotional goals of the post text, such as adopting text images to convey the latest policies and sharing food images to express anxiety. We further unpacked nuanced strategies of crisis image use through inductive coding, such as signifying authority and triggering empathy. We discuss the opportunities and challenges of crisis imagery and provide design implications to facilitate effective visual crisis communication.
    Comment: 32 pages, 11 figures. Accepted for publication in Proceedings of the ACM on Human-Computer Interaction (CSCW 2024).
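    As a sketch of the image-clustering step mentioned above (the embedding model, cluster count, and data are placeholders, not the authors' setup), one could group pre-computed image embeddings into visual themes like this:

```python
# Minimal sketch: cluster pre-computed image feature vectors into visual
# themes. The 512-d embeddings and the choice of 10 clusters are placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image_embeddings = rng.random((1000, 512))          # one vector per image
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(image_embeddings)
print(np.bincount(kmeans.labels_))                  # images per visual theme
```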