16 research outputs found

    Answers to Health Questions: Internet Search Results Versus Online Health Community Responses

    Background: About 6 million people search for health information on the Internet each day in the United States. Both patients and caregivers search for information about prescribed courses of treatment, unanswered questions after a visit to their providers, or diet and exercise regimens. Past literature has indicated potential challenges around the quality of health information available on the Internet. However, diverse information exists on the Internet, ranging from government-initiated webpages to personal blog pages, and we do not yet fully understand the strengths and weaknesses of these different types of information. Objective: The objective of this research was to investigate the strengths and challenges of various types of health information available online and to suggest which information sources best fit various question types. Methods: We collected questions posted to an online diabetes community, along with the responses they received, and classified them according to Rothwell’s classification of question types (fact, policy, or value questions). We selected 60 questions (20 each of fact, policy, and value) and the replies each question received from the community. We then searched for answers to the same questions using a search engine and recorded the results. Results: Community responses answered more questions than search results did overall. Search results were most effective in answering value questions and least effective in answering policy questions. Community responses answered questions across question types at a roughly equivalent rate, with policy questions answered most often and fact questions least often. Value questions were answered most often by community responses, but some of the community’s answers were incorrect. Search results for fact questions were the most clinically valid. Conclusions: The Internet is a prevalent source of health information. The quality of information people encounter online can have a large impact on them. We present the kinds of questions people ask online and the advantages and disadvantages of various information sources in answering those questions. This study contributes to addressing people’s online health information needs.
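    The per-source, per-question-type comparison described in this abstract can be sketched as a simple tally. The records, field names, and values below are hypothetical illustrations of the coding scheme (Rothwell question type plus an answered/unanswered judgment per source), not the study’s data.

    ```python
    from collections import Counter

    # Hypothetical coded questions: each has a Rothwell type and a
    # coder judgment of whether each source produced an answer.
    questions = [
        {"type": "fact",   "community_answered": True,  "search_answered": True},
        {"type": "policy", "community_answered": True,  "search_answered": False},
        {"type": "value",  "community_answered": True,  "search_answered": True},
        {"type": "value",  "community_answered": False, "search_answered": True},
    ]

    def answer_rate(records, source):
        """Fraction of questions of each type answered by the given source."""
        totals, answered = Counter(), Counter()
        for q in records:
            totals[q["type"]] += 1
            if q[f"{source}_answered"]:
                answered[q["type"]] += 1
        return {t: answered[t] / totals[t] for t in totals}

    print(answer_rate(questions, "community"))
    print(answer_rate(questions, "search"))
    ```

    Comparing the two resulting dictionaries type by type reproduces the kind of contrast the Results section reports (e.g., which source answers policy questions more often).
    
    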

    It’s the Methodology For Me: A Systematic Review of Early Approaches to Studying TikTok

    Research on TikTok has grown along with the app’s rapidly rising global popularity. In this systematic review, we investigate 58 articles examining TikTok, its users, and its content. Focusing on articles published in journals and proceedings across human-computer interaction, communication, and related disciplines, we analyze the methods used to study TikTok, as well as ethical considerations. Based on our analysis, we found that research on TikTok tends to use content analysis as its primary method and mainly focuses on user behavior and culture, effects of use, and the platform’s policies and governance; very few articles discuss the ethical implications of collecting and analyzing such data. Additionally, most studies employ traditional forms of data collection even though the affordances of TikTok differ from those of other social media platforms. We conclude with a discussion of possible future directions and contribute to ongoing conversations about ethics and social media data.

    Safe from “harm”: The Governance of Violence by Platforms

    A number of issues have emerged related to how platforms moderate and mitigate “harm.” Although platforms have recently developed more explicit policies regarding what constitutes “hate speech” and “harmful content,” platforms often use subjective judgments of harm that pertain specifically to spectacular, physical violence—but harm takes on many shapes and complex forms. The politics of defining “harm” and “violence” within these platforms are complex and dynamic, and represent entrenched histories of how control over these definitions extends to people’s perceptions of them. Via a critical discourse analysis of policy documents from three major platforms (Facebook, Twitter, and YouTube), we argue that platforms’ narrow definitions of harm and violence are not just insufficient but result in these platforms engaging in a form of symbolic violence. Moreover, the platforms position harm as a floating signifier, imposing conceptions not just of what violence is and how it manifests, but of who it impacts. Rather than changing the mechanisms of their design that enable harm, the platforms reconfigure intentionality and causality to try to stop users from being “harmful,” which, ironically, perpetuates harm. We provide a number of suggestions, namely a restorative justice-focused approach, for addressing platform harm.

    Reporting during the COVID-19 eras: Media attention and news framing through a large-scale computational analysis

    The present study examined framing that emerged in global newspaper coverage of the COVID-19 vaccine through a large-scale computational qualitative analysis of five critical time periods. The study revealed an increasing concentration of media attention as the vaccine was developed and distributed. Frames of action and consequence emerged, as well as frames of attribution of responsibility, pro-science stances, tracking and documenting, and issues of efficacy and safety surrounding preventative actions and public health solutions.

    Credibility in Online Health Communities: Effects of Moderator Credentials and Endorsement Cues

    Online health communities (OHCs) are a common and highly frequented health resource. To create safer resources online, we must know how users assess credibility in these spaces. To understand how new visitors may use cues present within an OHC to establish source credibility, we conducted an online experiment (n = 373) using a mock OHC, manipulating cues for the two primary dimensions of credibility—trustworthiness and expertise—through the presence of endorsement cues (i.e., likes) and of moderators’ health credentials (i.e., medical professional). Participants were predominantly male (60.4%) and Caucasian (74.1%). Our findings showed that moderators’ health credentials had an effect on both dimensions of source credibility in OHCs; likes did not. We also observed a correlation between the perceived social support within the community and both dimensions of source credibility, underscoring the value of supportive online health communities. Our findings can help developers identify areas of focus within their communities and help users understand how such cues shape their assessments of OHC credibility.
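    The 2 × 2 design this abstract describes (credentials present/absent × likes present/absent) can be sketched as a cell-means comparison. All ratings, cell labels, and the 1–7 scale below are hypothetical illustrations of the design, not the study’s data or analysis code.

    ```python
    from statistics import mean

    # Hypothetical trustworthiness ratings (1-7 scale) from a 2x2
    # between-subjects design: (credentials condition, likes condition).
    cells = {
        ("credentials", "likes"):       [6, 5, 6, 7],
        ("credentials", "no_likes"):    [6, 6, 5, 6],
        ("no_credentials", "likes"):    [4, 3, 4, 4],
        ("no_credentials", "no_likes"): [4, 4, 3, 4],
    }

    def main_effect(cells, factor_index, level_a, level_b):
        """Mean-rating difference between two levels of one factor,
        collapsing over the other factor."""
        a = [r for key, ratings in cells.items()
             if key[factor_index] == level_a for r in ratings]
        b = [r for key, ratings in cells.items()
             if key[factor_index] == level_b for r in ratings]
        return mean(a) - mean(b)

    # Credentials main effect (collapsed over likes):
    print(main_effect(cells, 0, "credentials", "no_credentials"))
    # Likes main effect (collapsed over credentials):
    print(main_effect(cells, 1, "likes", "no_likes"))
    ```

    In this toy data the credentials contrast is large and the likes contrast is near zero, mirroring (by construction) the pattern of results the abstract reports; a real analysis would add inferential tests rather than raw mean differences.
    
    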

    SAFE FROM “HARM”: THE GOVERNANCE OF VIOLENCE BY PLATFORMS

    Platforms have long been under fire for how they create and enforce policies around hate speech, harmful content, and violence. In this study, we examine how three major platforms (Facebook, Twitter, and YouTube) conceptualize and implement policies for moderating “harm,” “violence,” and “danger” on their platforms. Through a feminist discourse analysis of public-facing policy documents from official blogs and help pages, we found that platforms often define harm and violence narrowly, in ways that perpetuate ideological hegemony around what violence is, how it manifests, and who it affects. Through this governance, they continue to control normative notions of harm and violence, deny their culpability, and effectively manage perceptions of their actions while directing users’ understanding of what is “harmful” versus what is not. Rather than changing the mechanisms of their design that enable harm, the platforms reconfigure intentionality and causality to try to stop users from being “harmful,” which, ironically, perpetuates harm.

    A qualitative study of user perceptions of mobile health apps

    Abstract Background Mobile apps for health exist in large numbers today, but oftentimes, consumers do not continue to use them after a brief period of initial usage, are averse to using them at all, or are unaware that such apps even exist. The purpose of our study was to examine and qualitatively determine the design and content elements of health apps that facilitate or impede usage from the users’ perspective. Methods In 2014, six focus groups and five individual interviews were conducted in the Midwest region of the U.S. with a mixture of 44 smartphone owners of various socioeconomic statuses. The participants were asked about their general and health-specific mobile app usage. They were then shown specific features of exemplar health apps and prompted to discuss their perceptions. The focus groups and interviews were audio recorded, transcribed verbatim, and coded using the software NVivo. Results Inductive thematic analysis was adopted to analyze the data, and nine themes were identified: 1) barriers to adoption of health apps, 2) barriers to continued use of health apps, 3) motivators, 4) information and personalized guidance, 5) tracking for awareness and progress, 6) credibility, 7) goal setting, 8) reminders, and 9) sharing personal information. The themes were mapped to theories for interpretation of the results. Conclusions This qualitative research with a diverse pool of participants extended previous research on the challenges and opportunities of health apps. The findings provide researchers, app designers, and health care providers insights on how to develop and evaluate health apps from the users’ perspective.

    Additional file 1: of A qualitative study of user perceptions of mobile health apps

    Consolidated criteria for reporting qualitative studies (COREQ): 32-item checklist. (DOCX 19 kb)