
    Towards Automatic Evaluation of Health-Related CQA Data

    The paper reports on the evaluation of Russian community question answering (CQA) data in the health domain. About 1,500 question-answer pairs were manually evaluated by medical professionals; in addition, an automatic evaluation based on reference disease-medicine pairs was performed. Although the results of the manual and automatic evaluation do not fully match, we find the method promising and propose several improvements. Automatic processing can be used to dynamically monitor the quality of CQA content and to compare different data sources. Moreover, the approach can be useful for symptomatic surveillance and health education campaigns. This work is partially supported by the Russian Foundation for Basic Research, project #14-07-00589 “Data Analysis and User Modelling in Narrow-Domain Social Media”. We also thank the assessors who volunteered for the evaluation and Mail.Ru for granting us access to the data.
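The reference-pair idea above can be illustrated with a minimal sketch: check whether an answer mentions a medicine that a reference list associates with a disease named in the question. The disease-medicine pairs, function name, and Q&A text below are hypothetical placeholders, not the resource used in the paper:

```python
# Sketch of automatic CQA evaluation via reference disease-medicine pairs.
# All pairs and example texts are illustrative, not the paper's data.
REFERENCE_PAIRS = {
    "influenza": {"oseltamivir", "paracetamol"},
    "hypertension": {"lisinopril", "amlodipine"},
}

def answer_matches_reference(question: str, answer: str) -> bool:
    """True if the answer names a medicine that the reference list
    pairs with a disease mentioned in the question."""
    q, a = question.lower(), answer.lower()
    for disease, medicines in REFERENCE_PAIRS.items():
        if disease in q and any(m in a for m in medicines):
            return True
    return False

print(answer_matches_reference(
    "What should I take for influenza?",
    "Doctors often prescribe oseltamivir early on."))  # True
```

A real system would need normalization of drug and disease name variants, which is likely one source of the mismatch with manual evaluation reported above.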

    Answers to Health Questions: Internet Search Results Versus Online Health Community Responses

    Background: About 6 million people search for health information on the Internet each day in the United States. Both patients and caregivers search for information about prescribed courses of treatment, unanswered questions after a visit to their providers, or diet and exercise regimens. Past literature has indicated potential challenges around the quality of health information available on the Internet. However, diverse information exists on the Internet, ranging from government-initiated webpages to personal blog pages, and we do not fully understand the strengths and weaknesses of these different types of information. Objective: The objective of this research was to investigate the strengths and challenges of various types of health information available online and to suggest which information sources best fit various question types. Methods: We collected questions posted to an online diabetes community and the responses they received, and classified them according to Rothwell’s classification of question types (fact, policy, or value questions). We selected 60 questions (20 each of fact, policy, and value) and the replies the questions received from the community. We then searched for responses to the same questions using a search engine and recorded the results. Results: Community responses answered more questions than did search results overall. Search results were most effective in answering value questions and least effective in answering policy questions. Community responses answered questions across question types at an equivalent rate, but answered policy questions the most and fact questions the least. Value questions were most often answered by community responses, but some of the answers provided by the community were incorrect. Search results for fact questions were the most clinically valid. Conclusions: The Internet is a prevalent source of health information. The quality of the information people encounter online can have a large impact on them. We present the kinds of questions people ask online and the advantages and disadvantages of various information sources in answering them. This study contributes to addressing people’s online health information needs.
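Rothwell’s fact/policy/value typing used in the study above can be caricatured with a keyword heuristic. The study itself coded questions manually; every keyword, rule, and function name below is an illustrative assumption, not the authors’ procedure:

```python
# Hypothetical keyword sketch of Rothwell's question types.
# Real annotation is done by human coders, not by rules like these.
def rothwell_type(question: str) -> str:
    q = question.lower()
    if q.startswith(("should ", "what should", "how should")):
        return "policy"   # asks what course of action to take
    if any(w in q for w in ("best", "worth", "better", "good idea")):
        return "value"    # asks for a judgment or opinion
    return "fact"         # default: asks for verifiable information

print(rothwell_type("Should I switch to insulin?"))        # policy
print(rothwell_type("What is a normal fasting glucose?"))  # fact
print(rothwell_type("Is an insulin pump worth it?"))       # value
```

Even this toy version makes the study design concrete: 20 questions per type, with community replies and search results compared per type.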

    Still Searching: How People Use Health Care Price Information in the United States, New York State, Florida, Texas and New Hampshire

    Americans bear a large and growing share of their health care costs in the form of high deductibles and insurance premiums, as well as copayments and, sometimes, coinsurance for physician office visits and hospitalizations. Historically, the health care system has not made it easy for people to find out how much their care will cost them out of pocket. But in recent years, insurers, state governments, employers and other entities have been trying to make price information more easily available to individuals and families. Are Americans trying to find out about health care prices today? Do they want more information? What sources would they trust to deliver it? This nationally representative research finds that 50 percent of Americans have tried to find health care price information before getting care, including 20 percent who have tried to compare prices across multiple providers. Representative surveys in four states (New York, Texas, Florida and New Hampshire) show that higher percentages of residents in Texas, Florida and New Hampshire have tried to find price information and have compared prices than New York residents and Americans overall. This variation suggests that factors at the state level might influence how many people try to find out about health care costs. Nationally and in those four states, more than half of people who compared prices report saving money. Most Americans think it is important for their state governments to provide comparative price information. But we found limited awareness that doctors' prices vary and that hospitals' prices vary. Public Agenda conducted this research with support from the Robert Wood Johnson Foundation and the New York State Health Foundation. 
The findings are based on a nationally representative survey of 2,062 adults, ages 18 and older, and a set of representative surveys in four states: one survey of 802 adults in New York, one of 808 adults in Texas, one of 819 adults in Florida and one of 826 adults in New Hampshire. The surveys were conducted from July through September 2016 by telephone, including cell phones, and online.

    The Effect of Color in Website Design: Searching for Medical Information Online

    As the spread of the internet continues, more people than ever search for medical information online when in need of health advice. Color is a prominent part of website aesthetics and has been shown in the literature to have both positive and negative psychological effects. A total of 120 participants took part in the present study, in which the effect of the colors grey, blue, orange and red was tested on the task of searching for medical information online. An experimental website was manipulated with different color schemes, inspired by existing popular websites that provide medical information online. Participants then used the website to find and comprehend information about a fictitious disease. Overall, the results showed no statistically significant effect of website color on either ratings or time spent using the website. A three-way interaction between color, gender and ratings was found, in which females rated the red website significantly higher than males on almost all dimensions. The results are discussed in light of similar findings on the color red and its negative effect on males. Future research is suggested for other areas where a potential effect of color might be of benefit.

    Adapting Visual Question Answering Models for Enhancing Multimodal Community Q&A Platforms

    Question categorization and expert retrieval methods have been crucial for information organization and accessibility on community question answering (CQA) platforms. Research in this area, however, has dealt with only the text modality. With the increasingly multimodal nature of web content, we focus on extending these methods to CQA questions accompanied by images. Specifically, we leverage the success of representation learning for text and images in the visual question answering (VQA) domain, and adapt the underlying concepts and architecture for automated category classification and expert retrieval on image-based questions posted on Yahoo! Chiebukuro, the Japanese counterpart of Yahoo! Answers. To the best of our knowledge, this is the first work to tackle the multimodality challenge in CQA, and to adapt VQA models for tasks on a more ecologically valid source of visual questions. Our analysis of the differences between visual QA and community QA data drives our proposal of novel augmentations of an attention method tailored for CQA, and the use of auxiliary tasks for learning better grounding features. Our final model markedly outperforms the text-only and VQA model baselines on both tasks of classification and expert retrieval on real-world multimodal CQA data. Comment: Submitted for review at CIKM 201
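The question-guided attention that VQA models bring to multimodal CQA can be sketched in a few lines: score each image-region feature against a question embedding, softmax the scores, and fuse the attended image summary with the question. The dimensions, random features, and variable names below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Minimal sketch of question-guided attention over image regions.
rng = np.random.default_rng(0)
d = 8                                # shared embedding size (assumed)
regions = rng.normal(size=(5, d))    # 5 image-region feature vectors
question = rng.normal(size=(d,))     # pooled question embedding

scores = regions @ question                      # relevance per region
weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention
attended = weights @ regions                     # weighted image summary

fused = np.concatenate([question, attended])     # joint representation
print(fused.shape)  # (16,)
```

In a full model, `fused` would feed a classifier head for category prediction or a ranking head for expert retrieval; the auxiliary tasks mentioned above would share the same grounding features.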