
    The Effects of Features of Survey Measurement on Self-Rated Health: Response Option Order and Scale Orientation

    Self-rated health (SRH) is widely used to study health across a range of disciplines. However, relatively little research examines how features of its measurement in surveys influence respondents’ answers and the overall quality of the resulting measurement. Manipulations of response option order and scale orientation are particularly relevant to assess for SRH given the increasing prominence of web-based survey data collection and because these factors are often outside the control of researchers analyzing data collected by other investigators. We examine how the interplay of two features of SRH measurement influences respondents’ answers in a 2-by-3 factorial experiment that varies (1) the order in which the response options are presented (“excellent” to “poor” or “poor” to “excellent”) and (2) the orientation of the response option scale (vertical, horizontal, or banked). The experiment was conducted online using workers from Amazon Mechanical Turk (N = 2945). We find no main effects of response scale orientation and no interaction between response option order and scale orientation. However, we find main effects of response option order: mean SRH and the proportion in “excellent” or “very good” health are higher (better), and the proportion in “fair” or “poor” health lower, when the response options are ordered from “excellent” to “poor” rather than from “poor” to “excellent.” We also find heterogeneous treatment effects of response option ordering across respondent characteristics associated with ability. Overall, the implications for the validity and cross-survey comparability of SRH are likely considerable for response option ordering and minimal for scale orientation.
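
    As a rough illustration of the design described above, the following Python sketch fits a two-way factorial model of SRH on response option order, scale orientation, and their interaction using statsmodels. The simulated data, variable names, and effect sizes are assumptions for demonstration only, not the authors’ data or code.

        # Illustrative sketch of a 2-by-3 factorial analysis of self-rated health (SRH).
        # Variable names and the simulated data are assumptions for demonstration only.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        rng = np.random.default_rng(0)
        n = 2945  # sample size reported in the abstract
        df = pd.DataFrame({
            "option_order": rng.choice(["excellent_first", "poor_first"], n),
            "orientation": rng.choice(["vertical", "horizontal", "banked"], n),
        })
        # SRH coded 1 (poor) to 5 (excellent); simulated with a small order effect
        df["srh"] = (3
                     + 0.15 * (df["option_order"] == "excellent_first")
                     + rng.normal(0, 1, n)).round().clip(1, 5)

        # Main effects and the order-by-orientation interaction
        model = smf.ols("srh ~ C(option_order) * C(orientation)", data=df).fit()
        print(anova_lm(model, typ=2))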

    Examining Interviewers’ Ratings of Respondents’ Health: Does Location in the Survey Matter for Interviewers’ Evaluations of Respondents?

    Interviewers’ ratings of survey respondents’ health (IRH) are a promising measure to include in surveys as a complement to self-rated health. However, our understanding of the factors contributing to IRH remains incomplete. This is the first study to examine whether and how it matters at what point in the interview interviewers evaluate respondents’ health in a face-to-face survey, using an experiment embedded in the UK Innovation Panel Study. We find that interviewers are more likely to rate the respondent’s health as “excellent” when IRH is rated at the end of the interview rather than at the beginning. Drawing on the continuum model of impression formation, we examine whether associations between IRH and relevant covariates vary depending on placement in the interview. Across several characteristics of interviewers and respondents, only the number of interviews completed by interviewers varies in its effect on IRH by assessment location. We also find evidence that interviewer variance is lower when IRH is assessed before rather than after the interview. Finally, the location of IRH assessment does not affect the concurrent or predictive validity of IRH. Overall, the results suggest that in a general population study with some health questions, there may be benefits, in terms of lower interviewer variance, to having interviewers rate respondents’ health at the beginning of the interview (rather than at the end, as in prior research), particularly in the absence of interviewer training that mitigates the impact of within-study experience on IRH assessments.
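
    One way to operationalize the interviewer-variance comparison described above is to fit a random-intercept model of IRH with interviewers as the grouping factor, separately by assessment location, and compare the resulting variance components (or intraclass correlations). The Python sketch below does this on simulated data; the variable names, sample sizes, and variance values are illustrative assumptions, not the Innovation Panel data.

        # Illustrative sketch: comparing interviewer variance in IRH by assessment location.
        # Data, variable names, and effect sizes are assumptions for demonstration only.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        rows = []
        for location in ("start", "end"):
            # assume larger interviewer effects when IRH is rated after the interview
            sd_interviewer = 0.2 if location == "start" else 0.5
            for i in range(100):                      # interviewers
                u = rng.normal(0, sd_interviewer)     # interviewer random intercept
                for _ in range(10):                   # respondents per interviewer
                    irh = 3 + u + rng.normal(0, 1)    # IRH on a 1-5 scale
                    rows.append({"location": location, "interviewer": i, "irh": irh})
        df = pd.DataFrame(rows)

        for location, sub in df.groupby("location"):
            fit = smf.mixedlm("irh ~ 1", data=sub, groups="interviewer").fit()
            var_int = fit.cov_re.iloc[0, 0]           # between-interviewer variance
            icc = var_int / (var_int + fit.scale)     # intraclass correlation
            print(f"IRH rated at {location}: interviewer variance={var_int:.3f}, ICC={icc:.3f}")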

    Interviewers’ Ratings of Respondents’ Health: Predictors and Association With Mortality

    Objectives: Recent research indicates that survey interviewers’ ratings of respondents’ health (IRH) may provide supplementary health information about respondents in surveys of older adults. Although IRH is a potentially promising measure of health to include in surveys, our understanding of the factors contributing to IRH remains incomplete.
    Methods: We use data from the 2011 face-to-face wave of the Wisconsin Longitudinal Study, a longitudinal study of older adults from the Wisconsin high school class of 1957 and their selected siblings. We first examine whether a range of factors predict IRH: respondents’ characteristics that interviewers learn about and observe as respondents answer survey questions, interviewers’ evaluations of some of what they observe, and interviewers’ characteristics. We then examine the role of IRH, respondents’ self-rated health (SRH), and associated factors in predicting mortality over a 3-year follow-up.
    Results: As in prior studies, we find that IRH is associated with respondents’ characteristics. In addition, this study is the first to document how IRH is associated with both interviewers’ evaluations of respondents and interviewers’ characteristics. Furthermore, we find that the association between IRH and the strong criterion of mortality remains after controlling for respondents’ characteristics and interviewers’ evaluations of respondents.
    Discussion: We propose that researchers incorporate IRH in surveys of older adults as a cost-effective, easily implemented, and supplementary measure of health.
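
    The mortality analysis described above is essentially a question of whether IRH predicts death over the follow-up period once SRH and other respondent characteristics are controlled. The sketch below illustrates one such logistic specification in Python on simulated data; all variable names, covariates, and coefficients are assumptions rather than the Wisconsin Longitudinal Study data or the authors’ exact model.

        # Illustrative sketch: does IRH predict 3-year mortality net of SRH and covariates?
        # Variable names and the simulated data are assumptions, not the WLS data.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 5000
        df = pd.DataFrame({
            "irh": rng.integers(1, 6, n),      # interviewer-rated health, 1=poor ... 5=excellent
            "srh": rng.integers(1, 6, n),      # self-rated health, same coding
            "age": rng.normal(72, 3, n),
            "female": rng.integers(0, 2, n),
        })
        # simulate mortality risk that declines with better IRH and SRH
        logit_p = -2.0 - 0.3 * df["irh"] - 0.2 * df["srh"] + 0.05 * (df["age"] - 72)
        df["died_3yr"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        fit = smf.logit("died_3yr ~ irh + srh + age + female", data=df).fit()
        print(fit.summary())
        print(np.exp(fit.params))   # odds ratios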

    Chapter 18: Response Times as an Indicator of Data Quality: Associations with Question, Interviewer, and Respondent Characteristics in a Health Survey of Diverse Respondents. Appendix 18

    Appendix 18A: Description of individual question characteristics and hypotheses for their relationship with RTs
    Appendix 18B: Description of established tools for evaluating questions and hypotheses for their relationship with RTs
    Appendix 18C: Sample description. Table 18.C1: Number of completed interviews by respondents’ race/ethnicity and sample
    Appendix 18D: Additional tables
    Appendix 18E: References

    Response Times as an Indicator of Data Quality: Associations with Interviewer, Respondent, and Question Characteristics in a Health Survey of Diverse Respondents

    Survey research remains one of the most important ways that researchers learn about key features of populations. Data obtained in the survey interview are a collaborative achievement accomplished through the interplay of the interviewer, respondent, and survey instrument, yet our field is still in the process of comprehensively documenting and examining whether, when, and how characteristics of interviewers, respondents, and questions combine to influence the quality of the data obtained. Researchers tend to treat longer response times as indicators of potential problems because they indicate longer processing or interaction by the respondent, the interviewer (where applicable), or both. Previous work demonstrates that response times are associated with various characteristics of interviewers (where applicable), respondents, and questions across web, telephone, and face-to-face interviews. However, these studies vary in the characteristics considered, limited by what is available in the study at hand. In addition, features of the survey interview situation can have a differential impact on responses from respondents in different racial, ethnic, or other socially defined cultural groups, potentially increasing systematic error and compromising researchers’ ability to make group comparisons. For example, certain question characteristics or interviewer characteristics may have differential effects across respondents from different racial or ethnic groups (Johnson, Shavitt, and Holbrook 2011; Warnecke et al. 1997). The purpose of the current study is to add to the existing body of work by examining how response times are associated with characteristics of interviewers, respondents, and questions, focusing on racially diverse respondents answering questions about trust in medical researchers, participation in medical research, and their health. Data are provided by the 2013-2014 “Voices Heard” survey, a computer-assisted telephone survey designed to measure respondents’ perceptions of barriers and facilitators to participating in medical research. Interviews (n = 410) were conducted with a quota sample of respondents nearly equally distributed across the following subgroups: white, black, Latino, and American Indian.
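
    A common way to analyze item-level response times of this kind is to model log-transformed RTs with fixed effects for question and respondent characteristics and a random intercept for interviewers, since RTs cluster within interviewers. The Python sketch below illustrates that setup on simulated data; the variable names, the question feature, and all effect sizes are illustrative assumptions, not the Voices Heard data.

        # Illustrative sketch: modeling item-level response times (RTs) as a data-quality
        # indicator. Variable names and simulated data are assumptions for demonstration.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        rows = []
        for interviewer in range(20):
            u_int = rng.normal(0, 0.1)                      # interviewer random intercept
            for respondent in range(20):
                group = rng.choice(["white", "black", "latino", "american_indian"])
                for question in range(30):
                    sensitive = question % 5 == 0           # an illustrative question feature
                    log_rt = (1.5 + u_int + 0.2 * sensitive
                              + rng.normal(0, 0.4))         # log seconds
                    rows.append({"interviewer": interviewer,
                                 "resp_group": group,
                                 "sensitive": int(sensitive),
                                 "log_rt": log_rt})
        df = pd.DataFrame(rows)

        # Fixed effects for question and respondent characteristics,
        # random intercept for interviewers (clustering of RTs within interviewer)
        fit = smf.mixedlm("log_rt ~ sensitive + C(resp_group)", data=df,
                          groups="interviewer").fit()
        print(fit.summary())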

    Measuring Trust in Medical Researchers: Comparing Agree-Disagree and Construct-Specific Survey Questions

    While scales measuring subjective constructs have historically relied on agree-disagree (AD) questions, recent research demonstrates that construct-specific (CS) questions clarify underlying response dimensions that AD questions leave implicit, and CS questions often yield higher measures of data quality. Given the acknowledged issues with AD questions and certain established advantages of CS items, the evidence for the superiority of CS questions is more mixed than one might expect. We build on previous investigations by using cognitive interviewing to deepen understanding of AD and CS response processing and potential sources of measurement error. We randomized 64 participants to receive an AD or CS version of a scale measuring trust in medical researchers. We examine several indicators of data quality and cognitive response processing, including reliability, concurrent validity, recency, response latencies, and indicators of response processing difficulties (e.g., uncodable answers). Overall, results indicate that reliability is higher for the AD scale, neither scale is more valid, and the CS scale is more susceptible to recency effects for certain questions. Results for response latencies and behavioral indicators provide evidence that the CS questions promote deeper processing. Qualitative analysis reveals five sources of difficulty with response processing that shed light on under-examined reasons why AD and CS questions can produce different results, with CS questions not always yielding higher measures of data quality than AD questions.
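
    The reliability comparison reported above can be illustrated with a standard Cronbach’s alpha calculation applied separately to the AD and CS versions of the scale. The Python sketch below uses simulated item responses whose noise levels are chosen only so that the AD version comes out more reliable, mirroring the direction of the reported finding; it is not the study’s data.

        # Illustrative sketch: comparing scale reliability (Cronbach's alpha) for
        # agree-disagree (AD) vs. construct-specific (CS) versions of a trust scale.
        # The item matrices below are simulated assumptions, not the study data.
        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """items: respondents x items matrix of numeric responses."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars / total_var)

        rng = np.random.default_rng(4)
        n, k = 64, 6
        latent = rng.normal(0, 1, (n, 1))
        ad_items = latent + rng.normal(0, 0.8, (n, k))   # less noisy AD-style items (assumed)
        cs_items = latent + rng.normal(0, 1.0, (n, k))   # CS-style items (assumed)

        print("alpha (AD):", round(cronbach_alpha(ad_items), 2))
        print("alpha (CS):", round(cronbach_alpha(cs_items), 2))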

    General Interviewing Techniques: Developing Evidence-Based Practices

    This poster is a hands-on demonstration of the in-progress General Interviewing Techniques (GIT) materials described by Schaeffer, Dykema, Coombs, Schultz, Holland, and Hudson. Participants will be able to view and listen to the lesson materials, delivered via an online interface, and talk with the GIT developers.

    Towards a Reconsideration of the Use of Agree-Disagree Questions in Measuring Subjective Evaluations

    Agree-disagree (AD) or Likert questions (e.g., “I am extremely satisfied: strongly agree … strongly disagree”) are among the most frequently used response formats for measuring attitudes and opinions in the social and medical sciences. This review and research synthesis focuses on the measurement properties and potential limitations of AD questions. The research leads us to advocate for an alternative questioning strategy in which items are written to ask directly about their underlying response dimensions, using response categories tailored to match the response dimension, a format we refer to as item-specific (IS) (e.g., “How satisfied are you: not at all … extremely”). In this review we: 1) synthesize past research comparing data quality for AD and IS questions; 2) present conceptual models of, and review research supporting, respondents’ cognitive processing of AD and IS questions; and 3) provide an overview of question characteristics that frequently differ between AD and IS questions and may affect respondents’ cognitive processing and data quality. Although experimental studies directly comparing AD and IS questions yield some mixed results, more studies find that IS questions are associated with desirable data quality outcomes (e.g., validity and reliability) and that AD questions are associated with undesirable outcomes (e.g., acquiescence and response effects). Based on available research, models of cognitive processing, and a review of question characteristics, we recommend IS questions over AD questions for most purposes. For researchers considering the use of previously administered AD questions and instruments, we discuss the challenges of translating questions from AD to IS response formats.

    The Impact of Parenthetical Phrases on Interviewers’ and Respondents’ Processing of Survey Questions

    Many surveys contain sets of questions (e.g., batteries) in which the same phrase, such as a reference period or a set of response categories, applies across the set. When formatting questions for interviewer administration, question writers often enclose these repeated phrases in parentheses to signal that interviewers have the option of reading the phrase. Little research, however, examines what impact this practice has on data quality. We explore whether the presence and use of parenthetical statements is associated with indicators of processing problems for both interviewers and respondents, including the interviewer’s ability to read the question exactly as worded and the respondent’s ability to answer the question without displaying problems (e.g., expressing uncertainty). Data are from questions about physical and mental health in 355 digitally recorded, transcribed, and interaction-coded telephone interviews. We implement a mixed-effects model with crossed random effects and nested and crossed fixed effects; the models also control for some respondent and interviewer characteristics. Findings indicate respondents are less likely to exhibit a problem when parentheticals are read, but reading the parentheticals increases the odds (marginally significant) that interviewers will make a reading error.
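
    Crossed random effects of the kind described above (interviewers and respondents both contributing variance to item-level outcomes) can be approximated in Python’s statsmodels by declaring each factor as a variance component within a single dummy group. The sketch below is a linear-probability illustration on simulated data rather than the authors’ exact (logistic) specification; all variable names and effect sizes are assumptions.

        # Illustrative sketch of a mixed-effects model with crossed random effects for
        # interviewers and respondents, approximated via variance components.
        # Simulated data; variable names and effects are assumptions only.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n_int, n_resp, n_q = 15, 60, 10
        u_int = rng.normal(0, 0.05, n_int)       # interviewer effects
        u_resp = rng.normal(0, 0.10, n_resp)     # respondent effects

        rows = []
        for r in range(n_resp):
            interviewer = rng.integers(0, n_int)
            for q in range(n_q):
                paren_read = rng.integers(0, 2)  # parenthetical read aloud (1) or skipped (0)
                p = 0.25 - 0.05 * paren_read + u_int[interviewer] + u_resp[r]
                rows.append({"interviewer": interviewer, "respondent": r,
                             "paren_read": paren_read,
                             "resp_problem": rng.binomial(1, np.clip(p, 0.01, 0.99))})
        df = pd.DataFrame(rows)
        df["grp"] = 1  # single dummy group so both factors enter as crossed variance components

        vc = {"interviewer": "0 + C(interviewer)", "respondent": "0 + C(respondent)"}
        fit = smf.mixedlm("resp_problem ~ paren_read", data=df,
                          groups="grp", vc_formula=vc).fit()
        print(fit.summary())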