
    Examining Interviewers’ Ratings of Respondents’ Health: Does Location in the Survey Matter for Interviewers’ Evaluations of Respondents?

    Interviewers’ ratings of survey respondents’ health (IRH) are a promising measure of health to include in surveys as a complement to self-rated health. However, our understanding of the factors contributing to IRH remains incomplete. This is the first study to examine whether and how the point in the interview at which interviewers evaluate respondents’ health matters in a face-to-face survey, using an experiment embedded in the UK Innovation Panel Study. We find that interviewers are more likely to rate the respondent’s health as “excellent” when IRH is rated at the end of the interview rather than at the beginning. Drawing on the continuum model of impression formation, we examine whether associations between IRH and relevant covariates vary depending on placement in the interview. We find that, across several characteristics of interviewers and respondents, only the number of interviews completed by the interviewer varies by assessment location in its effect on IRH. We also find evidence that interviewer variance is lower when IRH is assessed before rather than after the interview. Finally, the location of the IRH assessment does not affect the concurrent or predictive validity of IRH. Overall, the results suggest that in a general population study with some health questions, there may be benefits, in terms of lower interviewer variance, to having interviewers rate respondents’ health at the beginning of the interview (rather than at the end, as in prior research), particularly in the absence of interviewer training that mitigates the impact of within-study experience on IRH assessments.
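
    A minimal sketch of how an interviewer-variance comparison of this kind could be approached, assuming hypothetical data and column names (irh on a 1-5 scale treated as approximately continuous, placement coded "beginning"/"end", and interviewer_id); this is an illustration, not the authors' analysis:

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("innovation_panel_irh.csv")  # hypothetical file name

        for arm in ["beginning", "end"]:
            sub = df[df["placement"] == arm]
            # Random intercept for interviewers; the interviewer variance component
            # captures how strongly ratings cluster within interviewers in each arm.
            fit = smf.mixedlm("irh ~ 1", sub, groups=sub["interviewer_id"]).fit()
            between = float(fit.cov_re.iloc[0, 0])   # interviewer variance
            within = fit.scale                       # residual variance
            icc = between / (between + within)       # interviewer intraclass correlation
            print(f"{arm}: interviewer variance = {between:.3f}, ICC = {icc:.3f}")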

    The Effects of Features of Survey Measurement on Self-Rated Health: Response Option Order and Scale Orientation

    Self-rated health (SRH) is widely used to study health across a range of disciplines. However, relatively little research examines how features of its measurement in surveys influence respondents’ answers and the overall quality of the resulting measurement. Manipulations of response option order and scale orientation are particularly relevant to assess for SRH given the increasing prominence of web-based survey data collection and because these factors are often outside the control of a researcher analyzing data collected by other investigators. We examine how the interplay of two features of SRH influences respondents’ answers in a 2-by-3 factorial experiment that varies (1) the order in which the response options are presented (“excellent” to “poor” or “poor” to “excellent”) and (2) the orientation of the response option scale (vertical, horizontal, or banked). The experiment was conducted online with workers from Amazon Mechanical Turk (N = 2945). We find no main effects of response scale orientation and no interaction between response option order and scale orientation. However, we find main effects of response option order: mean SRH and the proportion in “excellent” or “very good” health are higher (better), and the proportion in “fair” or “poor” health is lower, when the response options are ordered from “excellent” to “poor” rather than from “poor” to “excellent.” We also find heterogeneous treatment effects of response option ordering across respondents’ characteristics associated with ability. Overall, the implications for the validity and cross-survey comparability of SRH are likely considerable for response option ordering and minimal for scale orientation.
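
    As one illustration of how a 2-by-3 factorial design like this could be analyzed (a sketch, not the authors' code), a two-way ANOVA on SRH using hypothetical column names srh, option_order, and orientation:

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        df = pd.read_csv("mturk_srh_experiment.csv")  # hypothetical file name

        # Main effects of option order and scale orientation, plus their interaction.
        model = smf.ols("srh ~ C(option_order) * C(orientation)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))  # F-tests for each factor and the interaction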

    Interviewers’ Ratings of Respondents’ Health: Predictors and Association With Mortality

    Objectives: Recent research indicates that survey interviewers’ ratings of respondents’ health (IRH) may provide supplementary health information about respondents in surveys of older adults. Although IRH is a potentially promising measure of health to include in surveys, our understanding of the factors contributing to IRH remains incomplete. Methods: We use data from the 2011 face-to-face wave of the Wisconsin Longitudinal Study, a longitudinal study of older adults from the Wisconsin high school class of 1957 and their selected siblings. We first examine whether a range of factors predict IRH: respondents’ characteristics that interviewers learn about and observe as respondents answer survey questions, interviewers’ evaluations of some of what they observe, and interviewers’ characteristics. We then examine the role of IRH, respondents’ self-rated health (SRH), and associated factors in predicting mortality over a 3-year follow-up. Results: As in prior studies, we find that IRH is associated with respondents’ characteristics. In addition, this study is the first to document how IRH is associated with both interviewers’ evaluations of respondents and interviewers’ characteristics. Furthermore, we find that the association between IRH and the strong criterion of mortality remains after controlling for respondents’ characteristics and interviewers’ evaluations of respondents. Discussion: We propose that researchers incorporate IRH in surveys of older adults as a cost-effective, easily implemented, and supplementary measure of health.
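
    A minimal sketch of the kind of mortality model described (not the authors' specification), assuming hypothetical columns died_3yr (0/1), irh, srh, and a few respondent covariates:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("wls_2011_followup.csv")  # hypothetical file name

        # Logistic regression of 3-year mortality on IRH, adjusting for SRH and covariates.
        fit = smf.logit("died_3yr ~ irh + srh + age + C(sex) + n_chronic_conditions", data=df).fit()
        print(np.exp(fit.params))  # odds ratios: does IRH retain an association with mortality?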

    Response Times as an Indicator of Data Quality: Associations with Interviewer, Respondent, and Question Characteristics in a Health Survey of Diverse Respondents

    Survey research remains one of the most important ways that researchers learn about key features of populations. Data obtained in the survey interview are a collaborative achievement accomplished through the interplay of the interviewer, respondent, and survey instrument, yet our field is still in the process of comprehensively documenting and examining whether, when, and how characteristics of interviewers, respondents, and questions combine to influence the quality of the data obtained. Researchers tend to treat longer response times as indicators of potential problems because they indicate longer processing or interaction by the respondent, the interviewer (where applicable), or both. Previous work demonstrates that response times are associated with various characteristics of interviewers (where applicable), respondents, and questions across web, telephone, and face-to-face interviews. However, these studies vary in the characteristics considered, limited by what is available in the study at hand. In addition, features of the survey interview situation can have differential impacts on responses from respondents in different racial, ethnic, or other socially defined cultural groups, potentially increasing systematic error and compromising researchers’ ability to make group comparisons. For example, certain question or interviewer characteristics may have differential effects across respondents from different racial or ethnic groups (Johnson, Shavitt, and Holbrook 2011; Warnecke et al. 1997). The purpose of the current study is to add to this body of work by examining how response times are associated with characteristics of interviewers, respondents, and questions, focusing on racially diverse respondents answering questions about trust in medical researchers, participation in medical research, and their health. Data are provided by the 2013-2014 “Voices Heard” survey, a computer-assisted telephone survey designed to measure respondents’ perceptions of barriers and facilitators to participating in medical research. Interviews (n=410) were conducted with a quota sample of respondents nearly equally distributed across the following subgroups: white, black, Latino, and American Indian.
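
    One common way to model cross-classified response-time data of this kind (a sketch under assumptions, not the authors' analysis) is a mixed model on log response times with variance components for interviewers, respondents, and questions; the file and column names below are hypothetical:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("voices_heard_rts.csv")   # hypothetical file: one row per question administration
        df["log_rt"] = np.log(df["rt_seconds"])    # response times are right-skewed, so model the log
        df["one_group"] = 1                        # single group so crossed effects enter as variance components

        vc = {
            "interviewer": "0 + C(interviewer_id)",
            "respondent": "0 + C(respondent_id)",
            "question": "0 + C(question_id)",
        }
        fit = smf.mixedlm("log_rt ~ question_length + C(respondent_race)",
                          df, groups="one_group", vc_formula=vc, re_formula="0").fit()
        print(fit.summary())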

    Chapter 18: Response Times as an Indicator of Data Quality: Associations with Question, Interviewer, and Respondent Characteristics in a Health Survey of Diverse Respondents. Appendix 18

    Appendix 18A: Description of individual question characteristics and hypotheses for their relationship with RTs
    Appendix 18B: Description of established tools for evaluating questions and hypotheses for their relationship with RTs
    Appendix 18C: Sample description (Table 18.C1: Number of completed interviews by respondents’ race/ethnicity and sample)
    Appendix 18D: Additional tables
    Appendix 18E: References

    General Interviewing Techniques: Developing Evidence-Based Practices

    This poster is a hands-on demonstration of the in-progress General Interviewer Techniques (GIT) materials described by Schaeffer, Dykema, Coombs, Schultz, Holland, and Hudson. Participants will be able to view and listen to the lesson materials, delivered via an online interface, and talk to the GIT developers.

    Towards a Reconsideration of the Use of Agree-Disagree Questions in Measuring Subjective Evaluations

    Agree-disagree (AD) or Likert questions (e.g., “I am extremely satisfied: strongly agree … strongly disagree”) are among the most frequently used response formats to measure attitudes and opinions in the social and medical sciences. This review and research synthesis focuses on the measurement properties and potential limitations of AD questions. The research leads us to advocate for an alternative questioning strategy in which items are written to directly ask about their underlying response dimensions using response categories tailored to match the response dimension, which we refer to as item-specific (IS) questions (e.g., “How satisfied are you: not at all … extremely”). In this review we: 1) synthesize past research comparing data quality for AD and IS questions; 2) present conceptual models of, and review research supporting, respondents’ cognitive processing of AD and IS questions; and 3) provide an overview of question characteristics that frequently differ between AD and IS questions and may affect respondents’ cognitive processing and data quality. Although experimental studies directly comparing AD and IS questions yield some mixed results, more studies find that IS questions are associated with desirable data quality outcomes (e.g., validity and reliability) and that AD questions are associated with undesirable outcomes (e.g., acquiescence and response effects). Based on available research, models of cognitive processing, and a review of question characteristics, we recommend IS questions over AD questions for most purposes. For researchers considering the use of previously administered AD questions and instruments, we discuss issues surrounding the challenges of translating questions from the AD to the IS response format.

    The Impact of Parenthetical Phrases on Interviewers’ and Respondents’ Processing of Survey Questions

    Many surveys contain sets of questions (e.g., batteries) in which the same phrase, such as a reference period or a set of response categories, applies across the set. When formatting questions for interviewer administration, question writers often enclose these repeated phrases in parentheses to signal that interviewers have the option of reading the phrase. Little research, however, examines what impact this practice has on data quality. We explore whether the presence and use of parenthetical statements are associated with indicators of processing problems for both interviewers and respondents, including the interviewer’s ability to read the question exactly as worded and the respondent’s ability to answer the question without displaying problems (e.g., expressing uncertainty). Data are from questions about physical and mental health in 355 digitally recorded, transcribed, and interaction-coded telephone interviews. We implement a mixed-effects model with crossed random effects and nested and crossed fixed effects. The models also control for some respondent and interviewer characteristics. Findings indicate that respondents are less likely to exhibit a problem when parentheticals are read, but reading the parentheticals increases the odds (marginally significant) that interviewers will make a reading error.
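
    For readers unfamiliar with this class of model, a minimal sketch (not the authors' specification) of a logistic model with crossed random effects for interviewers, respondents, and questions, using hypothetical file and column names:

        import pandas as pd
        import statsmodels.api as sm

        df = pd.read_csv("parenthetical_interactions.csv")  # hypothetical file name

        vc = {
            "interviewer": "0 + C(interviewer_id)",
            "respondent": "0 + C(respondent_id)",
            "question": "0 + C(question_id)",
        }
        # Bayesian mixed logistic regression with crossed random effects, fit by variational Bayes;
        # the outcome is whether the respondent displayed a problem answering the question.
        model = sm.BinomialBayesMixedGLM.from_formula("problem ~ parens_read", vc, df)
        result = model.fit_vb()
        print(result.summary())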

    General Interviewer Techniques: Developing Evidence-Based Practices for Standardized Interviewing

    The practices of standardized interviewing developed at many research sites over many years. The version of standardization that Fowler and Mangione codified in Standardized Survey Interviewing has provided researchers with a core resource to use in training and supervising standardized interviewers. In recent decades, however, the accumulation of recordings and transcripts of interviews has made it possible to revisit the practices of standardization and to describe both how respondents actually answer survey questions and how interviewers actually respond. To update General Interviewer Training (GIT), we brought observations of interaction during interviews together with research about conversational practices from conversation analysis, psychology, and other sources. Using our analysis of the question-answer sequence, we identified the principal actions covered in training as reading a survey question, recognizing a codable answer, acknowledging a codable answer, and following up on an uncodable answer. Our analysis of each of these actions is informed by our observations of the participants’ behavior (interviewers must be trained how to repair the reading of a question, for example) and by how that behavior is influenced by characteristics of survey questions (follow-up differs for yes-no and selection questions, for instance). We developed a set of criteria to use in evaluating the likely impact of the choices we recommend on, for example, interviewer variance and the motivation of the respondent. Although research is not available for all (or even most) criteria, we attempted to be systematic in assessing the likely costs and benefits of our decisions. We focus on standardized interviewing, which attempts to train interviewers in behaviors that all interviewers can perform in the same way. However, the evidence supplied by studies of interviewer-respondent interaction makes clear that the impact of the question on the respondent’s answer, and the way that respondents answer questions, must be taken into account in any style of interviewing.