
    The Past, Present, and Future of Research on Interviewer Effects

    Interviewer-administered surveys are a primary method of collecting information from populations across the United States and the world. Various types of interviewer-administered surveys exist, including large-scale government surveys that monitor populations (e.g., the Current Population Survey), surveys used by the academic community to understand what people think and do (e.g., the General Social Survey), and surveys designed to gauge public opinion at a particular time point (e.g., the Gallup Daily Tracking Poll). Interviewers participate in these data collection efforts in a multitude of ways, including creating lists of housing units for sampling, persuading sampled units to participate, and administering survey questions (Morton-Williams 1993). In an increasing number of surveys, interviewers are also tasked with collecting blood, saliva, and other biomeasures, and with asking survey respondents for consent to link survey data to administrative records (Sakshaug 2013). Interviewers are also used in mixed-mode surveys to recruit and interview nonrespondents after less expensive modes like mail and web have failed (e.g., the American Community Survey and the Agricultural Resource Management Survey; de Leeuw 2005; Dillman, Smyth and Christian 2014; Olson et al. 2019). In completing these varied tasks, interviewers affect survey costs and coverage, nonresponse, measurement, and processing errors (Schaeffer, Dykema and Maynard 2010; West and Blom 2017).

    Current Knowledge and Considerations Regarding Survey Refusals: Executive Summary of the AAPOR Task Force Report on Survey Refusals

    The landscape of survey research has arguably changed more significantly in the past decade than at any other time in its relatively brief history. In that short time, landline telephone ownership has dropped from some 98 percent of all households to less than 60 percent; cell-phone interviewing went from a novelty to a mainstay; address-based designs quickly became an accepted method of sampling the general population; and surveys via Internet panels became ubiquitous in many sectors of social and market research, even as they continue to raise concerns given their lack of random selection. Amid these widespread changes, it is perhaps not surprising that the substantial increase in refusal rates has received comparatively little attention. As we will detail, it was not uncommon for a study conducted 20 years ago to have encountered one refusal for every one or two completed interviews, while today experiencing three or more refusals for every one completed interview is commonplace. This trend has led to several concerns that motivate this Task Force. As refusal rates have increased, refusal bias (as a component of nonresponse bias) poses an increased threat to the validity of survey results. Of practical concern are the efficacy and cost implications of enhanced efforts to avert initial refusals and to convert refusals that do occur. Finally, though no less significant, are the ethical concerns raised by the possibility that efforts to minimize refusals can be perceived as coercing or harassing potential respondents. Indeed, perhaps the most important goal of this document is to foster greater consideration by the reader of the rights of respondents in survey research.

    Chapter 17: Exploring the Antecedents and Consequences of Interviewer Reading Speed (IRS) at the Question Level. Appendix 17

    Figure A17.A.1: Manipulation of Question Characteristics (Example Questions Shown)
    Figure A17.A.2: Response Latency Validity Options Provided to Interviewers after Each Question where Response Latencies were Measured
    Figure A17.A.3: Interviewer Behavior Codes Used to Identify Question Latency Problems
    Appendix 17.B: Measurement of Response and Question Latencies
    Table A17.B.1: Validity of Response Latency Measurement
    Table A17.B.2: Validity of Question Latency Measurement
    References
    Appendix 17.C: Questions in CAPI Survey for which Response Latencies were Measured

    Antecedents and Consequences of Interviewer Pace: Assessing Interviewer Speaking Pace at the Question Level

    The pace at which interviewers read survey questions may vary considerably across interviewers (e.g., Cannell, Miller, & Oksenberg, 1981) and as a function of interviewer experience (Olson and Peytchev, 2007). The pace at which interviews are conducted can also influence respondents' perceptions of the importance of the interaction (Fowler, 1966). Interviewer training typically instructs interviewers to read questions slowly and clearly, based on the assumption that doing so maximizes data quality (e.g., Fowler and Mangione, 1990). In this research, we examine possible causes and consequences of interviewer pace using data from in-person surveys conducted with respondents from four racial and ethnic groups: non-Hispanic White, non-Hispanic Black, Mexican-American, and Korean-American respondents. All respondents were interviewed by interviewers of the same race. Using hierarchical linear models (HLM), we examine the extent to which question characteristics (e.g., length, sensitivity) influence interviewer pace and the extent to which pace is associated with interviewer behaviors (e.g., not reading the question completely) and respondent behaviors (e.g., giving a response that does not meet the question objective) believed to be associated with lowered survey data quality. We discuss implications of our findings for standardized interviewer training.
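    To make the modeling setup concrete, the following is a minimal sketch (not the authors' code) of the kind of hierarchical linear model described above: question-level observations of interviewer pace nested within interviewers, with question characteristics as fixed effects and a random intercept per interviewer. All variable names (pace, q_length, sensitive, interviewer_id) and the synthetic data are illustrative assumptions, not the study's actual measures.

    # Sketch of a two-level model: questions (level 1) nested within
    # interviewers (level 2), fit with statsmodels' MixedLM.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_interviewers, n_questions = 20, 40

    # Synthetic data: pace varies with question length and sensitivity
    # (fixed effects) plus an interviewer-specific random intercept.
    df = pd.DataFrame({
        "interviewer_id": np.repeat(np.arange(n_interviewers), n_questions),
        "q_length": np.tile(rng.integers(5, 40, n_questions), n_interviewers),
        "sensitive": np.tile(rng.integers(0, 2, n_questions), n_interviewers),
    })
    interviewer_effect = rng.normal(0.0, 0.3, n_interviewers)
    df["pace"] = (2.0
                  + 0.01 * df["q_length"]
                  - 0.15 * df["sensitive"]
                  + interviewer_effect[df["interviewer_id"]]
                  + rng.normal(0.0, 0.2, len(df)))

    # Random-intercept model: does pace depend on question characteristics,
    # and how much variance lies between interviewers?
    model = smf.mixedlm("pace ~ q_length + sensitive", df,
                        groups=df["interviewer_id"])
    print(model.fit().summary())

    In a fuller specification, interviewer and respondent behavior codes could enter as question-level outcomes or predictors, but the nesting structure shown here is the core of the HLM approach the abstract describes.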