198 research outputs found

    Scientific Network of Experts: Interviewer Effects and Interviewer Training

    Although the collection of survey data is undergoing a notable shift toward online and mixed-mode data collection methods (Baker et al., 2010; Groves, 2011), interviewers are still heavily involved in the majority of survey data collections that serve as a basis for important economic, educational, and public policy decisions. Research supports the notion that interviewer characteristics and task-specific skill levels significantly influence the resulting data quality (see, e.g., Ackermann-Piek, 2018; Billiet & Loosveldt, 1988; Dahlhamer, Cynamon, Gentleman, Piani, & Weiler, 2010; Durand, 2005; Fowler Jr., 1991; Hox & de Leeuw, 2002; Jäckle, Lynn, Sinibaldi, & Tipping, 2013; Sakshaug, Tutz, & Kreuter, 2013; Schnell & Trappmann, 2006; Vannette & Krosnick, 2018; West & Blom, 2017; West, Kreuter, & Jaenichen, 2013). It is thus not surprising that the international survey research community has sought opportunities to facilitate intensive exchanges between survey researchers on topics related to interviewer training procedures, fieldwork processes, and interviewer effects at international workshops and conferences (e.g., the Workshop on Explaining Interviewer Effects in Interviewer-Mediated Surveys, Mannheim, Germany, April 2013; the Interviewer Workshop, Lincoln, Nebraska, USA, February 2019; and the biennial conferences of the European Survey Research Association). However, such events tend to be sporadic, one-off occasions not designed to promote a continuous, ongoing dialogue on interviewer-related issues in the field. Furthermore, while most of the interviewer literature reports findings on interviewer effects and interviewer training, there is a lack of overarching recommendations and standards for the reduction of interviewer effects and the implementation of appropriate interviewer training methods in interviewer-administered surveys.
For example, international recommendations for interviewer training are typically quite broad and do not include the level of detail needed for prescriptive and standardized implementation in the field (Alcser, Clemens, Holland, Guyer, & Hu, 2016; Daikeler, Silber, Bosnjak, Zabal, & Martin, 2017; Fowler Jr. & Mangione, 1990; Lessler, Eyerman, & Wang, 2008). In the following proposal, we describe our vision of setting up a multi-year scientific exchange and cooperation among a core group of international methods experts on topics related to interviewer involvement in the implementation of scientific surveys. We aim to start with a three-year pilot phase to build up an infrastructure for regular knowledge exchange and the sharing of materials. Within this pilot phase, annual or biannual meetings will be organized to discuss best practices, exchange ideas, and identify research gaps on pressing topics pertinent to interviewer-administered surveys. As the major outcome of this scientific network, we plan to produce research-based standards and recommendation reports on various interviewer-related topics.

    Lehre und Forschung: Thematische Schwerpunkte am Volkskundlichen Seminar (1995–2005) [Teaching and Research: Thematic Foci at the Volkskundliches Seminar, 1995–2005]

    Since no abstract of the article is available, the beginning of the article is given here: The Volkskundliches Seminar (Department of Folklore Studies) at the University of Zurich is comparatively young. A corresponding chair was established only in 1946 and filled by Richard Weiss (1907–1962), who had actually trained as a Germanist. In 1968, a chair for European folk literature was added, entrusted to Max Lüthi (1909–1991). In the course of the Bologna reform, the two subjects of folklore studies (Volkskunde) and European folk literature were merged in 2006 into the Institut für Populäre Kulturen, with a new degree program of the same name.

    Interviewer-Observed Paradata in Mixed-Mode and Innovative Data Collection

    In this research note, we address the potential of using interviewer-observed paradata, typically collected during face-to-face-only interviews, in mixed-mode and innovative data collection methods that involve an interviewer at some stage (e.g., during the initial contact or during the interview). To this end, we first provide a systematic overview of the types and purposes of the interviewer-observed paradata most commonly collected in face-to-face interviews—contact form data, interviewer observations, and interviewer evaluations—using the methodology of evidence mapping. Based on selected studies, we illustrate the main purposes of interviewer-observed paradata we identified—including fieldwork management, propensity modeling, nonresponse bias analysis, substantive analysis, and survey data quality assessment. We then discuss the possible use of interviewer-observed paradata in mixed-mode and innovative data collection methods. We conclude with thoughts on new types of interviewer-observed paradata and the potential of combining paradata from different survey modes.
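    The three paradata types described above can be pictured as fields of a single contact-attempt record. The sketch below is purely illustrative; the class and field names are hypothetical and are not taken from any particular survey's contact form.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ContactAttempt:
    """Hypothetical record combining the three paradata types:
    contact form data, interviewer observations, and interviewer
    evaluations (all field names are illustrative assumptions)."""
    # Contact form data: who attempted contact, when, and with what outcome
    case_id: str
    interviewer_id: str
    timestamp: datetime
    mode: str                      # e.g. "f2f", "phone"
    outcome: str                   # e.g. "interview", "refusal", "noncontact"
    # Interviewer observations: features of the sample unit or neighborhood
    dwelling_type: Optional[str] = None
    access_impediments: bool = False
    # Interviewer evaluation: judgments recorded after the attempt
    respondent_cooperative: Optional[int] = None  # 1 (low) .. 5 (high)
    notes: str = ""

attempt = ContactAttempt(
    case_id="A-1041", interviewer_id="I-17",
    timestamp=datetime(2024, 3, 5, 18, 30),
    mode="f2f", outcome="noncontact",
    dwelling_type="apartment", access_impediments=True,
)
```

    Records of this shape support the purposes listed above: contact histories drive fieldwork management and propensity models, while the observation fields feed nonresponse bias analyses.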

    Interviewer effects in PIAAC Germany 2012

    Concerns about interviewer effects in interviewer-mediated surveys have accompanied generations of survey researchers. Following the Total Survey Error (TSE) framework, this dissertation examines interviewer effects in multiple areas of a survey: interviewer effects on estimates of substantive survey variables and interviewer effects on unit nonresponse. The aim is to address whether the same interviewer characteristics are associated with interviewer effects across these multiple error sources. Researchers typically address interviewer effects on a single source of error, partly because sufficient data on multiple error sources are seldom available in a single survey. Using data from PIAAC Germany 2012, this dissertation combines the results of analyses of the individual error sources described in the TSE framework.

    Interviewer Effects in Standardized Surveys (Version 1.0)

    Concerns about interviewer effects in interviewer-mediated surveys have accompanied survey research for a long time. As interviewers are involved in nearly all aspects of the survey implementation process, they can affect almost all types of survey errors, including sampling error, nonresponse error, measurement error, and, to a lesser extent, error resulting from the coding and editing of survey responses. Building on the existing literature, this survey guideline provides an overview of interviewer effects and their estimation. It consists of two parts: first, an introductory text using the total survey error (TSE) paradigm as a theoretical framework to provide a general overview of interviewer effects; second, a brief introduction to calculating interviewer effects using multilevel analyses.
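    The multilevel approach mentioned in the guideline can be sketched with a null (intercept-only) model: the interviewer effect is commonly summarized as the intraclass correlation, i.e., the share of total variance attributable to interviewers. The example below uses simulated data, so the variable names, sample sizes, and variance components are assumptions for illustration, not values from any real survey.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulate 50 interviewers with 20 respondents each, with a true
# interviewer variance of 1 and residual variance of 4, i.e. a
# true intraclass correlation of 1 / (1 + 4) = 0.2.
n_int, n_resp = 50, 20
interviewer = np.repeat(np.arange(n_int), n_resp)
b = rng.normal(0, 1.0, n_int)           # interviewer random intercepts
e = rng.normal(0, 2.0, n_int * n_resp)  # respondent-level residuals
y = 10 + b[interviewer] + e

df = pd.DataFrame({"y": y, "interviewer": interviewer})

# Null multilevel model: y ~ 1 with a random intercept per interviewer
result = smf.mixedlm("y ~ 1", df, groups=df["interviewer"]).fit()

var_interviewer = result.cov_re.iloc[0, 0]  # between-interviewer variance
var_residual = result.scale                 # within-interviewer variance
icc = var_interviewer / (var_interviewer + var_residual)
print(f"rho_int = {icc:.3f}")  # estimate should land near the true 0.2
```

    In practice the same two-step logic applies: fit the intercept-only model on the survey variable of interest, then report the variance share as the interviewer effect.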

    Interviewer Training Guidelines of Multinational Survey Programs: A Total Survey Error Perspective

    Typically, interviewer training is implemented in order to minimize interviewer effects and ensure that interviewers are well prepared to administer the survey. Leading professional associations in the survey research landscape recommend the standardized implementation of interviewer training. Some large-scale multinational survey programs have produced their own training guidelines to ensure a comparable level of quality in the implementation of training across participating countries. However, the length, content, and methodology of interviewer training guidelines are very heterogeneous. In this paper, we provide a comparative overview of general and study-specific interviewer training guidelines of three multinational survey programs (ESS, PIAAC, SHARE). Using total survey error (TSE) as a conceptual framework, we map the general and study-specific training guidelines of the three multinational survey programs to components of the TSE to determine how they target the reduction of interviewer effects. Our results reveal that unit nonresponse error is covered by all guidelines; measurement error is covered by most guidelines; and coverage error, sampling error, and processing error are addressed sparsely or not at all. We conclude, for example, that these guidelines could be an excellent starting point for new surveys, small- as well as large-scale, to design their interviewer training, and that interviewer training guidelines should be made publicly available in order to provide a high level of transparency, thus enabling survey programs to learn from each other.
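    The mapping the paper describes can be restated as a simple lookup from TSE error component to how thoroughly the three sets of guidelines cover it. The coverage labels below paraphrase the results summarized above; they are informal summaries, not the paper's own coding scheme.

```python
# Coverage of TSE error components across the ESS, PIAAC, and SHARE
# interviewer training guidelines, paraphrasing the results above.
tse_coverage = {
    "unit nonresponse error": "covered by all guidelines",
    "measurement error": "covered by most guidelines",
    "coverage error": "sparse or no coverage",
    "sampling error": "sparse or no coverage",
    "processing error": "sparse or no coverage",
}

# Components for which at least most guidelines provide training content
well_covered = [c for c, v in tse_coverage.items() if v.startswith("covered")]
print(well_covered)
```

    A table of this shape is what a new survey could start from when deciding which error components its own training must address beyond nonresponse and measurement.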

    Explaining Interviewer Effects on Survey Unit Nonresponse: A Cross-Survey Analysis

    In interviewer-administered surveys, interviewers are involved in nearly all steps of the survey implementation. However, alongside the many positive aspects of interviewers’ involvement, interviewers are – intentionally or unintentionally – a potential source of survey error. In recent decades, a large body of literature has accumulated on measuring and explaining interviewer effects on survey unit nonresponse. Recently, West and Blom (2017) published a research synthesis on factors explaining interviewer effects on various sources of survey error, including survey unit nonresponse. They find that previous research reports great variability across surveys in the significance and even the direction of predictors of interviewer effects on survey unit nonresponse. This variability in findings across surveys may be due to a lack of consistency in key characteristics of the surveys examined, such as the group of interviewers employed, the survey organizations managing the interviewers, the sampling frame used, and the populations and time periods observed. In addition, the explanatory variables available to researchers who examine interviewer effects on survey nonresponse differ greatly across surveys and may thus influence the results. This diversity in findings, survey characteristics, and available explanatory variables calls for a more orchestrated effort to explain interviewer effects on survey unit nonresponse. Our paper fills this gap, as our analyses are based on four German surveys with a high level of consistency across them: GIP 2012, PIAAC, SHARE, and GIP 2014. The four surveys were conducted face-to-face in approximately the same time period in Germany. They were administered by the same survey organization with the same pool of interviewers. In addition, we were able to use the same area control variables and identical explanatory variables at the interviewer level.
Despite the numerous similarities across the four surveys, our results show high variability in the interviewer characteristics that explain interviewer effects on survey unit nonresponse. In addition, we find that the interviewers employed in the four surveys are rather similar with regard to most of their socio-demographic characteristics, work experience, and working hours. Furthermore, the interviewers are similar with regard to their behavior and their reporting on deviations from standardized interviewing techniques, how they achieve response, and their reasons for working as an interviewer. The results therefore suggest that other differences between the four surveys – such as topic, sponsor, research team, or interviewer training – might explain the identified interviewer effects on survey unit nonresponse.
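One intuitive way to see what "interviewer effects on unit nonresponse" mean operationally: if interviewers had no effect, the variance of per-interviewer response rates would be close to what binomial sampling alone produces. The simulation below illustrates the excess variance created by interviewer-level differences in response propensity; all parameters (numbers of interviewers and cases, the propensity distribution) are assumptions for illustration, not values from the four surveys analyzed above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical fieldwork: 50 interviewers, 40 assigned cases each.
# Each interviewer has an individual response propensity on the logit
# scale (sd = 0.5); this between-interviewer spread is the effect of interest.
n_int, n_cases = 50, 40
logit_p = -0.5 + rng.normal(0, 0.5, n_int)
p = 1 / (1 + np.exp(-logit_p))          # per-interviewer propensities

responses = rng.binomial(n_cases, p)    # completed interviews per interviewer
rates = responses / n_cases

p_bar = rates.mean()
observed_var = rates.var(ddof=1)
# Variance expected from binomial sampling alone under a shared propensity
binomial_var = p_bar * (1 - p_bar) / n_cases

print(f"observed variance of response rates:  {observed_var:.4f}")
print(f"expected under no interviewer effect: {binomial_var:.4f}")
# Observed variance exceeding the binomial benchmark signals
# between-interviewer variation in response rates.
```

This moment-based comparison is only a diagnostic; the multilevel models used in the literature additionally adjust for area and case characteristics before attributing the excess variance to interviewers.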

    Allgemeines Interviewertraining für computerbasierte persönliche Befragungen - Erläuterungen zum Foliensatz [General Interviewer Training for Computer-Assisted Personal Interviews - Explanatory Notes on the Slide Set]

    This document provides explanatory notes accompanying the slide set "Allgemeines Interviewertraining für computerbasierte persönliche Befragungen" (General Interviewer Training for Computer-Assisted Personal Interviews; see https://doi.org/10.15465/gesis-sg_de_034).