
    Interviewer Effects on Nonresponse

    In face-to-face surveys, interviewers play a crucial role in making contact with and gaining cooperation from sample units. While some analyses investigate the influence of interviewers on nonresponse, they are typically restricted to single-country studies. However, interviewer training, contacting and cooperation strategies, and survey climates may differ across countries. Combining call-record data from the European Social Survey (ESS) with data from a detailed interviewer questionnaire on attitudes and doorstep behavior, we find systematic country differences in nonresponse processes, which can in part be explained by differences in interviewer characteristics, such as contacting strategies and avowed doorstep behavior.

    Measuring Interviewer Characteristics Pertinent to Social Surveys: A Conceptual Framework

    Interviewer effects are found across all types of interviewer-mediated surveys, across disciplines and countries. While studies describing interviewer effects are manifold, identifying characteristics explaining these effects has proven difficult due to a lack of data on the interviewers. This paper proposes a conceptual framework of interviewer characteristics for explaining interviewer effects and its operationalization in an interviewer questionnaire. The framework encompasses four dimensions of interviewer characteristics: interviewer attitudes, interviewers’ own behaviour, interviewers’ experience with measurements, and interviewers’ expectations. Our analyses of the data collected from interviewers working on the fourth wave of SHARE Germany show that the above measures distinguish well between interviewers.
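    A minimal sketch of the framework as a data structure, assuming Python; the four field names come from the abstract, while the dictionary-of-items representation is only a placeholder for the multi-item questionnaire operationalization the paper describes.

        from dataclasses import dataclass, field
        from typing import Dict

        @dataclass
        class InterviewerCharacteristics:
            """Four dimensions of interviewer characteristics (per the framework).

            The items inside each dimension are hypothetical; the paper
            operationalizes every dimension through multiple questionnaire items.
            """
            attitudes: Dict[str, int] = field(default_factory=dict)
            own_behaviour: Dict[str, int] = field(default_factory=dict)
            experience_with_measurements: Dict[str, int] = field(default_factory=dict)
            expectations: Dict[str, int] = field(default_factory=dict)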

    On How Society Deals with the Corona Pandemic: Results of the Mannheimer Corona-Studie


    Modeling group-specific interviewer effects on survey participation using separate coding for random slopes in multilevel models

    Despite its importance for survey participation, the literature is sparse on how face-to-face interviewers differentially affect specific groups of sample units. In this paper, we demonstrate how an alternative parametrization of the random components in multilevel models, so-called separate coding, delivers valuable insights into differential interviewer effects for specific groups of sample members. Using the example of a face-to-face recruitment interview for a probability-based online panel, we detect small interviewer effects on survey participation for non-Internet households, whereas we find sizable interviewer effects for Internet households. Based on the proposed variance decomposition, we derive practical guidance for survey practitioners to address such differential interviewer effects.
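    A minimal sketch of the two parametrizations of the random part, assuming Python/pandas and hypothetical variable names (internet = 1 for Internet households); it only shows how the random-effects design columns differ under contrast versus separate coding, not the authors' full model.

        import pandas as pd

        # Hypothetical recruitment data: one row per approached sample unit.
        df = pd.DataFrame({
            "interviewer_id": [1, 1, 2, 2, 3, 3],
            "internet":       [1, 0, 1, 1, 0, 0],   # 1 = Internet household
            "participated":   [1, 0, 1, 0, 0, 1],
        })

        # Contrast (dummy) coding: random intercept plus a random slope on the
        # Internet dummy. The slope variance tells us how much the *contrast*
        # between the two groups varies across interviewers.
        contrast_design = pd.DataFrame({
            "intercept": 1,
            "internet_vs_non_internet": df["internet"],
        })

        # Separate coding: one indicator per group and no random intercept.
        # Each variance is then a *direct* interviewer-effect estimate for that
        # group (non-Internet vs. Internet households).
        separate_design = pd.DataFrame({
            "non_internet": 1 - df["internet"],
            "internet": df["internet"],
        })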

    Modelling Group-Specific Interviewer Effects on Nonresponse Using Separate Coding for Random Slopes in Multilevel Models

    To enhance response among underrepresented groups and hence to increase response rates and decrease potential nonresponse bias, survey practitioners often use interviewers in population surveys (Heerwegh, 2009). While interviewers tend to increase overall response rates in surveys (see Heerwegh, 2009), research on the determinants of nonresponse has also identified human interviewers as one source of variation in response rates (see, for example, Couper & Groves, 1992; Durrant, Groves, Staetsky, & Steele, 2010; Durrant & Steele, 2009; Hox & de Leeuw, 2002; Loosveldt & Beullens, 2014; West & Blom, 2016). In addition, research on interviewer effects indicates that interviewers introduce nonresponse bias if they systematically differ in their success in obtaining responses from specific respondent groups (see West, Kreuter, & Jaenichen, 2013; West & Olson, 2010). Interviewers might therefore be a source of selective nonresponse in surveys. They might also contribute differentially to selective nonresponse, and hence to potential nonresponse bias, when interviewer effects are correlated with characteristics of the approached sample units (for an example see Loosveldt & Beullens, 2014). Multilevel models including dummies in the random part of the model to distinguish between respondent groups are commonly used to investigate whether interviewer effects on nonresponse differ across specific respondent groups (see Loosveldt & Beullens, 2014). When dummy codes, also referred to as contrast coding (Jones, 2013), are included as random components in multilevel models of interviewer effects, the obtained variance estimates indicate to what extent the contrast between respondent groups varies across interviewers. Yet such a parameterization does not directly yield insight into the size of interviewer effects for specific respondent groups. Surveys with large imbalances among respondent groups benefit from investigating how interviewer effect sizes on nonresponse vary, as this reveals whether the interviewer effect size is the same for specific respondent groups. The interviewer effect size for specific respondent groups matters because it predicts the effectiveness of interviewer-related fieldwork strategies (for examples of liking, matching, or prioritizing respondents with interviewers, see Durrant et al., 2010; Peytchev, Riley, Rosen, Murphy, & Lindblad, 2010; Pickery & Loosveldt, 2002, 2004) and thus the effective mitigation of potential nonresponse bias. Consequently, understanding group-specific interviewer effect sizes can aid the efficiency of respondent recruitment, because it explains why some interviewer-related fieldwork strategies have a great impact on some respondent groups’ participation while other strategies have little effect.
    To obtain information on differences in interviewer effect size, we propose an alternative coding strategy, so-called separate coding, in multilevel models with random slopes (for examples see Jones, 2013; Verbeke & Molenberghs, 2000, ch. 12.1). With separate coding, every variable yields a direct estimate of the interviewer effects for a specific respondent group (rather than the contrast with a reference category).
    Investigating nonresponse during the recruitment of a probability-based online panel separately for persons with and without prior internet access (using data from the German Internet Panel; see Blom et al., 2017), we find that the size of the interviewer effect differs between the two respondent groups. While we discover no interviewer effects on nonresponse for persons without internet access (offliners), we find sizable interviewer effects for persons with internet access (onliners). In addition, we identify interviewer characteristics that explain this group-specific nonresponse. Our results demonstrate that implementing interviewer-related fieldwork strategies might help to increase response rates among onliners, as the interviewer effect size for onliners was relatively large compared to that for offliners.
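    A minimal model-fitting sketch under explicit assumptions: Python with statsmodels, hypothetical column names (participation, onliner, interviewer_id), and a linear mixed model used as a rough linear-probability stand-in for the multilevel model of the binary participation outcome; it only illustrates where the group-specific interviewer variances appear under separate coding.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical recruitment data, one row per approached sample unit:
        #   participation  -- 1 if the person participated, else 0
        #   onliner        -- 1 if the household had prior internet access
        #   interviewer_id -- identifier of the assigned interviewer
        df = pd.read_csv("recruitment.csv")
        df["offliner"] = 1 - df["onliner"]

        # Separate coding in the random part: no random intercept, one random
        # slope per respondent group. The diagonal of the estimated random-
        # effects covariance then holds a direct interviewer variance for
        # offliners and for onliners.
        model = smf.mixedlm(
            "participation ~ onliner",          # fixed part: group mean difference
            data=df,
            groups=df["interviewer_id"],
            re_formula="0 + offliner + onliner",
        )
        result = model.fit()
        print(result.cov_re)

    With contrast coding (re_formula="1 + onliner"), the same model would instead report the intercept variance for the reference group and the variance of the group contrast, which is the parametrization the abstract argues is harder to interpret group by group.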

    Response quality in nonprobability and probability-based online panels

    Recent years have seen a growing number of studies investigating the accuracy of nonprobability online panels; however, response quality in nonprobability online panels has not yet received much attention. To fill this gap, we investigate response quality in a comprehensive study of seven nonprobability online panels and three probability-based online panels with identical fieldwork periods and questionnaires in Germany. Three response quality indicators typically associated with survey satisficing are assessed: straight-lining in grid questions, item nonresponse, and midpoint selection in visual design experiments. Our results show that there is significantly more straight-lining in the nonprobability online panels than in the probability-based online panels. However, contrary to our expectations, there is no generalizable difference between nonprobability online panels and probability-based online panels with respect to item nonresponse. Finally, neither respondents in nonprobability online panels nor respondents in probability-based online panels are significantly affected by the visual design of the midpoint of the answer scale.
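    A minimal sketch of the three indicators, assuming Python/pandas, hypothetical item names (q1–q5 for one grid, q_mid for the experimental item), and a scale with midpoint 3; the study's exact operationalizations may differ.

        import pandas as pd

        GRID_ITEMS = ["q1", "q2", "q3", "q4", "q5"]  # items sharing one answer scale

        def straightlining_rate(df: pd.DataFrame, items=GRID_ITEMS) -> float:
            """Share of respondents who give the identical answer to every grid item."""
            answered = df[items].dropna()
            return (answered.nunique(axis=1) == 1).mean()

        def item_nonresponse_rate(df: pd.DataFrame, items=GRID_ITEMS) -> float:
            """Share of item-level answers that are missing."""
            return float(df[items].isna().to_numpy().mean())

        def midpoint_selection_rate(df: pd.DataFrame, item="q_mid", midpoint=3) -> float:
            """Share of respondents who pick the scale midpoint on the experimental item."""
            return (df[item] == midpoint).mean()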

    Data collection quality assurance in cross-national surveys: the example of the ESS

    "The significance of cross-national surveys for the social sciences has increased over the past decades and with it the number of cross-national datasets that researchers have access to. Cross-national surveys are typically large enterprises that demand dedicated efforts to coordinate the process of data collection in the participating countries. While cross-national surveys have addressed many important methodological problems, such as translation and the cultural applicability of concepts, the management of the data collection process has yet had little place in cross-national survey methodology. This paper describes the quality standards for data collection and their monitoring in the European Social Survey (ESS). In the ESS data are collected via face-to-face interviewing. In each country a different survey organisation carries out the data collection. Assuring the quality across the large number of survey organisations is a complex but indispensable task to achieve valid and comparable data." (author's abstract)"International vergleichende Umfragen haben in den vergangenen Jahrzehnten zunehmende Bedeutung in den Sozialwissenschaften erlangt. Diese Umfragen sind für gewöhnlich große Unterfangen, die gezielte Anstrengungen zur Koordinierung der Datenerhebung in den teilnehmenden Ländern erfordern. Probleme des Managements der Datenerhebung bei international vergleichenden Umfragen haben bislang jedoch nur wenig Aufmerksamkeit gefunden, im Unterschied etwa zu anderen methodischen Herausforderungen wie Fragen der Übersetzung oder der interkulturellen Übertragbarkeit von theoretischen Konzepten. Der vorliegende Beitrag beschreibt die Qualitätsstandards für die Datenerhebung und deren Überwachung im European Social Survey (ESS). Im ESS werden Daten in persönlich-mündlichen Interviews erhoben; in jedem Teilnehmerland ist ein anderes Umfrageinstitut mit der Feldarbeit betraut. Um valide und vergleichbare Daten zu erzielen, sind Maßnahmen zur Sicherung der Qualität der Datenerhebung über die große Zahl von Umfrageinstituten hinweg unverzichtbar." (Autorenreferat

    Measurement equivalence in probability and nonprobability online panels

    Get PDF
    Nonprobability online panels are commonly used in the social sciences as a fast and inexpensive way of collecting data, in contrast to more expensive probability-based panels. Given their ubiquitous use in social science research, a great deal of research is being undertaken to assess the properties of nonprobability panels relative to probability ones. Much of this research focuses on selection bias; however, there is considerably less research assessing the comparability (or equivalence) of measurements collected from respondents in nonprobability and probability panels. This article contributes to addressing this research gap by testing whether measurement equivalence holds between multiple probability and nonprobability online panels in Australia and Germany. Using equivalence testing in the Confirmatory Factor Analysis framework, we assessed measurement equivalence in six multi-item scales (three in each country). We found significant measurement differences between probability and nonprobability panels and within them, even after weighting by demographic variables. These results suggest that combining or comparing multi-item scale data from different sources should be done with caution. We conclude with a discussion of the possible causes of these findings, their implications for survey research, and some guidance for data users.
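    A minimal CFA sketch, assuming Python with the third-party semopy package (the article does not specify software) and hypothetical item names; it fits the same one-factor measurement model separately in each panel for an informal comparison of loadings, whereas the article's formal tests impose cross-group equality constraints in a multigroup CFA.

        import pandas as pd
        import semopy

        # Hypothetical three-item scale loading on one latent construct.
        MODEL_DESC = "construct =~ item1 + item2 + item3"

        def fit_cfa(df: pd.DataFrame):
            """Fit a one-factor CFA and return the parameter estimates."""
            model = semopy.Model(MODEL_DESC)
            model.fit(df)
            return model.inspect()

        panels = {
            "probability_panel": pd.read_csv("prob_panel.csv"),
            "nonprobability_panel": pd.read_csv("nonprob_panel.csv"),
        }
        for name, data in panels.items():
            print(name)
            print(fit_cfa(data[["item1", "item2", "item3"]]))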

    Recruiting a Probability-Based Online Panel via Postal Mail: Experimental Evidence

    Once recruited, probability-based online panels have proven to enable high-quality and high-frequency data collection. In ever faster-paced societies and, recently, in times of pandemic lockdowns, such online survey infrastructures are invaluable to social research. In the absence of email sampling frames, one way of recruiting such a panel is via postal mail. However, few studies have examined how best to approach sample members and then transition them from the initial postal mail contact to online panel registration. To fill this gap, we implemented a large-scale experiment in the recruitment of the 2018 sample of the German Internet Panel (GIP), varying panel recruitment designs across four experimental conditions: online-only, concurrent mode, online-first, and paper-first. Our results show that the online-only design delivers higher online panel registration rates than the other recruitment designs. In addition, all experimental conditions led to similarly representative samples on key socio-demographic characteristics.
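    A minimal analysis sketch, assuming Python with pandas/SciPy and hypothetical column names (condition, registered); it compares online panel registration rates across the four experimental conditions with a chi-squared test and is not the authors' actual analysis.

        import pandas as pd
        from scipy.stats import chi2_contingency

        # Hypothetical sample file: one row per invited person, with the assigned
        # condition ("online-only", "concurrent", "online-first", "paper-first")
        # and a 0/1 indicator of online panel registration.
        df = pd.read_csv("gip_recruitment_2018.csv")

        registration_rates = df.groupby("condition")["registered"].mean()
        print(registration_rates)

        # Chi-squared test of whether registration differs across conditions.
        table = pd.crosstab(df["condition"], df["registered"])
        chi2, p_value, dof, _ = chi2_contingency(table)
        print(f"chi2={chi2:.2f}, df={dof}, p={p_value:.4f}")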