    How to Conduct Effective Interviewer Training: A Meta-Analysis

    Interviewer training can improve the performance of interviewers and thus also the quality of survey data. However, the questions of how effective interviewer training is at improving data quality and, more importantly, which determinants drive its success remain unanswered. This research uses meta-analytical methods to evaluate both the improvements in data quality due to interviewer training and the effectiveness of individual training modules with respect to interviewer performance. We consider various aspects of data quality, namely unit nonresponse, item nonresponse, probing behavior, administration, reading, and recording. Based on more than sixty experimental studies, we find that comprehensive interviewer training improves unit and item nonresponse, probing behavior, administration, reading, and recording of items by up to 40%. We also find that using a broad variety of training modules, such as blended learning, exercises and feedback sessions, interviewer monitoring, and supplementary training materials, reinforces the positive effect of interviewer training on data quality.
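
    The abstract does not specify the pooling model, but a common choice for meta-analyses of experimental effects is a random-effects model with the DerSimonian-Laird estimator. The sketch below shows that computation on hypothetical effect sizes; the numbers are placeholders, not the study's data.

        # Random-effects pooling (DerSimonian-Laird), a minimal sketch.
        # Effect sizes and variances below are hypothetical placeholders.
        import numpy as np

        effects = np.array([0.25, 0.40, 0.10, 0.32])    # per-study training effects
        variances = np.array([0.02, 0.05, 0.01, 0.03])  # per-study sampling variances

        # Fixed-effect weights and Cochran's Q heterogeneity statistic
        w = 1.0 / variances
        fixed_mean = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - fixed_mean) ** 2)
        df = len(effects) - 1

        # DerSimonian-Laird estimate of the between-study variance tau^2
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)

        # Random-effects weights, pooled estimate, and its standard error
        w_re = 1.0 / (variances + tau2)
        pooled = np.sum(w_re * effects) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        print(f"pooled effect = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")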

    A Note on How Prior Survey Experience With Self-Administered Panel Surveys Affects Attrition in Different Modes

    Attrition poses an important challenge for panel surveys. In such surveys, respondents' decisions about whether to participate in reinterviews are affected by their participation in prior waves of the panel. However, in self-administered mixed-mode panels, the way respondents experience a survey differs between the mail mode and the web mode. Consequently, this study investigated how respondents' prior experience with the characteristics of a survey, such as its length, difficulty, interestingness, sensitivity, and diversity, affects their informed decision about whether to participate again. We found that the length of a questionnaire seems to be of such importance to respondents that they base their participation on this characteristic regardless of the mode. Our findings also suggest that the difficulty and diversity of questionnaires are readily accessible information that respondents use in the mail mode when deciding whether to participate again, whereas these characteristics have no effect in the web mode. In addition, privacy concerns have an impact in the web mode but not in the mail mode.

    The application of evidence-based methods in survey methodology

    This dissertation is dedicated to the application of evidence-based methods in survey research. Although survey research is a relatively young discipline, knowledge and contradictory findings abound in this field, as in other disciplines. The dissertation therefore first provides an overview of evidence-based research and its necessity in the field of survey methodology. It then demonstrates the application of evidence-based methods in the form of experimental research and meta-analysis, drawing on four research examples that focus on mobile response quality, web response mode comparison, cross-cultural online response behavior, and interviewer training. The dissertation closes with an outlook on the application of evidence-based methods in survey methodology.

    Mixed-Device and Mobile Web Surveys (Version 1.0)

    For many years, web surveys have been the most frequently used survey mode in Germany and elsewhere (ADM, 2018; ESOMAR, 2018). Moreover, respondents increasingly use mobile devices, especially smartphones (or, less often, tablets), to access the Internet and participate in surveys. Because of these developments in the Internet usage landscape, this contribution expands an earlier Survey Guideline on web surveys (Bandilla, 2015) by addressing the methodological advantages and disadvantages of mixed-device as well as mobile web surveys. Moreover, it provides best-practice advice on the implementation of such surveys in the areas of sampling, questionnaire design, paradata collection, and software solutions.

    Web Versus Other Survey Modes: An Updated and Extended Meta-Analysis Comparing Response Rates

    Do web surveys still yield lower response rates than other survey modes? To answer this question, we replicated and extended a meta-analysis from 2008 which found, based on 45 experimental comparisons, that web surveys had a response rate 11 percentage points lower than that of other survey modes. Fundamental changes in internet accessibility and use since the publication of the original meta-analysis would suggest that people's propensity to participate in web surveys has changed considerably in the meantime. However, in our replication and extension study, which comprised 114 experimental comparisons between web and other survey modes, we found almost no change: web surveys still yielded lower response rates than other modes (a difference of 12 percentage points). Furthermore, we found that prenotifications, the sample recruitment strategy, the survey's solicitation mode, the type of target population, the number of contact attempts, and the country in which the survey was conducted moderated the magnitude of the response rate differences. These findings have substantial implications for web survey methodology and operations.
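
    As an illustration of the quantity this meta-analysis aggregates, the sketch below computes the response-rate difference for a single hypothetical web-versus-mail comparison, with a normal-approximation confidence interval. The counts are invented, not drawn from any of the 114 comparisons.

        # One experimental mode comparison, a minimal sketch: difference in
        # response rates with a 95% normal-approximation CI. Counts are invented.
        import math

        web_complete, web_invited = 312, 1500
        mail_complete, mail_invited = 489, 1500

        p_web = web_complete / web_invited
        p_mail = mail_complete / mail_invited
        diff = p_web - p_mail  # negative: web below the other mode

        se = math.sqrt(p_web * (1 - p_web) / web_invited
                       + p_mail * (1 - p_mail) / mail_invited)
        lo, hi = diff - 1.96 * se, diff + 1.96 * se
        print(f"difference = {diff * 100:.1f} pp "
              f"(95% CI {lo * 100:.1f} to {hi * 100:.1f} pp)")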

    Misreporting Among Reluctant Respondents

    Many surveys aim to achieve high response rates to keep bias due to nonresponse low. However, research has shown that the relationship between the nonresponse rate and nonresponse bias is weak. In fact, high response rates may lead to measurement error if respondents with low response propensities provide survey responses of low quality. In this paper, we explore the relationship between response propensity and measurement error, specifically motivated misreporting: the tendency to give inaccurate answers in order to speed through an interview. Using data from four surveys conducted in several countries and modes, we analyze whether motivated misreporting is worse among those respondents who were the least likely to respond to the survey. Contrary to the prediction of our theoretical model, we find only limited evidence that reluctant respondents are more likely to misreport.

    Interviewer-Observed Paradata in Mixed-Mode and Innovative Data Collection

    In this research note, we address the potential of using interviewer-observed paradata, typically collected during face-to-face-only interviews, in mixed-mode and innovative data collection methods that involve an interviewer at some stage (e.g., during the initial contact or during the interview). To this end, we first provide a systematic overview of the types and purposes of the interviewer-observed paradata most commonly collected in face-to-face interviews (contact form data, interviewer observations, and interviewer evaluations), using the methodology of evidence mapping. Based on selected studies, we illustrate the main purposes of interviewer-observed paradata we identified: fieldwork management, propensity modeling, nonresponse bias analysis, substantive analysis, and survey data quality assessment. On this basis, we discuss the possible use of interviewer-observed paradata in mixed-mode and innovative data collection methods. We conclude with thoughts on new types of interviewer-observed paradata and the potential of combining paradata from different survey modes.

    The Relationship Between Response Probabilities and Data Quality in Grid Questions

    Response probabilities are used in adaptive and responsive survey designs to guide data collection efforts, often with the goal of diversifying the sample composition. However, if response probabilities are also correlated with measurement error, this approach could introduce bias into survey data. This study analyzes the relationship between response probabilities and data quality in grid questions. Drawing on data from the probability-based GESIS Panel, we found that low-propensity cases produce item nonresponse and nondifferentiated answers more frequently than high-propensity cases. However, this effect was observed only among long-time respondents, not among those who joined more recently. We caution that using adaptive or responsive techniques may increase measurement error while reducing the risk of nonresponse bias.
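
    As a rough illustration of the setup, the sketch below fits a simple logistic response-propensity model on simulated data and compares a data-quality indicator (the share of skipped grid items) between low- and high-propensity cases. All variables and coefficients are invented for illustration; they are not the GESIS Panel's.

        # Response-propensity modeling and a data-quality comparison, a minimal
        # sketch on simulated data. Predictors and effects are invented.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 2000
        age = rng.integers(18, 80, n)
        prior_waves = rng.integers(0, 20, n)  # panel tenure (waves completed)
        X = np.column_stack([age, prior_waves])

        # Simulated response indicator from earlier waves (1 = responded)
        logit = 0.02 * age + 0.10 * prior_waves - 2.0
        responded = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        # Fit the propensity model and score every panel member
        model = LogisticRegression().fit(X, responded)
        propensity = model.predict_proba(X)[:, 1]

        # Simulated share of skipped grid items, worse for low-propensity cases
        item_nonresponse = np.clip(
            0.15 - 0.10 * propensity + rng.normal(0, 0.03, n), 0, 1)

        low = propensity < np.median(propensity)
        print(f"item nonresponse, low propensity:  {item_nonresponse[low].mean():.3f}")
        print(f"item nonresponse, high propensity: {item_nonresponse[~low].mean():.3f}")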

    Motivated Misreporting in Smartphone Surveys

    Filter questions are used to administer follow-up questions to eligible respondents while allowing respondents who are not eligible to skip those questions. Filter questions can be asked in either the interleafed or the grouped format. In the interleafed format, the follow-ups are asked immediately after the filter question; in the grouped format, follow-ups are asked after the filter question block. Underreporting can occur in the interleafed format due to respondents' desire to reduce the burden of the survey; this phenomenon is called motivated misreporting. Because smartphone surveys are more burdensome than web surveys completed on a computer or laptop (smaller screens, longer page loading times, and more distractions), we expect motivated misreporting to be more pronounced on smartphones. Furthermore, we expect that misreporting occurs not only in the filter questions themselves but also extends to data quality in the follow-up questions. We randomly assigned 3,517 respondents from a German online access panel to complete the survey either on a PC or on a smartphone. Our results show that while both PC and smartphone respondents trigger fewer filter questions in the interleafed format than in the grouped format, we did not find differences between PC and smartphone respondents in the number of triggered filter questions. However, smartphone respondents provide lower data quality in the follow-up questions, especially in the grouped format. We conclude with recommendations for web survey designers who intend to incorporate smartphone respondents in their surveys.
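
    For readers unfamiliar with the two formats, the sketch below contrasts the question orderings they produce. The questions and routing are invented placeholders, not the study's instrument.

        # Interleafed vs. grouped filter formats, a minimal sketch with
        # invented questions. In the interleafed format a "no" visibly skips
        # the follow-ups, which is what invites motivated misreporting.
        filters = ["Do you own a car?", "Did you travel abroad last year?"]
        followups = {
            "Do you own a car?": ["How old is your car?"],
            "Did you travel abroad last year?": ["Which countries did you visit?"],
        }

        def interleafed(answers):
            """Ask each follow-up immediately after its filter question."""
            asked = []
            for f in filters:
                asked.append(f)
                if answers.get(f):  # follow-ups only after a "yes"
                    asked.extend(followups[f])
            return asked

        def grouped(answers):
            """Ask all filters first, then the follow-ups of every 'yes'."""
            asked = list(filters)
            for f in filters:
                if answers.get(f):
                    asked.extend(followups[f])
            return asked

        answers = {"Do you own a car?": True,
                   "Did you travel abroad last year?": False}
        print(interleafed(answers))
        print(grouped(answers))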

    Survey Data Documentation

    Documentation of research results is an essential process within the research lifecycle, which includes the steps of study planning and survey instrument development, data collection and preparation, data analysis, and data archiving. Primary researchers must ensure that the collected data and all accompanying materials are properly documented and archived. This enables the scientific community to understand and reproduce the results of a scientific project. The purpose of this survey guideline is to provide a brief introduction to and overview of data preparation and data documentation, in order to help primary researchers make their data and other study-related materials accessible in the long term. This overview will therefore help researchers comply with the principles of reproducibility as a crucial aspect of good scientific practice. The guideline will be useful for researchers who are planning a study as well as for those who have already collected data and would like to prepare it for archiving.