
    Mixed-Device and Mobile Web Surveys (Version 1.0)

    For many years, web surveys have been the most frequently used survey mode in Germany and elsewhere (ADM, 2018; ESOMAR, 2018). Moreover, respondents increasingly use mobile devices, especially smartphones (less often tablets), to access the Internet and participate in surveys. In light of these developments in the Internet usage landscape, this contribution expands an earlier Survey Guideline on web surveys (Bandilla, 2015) by addressing the methodological advantages and disadvantages of mixed-device and mobile web surveys. Moreover, it provides best-practice advice on the implementation of such surveys in the areas of sampling, questionnaire design, paradata collection, and software solutions.

    The Issue of Noncompliance in Attention Check Questions: False Positives in Instructed Response Items

    Attention checks detect inattentiveness by instructing respondents to perform a specific task. However, while respondents may correctly process the task, they may choose not to comply with the instructions. We investigated the issue of noncompliance in attention checks in two web surveys. In Study 1, we measured respondents’ attitudes toward attention checks and their self-reported compliance. In Study 2, we experimentally varied the reasons given to respondents for conducting the attention check. Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed, 61% appear to have failed the task deliberately. These findings reinforce that noncompliance is a serious concern with attention check instruments. The results of our experiment showed that more respondents passed the attention check if a comprehensible reason was given.
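    The scoring of an instructed response item can be sketched as follows (a minimal illustration with hypothetical item wording and option labels, not the study's actual instrument):

    ```python
    # Hypothetical instructed response item (IRI): respondents are told to
    # select one specific answer option; any other choice counts as a failure.
    INSTRUCTED_OPTION = "somewhat agree"  # the option named in the instruction

    def score_attention_check(response: str) -> bool:
        """Return True if the respondent selected the instructed option."""
        return response.strip().lower() == INSTRUCTED_OPTION

    responses = ["somewhat agree", "strongly agree", "Somewhat Agree"]
    passed = [score_attention_check(r) for r in responses]
    # Note: a failed check alone cannot distinguish inattention from
    # deliberate noncompliance; the study probes this with self-reports.
    ```

    Note that such a pass/fail score conflates the two failure mechanisms the abstract distinguishes, which is exactly why self-reported compliance is measured separately.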

    What Can We Learn From Open Questions in Surveys? A Case Study on Non-Voting Reported in the 2013 German Longitudinal Election Study

    Open survey questions are often used to evaluate closed questions. However, they can fulfil this function only if there is a strong link between answers to open questions and answers to related closed questions. Using reasons for non-voting reported in the German Longitudinal Election Study 2013, we investigated this link by examining whether the reported reasons for non-voting may be substantive reasons or ex-post legitimations. We tested five theoretically derived hypotheses about respondents who gave, or did not give, a specific reason. Results showed that (a) answers to open questions were indeed related to answers to closed questions and could be used in explanatory turnout models to predict voting behavior, and (b) the relationship between answers to open and closed questions and the predictive power of reasons given in response to the open questions were stronger in the post-election survey (reported behavior) than in the pre-election survey (intended behavior)

    The Role of Public Opinion Research in the Democratic Process: Insights from Politicians, Journalists, and the General Public

    This study reveals the existence of a paradox in how the public views polling within the democratic process. Specifically, even though the public believes that it can influence policymaking, it considers public opinion polls less useful than other, less representative forms of public input, such as comments at town hall meetings. Analyzing data from multiple surveys conducted in the United States of America, we find no evidence for the democratic representation hypothesis with respect to polling. Comparisons across stakeholders (public, journalists, and politicians) demonstrate that general perceptions of inputs into the democratic process are similar, which confirms the citizen-elite congruence hypothesis. However, unlike members of the public, experts are more likely to believe that public opinion polls are the optimal method by which the public can successfully inform policymaking, a finding consistent with the legitimization hypothesis. With respect to perceptions of politicians, we found substantial differences by party registration, with Democrats and Independents favoring public opinion polling and Republicans preferring alternative methods (e.g., town hall meetings) of informing policymakers.

    Motivated Misreporting in Smartphone Surveys

    Filter questions are used to administer follow-up questions to eligible respondents while allowing respondents who are not eligible to skip those questions. Filter questions can be asked in either the interleafed or the grouped format. In the interleafed format, the follow-ups are asked immediately after each filter question; in the grouped format, follow-ups are asked after the block of filter questions. Underreporting can occur in the interleafed format because respondents want to reduce the burden of the survey; this phenomenon is called motivated misreporting. Because smartphone surveys are more burdensome than web surveys completed on a computer or laptop, due to the smaller screen size, longer page loading times, and more distraction, we expected motivated misreporting to be more pronounced on smartphones. Furthermore, we expected that misreporting occurs not only in the filter questions themselves but also extends to data quality in the follow-up questions. We randomly assigned 3,517 respondents from a German online access panel to complete the survey on either a PC or a smartphone. Our results show that both PC and smartphone respondents trigger fewer filter questions in the interleafed format than in the grouped format, but we found no differences between PC and smartphone respondents in the number of triggered filter questions. However, smartphone respondents provided lower data quality in the follow-up questions, especially in the grouped format. We conclude with recommendations for web survey designers who intend to incorporate smartphone respondents in their surveys.
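    The two question formats described above can be sketched as orderings of survey items (illustrative item labels only; for simplicity, the sketch assumes every filter is triggered, whereas in a real instrument follow-ups appear only for affirmative filter answers):

    ```python
    def interleafed(filters, followups):
        """Each filter question is immediately followed by its follow-ups."""
        order = []
        for f in filters:
            order.append(f)
            order.extend(followups[f])
        return order

    def grouped(filters, followups):
        """All filter questions come first; follow-ups are asked afterwards."""
        order = list(filters)
        for f in filters:
            order.extend(followups[f])
        return order

    filters = ["F1", "F2"]
    followups = {"F1": ["F1a", "F1b"], "F2": ["F2a"]}
    # interleafed: F1, F1a, F1b, F2, F2a
    # grouped:     F1, F2, F1a, F1b, F2a
    ```

    The design difference is visible in the orderings: in the interleafed format a respondent learns that answering "yes" to a filter immediately costs extra questions, which creates the incentive to underreport.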

    A General Interviewer Training Curriculum for Computer-Assisted Personal Interviews (GIT-CAPI) (Version 1.0)

    Interviewer training is essential to ensure high-quality data in interviewer-administered surveys. Broadly, interviewer training can be divided into general interviewer training, which provides interviewers with fundamental knowledge about their role in the data collection process as well as succinct practical advice, and project-specific interviewer training, which provides additional project-specific qualifications. This survey guideline consists of two parts: (I) an introductory and explanatory text and (II) the General Interviewer Training for Computer-Assisted Personal Interviews (GIT-CAPI) Curriculum. The GIT-CAPI offers guidance on how to design, structure, and implement general interviewer training for Computer-Assisted Personal Interviews (CAPI). It includes seven training modules addressing the following topics: (1) a procedural view on surveys, (2) a quality perspective on surveys, (3) gaining respondents’ cooperation, (4) survey administration and survey instruments, (5) interviewing techniques and fieldwork, (6) professional standards and ethics, data protection and privacy, and (7) a technical tutorial. The GIT-CAPI is written primarily for survey research institutes and large survey projects, but it is also aimed at individual researchers and university research projects, providing them with information on relevant basic interviewer qualifications and allowing them to incorporate modules of the GIT-CAPI into their own interviewer training programs. The GIT-CAPI will be revised regularly.

    A guideline on how to recruit respondents for online surveys using Facebook and Instagram: Using hard-to-reach health workers as an example (Version 1.0)

    Social Networking Sites (SNS) offer survey scientists a relatively new tool to recruit participants, especially among otherwise hard-to-reach populations. Facebook and Instagram, in particular, allow the distribution of advertisements to specific subsets of their users at low cost. Researchers can use such targeted advertisements to guide participants to their online questionnaires. In recent years, an increasing number of studies have shown that this approach can be successfully applied to a range of different target groups. However, a certain familiarity with the tools and mechanisms provided by Meta is necessary to employ this sampling method. Therefore, in this guideline, we will first give a general introduction to sampling via advertisements on Facebook and Instagram before providing detailed instructions on the implementation of such a recruitment campaign. This will be followed by a brief summary of a recent study conducted by GESIS using Meta's platforms to recruit professionals in the German health care sector. Finally, we provide recommendations with respect to the reporting of methodological parameters when using this approach, propose a flowchart to visualize sample sizes at different points during the recruitment process, and offer a glossary containing definitions of essential terms researchers are confronted with when using Meta's advertisement interface.

    River Sampling - a Fishing Expedition: A Non-Probability Case Study

    The ease with which large amounts of data can be collected via the Internet has led to a renewed interest in the use of non-probability samples. To that end, this paper performs a case study, comparing two non-probability datasets - one based on a river-sampling approach, one drawn from an online-access panel - to a reference probability sample. Of particular interest is the single-question river-sampling approach, as the data collected for this study presents an attempt to field a multi-item scale with such a sampling method. Each dataset consists of the same psychometric measures for two of the Big-5 personality traits, which are expected to perform independently of sample composition. To assess the similarity of the three datasets we compare their correlation matrices, apply linear and non-linear dimension reduction techniques, and analyze the distance between the datasets. Our results show that there are important limitations when implementing a multi-item scale via a single-question river sample. We find that, while the correlation structures of our datasets are similar, the samples are composed of persons with different personality traits.
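    The first of the comparisons described above - computing each sample's item correlation matrix and a distance between matrices - can be sketched as follows (a minimal stdlib-only illustration of the general idea, not the paper's actual analysis code, which also applies dimension reduction):

    ```python
    import math

    def pearson(x, y):
        """Pearson correlation between two equal-length response vectors."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    def corr_matrix(items):
        """Item-by-item correlation matrix for a list of item vectors."""
        return [[pearson(a, b) for b in items] for a in items]

    def frobenius_distance(m1, m2):
        """Frobenius norm of the difference of two correlation matrices."""
        return math.sqrt(sum((a - b) ** 2
                             for r1, r2 in zip(m1, m2)
                             for a, b in zip(r1, r2)))
    ```

    Two samples with the same underlying scale structure should yield correlation matrices with a small Frobenius distance, even if the samples differ in trait levels - which is precisely the distinction the case study draws.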

    Interviewer Training Guidelines of Multinational Survey Programs: A Total Survey Error Perspective

    Typically, interviewer training is implemented in order to minimize interviewer effects and ensure that interviewers are well prepared to administer the survey. Leading professional associations in the survey research landscape recommend the standardized implementation of interviewer training. Some large-scale multinational survey programs have produced their own training guidelines to ensure a comparable level of quality in the implementation of training across participating countries. However, the length, content, and methodology of interviewer training guidelines are very heterogeneous. In this paper, we provide a comparative overview of general and study-specific interviewer training guidelines of three multinational survey programs (ESS, PIAAC, SHARE). Using total survey error (TSE) as a conceptual framework, we map the general and study-specific training guidelines of the three multinational survey programs to components of the TSE to determine how they target the reduction of interviewer effects. Our results reveal that unit nonresponse error is covered by all guidelines; measurement error is covered by most guidelines; and coverage error, sampling error, and processing error are addressed either not at all or sparsely. We conclude that these guidelines could be an excellent starting point for new surveys, both small- and large-scale, to design their interviewer training, and that interviewer training guidelines should be made publicly available in order to provide a high level of transparency, thus enabling survey programs to learn from each other.

    How German health workers’ views on vaccine safety can be swayed by the AstraZeneca controversy

    Several COVID-19 vaccines are now licensed, and the success of a rollout often depends on people’s willingness to accept any of them. Health workers are in a unique position to influence the public. Jan Priebe (German Institute for Global and Area Studies), Henning Silber, Christoph Beuthner, Steffen Pötzschke, Bernd Weiß, and Jessica Daikeler (GESIS – Leibniz Institute for the Social Sciences) show how their recommendations change when they are given different types of information about vaccines