33 research outputs found

    Prior Choice for the Variance Parameter in the Multilevel Regression and Poststratification Approach for Highly Selective Data: A Monte Carlo Simulation Study

    Get PDF
    The multilevel regression and poststratification (MrP) approach is commonly used to draw valid inferences from (non-probabilistic) surveys. This Bayesian approach includes varying regression coefficients, for which prior distributions of their variance parameter must be specified. The choice of this distribution is far from trivial, and many contradictory recommendations exist in the literature. The prior choice may be even more challenging when the data result from a highly selective inclusion mechanism, such as that applied by volunteer panels. We conduct a Monte Carlo simulation study to evaluate the effect of different distribution choices on bias in the estimation of a proportion based on a sample that is subject to a highly selective inclusion mechanism.
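
    To illustrate the kind of modeling choice the study evaluates, the following is a minimal sketch in PyMC of a varying-intercept model for a binary outcome, fitted under two priors for the group-level standard deviation. The simulated data, group structure, and the specific priors compared (half-Cauchy vs. half-normal) are assumptions for illustration, not the authors' simulation design.

    ```python
    # Minimal sketch: varying-intercept model for a proportion, fitted under
    # two different priors for the variance (standard deviation) parameter.
    # Data and prior choices are illustrative, not the study's actual design.
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(1)
    n_groups, n_per_group = 8, 50
    group = np.repeat(np.arange(n_groups), n_per_group)
    u_true = rng.normal(0.0, 0.5, n_groups)          # true group effects
    p_true = 1.0 / (1.0 + np.exp(-(-0.3 + u_true[group])))
    y = rng.binomial(1, p_true)

    def fit(prior: str):
        with pm.Model():
            if prior == "half_cauchy":
                sigma = pm.HalfCauchy("sigma", beta=1.0)
            else:
                sigma = pm.HalfNormal("sigma", sigma=1.0)
            alpha = pm.Normal("alpha", 0.0, 2.0)      # overall intercept
            u = pm.Normal("u", 0.0, sigma, shape=n_groups)
            pm.Bernoulli("y", logit_p=alpha + u[group], observed=y)
            return pm.sample(1000, tune=1000, chains=2, progressbar=False)

    idata_hc = fit("half_cauchy")   # compare the resulting posterior
    idata_hn = fit("half_normal")   # estimates of the proportion
    ```

    Repeating such fits over many simulated samples drawn with a selective inclusion mechanism, and comparing the estimated proportions against the truth, is in essence what a Monte Carlo study of this kind does.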

    What about the Less IT Literate? A Comparison of Different Postal Recruitment Strategies to an Online Panel of the General Population

    Get PDF
    Even though the proportion of individuals who are not equipped to participate in online surveys is constantly decreasing, many surveys face an under-representation of individuals who do not feel IT literate enough to participate. Using experimental data from a probability-based online panel, we study which recruitment survey mode strategy performs best in recruiting less IT-literate persons for an online panel. The sampled individuals received postal invitations to complete the recruitment survey in a self-completion mode. We experimentally vary four recruitment survey mode strategies: one online mode strategy, two sequential mixed-mode strategies, and one concurrent mode strategy. We find that the recruitment survey mode strategies have a major effect on the sample composition of the recruitment survey, but the differences between the strategies vanish once respondents are asked to proceed with the panel online.

    Sample Size Calculation For Complex Sampling Designs (Version 1.0)

    Get PDF
    Before conducting a survey, researchers frequently ask themselves how large the resulting sample of respondents needs to be to answer their research questions. In this guideline, we discuss how sample size calculation is affected by the sampling design. We give practical advice on how to conduct sample size calculations for complex samples.
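
    The central adjustment such a guideline addresses can be sketched with the standard design-effect correction: compute the sample size required under simple random sampling, then inflate it by the design effect of the complex design. A minimal sketch follows; the margin of error, cluster size, and intraclass correlation are illustrative values, not recommendations from the guideline.

    ```python
    # Minimal sketch: sample size for estimating a proportion, corrected for
    # a complex (one-stage cluster) design via the Kish design effect.
    # All numeric inputs are illustrative, not the guideline's values.
    import math

    def sample_size_srs(p: float = 0.5, moe: float = 0.03, z: float = 1.96) -> int:
        """Required n under simple random sampling: n = z^2 * p(1-p) / e^2."""
        return math.ceil(z**2 * p * (1 - p) / moe**2)

    def design_effect(cluster_size: float, icc: float) -> float:
        """Design effect for one-stage cluster sampling: deff = 1 + (m - 1) * rho."""
        return 1.0 + (cluster_size - 1.0) * icc

    n_srs = sample_size_srs()                        # 1068
    deff = design_effect(cluster_size=20, icc=0.05)  # 1.95
    n_complex = math.ceil(n_srs * deff)              # 2083
    print(n_srs, deff, n_complex)
    ```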

    Assessing Survey Data Quality Making Use of Administrative Data

    Get PDF

    Recruiting a Probability-Based Online Panel via Postal Mail: Experimental Evidence

    Get PDF
    Once recruited, probability-based online panels have proven to enable high-quality and high-frequency data collection. In ever faster-paced societies and, recently, in times of pandemic lockdowns, such online survey infrastructures are invaluable to social research. In the absence of email sampling frames, one way of recruiting such a panel is via postal mail. However, few studies have examined how best to approach sample members and then transition them from the initial postal mail contact to online panel registration. To fill this gap, we implemented a large-scale experiment in the recruitment of the 2018 sample of the German Internet Panel (GIP), varying panel recruitment designs across four experimental conditions: online-only, concurrent mode, online-first, and paper-first. Our results show that the online-only design delivers higher online panel registration rates than the other recruitment designs. In addition, all experimental conditions led to similarly representative samples on key socio-demographic characteristics.

    How Does Switching a Probability-Based Online Panel to a Smartphone-Optimized Design Affect Response Rates and Smartphone Use?

    Get PDF
    In recent years, an increasing number of online panel participants respond to surveys on smartphones. As a result, survey practitioners are faced with a difficult decision: either they hold the questionnaire design constant over time and thus stay with the original desktop-optimized design, or they switch to a smartphone-optimized format and thus accommodate respondents who prefer participating on their smartphones. Even though this decision is anything but trivial, little research has thus far been conducted on the effect of such an adjustment on panel members’ survey participation and device use. We report on the switch to a smartphone-optimized design in the German Internet Panel (GIP), an ongoing probability-based online panel that started in 2012 with a desktop-optimized design. We investigate whether the introduction of the smartphone-optimized design affected overall response rates and smartphone use in the GIP. Moreover, we examine the effect of different ways of announcing the introduction of the smartphone-optimized design in the invitation email on survey participation via smartphone.

    Towards Risk Modeling for Collaborative AI

    Full text link
    Collaborative AI systems aim at working together with humans in a shared space to achieve a common goal. This setting imposes potentially hazardous circumstances due to contacts that could harm human beings. Thus, building such systems with strong assurances of compliance with requirements, domain-specific standards, and regulations is of the greatest importance. Challenges associated with the achievement of this goal become even more severe when such systems rely on machine learning components rather than on top-down, rule-based AI. In this paper, we introduce a risk modeling approach tailored to Collaborative AI systems. The risk model includes goals, risk events, and domain-specific indicators that potentially expose humans to hazards. The risk model is then leveraged to drive assurance methods that, in turn, feed the risk model with insights extracted from run-time evidence. Our envisioned approach is described by means of a running example in the domain of Industry 4.0, where a robotic arm endowed with a visual perception component, implemented with machine learning, collaborates with a human operator on a production-relevant task.
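
    One way to picture such a risk model is as a small object structure linking goals to risk events and runtime indicators. The sketch below is hypothetical: the class names, the threshold logic, and the distance indicator are assumptions chosen to echo the Industry 4.0 running example, not the paper's formalism.

    ```python
    # Hypothetical sketch of a risk model with goals, risk events, and
    # domain-specific indicators; names and logic are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Indicator:
        """A runtime-observable quantity, e.g. minimum human-robot distance."""
        name: str
        threshold: float
        value: float = 0.0

        def violated(self) -> bool:
            return self.value < self.threshold  # below threshold = unsafe

    @dataclass
    class RiskEvent:
        """A hazardous circumstance, triggered when a linked indicator is violated."""
        name: str
        indicators: list[Indicator] = field(default_factory=list)

        def triggered(self) -> bool:
            return any(i.violated() for i in self.indicators)

    @dataclass
    class Goal:
        """A safety goal that holds while none of its risk events are triggered."""
        description: str
        risk_events: list[RiskEvent] = field(default_factory=list)

        def assured(self) -> bool:
            return not any(e.triggered() for e in self.risk_events)

    # Running-example flavor: a robotic arm sharing a workspace with an operator.
    distance = Indicator("min_human_robot_distance_m", threshold=0.5, value=0.8)
    collision = RiskEvent("potential_collision", [distance])
    goal = Goal("No harmful contact with the human operator", [collision])
    assert goal.assured()  # run-time evidence would update indicator values
    ```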

    Fieldwork Monitoring in Practice: Insights from 17 Large-scale Social Science Surveys in Germany

    Get PDF
    This study provides a synopsis of the current fieldwork monitoring practices of large-scale surveys in Germany. Based on the results of a standardized questionnaire, the study summarizes the fieldwork monitoring indicators used and the fieldwork measures carried out by 17 large-scale social science surveys in Germany. Our descriptive results reveal that a common set of fieldwork indicators and measures exists on which the studied surveys rely. However, they also uncover the need for additional design-specific indicators. Finally, they underline the importance of close cooperation between survey representatives and fieldwork agencies to optimize fieldwork monitoring processes in the German survey context. The article concludes with implications for fieldwork practice.

    Empirical Standards for Software Engineering Research

    Full text link
    Empirical Standards are natural-language models of a scientific community's expectations for a specific kind of study (e.g., a questionnaire survey). The ACM SIGSOFT Paper and Peer Review Quality Initiative generated empirical standards for research methods commonly used in software engineering. These living documents, which should be continuously revised to reflect evolving consensus around research best practices, will improve research quality and make peer review more effective, reliable, transparent, and fair. For the complete standards, supplements, and other resources, see https://github.com/acmsigsoft/EmpiricalStandard