
    Mobile Web Surveys: A First Look at Measurement, Nonresponse, and Coverage Errors.

    This dissertation focuses on the use of smartphones for Web surveys. The current state of knowledge about whether respondents are willing and able to accurately record their answers when using such devices is evolving, but far from complete. The primary purpose of my research is therefore to investigate the implications of this new mode for various sources of error using a Total Survey Error (TSE) perspective. Each chapter reports on a different aspect of a mode experiment that I designed to compare the effect of completion device (smartphone vs. computer) on survey errors. The experiment was carried out using the LISS panel (Longitudinal Internet Studies for the Social Sciences), a probability-based Web panel administered by CentERdata at Tilburg University in the Netherlands. The first analysis (Chapter 2) compares response quality in the two modes. When using smartphones, respondents in this study were indeed more mobile and more engaged with other people and tasks than when using computers. Despite this, response quality – conscientious responding and disclosure of sensitive information – was equivalent between the two modes of data collection. The second analysis (Chapter 3) investigates the causes of nonresponse in the mobile Web version of the experiment. I found that several social, psychological, attitudinal, and behavioral measures are associated with nonresponse. These include factors known to influence participation decisions in other survey modes, such as personality traits, civic engagement, and attitudes about surveys, as well as factors that may be specific to this mode, including smartphone use, social media use, and smartphone e-mail use. The third analysis (Chapter 4) estimates multiple sources of error simultaneously in the mobile Web version of the experiment. Errors are estimated as a mode effect against the conventional Web survey, which serves as the benchmark.
I find few overall mode effects and no evidence whatsoever of measurement effects, but a significant impact of non-coverage bias for over one-third of the estimates. Collectively, these findings suggest that non-observation errors (i.e., coverage and nonresponse), not measurement errors, are the largest obstacle to the adoption of mobile Web surveys for population-based inference.

    PhD, Survey Methodology, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/116722/1/antoun_1.pd
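The benchmark comparison described above treats a mode effect as the difference between an estimate from the mobile Web sample and the same estimate from the conventional Web survey. A minimal sketch of that comparison for a proportion, with illustrative numbers that are not the dissertation's actual estimates:

```python
from math import sqrt

def mode_effect(p_mobile, n_mobile, p_bench, n_bench):
    """Difference between a mobile Web estimate and the conventional Web
    benchmark, with a two-sample standard error for that difference."""
    diff = p_mobile - p_bench
    se = sqrt(p_mobile * (1 - p_mobile) / n_mobile
              + p_bench * (1 - p_bench) / n_bench)
    return diff, se

# Illustrative proportions and sample sizes only (assumptions):
effect, se = mode_effect(0.42, 500, 0.38, 500)
significant = abs(effect) > 1.96 * se  # crude two-sided z-test at the 5% level
```

In the dissertation's design the benchmark is the conventional Web arm of the same experiment, so a significant difference flags a mode effect rather than an error in either arm alone.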

    Interacting with Interviewers in Voice and Text Interviews on Smartphones

    As people increasingly adopt SMS text messaging for communicating in their daily lives, texting becomes a potentially important way to interact with survey respondents, who may expect that they can communicate with survey researchers as they communicate with others. Thus far, our evidence from analyses of 642 iPhone interviews suggests that text interviewing can lead to higher quality data (less satisficing, more disclosure) than voice interviews on the same device, whether the questions are asked by an interviewer or an automated system. Respondents also report high satisfaction with text interviews, with many reporting that text is more convenient because they can continue with other activities while responding. But the interaction with an interviewer in a text interview is substantially different from that in a voice interview, with much less of a sense of the interviewer’s social presence as well as quite different time pressure. In principle, this suggests there should be different potential for interviewer effects in text than in voice. In this paper we report analyses of how text interviews differed from voice interviews in our corpus, as well as how interviews with human interviewers differed from interviews with automated interviewing systems in both modes, based on transcripts and coding of multiple features of the interaction. Text interviews took more than twice as long as voice interviews, but the amount of time between turns (text messages) was large, and the total number of turns was two-thirds as many as in voice interviews. As in the voice interviews, text interviews with human interviewers involved a small but significantly greater number of turns than text interviews with automated systems, not only because respondents engaged in small talk with human interviewers but because they requested clarification and help with the survey task more often than with the automated text interviewer.
Respondents were more likely to type out full response options (as opposed to equally acceptable single-character responses) with a human text interviewer. Analyses of the content and format of text interchanges compared to voice interchanges demonstrate both potential improvements in data quality and ease for respondents, but also pitfalls and challenges that a more asynchronous mode brings. The “anytime anywhere” qualities of text interviewing may reduce pressure to answer quickly, allowing respondents to answer more thoughtfully and to consult records even if they are mobile or multitasking. From a Total Survey Error perspective, the more streamlined nature of text interaction, which largely reduces the interview to its essential question-asking and -answering elements, may help reduce the potential for unintended interviewer influence.
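The duration and turn-count comparisons above come from coded, timestamped transcripts. A minimal sketch of how such per-interview statistics could be computed, assuming a hypothetical transcript shape (an ordered list of timestamped turns) rather than the study's actual coding scheme:

```python
from datetime import datetime
from statistics import median

# Hypothetical transcript shape (an assumption, not the study's coding
# scheme): one interview is an ordered list of (timestamp, speaker, text).
def interview_stats(turns):
    """Turn count, total duration, and median gap between adjacent turns."""
    times = [t for t, _speaker, _text in turns]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return {
        "n_turns": len(turns),
        "duration_s": (times[-1] - times[0]).total_seconds(),
        "median_gap_s": median(gaps),
    }

# Invented example interview for illustration:
text_interview = [
    (datetime(2013, 5, 1, 10, 0, 0), "I", "How many hours did you work last week?"),
    (datetime(2013, 5, 1, 10, 1, 0), "R", "about 40"),
    (datetime(2013, 5, 1, 10, 3, 0), "I", "Thanks."),
]
stats = interview_stats(text_interview)
```

Aggregating such per-interview records by mode (text vs. voice, human vs. automated) would yield comparisons of the kind reported above, such as total duration and turns per interview.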

    Chapter 13: Interacting with interviewers in text and voice interviews on smartphones. Appendix 13

    Appendix A: Example human text and voice interchange that includes clarification. Appendix B: Coding Manual. Appendix A13C.1 (Data) attached below.

    Chapter 10: Google Trends as a tool for public opinion research: An illustration of the perceived threats of immigration

    To gather public opinion data on sensitive topics in real-time, researchers are exploring the use of Internet search data such as Google Trends (GT). First, this chapter describes the characteristics and nature of GT data, and then provides a case study that examines the salience of perceived threats related to immigration in Germany based on the share of Google search queries that include language about these threats. Finally, we discuss the advantages and possible challenges of utilizing GT data in social scientific research. We used the national polling results for the German right-wing party Alternative für Deutschland (AfD)—which runs on a largely anti-immigrant platform—as a criterion measure. GT data did not consistently predict polling data in the expected direction in real-time, but it was consistently predictive of future polling trends (35–104 weeks later) at a moderate level (r = .25–.50), although the size of the correlations varied across time periods and groups of keywords. Our mixed results highlight the low reliability of GT data, but also its largely untapped potential as a leading indicator of public opinion, especially on sensitive topics such as the perceived threats of immigration.
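The leading-indicator analysis above correlates a search-share series with polling numbers shifted weeks into the future. A minimal sketch of that lagged-correlation scan, where the series names and the choice of Pearson correlation are illustrative assumptions:

```python
import numpy as np

def lagged_correlation(gt_share, polling, lag_weeks):
    """Pearson correlation between a weekly Google Trends search-share
    series and a polling series observed lag_weeks weeks later."""
    gt_share = np.asarray(gt_share, dtype=float)
    polling = np.asarray(polling, dtype=float)
    if lag_weeks > 0:
        gt_share = gt_share[:-lag_weeks]   # drop the tail with no future match
        polling = polling[lag_weeks:]      # align polling lag_weeks ahead
    return float(np.corrcoef(gt_share, polling)[0, 1])

def best_lag(gt_share, polling, lags=range(35, 105)):
    """Scan the 35-104 week window reported above for the strongest lag."""
    return max(lags, key=lambda k: lagged_correlation(gt_share, polling, k))
```

Running the scan separately by time period and keyword group, as the chapter does, would surface the variation in correlation size that the mixed results describe.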

    European and multi-ancestry genome-wide association meta-analysis of atopic dermatitis highlights importance of systemic immune regulation

    Atopic dermatitis (AD) is a common inflammatory skin condition and prior genome-wide association studies (GWAS) have identified 71 associated loci. In the current study, we conducted the largest AD GWAS to date (discovery N = 1,086,394, replication N = 3,604,027), combining previously reported cohorts with additional available data. We identified 81 loci (29 novel) in the European-only analysis (which all replicated in a separate European analysis) and 10 additional loci in the multi-ancestry analysis (3 novel). Eight variants from the multi-ancestry analysis replicated in at least one of the populations tested (European, Latino or African), while two may be specific to individuals of Japanese ancestry. AD loci showed enrichment for DNAse I hypersensitivity and eQTL associations in blood. At each locus we prioritised candidate genes by integrating multi-omic data. The implicated genes are predominantly in immune pathways of relevance to atopic inflammation and some offer drug repurposing opportunities.
