5 research outputs found

    Predicting and Preventing Breakoff in Web Surveys

    Full text link
    With the recent shift from mail to the web as a mode of survey data collection, respondents who break off before completing web surveys have become a more prevalent problem in data collection. Given the already lower response rates of web surveys compared with more traditional modes such as face-to-face interviewing, it is crucial to keep as many diverse respondents in a web survey as possible; doing so helps prevent breakoff bias and thus maintain high data quality and produce more accurate survey estimates. To prevent and reduce web survey breakoffs, Chapter 4 of this dissertation aimed to understand the breakoff process and its associated variables. The typical breakoff respondent tended to be female, non-white, and a student, waited for email reminders before starting the questionnaire, and answered on a mobile device. Respondents who had broken off the questionnaire in previous waves were more likely to quit the questionnaire again very early on. Based on the findings from Chapter 4, the second paper then made predictions about breakoff timing at the page level. In addition to well-known factors associated with breakoff, such as using a mobile device, Chapter 5 examined the relationships of previous response behaviors, such as speeding and item nonresponse, with breakoff timing. This allowed the risk of quitting to be predicted for each respondent at the page level using Cox survival models. Male respondents tended to quit at the beginning of the questionnaire, while female respondents had a higher risk of quitting toward the end. There was no significant difference in breakoff risk between mobile and non-mobile respondents at the beginning of the questionnaire, but this changed quickly with every page completed by mobile respondents. Item nonresponse and extensive scrolling behavior were both positively associated with the risk of breaking off. Short response times and response time changes (speeding up and slowing down) both increased the risk of quitting the questionnaire. Finally, in a real-time experiment implemented for Chapter 6, interventions were conducted with respondents who had a high predicted probability of breaking off from a web survey. For this approach, a prediction model was implemented in the next wave of a panel study; on every page, the model evaluated each respondent's risk of breaking off and compared the estimated risk with an established threshold. If the estimated risk exceeded the threshold, the respondent saw a motivational pop-up message reminding them of their commitment to completing the questionnaire. Females, students, Blacks, and respondents on mobile devices reacted positively when assigned to the treatment group and showed less undesirable response behavior than respondents in the control group. The dissertation concludes with recommendations for practice and suggested directions for future work in this area.
    PHD, Survey Methodology, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/149963/1/fmitter_1.pd
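
    The page-level risk modelling and threshold-triggered intervention described above can be sketched with an off-the-shelf survival package. The snippet below is a minimal illustration only, assuming a pandas DataFrame with hypothetical column names (pages_completed, broke_off, mobile_device, item_nonresponse, speeding) rather than the dissertation's actual variables; it fits a Cox model with lifelines and flags respondents whose estimated risk exceeds a chosen cut-off, mirroring the pop-up logic described for Chapter 6.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Hypothetical respondent-level data: the page at which each respondent
        # stopped (pages_completed), whether they broke off (broke_off), and a
        # few of the covariates discussed in the abstract; column names are
        # assumptions, not the study's.
        df = pd.read_csv("panel_wave.csv")
        covariates = ["mobile_device", "item_nonresponse", "speeding"]

        # Fit a Cox proportional hazards model for time (in pages) to breakoff.
        cph = CoxPHFitter()
        cph.fit(df[["pages_completed", "broke_off"] + covariates],
                duration_col="pages_completed", event_col="broke_off")

        # Relative breakoff risk per respondent; larger values mean higher risk.
        risk = cph.predict_partial_hazard(df[covariates])

        # Threshold rule mirroring the real-time intervention: respondents whose
        # estimated risk exceeds the cut-off would see a motivational prompt.
        # The 80th percentile is an illustrative cut-off, not the study's.
        df["show_motivational_prompt"] = risk > risk.quantile(0.80)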

    Can conversational interviewing improve survey response quality without increasing interviewer effects?

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/141370/1/rssa12255_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/141370/2/rssa12255.pd

    Nonresponse and measurement error variance among interviewers in standardized and conversational interviewing

    Full text link
    Recent methodological studies have attempted to decompose the interviewer variance introduced in interviewer-administered surveys into its potential sources, using the Total Survey Error framework. These studies have informed the literature on interviewer effects by acknowledging interviewers’ dual roles as recruiters and data collectors, thus examining the relative contributions of nonresponse error variance and measurement error variance among interviewers to total interviewer variance. However, this breakdown may depend on the interviewing technique: some techniques emphasize behaviors designed to reduce variation in the answers collected by interviewers more than others do. The question of whether the contributions of these error sources to total interviewer variance change for different interviewing techniques remains unanswered. Addressing this gap has important implications for interviewing practice, because the technique used could alter the relative contributions of these error sources to total interviewer variance. This article presents results from an experimental study mounted in Germany that was designed to answer this question for two specific interviewing techniques. A national sample of employed individuals was first selected from a database of official administrative records, then randomly assigned to interviewers who were themselves randomized to conduct either conversational interviewing (CI) or standardized interviewing (SI), and finally measured face-to-face on a variety of cognitively challenging survey questions for which official values were also available to verify the accuracy of responses. We find that although nonresponse error variance does exist among interviewers for selected measures (especially respondent age in the CI group), measurement error variance tends to be the more important source of total interviewer variance, regardless of whether interviewers use CI or SI.
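
    Schematically, and using illustrative notation not taken from the article itself, the decomposition being studied can be written as

        \sigma^2_{\text{int}} \;\approx\; \sigma^2_{\text{NR}} \;+\; \sigma^2_{\text{ME}} \;+\; 2\,\operatorname{Cov}\!\big(b_i^{\text{NR}},\, b_i^{\text{ME}}\big),

    where \sigma^2_{\text{int}} is the total interviewer variance in a respondent mean, b_i^{\text{NR}} and b_i^{\text{ME}} are interviewer i's nonresponse-error and measurement-error deviations, and the covariance term allows for the possibility that interviewers who recruit atypical respondents also measure them atypically.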

    Can conversational interviewing improve survey response quality without increasing interviewer effects?

    Full text link
    Several studies have shown that conversational interviewing (CI) reduces response bias for complex survey questions relative to standardized interviewing. However, no studies have addressed concerns about whether CI increases intra-interviewer correlations (IICs) in the responses collected, which could negatively affect the overall quality of survey estimates. The paper reports the results of an experimental investigation addressing this question in a national face-to-face survey. We find that CI improves response quality, as in previous studies, without substantially or frequently increasing IICs. Furthermore, any slight increases in the IICs do not offset the reduced bias in survey estimates engendered by CI.
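
    The IICs discussed above are, in effect, intraclass correlations with interviewers as the grouping factor. The sketch below is illustrative only, assuming hypothetical column names (y for a single continuous survey item, interviewer_id for the interviewer) rather than the study's actual data; it estimates the IIC with a random-intercept model in statsmodels.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical respondent-level data with one continuous survey item (y)
        # and an interviewer identifier; column names are assumptions.
        df = pd.read_csv("interview_data.csv")

        # Random-intercept model: responses vary around an overall mean, with a
        # random shift for each interviewer.
        result = smf.mixedlm("y ~ 1", data=df, groups=df["interviewer_id"]).fit()

        between_var = result.cov_re.iloc[0, 0]  # variance between interviewers
        within_var = result.scale               # residual variance within interviewers

        # Intra-interviewer correlation: the share of total variance
        # attributable to interviewers.
        iic = between_var / (between_var + within_var)
        print(f"Estimated IIC: {iic:.3f}")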

    Interviewer-respondent interactions in conversational and standardized interviewing

    Full text link
    Standardized interviewing (SI) and conversational interviewing are two approaches to collecting survey data that differ in how interviewers address respondent confusion. This article examines interviewer–respondent interactions that occur during these two techniques, focusing on requests for and provisions of clarification. The data derive from an experimental study in Germany in which the face-to-face interviews were audio-recorded. A sample of 111 interviews was coded in detail. We find that conversational interviewers do make use of their ability to clarify respondent confusion. Although the technique improved response accuracy in the main study compared with SI, conversational interviewers seem to provide clarifications even when there is no evidence of respondent confusion, which may lengthen administration time and increase data collection costs. Conversational interviewers also employ neutral probes, which are generally associated with standardized interviews, at an unexpectedly high rate. We conclude with suggestions for practice and directions for future research.