
    Going Online with a Face-to-Face Household Panel: Effects of a Mixed Mode Design on Item and Unit Non-Response

    There are considerable cost and timeliness advantages associated with web interviewing, compared to interviewer administration. However, web surveys do not perform well in terms of coverage and participation. To harness the strengths of both modes, existing probability-based interviewer-administered surveys are therefore being pushed to consider a mixed-mode approach, including web. We assess the effect of introducing web interviewing as part of a mixed-mode design in the context of an existing longitudinal survey in which sample members have previously been interviewed face-to-face. Using experimental data from a household panel survey in the UK, we find that the mixed-mode design resulted in a lower proportion of households fully responding. However, more than one in five households fully responded online. Overall, individual response rates were also lower with the mixed-mode design, and we were unable to identify any subgroups where the reverse was true. Item nonresponse rates were also higher with the mixed-mode design.

    Willingness to use mobile technologies for data collection in a probability household panel

    We asked members of the Understanding Society Innovation Panel about their willingness to participate in various data collection tasks on their mobile devices. We find that stated willingness varies considerably depending on the type of activity involved: respondents are less willing to participate in tasks that involve downloading and installing an app, or where data are collected passively. Stated willingness also varies between smartphones and tablets, and between types of respondents: respondents who report higher concerns about the security of data collected with mobile technologies and those who use their devices less intensively are less willing to participate in mobile data collection tasks.

    Is that still the same? Has that Changed? On the Accuracy of Measuring Change with Dependent Interviewing

    Measurement and analysis of change is one of the primary reasons to conduct panel surveys, but studies have shown that estimates of change from panel surveys can be subject to measurement error, most commonly overreporting of change. For this reason, many panel surveys use a technique called proactive dependent interviewing, which reminds respondents of their answer in the previous wave and has been shown to reduce the capture of spurious change. However, so far very little guidance exists in the literature on how such questions should be worded. Here we use data from three experimental studies to examine question wording effects with proactive dependent interviewing. Because we link data from one of the surveys to administrative records, we can examine not only different levels of change by format, but also the accuracy of the change reports. Our results show that how questions about current status are worded affects the reporting of change. The overall results, including comparisons with administrative records, suggest that reminding respondents of their previous answer and then asking “Is that still the case?” produces the most accurate data on change and stability experienced by respondents.
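    The question format described here is, in effect, a routing rule in the survey script: preload the previous wave's answer, remind the respondent of it, and only ask the full independent question if the status has changed. Below is a minimal sketch of that logic in Python; the function names and the employment example are hypothetical and are not taken from the studies' instruments.

    def ask(prompt: str) -> str:
        # Stand-in for the survey engine's prompt; reads a reply from the console.
        return input(prompt + " ").strip().lower()

    def employment_status_with_pdi(previous_answer: str) -> str:
        # Proactive dependent interviewing: remind the respondent of last wave's
        # answer, then confirm it or collect the current status.
        reply = ask(
            f"Last time we spoke, you said your employment situation was "
            f"'{previous_answer}'. Is that still the case? (yes/no)"
        )
        if reply.startswith("y"):
            return previous_answer  # unchanged: carry forward last wave's answer
        return ask("What is your current employment situation?")

    if __name__ == "__main__":
        print("Recorded status:", employment_status_with_pdi("employed full-time"))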

    The effects of personalized feedback on participation and reporting in mobile app data collection

    Offering participants in mobile app studies personalized feedback on the data they report seems an obvious thing to do: participants might expect an app to provide feedback given their experiences with commercial apps, feedback might motivate more people to participate in the study, and participants might be more motivated to provide accurate data so that the feedback is more useful to them. However, personalized feedback might lead participants to change the behaviour that is being measured with the app, and implementing feedback is costly and constrains other design decisions for the data collection. In this paper, we report on an experimental study that tested the effects of providing personalized feedback in a one-month mobile app-based spending study. Based on the app paradata and responses to a debrief survey, it seems that participants reacted positively to the feedback. The feedback did not have the potential negative effect of altering the spending participants reported in the app. However, the feedback also did not have the intended effect of increasing initial participation or ongoing adherence to the study protocol.

    The effects of placement and order on consent to data linkage in a web survey

    We report on an experiment in a supplemental web survey, conducted as part of a longitudinal study in the United Kingdom, in which we ask survey respondents to consent to two forms of data linkage to health records and to consent to be mailed a serology kit. We varied the placement (early, early in context or late in the survey) and order (linkage first or serology first) of the consent requests. We also examine reasons for consent or non-consent. We find that the order of the requests does not make much difference, but making the requests early in the survey significantly increases consent rates over asking them after a series of content-related questions (by 3.4 percentage points) or later in the survey (by 7.2 percentage points). This is consistent with previous research showing that early requests for consent in a survey have a positive effect. The main reason chosen for not consenting related to the personal nature of the information requested.
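    As a rough illustration of the kind of comparison behind these percentage-point differences, the sketch below contrasts consent rates in two placement conditions with a two-proportion z-test. The counts are invented for illustration and do not reproduce the study's data.

    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    consented = np.array([412, 355])  # hypothetical consenters: early vs late placement
    invited = np.array([600, 600])    # hypothetical respondents asked in each condition

    rates = consented / invited
    print(f"consent rates: early={rates[0]:.1%}, late={rates[1]:.1%}, "
          f"difference={100 * (rates[0] - rates[1]):.1f} percentage points")

    stat, pval = proportions_ztest(consented, invited)
    print(f"z = {stat:.2f}, p = {pval:.3f}")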

    Understanding Society: Minimizing selection biases in data collection using mobile apps

    The UK Household Longitudinal Study: Understanding Society has a programme of research and development that underpins innovations in data collection methods. One of our current focuses is on using mobile applications to collect additional data that supplement data collected in annual interviews. To date, we have used mobile apps to collect data on consumer expenditure, well-being, anthropometrics and cognition. In this article we review the potential barriers to data collection using mobile apps, and experimental evidence collected with the Understanding Society Innovation Panel on what can be done to reduce these barriers.

    Participation in a Mobile App Survey to Collect Expenditure Data as Part of a Large-Scale Probability Household Panel: Coverage and Participation Rates and Biases

    This paper examines non-response in a mobile app study designed to collect expenditure data. We invited 2,383 members of the nationally representative Understanding Society Innovation Panel in Great Britain to download an app to record their spending on goods and services: participants were asked to scan receipts or report spending directly in the app every day for a month. We examine coverage of mobile devices and participation in the app study at different stages of the process. We use data from the prior wave of the panel to examine the prevalence of potential barriers to participation, including access, ability and willingness to use different mobile technologies. We also examine bias in who has devices and in who participates, considering socio-demographic characteristics, financial position and financial behaviours. While the participation rate was low, drop-out was also low: over 80% of participants remained in the study for the full month. The main barriers to participation were access to, and frequency of use of, mobile devices, willingness to download an app for a survey, and general cooperativeness with the survey. We found extensive coverage bias in who does and does not have mobile devices, and some bias in who participates conditional on having a device. In the full sample, biases remain in who participates in terms of socio-demographic characteristics and financial behaviours. Crucially, however, we observe no biases for several key correlates of spending.
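    The stage-wise rates reported here (participation among those invited, and retention among those who started) reduce to simple proportions. A minimal sketch is shown below; it uses the 2,383 invitations mentioned in the abstract but otherwise invented counts, and does not reproduce the study's results.

    invited = 2383            # panel members invited to download the app
    downloaded = 400          # hypothetical: installed and registered the app
    reported_once = 350       # hypothetical: submitted at least one receipt or entry
    completed_month = 290     # hypothetical: still reporting at the end of the month

    stages = {
        "downloaded the app": downloaded,
        "reported at least once": reported_once,
        "completed the full month": completed_month,
    }
    for label, n in stages.items():
        print(f"{label}: {n / invited:.1%} of invited")

    print(f"retention among starters: {completed_month / downloaded:.1%}")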

    Increasing Participation in a Mobile App Study: The Effects of a Sequential Mixed-Mode Design and In-Interview Invitation

    Mobile apps are an attractive and versatile method of collecting data in the social and behavioural sciences. In samples of the general population, however, participation in app-based data collection is still rather low. In this paper, we examine two potential ways of increasing participation and potentially reducing participation bias in app-based data collection: 1) inviting sample members to a mobile app study within an interview rather than by post, and 2) offering a browser-based follow-up to the mobile app. We use experimental data from Spending Study 2, collected on the Understanding Society Innovation Panel and on the Lightspeed UK online access panel. Sample members were invited to download a spending diary app on their smartphone or use a browser-based online diary to report all their purchases for one month. The results suggest that inviting sample members to an app study within a face-to-face interview increases participation rates but does not bring in different types of participants. In contrast, the browser-based alternative can both increase participation rates and reduce biases in who participates, if offered immediately once the app has been declined. We find that the success of using mobile apps for data collection hinges on the protocols used to implement the app.

    The Role of the Interviewer in Producing Mode Effects: Results from a Mixed Modes Experiment Comparing Face-to-Face, Telephone and Web Administration

    The presence of an interviewer (face-to-face or via telephone) is hypothesized to motivate respondents to generate accurate answers and to reduce task difficulty, but also to reduce the privacy of the reporting situation. To study this, we used respondents from an existing face-to-face probability sample of the adult general population who were randomly assigned to face-to-face, telephone and web modes of data collection. The prevalence of indicators of satisficing (e.g., non-differentiation, acquiescence, middle category choices and primacy/recency effects) and of socially desirable responding was studied across modes. Results show differences between interviewer-administered modes and web in levels of satisficing (non-differentiation, and to some extent acquiescence and middle category choices) and in socially desirable responding. There was also an unexpected finding about how satisficing can differ by mode.
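    The satisficing indicators named here have straightforward per-respondent operationalisations. The sketch below computes two of them for a hypothetical grid of 1-5 Likert items: non-differentiation as the standard deviation of answers across the grid (0 means straight-lining) and acquiescence as the share of agree-side answers. These are illustrative measures, not necessarily the exact ones used in the paper.

    import numpy as np

    # rows = respondents, columns = items in one attitude grid, answers on a 1-5 scale
    grid = np.array([
        [3, 3, 3, 3, 3, 3],  # straight-liner: no differentiation at all
        [5, 4, 5, 5, 4, 5],  # strong acquiescent tendency
        [2, 4, 1, 5, 3, 2],  # differentiated answers
    ])

    non_differentiation = grid.std(axis=1)   # lower values = more straight-lining
    acquiescence = (grid >= 4).mean(axis=1)  # share of "agree"-side responses

    for i, (nd, acq) in enumerate(zip(non_differentiation, acquiescence)):
        print(f"respondent {i}: sd across items = {nd:.2f}, agree share = {acq:.2f}")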