57 research outputs found

    Can we predict device use? An investigation into mobile device use in surveys

    In this study, we investigate whether mobile device use in surveys can be predicted. We aim to identify possible motives for device use and build a model drawing on theory from technology acceptance research and survey research. We then test this model with a Structural Equation Modeling approach using data from seven waves of the GESIS Panel. We assess whether our theoretical model fits the data by focusing on measures of fit and by studying the standardized effects in the model. Results reveal that intention to use a particular device predicts actual use quite well. Ease of smartphone use is the most meaningful variable: if people use a smartphone for specific tasks, they are also more likely to intend to use a smartphone for survey completion. In conclusion, investing in the ease of mobile survey completion could encourage respondents to use mobile devices. This can primarily be achieved by building well-designed surveys for mobile devices.
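
    A minimal sketch of how such a model could be specified is given below. It assumes a hypothetical respondent-level data set with invented indicator names for ease of smartphone use, intention, and actual device use, and it uses the semopy package purely for illustration; the abstract does not state which software or exact specification the study used.

```python
# Hypothetical SEM sketch: ease of smartphone use -> intention to use a
# smartphone for surveys -> actual smartphone use in a later wave.
# All variable and file names are invented for illustration.
import pandas as pd
from semopy import Model, calc_stats

model_desc = """
ease      =~ ease_browsing + ease_email + ease_apps
intention =~ intend_next_wave + intend_general
intention ~ ease
used_smartphone ~ intention
"""

df = pd.read_csv("panel_waves.csv")   # hypothetical file, one row per respondent

model = Model(model_desc)
model.fit(df)

print(model.inspect())     # parameter estimates for the structural paths
print(calc_stats(model))   # fit measures such as CFI and RMSEA
```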

    Understanding Willingness to Share Smartphone-Sensor Data

    The growing smartphone penetration and the integration of smartphones into people’s everyday practices offer researchers opportunities to augment survey measurement with smartphone-sensor measurement or to replace self-reports. Potential benefits include lower measurement error, a widening of research questions, the collection of in situ data, and lower respondent burden. However, privacy considerations and other concerns may lead to nonparticipation. To date, little is known about the mechanisms of willingness to share sensor data in the general population, and no evidence is available on the stability of that willingness over time. The present study focuses on survey respondents’ willingness to share data collected using smartphone sensors (GPS, camera, and wearables) in a probability-based online panel of the general population of the Netherlands. A randomized experiment varied the study sponsor, the framing of the request, the emphasis on control over the data collection process, and the assurance of privacy and confidentiality. Respondents were asked repeatedly about their willingness to share the data collected using smartphone sensors, with varying periods before the second request. Willingness to participate in sensor-based data collection varied by the type of sensor, study sponsor, order of the request, respondents’ familiarity with the device, previous experience with participating in research involving smartphone sensors, and privacy concerns. Willingness increased when respondents were asked repeatedly and varied by sensor and task. The timing of the repeated request, one month or six months after the initial request, did not have a significant effect on willingness.
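
    One way such experimental factors could be related to willingness is a respondent-level logistic regression, sketched below with invented column names; the abstract does not state the analysis method the study actually used.

```python
# Hypothetical sketch: willingness to share GPS data (0/1) modelled as a
# function of the experimentally varied factors and respondent characteristics.
# Column and file names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("willingness_experiment.csv")

fit = smf.logit(
    "willing_gps ~ C(sponsor) + C(framing) + C(control_emphasis) "
    "+ C(privacy_assurance) + familiarity + prior_sensor_study + privacy_concern",
    data=df,
).fit()

print(fit.summary())   # log-odds coefficients for each experimental factor
```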

    Recruiting Young and Urban Groups into a Probability-Based Online Panel by Promoting Smartphone Use

    A sizable minority of all web surveys is nowadays completed on smartphones. People who choose a smartphone for Internet-related tasks differ from people who mainly use a PC or tablet, and smartphone use is particularly high among the young and urban. We have to make web surveys attractive for smartphone completion in order not to lose these groups of smartphone users. In this paper we study how to encourage people to complete surveys on smartphones in order to attract hard-to-reach subgroups of the population. We experimentally test new features of a survey-friendly design: two versions of an invitation letter to a survey, a new questionnaire layout, and autoforwarding. The goal of the experiment is to evaluate whether the new survey design attracts more smartphone users, leads to a better survey experience on smartphones, and results in more respondents signing up to become a member of a probability-based online panel. Our results show that the invitation letter emphasizing the possibility of smartphone completion does not yield a higher response rate than the control condition, nor do we find differences in the socio-demographic background of respondents. We do find that slightly more respondents choose a smartphone for survey completion. The changes in the layout of the questionnaire do lead to a change in the survey experience on smartphones: smartphone respondents need 20% less time to complete the survey when the questionnaire includes autoforwarding. However, we do not find that respondents evaluate the survey more positively, nor are they more likely to become a member of the panel when asked at the end of the survey. We conclude with a discussion of autoforwarding in web surveys and of methods to attract smartphone users to web surveys.
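
    The comparisons described above could be analysed along the lines of the sketch below, which contrasts response rates between the two invitation letters and completion times with and without autoforwarding; all counts and durations are placeholders, not the study's data.

```python
# Hypothetical sketch of two of the comparisons: response rates by letter
# version (chi-square test) and completion times by autoforwarding condition
# (t-test on log durations). All numbers are invented placeholders.
import numpy as np
from scipy import stats

# responded / did not respond; rows: control letter, smartphone-emphasis letter
table = np.array([[480, 1520],
                  [495, 1505]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"response rates: chi2={chi2:.2f}, p={p:.3f}")

# completion times in seconds (placeholder draws), compared on the log scale
t_auto   = np.random.lognormal(mean=6.0, sigma=0.4, size=500)
t_manual = np.random.lognormal(mean=6.2, sigma=0.4, size=500)
t_stat, p_time = stats.ttest_ind(np.log(t_auto), np.log(t_manual))
print(f"completion times: t={t_stat:.2f}, p={p_time:.3f}")
```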

    Augmenting surveys with data from sensors and apps: Opportunities and challenges

    The increasing volume of “Big Data” produced by sensors and smart devices can transform the social and behavioral sciences. Several successful studies have used digital data to provide new insights into social reality. This special issue argues that the true power of these data for the social sciences lies in connecting new data sources with surveys. While new digital data are rich in volume, they seldom cover the full population, nor do they provide insights into individuals’ feelings, motivations, and attitudes. Conversely, survey data, while well suited for measuring people’s internal states, are relatively poor at measuring behaviors and facts. Developing a methodology for integrating the two data sources can mitigate their respective weaknesses. Sensors and apps on smartphones are useful for collecting both survey data and digital data: for example, smartphones can track people’s travel behavior and ask questions about its motives. A general methodology for augmenting surveys with data from sensors and apps is currently missing. Open issues include representativeness, processing, storage, data linkage, and how to combine survey data with sensor and app data to produce one statistic of interest. This editorial to the special issue on “Using Mobile Apps and Sensors in Surveys” provides an introduction to this new field, presents an overview of challenges and opportunities, and sets a research agenda. We introduce the four papers in this special issue that address these opportunities and challenges and provide practical applications and solutions for integrating sensor- and app-based data collection into surveys.
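
    As an illustration of the linkage step mentioned above, the sketch below joins respondent-level survey answers to smartphone GPS traces and reduces the sensor data to one statistic per respondent; file and column names are invented and do not refer to any specific study.

```python
# Hypothetical sketch: derive a per-respondent travel statistic from GPS traces
# and link it to survey responses by respondent ID.
import pandas as pd

survey = pd.read_csv("survey_responses.csv")                    # one row per respondent
gps = pd.read_csv("gps_traces.csv", parse_dates=["timestamp"])  # many rows per respondent

# aggregate sensor data: mean kilometres travelled per day, per respondent
daily_km = (
    gps.assign(date=gps["timestamp"].dt.date)
       .groupby(["respondent_id", "date"])["distance_km"].sum()
       .groupby("respondent_id").mean()
       .rename("mean_daily_km")
       .reset_index()
)

# link the derived statistic back to the survey data
linked = survey.merge(daily_km, on="respondent_id", how="left")
print(linked[["respondent_id", "travel_motive", "mean_daily_km"]].head())
```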

    The relative size of measurement error and attrition error in a panel survey. Comparing them with a new multi-trait multi-method model

    This paper proposes a method to simultaneously estimate measurement and nonresponse errors for attitudinal and behavioural questions in a longitudinal survey. The method uses a Multi-Trait Multi-Method (MTMM) approach, which is commonly used to estimate the reliability and validity of survey questions. In this paper, the classic MTMM model is extended to include the effects of measurement bias and of the nonresponse that occurs in longitudinal surveys. Measurement and nonresponse errors are expressed on a common metric in this model, so that their relative sizes can be assessed over the course of a panel study. Using an example about political trust from the Dutch LISS panel, we show that measurement problems lead to both small errors and small biases, that dropout in the panel study does not lead to error or bias, and that measurement is therefore a more important source of both error and bias than nonresponse.
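
    For readers unfamiliar with the MTMM setup, a compact sketch of the classic decomposition that the paper extends is given below; the notation is illustrative, and the paper's exact parameterisation, including the nonresponse extension, is not given in the abstract.

```latex
% Classic MTMM measurement model (illustrative notation): the observed score
% for trait t measured with method m is split into a trait component, a method
% component, and random measurement error.
\begin{align}
  y_{tm} &= \lambda_{tm}\, T_t + \gamma_{tm}\, M_m + e_{tm}, \\
  \operatorname{Var}(y_{tm}) &= \lambda_{tm}^{2}\operatorname{Var}(T_t)
    + \gamma_{tm}^{2}\operatorname{Var}(M_m) + \operatorname{Var}(e_{tm}).
\end{align}
% Reliability and validity coefficients follow as ratios of these variance
% components; the extension described above additionally models panel dropout,
% so that measurement and nonresponse errors share this common metric.
```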

    Do shorter stated survey length and inclusion of a QR code in an invitation letter lead to better response rates?

    Invitation letters to web surveys often contain information on how long it will take to complete the survey. When the stated length in an invitation is short, it could help convince respondents to participate; when it is long, respondents may choose not to participate; and when the actual length exceeds the stated length, there may be a risk of dropout. This paper reports on a Randomised Controlled Trial (RCT) embedded in a cross-sectional survey conducted in the Netherlands. The RCT varied the stated length of the survey and the inclusion of a Quick Response (QR) code as ways to communicate to potential respondents whether the survey was short. Results from the RCT show no effects of the stated length on actual participation in the survey, nor do we find an effect on dropout. We do, however, find that inclusion of a QR code makes respondents more likely to use a smartphone, and we find some evidence of a different composition of the respondent sample in terms of age.
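
    The smartphone-use result could be checked with a simple two-sample proportions test, as sketched below with invented counts; the abstract does not report the actual test used.

```python
# Hypothetical sketch: does including a QR code in the invitation letter change
# the share of respondents completing the survey on a smartphone?
# Counts are invented placeholders.
from statsmodels.stats.proportion import proportions_ztest

smartphone_completers = [210, 260]   # without QR code, with QR code
respondents = [1000, 1000]

z, p = proportions_ztest(count=smartphone_completers, nobs=respondents)
print(f"z={z:.2f}, p={p:.3f}")
```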

    • …