
    Consequences of mid-stream mode-switching in a panel survey

    Face-to-face (F2F) interviews produce population estimates that are widely regarded as the ‘gold standard’ in social research. Response rates tend to be higher with face-to-face interviews than with other modes, and face-to-face interviewers can exploit both spoken and visual information about the respondent’s performance to help ensure high-quality data. However, with marginal costs per respondent much higher for F2F than for online data collection, survey researchers are looking for ways to exploit these lower costs with minimal loss of data quality. In panel studies, one way of doing this is to recruit probability samples F2F and subsequently switch data collection to web mode. In this paper, we examine the effect on data quality of inviting a subsample of respondents in a probability-based panel survey to complete interviews on the web instead of F2F. We use the accuracy of respondents’ recall of facts and subjective states over a five-year period in the areas of health and employment as indicators of data quality with which to compare switching and non-switching respondents. We find evidence of only small differences in recall accuracy across modes and attribute these mainly to selection effects rather than measurement effects.
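
    As a rough illustration of the recall-accuracy indicator used here, the Python sketch below scores a later report as accurate when it matches the earlier record and averages accuracy by mode group; the column names and toy data are assumptions, not the survey's actual variables.

    import pandas as pd

    # Toy linked records: what the respondent recalled at the later wave
    # versus what was recorded five years earlier (hypothetical columns).
    df = pd.DataFrame({
        "switched_to_web": [True, True, False, False],
        "recalled_status": ["employed", "unemployed", "employed", "employed"],
        "recorded_status": ["employed", "employed", "employed", "unemployed"],
    })

    # A recall is accurate when the later report matches the earlier record.
    df["accurate"] = df["recalled_status"] == df["recorded_status"]

    # Mean recall accuracy for switching vs non-switching respondents.
    print(df.groupby("switched_to_web")["accurate"].mean())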

    Sources of error in mobile survey data collection

    The proliferation of mobile technologies in the general population offers new opportunities for survey research, but also introduces new sources of error to the data collection process. This thesis studies two potential sources of error in mobile survey data collection: measurement error and nonresponse. Chapter 1 examines how the diagonal screen size of a mobile device affects measurement error. Using data from a non-mobile-optimised web survey, I compare data quality between screen size groups. Results suggest that data quality mainly differs between small smartphones with a screen size below 4.0 inches and larger mobile devices. Respondents using small smartphones are more likely to break off during the survey, to provide shorter answers to open-ended questions, and to select fewer items in check-all-that-apply questions than respondents using devices with larger screens. Because mobile devices are portable, mobile web respondents are also more likely to complete surveys in distracting environments where other people are present. Chapter 2 explores how distractions during web survey completion influence measurement error. I conducted a laboratory experiment in which participants were randomly assigned to devices (PC or tablet) and to one of three distraction conditions (presence of other people having a loud conversation, presence of music, or no distraction). Although respondents felt more distracted in the two distraction conditions, I did not find significant effects of distraction on data quality. Chapter 3 investigates correlates of nonresponse to data collection using mobile technologies. We asked members of a probability household panel about their willingness to participate in various data collection tasks on their mobile device. We find that willingness varies considerably by the type of activity involved, to some extent by device, and by respondent: those who report higher security concerns and who use their device less intensively are less willing to participate in mobile data collection.
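
    A minimal sketch of the Chapter 1 comparison, assuming illustrative size thresholds and column names: respondents are binned by the diagonal screen size of their device, and simple quality indicators are averaged per group.

    import pandas as pd

    # Toy respondent-level data; columns and cut-offs are illustrative
    # assumptions, not the thesis's exact coding.
    df = pd.DataFrame({
        "screen_inches": [3.5, 4.7, 5.5, 9.7, 3.8],
        "broke_off": [1, 0, 0, 0, 1],
        "open_answer_chars": [12, 80, 95, 110, 20],
    })

    # Chapter 1's key contrast: small smartphones (< 4.0") vs larger devices.
    df["size_group"] = pd.cut(df["screen_inches"],
                              bins=[0, 4.0, 7.0, 99],
                              labels=["small phone", "large phone", "tablet"])

    # Average breakoff rate and open-answer length per screen-size group.
    print(df.groupby("size_group", observed=True)[["broke_off", "open_answer_chars"]].mean())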

    Willingness to use mobile technologies for data collection in a probability household panel

    We asked members of the Understanding Society Innovation Panel about their willingness to participate in various data collection tasks on their mobile devices. We find that stated willingness varies considerably depending on the type of activity involved: respondents are less willing to participate in tasks that involve downloading and installing an app, or where data are collected passively. Stated willingness also varies between smartphones and tablets, and between types of respondents: respondents who report higher concerns about the security of data collected with mobile technologies and those who use their devices less intensively are less willing to participate in mobile data collection tasks.

    The effects of personalized feedback on participation and reporting in mobile app data collection

    Offering participants in mobile app studies personalized feedback on the data they report seems an obvious thing to do: participants might expect an app to provide feedback given their experiences with commercial apps, feedback might motivate more people to participate in the study, and participants might be more motivated to provide accurate data so that the feedback is more useful to them. However, personalized feedback might lead participants to change the behaviour being measured with the app, and implementing feedback is costly and constrains other design decisions for the data collection. In this paper, we report on an experimental study that tested the effects of providing personalized feedback in a one-month mobile app-based spending study. Based on the app paradata and responses to a debrief survey, participants seem to have reacted positively to the feedback. The feedback did not have the potential negative effect of altering the spending participants reported in the app. However, it also did not have the intended effect of increasing initial participation or ongoing adherence to the study protocol.
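
    For illustration only, feedback of the kind described could be as simple as aggregating a participant's reported purchases by category; the data model below is an assumption, not the study's actual app.

    from collections import defaultdict

    # Hypothetical diary entries reported by one participant.
    purchases = [
        {"category": "groceries", "amount": 23.40},
        {"category": "transport", "amount": 4.50},
        {"category": "groceries", "amount": 11.99},
    ]

    # Sum spending per category to build the feedback summary.
    totals = defaultdict(float)
    for p in purchases:
        totals[p["category"]] += p["amount"]

    # Show categories from largest to smallest spend.
    for category, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{category}: £{total:.2f} so far this month")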

    The second-level smartphone divide: A typology of smartphone use based on frequency of use, skills, and types of activities

    This paper examines inequalities in the usage of smartphone technology based on five samples of smartphone owners collected in Germany and Austria between 2016 and 2020. We identify six distinct types of smartphone users by conducting latent class analyses that classify individuals based on their frequency of smartphone use, self-rated smartphone skills, and activities carried out on their smartphone. The results show that the smartphone usage types differ significantly by sociodemographic and smartphone-related characteristics: users in the types reflecting more frequent and diverse smartphone use are younger, have higher levels of educational attainment, and are more likely to use an iPhone. Overall, the composition of the latent classes and their characteristics are robust across samples and time.
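
    Latent class analysis is usually fit with specialized software; as a minimal sketch of the underlying idea, the EM loop below fits a latent class model with binary usage indicators (e.g. "carries out activity X on the smartphone: yes/no") to made-up data and recovers class shares and per-class item probabilities.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 5))   # respondents x binary indicators
    K = 3                                   # assumed number of latent usage types

    pi = np.full(K, 1.0 / K)                              # class shares
    theta = rng.uniform(0.3, 0.7, size=(K, X.shape[1]))   # P(item = 1 | class)

    for _ in range(100):
        # E-step: posterior probability of each class for each respondent.
        like = np.prod(theta[None] ** X[:, None] * (1 - theta[None]) ** (1 - X[:, None]), axis=2)
        resp = like * pi
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class shares and item probabilities.
        pi = resp.mean(axis=0)
        theta = (resp.T @ X) / resp.sum(axis=0)[:, None]

    print("class shares:", np.round(pi, 2))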

    Studying health-related internet and mobile device use using web logs and smartphone records

    Many people use the internet to seek information that will help them understand their body and their health. Motivations for such behaviors are numerous. For example, users may wish to figure out a medical condition by searching for symptoms they experience. Similarly, they may seek more information on how to treat conditions they have been diagnosed with or seek resources on how to live a healthy life. With the ubiquitous availability of the internet, searching for and finding relevant information is easier than ever before and has become a widespread phenomenon. To understand how people use the internet for health-related information, we use data from a sample of 1,959 internet users. A unique combination of data containing four months of users' browsing histories and mobile application use on computers and mobile devices allows us to study which health websites they visited, what information they searched for, and which health applications they used. Survey data inform us about users' socio-demographic background, medical conditions, and other health-related behaviors. Results show that women, young users, users with a university education, and nonsmokers are most likely to use the internet and mobile applications for health-related purposes. On search engines, internet users most frequently search for pharmacies, symptoms of medical conditions, and pain. Moreover, users seem most interested in information on how to live a healthy life, alternative medicine, mental health, and women's health. With this study, we extend the field's understanding of who seeks and consumes health information online, what users look for, and how individuals use mobile applications to monitor their health. Moreover, we contribute to methodological research by exploring new sources of data for understanding humans, their preferences, and behaviors.
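
    One simple way to code logged search queries into health topics is keyword matching; the sketch below uses an illustrative lexicon that is an assumption on my part, not the study's actual coding scheme.

    # Hypothetical topic lexicon for classifying logged search queries.
    HEALTH_TOPICS = {
        "symptoms": ["symptom", "fever", "pain", "headache"],
        "pharmacy": ["pharmacy", "prescription"],
        "mental health": ["anxiety", "depression", "stress"],
        "healthy living": ["diet", "exercise", "sleep"],
    }

    def classify_query(query: str) -> list[str]:
        # Return every topic whose keywords appear in the query.
        q = query.lower()
        return [topic for topic, words in HEALTH_TOPICS.items()
                if any(w in q for w in words)]

    print(classify_query("home remedies for headache and fever"))
    # -> ['symptoms']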

    Language proficiency among respondents: implications for data quality in a longitudinal face-to-face survey

    When surveying immigrant populations or ethnic minority groups, it is important for survey researchers to consider that respondents might vary in their level of language proficiency. While survey translations might be offered, they are usually available for a limited number of languages, and even then, non-native speakers may not utilize questionnaires translated into their native language. This article examines the impact of language proficiency among respondents interviewed in English on survey data quality. We use data from Understanding Society: The United Kingdom Household Longitudinal Study (UKHLS) to examine five indicators of data quality: “don’t know” responding, primacy effects, straightlining in grids, nonresponse to a self-completion survey component, and change in response across survey waves. Respondents were asked whether they are native speakers of English; non-native speakers were subsequently asked to self-rate whether they have any difficulties speaking or reading English. Results suggest that non-native speakers provide lower-quality data on four of the five indicators we examined. We find that non-native respondents have higher nonresponse rates to the self-completion component and are more likely to report change across waves, to select the first response option (a primacy effect), and to straightline in grids. Furthermore, primacy effects and nonresponse rates to the self-completion component vary by self-rated level of language proficiency. No significant differences were found in “don’t know” responding between native and non-native speakers.
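
    Two of these quality indicators can be computed directly from a response grid; the sketch below flags straightlining and measures primacy on a toy grid, assuming response options are coded so that 1 is the first option presented.

    import pandas as pd

    # Toy grid: rows are respondents, columns are grid items (assumed coding).
    grid = pd.DataFrame({
        "item1": [1, 3, 2, 1],
        "item2": [1, 4, 2, 1],
        "item3": [1, 2, 2, 1],
    })

    # Straightlining: identical answers to every item in the grid.
    straightline = grid.nunique(axis=1) == 1

    # Primacy: share of items answered with the first response option.
    primacy_share = (grid == 1).mean(axis=1)

    print(pd.DataFrame({"straightline": straightline, "primacy_share": primacy_share}))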

    Linking Twitter and survey data: asymmetry in quantity and its impact

    Linked social media and survey data have the potential to be a unique source of information for social research. While the potential usefulness of this methodology is widely acknowledged, very few studies have explored methodological aspects of such linkage. Respondents produce planned amounts of survey data but highly variable amounts of social media data. This study explores this asymmetry by examining the amount of social media data available to link to surveys. Variation in the amount of data collected from social media could affect the ability to derive meaningful linked indicators and could introduce biases. Linked Twitter data from respondents to two longitudinal surveys representative of Great Britain, the Innovation Panel and the NatCen Panel, show that there is indeed substantial variation in the number of tweets posted and in the number of followers and friends respondents have. Multivariate analyses of both data sources show that only a few respondent characteristics have a statistically significant effect on the number of tweets posted: the number of followers is the strongest predictor of posting in both panels, women post less than men, and there is some evidence that people with higher education post less, but only in the Innovation Panel. We use sentiment analyses of tweets to provide an example of how the amount of Twitter data collected can affect outcomes derived from these linked data sources. Results show that the number of negatively coded tweets is related to general happiness, whereas the number of positive tweets is not. Taken together, the findings suggest that the amount of social media data that can be linked to surveys is an important factor to consider, and they indicate the potential of such linked data sources for social research.
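
    A sketch of the per-respondent sentiment step, assuming the third-party vaderSentiment package and hypothetical linked tweets; the paper's own sentiment coding may differ.

    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    # Hypothetical linked data: tweets keyed by respondent identifier.
    tweets_by_respondent = {
        "r1": ["lovely day out", "awful commute again", "so tired of this"],
        "r2": ["great news!", "nice weekend"],
    }

    analyzer = SentimentIntensityAnalyzer()
    for rid, tweets in tweets_by_respondent.items():
        scores = [analyzer.polarity_scores(t)["compound"] for t in tweets]
        n_neg = sum(s <= -0.05 for s in scores)   # commonly used VADER cut-offs
        n_pos = sum(s >= 0.05 for s in scores)
        # These counts can then be modelled against survey-reported happiness.
        print(rid, {"negative": n_neg, "positive": n_pos, "total": len(tweets)})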

    Increasing Participation in a Mobile App Study: The Effects of a Sequential Mixed-Mode Design and In-Interview Invitation

    Mobile apps are an attractive and versatile method of collecting data in the social and behavioural sciences. In samples of the general population, however, participation in app-based data collection is still rather low. In this paper, we examine two potential ways of increasing participation and potentially reducing participation bias in app-based data collection: 1) inviting sample members to a mobile app study within an interview rather than by post, and 2) offering a browser-based follow-up to the mobile app. We use experimental data from Spending Study 2, collected on the Understanding Society Innovation Panel and on the Lightspeed UK online access panel. Sample members were invited to download a spending diary app on their smartphone or to use a browser-based online diary to report all their purchases for one month. The results suggest that inviting sample members to an app study within a face-to-face interview increases participation rates but does not bring in different types of participants. In contrast, the browser-based alternative can both increase participation rates and reduce biases in who participates, provided it is offered immediately once the app has been declined. We find that the success of using mobile apps for data collection hinges on the protocols used to implement the app.
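
    The participation-rate and composition comparison can be sketched as follows, with illustrative arm labels and toy data standing in for the experimental records.

    import pandas as pd

    # Hypothetical issued sample with invitation protocol and outcome.
    df = pd.DataFrame({
        "arm": ["in-interview", "in-interview", "postal", "postal", "postal"],
        "participated": [1, 1, 0, 1, 0],
        "age": [34, 61, 45, 29, 70],
    })

    # Participation rate by invitation protocol.
    print(df.groupby("arm")["participated"].mean())

    # A crude bias check: does the age profile of participants in each arm
    # differ from that of the full issued sample?
    print(df[df["participated"] == 1].groupby("arm")["age"].mean())
    print(df.groupby("arm")["age"].mean())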