
    Is Vague Valid? The Comparative Predictive Validity of Vague Quantifiers and Numeric Response Options

    A number of surveys, including many student surveys, rely on vague quantifiers to measure behaviors important in evaluation. The ability of vague quantifiers to provide valid information, particularly compared to other measures of behaviors, has been questioned within both survey research generally and educational research specifically. Still, there is a dearth of research on whether vague quantifiers or numeric responses perform better with regard to validity. This study examines measurement properties of frequency estimation questions through the assessment of predictive validity, which has also been shown to be important in examining measurement properties of competing question formats. Data from the National Survey of Student Engagement (NSSE), a preeminent survey of university students, are analyzed, in which two psychometrically tested benchmark scales, active and collaborative learning and student-faculty interaction, are measured through both vague quantifier and numeric responses. Predictive validity is assessed through correlations and regression models relating both vague and numeric scales to grades in school and two education experience satisfaction measures. Results support the view that vague quantifier scales have higher predictive validity, and hence better measurement properties, than numeric responses. These results are discussed in light of other findings on measurement properties of vague quantifiers and numeric responses, suggesting that vague quantifiers may be a useful measurement tool for behavioral data, particularly when the relationships between variables are of interest.
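
    As a rough illustration of the predictive-validity comparison described above, the sketch below correlates each scale format with a criterion and compares regression fit. The file and column names (nsse_sample.csv, vague_scale, numeric_scale, gpa) are hypothetical stand-ins, not actual NSSE variables.

    ```python
    # Minimal sketch: compare predictive validity of two scale formats against a criterion.
    # All file and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("nsse_sample.csv")

    # Zero-order correlations of each scale format with self-reported grades.
    print(df[["vague_scale", "numeric_scale", "gpa"]].corr())

    # Separate regressions; the format with the higher R-squared is taken to show
    # better predictive validity.
    for scale in ["vague_scale", "numeric_scale"]:
        fit = smf.ols(f"gpa ~ {scale}", data=df).fit()
        print(scale, round(fit.rsquared, 3))
    ```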

    Using Motivational Statements in Web-Instrument Design to Reduce Item-Missing Rates in a Mixed-Mode Context

    Web questionnaires, including those used in mixed-mode surveys, generally produce higher levels of item nonresponse than interviewer-administered questionnaires. Item nonresponse is generally seen as having a detrimental impact on data quality. The current study examines using motivational statements to reduce item nonresponse in a web survey component of a mixed-mode design. The effects of alternative implementations are compared, both for web surveys and for mixed-mode surveys. In addition, the mixed-mode results are compared to a face-to-face survey. The current study adds to the literature on the use of motivational statements by using a unique large-scale randomized experiment to examine the impact of the timing of the motivational statement and to compare results with the same survey in an interviewer-administered context. Findings show that a motivational statement following immediately after an item is left unanswered greatly outperforms either the control or a motivational statement placed at a later point in the survey. Using this immediate reactive prompt reduces item nonresponse to levels equivalent to a face-to-face version. Conversely, the control (no statement) and the later-placed motivational statement lead to significantly greater item nonresponse. Point estimates for the tested variables are not affected by the additional responses obtained. The results suggest practical design implications to reduce item nonresponse when using a web design, specifically the planned use of a reactive motivational prompt.
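
    For a concrete, hedged sketch of the kind of comparison reported above, the snippet below computes item-missing rates by experimental condition and tests the difference with a chi-square test. The data layout and variable names (condition, missing) are assumptions, not the study's actual files.

    ```python
    # Sketch: compare item-missing rates across prompt conditions (hypothetical data layout).
    import pandas as pd
    from scipy.stats import chi2_contingency

    df = pd.read_csv("web_experiment.csv")  # one row per respondent-item
    # condition: "control", "immediate_prompt", "later_prompt"; missing: 0/1

    print(df.groupby("condition")["missing"].mean())  # item-missing rate per condition

    table = pd.crosstab(df["condition"], df["missing"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, df={dof}, p={p:.4f}")
    ```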

    The stability of mode preferences: implications for tailoring in longitudinal surveys

    "One suggested tailoring strategy for longitudinal surveys is giving respondents their preferred mode. Mode preference could be collected at earlier waves and used when introducing a mixed-mode design. The utility of mode preference is in question, however, due to a number of findings suggesting that preference is an artefact of mode of survey completion, and heavily affected by contextual factors. Conversely, recent findings suggest that tailoring on mode preference may lead to improved response outcomes and data quality. The current study aims to ascertain whether mode preference is a meaningful construct with utility in longitudinal surveys through analysis of data providing three important features: multiple measurements of mode preference over time; an experiment in mode preference question order; and the repeated measures within respondents collected both prior and after the introduction of mixed-mode data collection. Results show that mode preference is not a stable attitude for a large percentage of respondents, and that these responses are affected by contextual factors. However, a substantial percentage of respondents do provide stable responses over time, and may explain the positive findings elsewhere. Using mode preference to tailor longitudinal surveys should be done so with caution, but may be useful with further understanding." (author's abstract

    Language proficiency among respondents: implications for data quality in a longitudinal face-to-face survey

    When surveying immigrant populations or ethnic minority groups, it is important for survey researchers to consider that respondents might vary in their level of language proficiency. While survey translations might be offered, they are usually available for a limited number of languages, and even then, non-native speakers may not utilize questionnaires translated into their native language. This article examines the impact of language proficiency among respondents interviewed in English on survey data quality. We use data from Understanding Society: The United Kingdom Household Longitudinal Study (UKHLS) to examine five indicators of data quality, including “don’t know” responding, primacy effects, straightlining in grids, nonresponse to a self-completion survey component, and change in response across survey waves. Respondents were asked whether they are native speakers of English; non-native speakers were subsequently asked to self-rate whether they have any difficulties speaking or reading English. Results suggest that non-native speakers provide lower data quality for four of the five quality indicators we examined. We find that non-native respondents have higher nonresponse rates to the self-completion section and are more likely to report change across waves, select the primary response option, and show straightlining response behavior in grids. Furthermore, primacy effects and nonresponse rates to the self-completion section vary by self-rated level of language proficiency. No significant effects were found with regard to “don’t know” responding between native and non-native speakers.
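
    Two of the indicators listed above, straightlining in grids and "don't know" responding, can be computed directly from item-level data. The sketch below is illustrative only; the grid items, the don't-know code (-1) and the native_speaker flag are assumed, not actual UKHLS variable names.

    ```python
    # Sketch: derive straightlining and "don't know" indicators from grid items (assumed names).
    import pandas as pd

    df = pd.read_csv("ukhls_extract.csv")
    grid_items = ["grid_q1", "grid_q2", "grid_q3", "grid_q4", "grid_q5"]

    # Straightlining: identical answers to every item in the grid.
    df["straightline"] = df[grid_items].nunique(axis=1).eq(1)

    # "Don't know" responding: share of grid items answered with the don't-know code.
    df["dk_share"] = df[grid_items].eq(-1).mean(axis=1)

    print(df.groupby("native_speaker")[["straightline", "dk_share"]].mean())
    ```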

    Linking Twitter and Survey Data: The Impact of Survey Mode and Demographics on Consent Rates Across Three UK Studies

    In light of issues such as increasing unit nonresponse in surveys, several studies argue that social media sources such as Twitter can be used as a viable alternative. However, there are also a number of shortcomings with Twitter data, such as questions about its representativeness of the wider population and the inability to validate whose data you are collecting. A useful way forward could be to combine survey and Twitter data to supplement and improve both. To do so, consent within a survey is first needed. This study explores the consent decisions in three large representative surveys of the adult British population to link Twitter data to survey responses and the impact that demographics and survey mode have on these outcomes. Findings suggest that consent rates for data linkage are relatively low, and that this varies in part by mode, with face-to-face surveys having higher consent rates than web versions. These findings are important both for understanding the potential of linking Twitter and survey data and for the consent literature more generally.
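
    A hedged sketch of the kind of analysis described above: consent to linkage modelled as a function of survey mode and demographics with a logistic regression. Variable names and coding are assumptions for illustration.

    ```python
    # Sketch: consent to Twitter linkage by mode and demographics (hypothetical variables).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("consent_data.csv")  # consent: 0/1; mode: "f2f" or "web"

    print(df.groupby("mode")["consent"].mean())  # descriptive consent rates by mode

    model = smf.logit("consent ~ C(mode) + age + C(sex) + C(education)", data=df).fit()
    print(model.summary())
    ```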

    Linking Twitter and survey data: asymmetry in quantity and its impact

    Linked social media and survey data have the potential to be a unique source of information for social research. While the potential usefulness of this methodology is widely acknowledged, very few studies have explored methodological aspects of such linkage. Respondents produce planned amounts of survey data, but highly variable amounts of social media data. This study explores this asymmetry by examining the amount of social media data available to link to surveys. The extent of variation in the amount of data collected from social media could affect the ability to derive meaningful linked indicators and could introduce possible biases. Linked Twitter data from respondents to two longitudinal surveys representative of Great Britain, the Innovation Panel and the NatCen Panel, show that there is indeed substantial variation in the number of tweets posted and the number of followers and friends respondents have. Multivariate analyses of both data sources show that only a few respondent characteristics have a statistically significant effect on the number of tweets posted, with the number of followers being the strongest predictor of posting in both panels, women posting less than men, and some evidence that people with higher education post less, but only in the Innovation Panel. We use sentiment analyses of tweets to provide an example of how the amount of Twitter data collected can impact outcomes using these linked data sources. Results show that the number of negatively coded tweets is related to general happiness, whereas the number of positively coded tweets is not. Taken together, the findings suggest that the amount of data collected from social media which can be linked to surveys is an important factor to consider and indicate the potential for such linked data sources in social research.
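
    As a sketch of how per-respondent Twitter volume and tone might be summarised and related to a survey outcome, the snippet below counts tweets and sentiment labels per respondent and correlates them with a happiness item. The files, the pre-coded sentiment labels and the happiness variable are assumptions, not the surveys' actual data.

    ```python
    # Sketch: link per-respondent tweet counts and sentiment to a survey outcome (assumed data).
    import pandas as pd

    tweets = pd.read_csv("linked_tweets.csv")    # respondent_id, text, sentiment ("pos"/"neg"/"neu")
    survey = pd.read_csv("survey_responses.csv") # respondent_id, happiness

    # Per-respondent tweet volume and counts of positive/negative tweets.
    counts = tweets.groupby("respondent_id")["sentiment"].value_counts().unstack(fill_value=0)
    counts["n_tweets"] = counts.sum(axis=1)

    merged = survey.merge(counts, on="respondent_id", how="left").fillna(0)
    print(merged[["happiness", "neg", "pos", "n_tweets"]].corr()["happiness"])
    ```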

    Linking Survey and Twitter Data: Informed Consent, Disclosure, Security and Archiving

    Linked survey and Twitter data present an unprecedented opportunity for social scientific analysis, but the ethical implications of such work are complex, requiring a deeper understanding of the nature and composition of Twitter data to fully appreciate the risks of disclosure and harm to participants. In this paper we draw on our experience of three recent linked data studies, briefly discussing the background research on data linkage and the complications around ensuring informed consent. Particular attention is paid to the vast array of data available from Twitter and in what manner it might be disclosive. In light of this, the issues of maintaining security, minimising risk, archiving and re-use are applied to linked Twitter and survey data. In the conclusion we reflect on how our ability to collect and work with Twitter data has outpaced our technical understandings of how the data is constituted and observe that understanding one’s data is an essential prerequisite for ensuring best ethical practice.

    Informed consent for linking survey and social media data - differences between platforms and data types

    Linking social media data with survey data is a way to combine the unique strengths and address some of the respective limitations of these two data types. As such linked data can be quite disclosive and potentially sensitive, it is important that researchers obtain informed consent from the individuals whose data are being linked. When formulating appropriate informed consent, there are several things that researchers need to take into account. Besides legal and ethical questions, key aspects to consider are the differences between platforms and data types. Depending on what type of social media data is collected, how the data are collected, and from which platform(s), different points need to be addressed in the informed consent. In this paper, we present three case studies in which survey data were linked with data from 1) Twitter, 2) Facebook, and 3) LinkedIn and discuss how the specific features of the platforms and data collection methods were covered in the informed consent. We compare the key attributes of these platforms that are relevant for the formulation of informed consent and also discuss scenarios of social media data collection and linking in which obtaining informed consent is not necessary. By presenting the specific case studies as well as general considerations, this paper is meant to provide guidance on informed consent for linked survey and social media data for both researchers and archivists working with this type of data.

    Examining household effects on individual Twitter adoption: A multilevel analysis based on U.K. household survey data

    Previous studies have mainly focused on individual-level factors that influence the adoption and usage of mobile technology and social networking sites, with little attention paid to the influence of household situations. Using a multilevel modelling approach, this study merges household- (n1 = 1,455) and individual-level (n2 = 2,570) data in the U.K. context to investigate (a) whether a household's economic capital (HEC) can affect its members’ Twitter adoption, (b) whether the influences are mediated by members’ activity variety and self-reported efficacy with mobile technology, and (c) whether members’ traits, including educational level, gross income and residential area, moderate the relationship between HEC and Twitter adoption. Significant direct and indirect associations were discovered between HEC and household members’ Twitter adoption. The educational level and gross income of household members moderated the influence of HEC on individuals’ Twitter adoption.
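
    The multilevel setup described above can be sketched as individuals nested in households with a household random intercept. The snippet below uses a linear probability approximation (a linear mixed model on a 0/1 outcome) purely for illustration; all file and variable names are assumed.

    ```python
    # Sketch: household random-intercept model for Twitter adoption (linear probability
    # approximation; variable names are hypothetical).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("uk_household_extract.csv")
    # twitter_adoption: 0/1; hec: household economic capital; household_id groups individuals.

    model = smf.mixedlm(
        "twitter_adoption ~ hec + activity_variety + mobile_efficacy + education + income",
        data=df,
        groups=df["household_id"],
    ).fit()
    print(model.summary())
    ```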

    Linking survey with Twitter data: Examining associations among smartphone usage, privacy concern and Twitter linkage consent

    Linking survey and social media data has gained popularity. However, obtaining consent from respondents to link social media data is a known challenge. Using data from a nationally representative survey of the U.K., this study investigated whether respondents’ a) activity frequency, b) activity variety and c) technical skills with smartphones are associated with consent to link Twitter data to survey responses. Additionally, this study explored the mediating role of privacy and security concern and the moderating effects of age, gender, employment and educational level to better understand the influences of privacy concern on Twitter linkage consent. Results showed that activity variety with smartphones is positively associated with Twitter linkage consent, and that privacy concern mediated the effects of activity frequency and activity variety with smartphones on linkage consent. Age and employment status moderated the associations between privacy concern and linkage consent, with younger and employed respondents being more likely to be affected by privacy concern.
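
    The mediation logic described above (smartphone activity -> privacy concern -> consent) can be sketched with two regressions: one for the mediator and one for the outcome controlling for the predictors. Variable names below are assumptions for illustration, and this is only a simplified stand-in for a full mediation analysis.

    ```python
    # Sketch: simple two-equation mediation check (assumed variable names).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("smartphone_consent.csv")  # consent: 0/1

    # Path a: does smartphone activity predict privacy concern?
    path_a = smf.ols("privacy_concern ~ activity_variety + activity_frequency", data=df).fit()

    # Paths b and c': does privacy concern predict consent, controlling for activity?
    path_b = smf.logit("consent ~ privacy_concern + activity_variety + activity_frequency", data=df).fit()

    print(path_a.params["activity_variety"], path_b.params["privacy_concern"])
    ```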