
    Good questions, bad questions? A Post-Survey Evaluation Strategy Based on Item Nonresponse

    In this paper we discuss a three-step strategy to evaluate data quality in terms of item nonresponse and to identify potentially flawed questions. We illustrate the application of the strategy, and highlight its benefits, with several data sets from a large-scale social scientific study. In survey research it is common practice to test questions ex ante, for example by means of cognitive pretesting. Nevertheless, it is necessary to check respondents' response behavior throughout the questionnaire to evaluate the quality of the collected data. Articles addressing item nonresponse mostly focus on individuals or specific questions; shifting the focus to the questionnaire as a whole seems a fruitful addition to survey methodology. This change of perspective enables us to identify problematic questions ex post and to adjust the questionnaire or research design before re-applying it in further studies, or to assess the data quality of a completed study. The need for such a check may arise from shortcomings or failures during cognitive pretesting or from unforeseen events during data collection. Furthermore, the results of this ex post analysis may form an integral part of data quality reports.
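    The core of such an ex post screen can be sketched in a few lines: compute the item nonresponse rate for every question and flag outliers. The missing-value codes, the threshold, and the data layout below are illustrative assumptions, not the paper's actual three-step procedure.

```python
# Hypothetical item-nonresponse screen across a questionnaire.
# Missing codes and the 10% threshold are assumptions for illustration.

MISSING_CODES = {None, "", -9}  # assumed codes marking item nonresponse

def nonresponse_rates(responses, questions):
    """Return the share of missing answers per question."""
    rates = {}
    for q in questions:
        missing = sum(1 for r in responses if r.get(q) in MISSING_CODES)
        rates[q] = missing / len(responses)
    return rates

def flag_questions(rates, threshold=0.10):
    """Flag questions whose nonresponse rate exceeds a chosen threshold."""
    return [q for q, rate in rates.items() if rate > threshold]

responses = [
    {"q1": 3, "q2": -9, "q3": 1},
    {"q1": 2, "q2": None, "q3": 4},
    {"q1": -9, "q2": 5, "q3": 2},
]
rates = nonresponse_rates(responses, ["q1", "q2", "q3"])
print(flag_questions(rates))  # → ['q1', 'q2']
```

    In practice the threshold would be chosen relative to the questionnaire's overall nonresponse distribution rather than set to a fixed value.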

    The Issue of Noncompliance in Attention Check Questions: False Positives in Instructed Response Items

    Attention checks detect inattentiveness by instructing respondents to perform a specific task. However, while respondents may correctly process the task, they may choose not to comply with the instructions. We investigated the issue of noncompliance in attention checks in two web surveys. In Study 1, we measured respondents' attitudes toward attention checks and their self-reported compliance. In Study 2, we experimentally varied the reasons given to respondents for conducting the attention check. Our results showed that while most respondents understand why attention checks are conducted, a nonnegligible proportion evaluated them as controlling or annoying. Most respondents passed the attention check; however, among those who failed, 61% seem to have done so deliberately. These findings reinforce that noncompliance is a serious concern for attention check instruments. The results of our experiment showed that more respondents passed the attention check when a comprehensible reason for it was given.

    Effects of Respondent and Survey Characteristics on the Response Quality of an Open-Ended Attitude Question in Web Surveys

    Open-ended questions have great potential for analysis, but answering them often imposes a considerable burden on respondents. Relying on satisficing theory as an overarching theoretical framework, we derived several hypotheses about how respondent-level and survey-level characteristics, and their interactions, might affect the quality of responses to an open-ended attitude question in self-administered surveys. Applying multilevel analyses to data from 29 web surveys, we examined the effects of respondent- and survey-level characteristics on three indicators of response quality: response length, response latency, and the interpretability of the answers. On all three indicators, more educated and more motivated respondents provided answers of significantly better quality than other respondents. However, the present study provides evidence that analyzing response quality exclusively with process-generated measures may produce a misleading picture; adding content-related indicators, such as the interpretability of responses, yields a more informative result. We found that the closer the open-ended question was placed to the end of the questionnaire, the fewer interpretable answers were given. Our results also indicated that responses were more likely to be interpretable when the survey was carried out in close proximity to a federal election. Overall, our study suggests that characteristics at the respondent and survey levels influence the response quality of open-ended attitude questions and that these characteristics interact to a small degree.
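    The two process-generated indicators named above are straightforward to derive from paradata; the field names and the millisecond latency unit below are assumptions for illustration. The third indicator, interpretability, is content-coded by human raters and cannot be computed mechanically, which is exactly the study's point.

```python
# Illustrative computation of process-generated response-quality indicators
# for one open-ended answer; field names and units are assumed, not taken
# from the study's actual data.

def response_quality(answer_text, start_ms, submit_ms):
    """Return response length (words) and latency (seconds) for one answer."""
    length = len(answer_text.split())           # response length in words
    latency_s = (submit_ms - start_ms) / 1000   # response latency in seconds
    return {"length_words": length, "latency_s": latency_s}

print(response_quality("Taxes should fund public transport.", 5_000, 23_500))
```

    A long, slowly typed answer scores well on both process measures yet may still be uninterpretable, which is why content-related coding remains necessary.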

    Working with User Agent Strings in Stata: The parseuas Command

    With the rising popularity of web surveys and the increasing use of paradata by survey methodologists, assessing the information stored in user agent strings becomes inevitable. These data contain meaningful information about the browser, operating system, and device that a survey respondent uses. This article provides an overview of user agent strings, their specific structure and history, how they can be obtained when conducting a web survey, and what kind of information can be extracted from them. Furthermore, the user-written command parseuas is introduced as an efficient means of gathering detailed information from user agent strings. The application of parseuas is illustrated with an example that draws on a pooled data set consisting of 29 web surveys.
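    The kind of extraction parseuas performs can be sketched outside Stata as well. The minimal Python analogue below matches a user agent string against a small set of operating-system patterns; the patterns are simplified assumptions and do not reflect the actual parseuas implementation, which covers far more browsers, systems, and devices.

```python
import re

# Simplified OS detection from a user agent string; patterns are assumed
# for illustration and checked in order (Android before the generic Linux
# token that Android user agents also contain).
OS_PATTERNS = [
    (r"Windows NT 10\.0", "Windows 10"),
    (r"Android \d+", "Android"),
    (r"iPhone OS \d+", "iOS"),
    (r"Mac OS X", "macOS"),
    (r"Linux", "Linux"),
]

def detect_os(user_agent):
    """Return the label of the first matching OS pattern, else 'unknown'."""
    for pattern, name in OS_PATTERNS:
        if re.search(pattern, user_agent):
            return name
    return "unknown"

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36")
print(detect_os(ua))  # → Windows 10
```

    Pattern order matters: an Android user agent also contains the token "Linux", so the more specific pattern must be tested first, a pitfall any user-agent parser has to handle.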

    'Show me the money and the party!' - variation in Facebook and Twitter adoption by politicians

    Our study explores the adoption of Facebook and Twitter by candidates in the 2013 German federal elections. Utilizing data from the German Longitudinal Election Study candidate survey, fused with data gathered on candidates' Twitter and Facebook use, we draw a clear distinction between Facebook and Twitter. We show that adoption of both channels is primarily driven by two factors: party and money. But the impact of each plays out differently for Facebook and Twitter. While the influence of money is homogeneous in direction (the more resources candidates have, the more likely they are to adopt either channel), the effect is stronger for Facebook. Conversely, a party's impact on adoption is heterogeneous across channels, a pattern we suggest is driven by the different audiences Facebook and Twitter attract. We also find that candidates' personality traits correlate only with Twitter adoption, and their impact is minimal. Our findings demonstrate that social media adoption by politicians is far from homogeneous and that there is a need to differentiate social media channels from one another when exploring motivations for their use.

    Informing about Web Paradata Collection and Use (Version 1.0)

    This survey guideline addresses the practical question of how best to inform survey participants about the collection and use of paradata in web surveys. We provide an overview of different personal and non-personal web paradata and the associated information and consent requirements. Best practices regarding the procedure, wording, and placement of non-personal web paradata information are discussed. In addition, we propose a sample wording for web paradata information in German and English.