12 research outputs found

    Respondent’s Answering Time for Question as a Factor of Data Quality Estimation

    Computer-assisted survey tools make it possible to record the time a respondent spends answering a question as an additional factor for assessing response quality. The article proposes an approach to evaluating response quality by estimating threshold values for the temporal characteristics of responses to vignettes in an online survey built with a factorial design in R. The data are responses to a series of five vignettes describing complex experimental situations, each requiring at least 10 seconds to read, obtained in a census survey of students of the Faculty of Sociology, Taras Shevchenko National University of Kyiv, conducted in the LimeSurvey online platform (2015). Construction and analysis of a confidence interval based on the ordered sample, or of a confidence interval for the median, was used as the statistical method for removing outliers. The difference between the model built for the specified time interval and the full model is not large. A significant difference in the time distributions is observed only between the first vignette and all the others: the vignettes are of the same type, so time is spent on building an understanding of the experimental situation only for the first vignette, while the rest are perceived "by analogy". The composition of the time intervals by gender shows that the share of women who gave a quick response decreases with each subsequent vignette, while the share of men increases. Presumably, men approach the task less diligently and tire more quickly of reading vignettes with similar experimental situations. The dependence of answering time on other factors, such as the structure and complexity of the question and the respondent's personal characteristics, requires further research, in particular using regression analysis.
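
    A minimal sketch of the thresholding idea named above, using synthetic response times and hypothetical function names: a distribution-free confidence interval for the median is built from the ordered sample (order statistics of a Binomial(n, 0.5) law), and answers faster than the 10-second reading floor or outside the interval are flagged for inspection. This is an illustration of the technique mentioned in the abstract, not the authors' procedure.

```python
# Illustrative sketch only; variable and function names are assumptions.
import numpy as np
from scipy.stats import binom

def median_ci(times, alpha=0.05):
    """Conservative, distribution-free CI for the median from order statistics."""
    x = np.sort(np.asarray(times, dtype=float))
    n = len(x)
    # Smallest rank k with P(Bin(n, 0.5) <= k) >= alpha/2; the interval
    # [x_(k), x_(n-k+1)] then has coverage of at least 1 - alpha.
    k = int(binom.ppf(alpha / 2, n, 0.5))
    lower = x[max(k - 1, 0)]        # 1-based rank k      -> 0-based index k-1
    upper = x[min(n - k, n - 1)]    # 1-based rank n-k+1  -> 0-based index n-k
    return lower, upper

def flag_suspect(times, alpha=0.05, reading_floor=10.0):
    """Flag answers faster than the minimal reading time or outside the median CI."""
    lower, upper = median_ci(times, alpha)
    return [(t < reading_floor) or (t < lower) or (t > upper) for t in times]

# Example: response times (in seconds) to one vignette.
times = [12.4, 15.1, 9.2, 48.0, 14.7, 13.3, 110.5, 16.0, 11.8, 14.2]
print(median_ci(times))      # (11.8, 48.0) for this sample
print(flag_suspect(times))   # 9.2 and 110.5 are flagged
```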

    Adapting Surveys to the Modern World: Comparing a Research Messenger Design to a Regular Responsive Design for Online Surveys

    Online surveys are increasingly completed on smartphones. There are several ways to structure online surveys so as to create an optimal experience for any screen size. For example, communicating through applications (apps) such as WhatsApp and Snapchat closely resembles natural turn-by-turn conversations between individuals. Web surveys, by contrast, currently mostly mimic the design of paper questionnaires, which can lead to a survey experience that is not optimal when completed on smartphones. In this paper, we compare a research messenger design, which mimics a messenger-app style of communication, to a responsive survey design. We investigate whether response quality is similar between the two designs and whether respondents' satisfaction with the survey is higher for either version. Our results show no differences in primacy effects, the number of nonsubstantive answers, or dropout rate. Open-ended answers were shorter in the research messenger survey than in the responsive design, and the overall completion time was longer in the research messenger survey. The evaluation at the end of the survey showed no clear indication that respondents liked the research messenger survey more than the responsive design. Future research should focus on how to optimally design online mixed-device surveys in order to increase respondent satisfaction and data quality.

    Relying on External Information Sources When Answering Knowledge Questions in Web Surveys

    Knowledge questions are frequently used in survey research to measure respondents' topic-related cognitive ability and memory. However, in self-administered surveys, respondents can search external sources for additional information to answer a knowledge question correctly. In this case, the knowledge question measures accessible and procedural memory. Depending on what the knowledge question aims to capture, the validity of this measure is limited. Thus, in this study, we conducted three experiments using a web survey to investigate the effects of task difficulty, respondents' ability, and respondents' motivation on the likelihood of searching external sources for additional information as a form of over-optimizing response behavior when answering knowledge questions. We found that respondents who are highly educated and more interested in the survey are more likely to invest additional effort to answer knowledge questions correctly. Most importantly, our data showed that for these respondents, a more difficult question design further increases the likelihood of over-optimizing response behavior.

    Parameters for Checking and Controlling the Quality of an Online Survey Using Paradata

    The article reviews the types of paradata available in the course of an online survey and demonstrates their potential for assessing the quality of the completed study. The paper is structured as an enumeration of paradata parameters in an online survey along the main sources of survey error, from measurement error to nonresponse error. The author reviews the quality indicators available to practitioners and points out their role in sociological data collection programs. The article outlines prospects for including paradata in the analytical review of online survey procedures and in the methodological description of the research approach. This practice expands the possibilities of obtaining valid results and reliable sociological information.
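
    As a hedged illustration of how such paradata might feed quality checks (not a procedure taken from the article), the sketch below summarizes hypothetical per-respondent records of page completion times and completion status into two common indicators: a speeding flag and the break-off rate. The record field names and the speeding threshold are assumptions made for the example.

```python
# Illustrative only; field names and thresholds are hypothetical.
from statistics import median

def quality_indicators(paradata, speed_factor=0.3):
    """Flag likely speeders and compute the break-off rate from paradata records."""
    total_times = [sum(r["page_times"]) for r in paradata]
    med = median(total_times)
    flags = []
    for r, t in zip(paradata, total_times):
        flags.append({
            "id": r["id"],
            "speeder": t < speed_factor * med,   # far faster than the typical respondent
            "broke_off": not r["completed"],     # abandoned before the last page
        })
    breakoff_rate = sum(f["broke_off"] for f in flags) / len(flags)
    return flags, breakoff_rate

# Example records: seconds per questionnaire page plus completion status.
records = [
    {"id": 1, "page_times": [30, 42, 55], "completed": True},
    {"id": 2, "page_times": [8, 6, 5],    "completed": True},
    {"id": 3, "page_times": [25, 31],     "completed": False},
]
print(quality_indicators(records))
```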

    New insights on respondents' recall ability and memory effects when repeatedly measuring political efficacy

    Many study designs in social science research rely on repeated measurements, implying that the same respondents are asked the same (or nearly the same) questions at least twice. An assumption made by such study designs is that respondents' second answer does not depend on their first answer. However, if respondents recall their initial answer and base their second answer on it, memory effects may affect the survey outcome. In this study, I investigate respondents' recall ability and memory effects within the same survey and randomly assign respondents to a device type (PC or smartphone) and a response format (response scale or text field) for reporting their previous answer. While the results reveal no differences regarding device types, they reveal differences regarding response formats. Respondents' recall ability is higher when they are provided with the response scale again than when they are only provided with a text field (without displaying the response scale again). The same finding applies to the size of estimated memory effects. This study provides evidence that the size of memory effects may have been overestimated in previous studies.

    Predicting question difficulty in web surveys: A machine learning approach based on mouse movement features

    Survey research aims to collect robust and reliable data from respondents. However, despite researchers' efforts in designing questionnaires, survey instruments may be imperfect and question structure not as clear as it could be, creating a burden for respondents. If it were possible to detect such problems, this knowledge could be used to predict problems in a questionnaire during pretesting, to inform real-time interventions through responsive questionnaire design, or to indicate and correct measurement error after the fact. Previous research has used paradata, specifically response times, to detect difficulties and help improve user experience and data quality. Today, richer data sources are available, for example, the movements respondents make with their mouse, as an additional detailed indicator of the respondent-survey interaction. This article uses machine learning techniques to explore the predictive value of mouse-tracking data regarding a question's difficulty. We use data from a survey on respondents' employment history and demographic information, in which we experimentally manipulate the difficulty of several questions. Using measures derived from mouse movements, we predict whether respondents answered the easy or the difficult version of a question, using and comparing several state-of-the-art supervised learning methods. We have also developed a personalization method that adjusts for respondents' baseline mouse behavior and evaluate its performance. For all three manipulated survey questions, we find that including the full set of mouse movement measures and accounting for individual differences in these measures improve prediction performance over response-time-only models.
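
    The following sketch, built on synthetic data and hypothetical feature names, illustrates the kind of pipeline the abstract describes: a supervised classifier predicting whether a respondent saw the easy or the difficult version of a question from mouse-movement measures, with a simple "personalization" step that expresses each respondent's features as deviations from their own baseline behaviour. It is not the authors' implementation.

```python
# Illustrative sketch; data are synthetic and feature names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per respondent-question; columns stand in for mouse-movement measures
# (e.g., total distance, direction changes, hover time).
n_resp, n_feat = 200, 3
baseline = rng.normal(size=(n_resp, n_feat))            # respondent-specific habits
difficulty = rng.integers(0, 2, size=n_resp)            # 0 = easy, 1 = difficult version
X = baseline + 0.8 * difficulty[:, None] + rng.normal(scale=0.5, size=(n_resp, n_feat))

# "Personalization": subtract each respondent's own baseline behaviour,
# measured here on a neutral warm-up question.
X_personalized = X - baseline

clf = RandomForestClassifier(n_estimators=200, random_state=0)
raw_score = cross_val_score(clf, X, difficulty, cv=5).mean()
pers_score = cross_val_score(clf, X_personalized, difficulty, cv=5).mean()
print(f"raw features: {raw_score:.2f}, baseline-adjusted: {pers_score:.2f}")
```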

    Investigating respondent multitasking in web surveys using paradata

    Computers play an important role in everyday multitasking. Within this context, we focus on respondent multitasking (RM) in web surveys. RM occurs when users engage in other activities while responding to a web survey questionnaire. The conceptual framework is built on the existing literature on multitasking, integrating knowledge from both cognitive psychology and survey methodology. Our main contribution is a new approach for measuring RM in web surveys, which involves an innovative use of different types of paradata, defined as non-reactive electronic tracks of respondents' process of answering the web questionnaire. In addition to using questionnaire page completion time as a measure of RM, we introduce "focus-out" events that indicate when respondents have left the window containing the web questionnaire (e.g., to chat, email, or browse) and then returned. The approach was tested in an empirical study using a web survey on a student sample (n = 267). The results indicate that 60% of respondents multitasked at least once. In addition, they reveal that item nonresponse as an indicator of response quality is associated with RM, while non-differentiation is not. Although this study confirms that a paradata-based approach is a feasible means of measuring RM, future research on this topic is warranted.
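
    As a rough illustration of the paradata-based idea described above (with an assumed log format and an arbitrary time threshold, not the instrument used in the study), page completion times and focus-out counts can be combined into a simple per-respondent multitasking flag:

```python
# Illustrative only; the log structure and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class PageParadata:
    page: int
    completion_seconds: float
    focus_out_count: int          # times the survey window lost focus on this page

def multitasked(pages, time_threshold=180.0):
    """Flag a respondent as multitasking if any page records a focus-out event
    or takes unusually long to complete."""
    return any(p.focus_out_count > 0 or p.completion_seconds > time_threshold
               for p in pages)

# Example: one respondent's paradata across three questionnaire pages.
respondent = [
    PageParadata(page=1, completion_seconds=45.0, focus_out_count=0),
    PageParadata(page=2, completion_seconds=220.0, focus_out_count=1),
    PageParadata(page=3, completion_seconds=60.0, focus_out_count=0),
]
print(multitasked(respondent))   # True: focus was lost on page 2
```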

    Opportunities and challenges in new survey data collection methods using apps and images.

    Surveys are well established as an effective way of collecting social science data. However, they may lack the detail, or not measure the concepts, necessary to answer a wide array of social science questions. Supplementing survey data with data from other sources offers opportunities to overcome this. The use of mobile technologies offers many such new opportunities for data collection: new types of data might be collected, or existing data types might be collected in new and innovative ways. Alongside these new opportunities there are new challenges; again, these can be unique to mobile data collection or existing data collection challenges that are altered by using mobile devices to collect the data. The data used are from a study that uses an app for mobile devices to collect data about household spending, the Understanding Society Spending Study One. Participants were asked to report their spending by submitting a photo of a receipt, entering information about a purchase manually, or reporting that they had not spent anything that day. Each substantive chapter offers a piece of research exploring a different challenge posed by this particular research context. Chapter one explores the challenge presented by respondent burden in the context of mobile data collection. Chapter two considers the challenge of device effects. Chapter three examines the challenge of coding large volumes of organic data. The thesis concludes by reflecting on how the lessons learnt throughout might inform survey practice moving forward. Whilst this research focuses on one particular application, it is hoped that it serves as a microcosm for contributing to the discussion of the wider opportunities and challenges faced by survey research as a field moving forward.

    Sources of error in mobile survey data collection

    The proliferation of mobile technologies in the general population offers new opportunities for survey research, but also introduces new sources of error to the data collection process. This thesis studies two potential sources of error in mobile survey data collection: measurement error and nonresponse. Chapter 1 examines how the diagonal screen size of a mobile device affects measurement error. Using data from a non-mobile-optimised web survey, I compare data quality between screen size groups. Results suggest that data quality mainly differs between small smartphones with a screen size below 4.0 inches and larger mobile devices. Respondents using small smartphones are more likely to break off during the survey, to provide shorter answers to open-ended questions, and to select fewer items in check-all-that-apply questions than respondents using devices with larger screens. Due to the portability of mobile devices, mobile web respondents are more likely to be in distracting environments where other people are present. Chapter 2 explores how distractions during web survey completion influence measurement error. I conducted a laboratory experiment in which participants were randomly assigned to devices (PC or tablet) and to one of three distraction conditions (presence of other people having a loud conversation, presence of music, or no distraction). Although respondents felt more distracted in the two distraction conditions, I did not find significant effects of distraction on data quality. Chapter 3 investigates correlates of nonresponse to data collection using mobile technologies. We asked members of a probability household panel about their willingness to participate in various data collection tasks on their mobile device. We find that willingness varies considerably by the type of activity involved, to some extent by device, and by respondent: those who report higher security concerns and who use their device less intensively are less willing to participate in mobile data collection.

    Student Conduct Administrators' Perceptions of Effective Sanctions That Reduce Recidivism of Alcohol Violations Among College Students

    Recent researchers have found that when alcohol use and/or abuse is a factor in an undergraduate student's college experience, there is a substantial increase in dependence, decreased academic productivity, an increase in safety and security issues, an increase in suicide ideation and attempts, unprotected sexual encounters, and physical assaults that result in injuries (Amaro et al., 2010). One of the most effective ways that institutions of higher education can combat alcohol-related issues on their campuses is for institutional leaders to play a role in addressing the issue (Busteed, 2008). In many institutions of higher education, student conduct administrators have been designated as the institutional leaders responsible for addressing alcohol policy violations and establishing a reasonable balance between disciplinary and educational sanctions issued to students (Waryold & Lancaster, 2013). The primary purpose of this research study was to evaluate student conduct administrators' perceptions of the relationship between recidivism and sanctions for alcohol violations at their colleges and universities. More specifically, this study explored the relationship between recidivism and the sanctions that students must complete after having been found responsible for violating the university's alcohol policy.