
    Video in Survey Interviews: Effects on Data Quality and Respondent Experience

    This study investigates the extent to which video technologies - now ubiquitous - might be useful for survey measurement. We compare respondents' performance and experience (n = 1,067) in live video-mediated interviews, a web survey in which prerecorded interviewers read questions, and a conventional (textual) web survey. Compared to web survey respondents, those interviewed via live video were less likely to select the same response for all statements in a battery (non-differentiation) and reported higher satisfaction with their experience, but they provided more rounded numerical (presumably less thoughtful) answers and selected answers that were less sensitive (more socially desirable). This suggests that the presence of a live interviewer, even if mediated, can keep respondents motivated and conscientious but may introduce time pressure - a likely reason for increased rounding - and social presence - a likely reason for more socially desirable responding. Respondents "interviewed" by a prerecorded interviewer rounded fewer numerical answers and responded more candidly than did those in the other modes, but engaged in non-differentiation more than did live video respondents, suggesting there are advantages and disadvantages to both video modes. Both live and prerecorded video seem potentially viable for use in production surveys and may be especially valuable when in-person interviews are not feasible.
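
    The two data-quality indicators this abstract leans on, non-differentiation in rating batteries and rounding of numerical answers, are straightforward to compute. The sketch below is illustrative only, using hypothetical data rather than anything from the study; the 5-point battery and the choice of 10 as the rounding base are assumptions.

```python
# Minimal sketch (hypothetical data, not the study's): two data-quality
# indicators mentioned in the abstract -- non-differentiation (straight-lining)
# in a rating battery and rounding of numerical answers.
import numpy as np

def straightlining_rate(battery: np.ndarray) -> float:
    """Share of respondents who give the same rating to every item in a battery.

    battery: 2-D array, rows = respondents, columns = battery items.
    """
    return float(np.mean(np.all(battery == battery[:, [0]], axis=1)))

def rounding_rate(values: np.ndarray, base: int = 10) -> float:
    """Share of numerical answers that are multiples of `base` (assumed 10 here),
    a common proxy for rounded, less precise reporting."""
    return float(np.mean(np.asarray(values) % base == 0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    battery = rng.integers(1, 6, size=(1000, 8))   # 8-item, 5-point battery
    battery[:50] = battery[:50, [0]]               # plant 50 straight-liners
    hours = rng.integers(0, 80, size=1000)         # e.g. "hours per week" answers
    print(f"non-differentiation: {straightlining_rate(battery):.1%}")
    print(f"rounded answers:     {rounding_rate(hours):.1%}")
```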

    Using Internet in Stated Preference Surveys: A Review and Comparison of Survey Modes

    Internet is quickly becoming the survey mode of choice for stated preference (SP) surveys in environmental economics. However, this choice is being made with relatively little consideration of its potential influence on survey results. This paper reviews the theory and emerging evidence of mode effects in the survey methodology and SP literatures, summarizes the findings, and points out implications for Internet SP practice and research. The SP studies that compare the Internet with other modes generally do not find substantial differences. The majority of welfare estimates are equal, or somewhat lower for the Internet surveys. Further, there is no clear evidence of substantially lower quality or validity of Internet responses. However, the degree of experimental control is often low in comparative studies across survey modes, and they often confound measurement and sample composition effects. The Internet offers huge potential for experimentation and innovation in SP research, but when used to derive reliable welfare estimates for policy assessment, issues like representation and nonresponse bias for different Internet panels should receive more attention.
    Keywords: Internet; survey mode; contingent valuation; stated preferences

    Can cheap panel-based internet surveys substitute costly in-person interviews in CV surveys?

    With the current growth in broadband penetration, the Internet is likely to be the data collection mode of choice for stated preference research in the not so distant future. However, little is known about how this survey mode may influence data quality and welfare estimates. In the first controlled field experiment of its kind, conducted as part of a national contingent valuation (CV) survey estimating willingness to pay (WTP) for biodiversity protection plans, we assign two groups sampled from the same panel of respondents to either an Internet or an in-person (in-house) interview mode. Our design is better able than previous studies to isolate measurement effects from sample composition effects. We find little evidence of social desirability bias in the in-person interview setting or of satisficing (shortcutting the response process) in the Internet survey. The share of “don’t knows”, zeros and protest responses to the WTP question with a payment card is very similar between modes. Equality of mean WTP between samples cannot be rejected. Considering equivalence, we can reject that mean WTP from the in-person sample is more than 30% higher. Results are quite encouraging for the use of the Internet in CV, as stated preferences do not seem to be significantly different or biased compared to in-person interviews.
    Keywords: Internet; contingent valuation; interviews; survey mode; willingness to pay
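
    The two tests reported here, a conventional test of equal mean WTP and a one-sided test against a 30% upper bound, can be sketched as follows. The data, sample sizes, and distributional choices below are assumptions for illustration, not the study's.

```python
# Minimal sketch (assumed data, not the study's): a Welch t-test for equality of
# mean WTP across modes, plus a one-sided test of a 30% upper bound of the kind
# the abstract describes (rejecting that in-person mean WTP exceeds Internet
# mean WTP by more than 30%).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wtp_inperson = rng.gamma(shape=2.0, scale=150.0, size=400)  # hypothetical in-person WTP
wtp_internet = rng.gamma(shape=2.0, scale=145.0, size=400)  # hypothetical Internet WTP

# 1) Conventional test of equal means (Welch, unequal variances).
t_eq, p_eq = stats.ttest_ind(wtp_inperson, wtp_internet, equal_var=False)
print(f"equality test: t = {t_eq:.2f}, p = {p_eq:.3f}")

# 2) One-sided test of H0: mean_inperson >= 1.3 * mean_internet
#    against H1: mean_inperson < 1.3 * mean_internet.
m_ip, m_int = wtp_inperson.mean(), wtp_internet.mean()
se = np.sqrt(wtp_inperson.var(ddof=1) / len(wtp_inperson)
             + 1.3**2 * wtp_internet.var(ddof=1) / len(wtp_internet))
z = (m_ip - 1.3 * m_int) / se
p_bound = stats.norm.cdf(z)  # small p => reject "more than 30% higher"
print(f"30% bound test: z = {z:.2f}, p = {p_bound:.3f}")
```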

    SOCIAL DESIRABILITY BIAS IN SOFTWARE PIRACY RESEARCH

    Most behavioural aspects of software piracy research are a subset of ethical research. Measures of ethical behaviour in research may be subject to response biases arising from the social desirability of the behaviours. Few studies in the area of software piracy have explicitly addressed this issue. Literature on social desirability bias (SDB) reports on three ways to address response bias: approaches to reduce bias, approaches to detect bias, and approaches to correct bias. In the current article, the published methods to reduce, detect, and correct bias are reviewed. Then, the extent of SDB that may be present in the published software piracy literature is subjectively assessed. A study is proposed in which piracy behaviours involving real money are compared to the intent to pirate in paper-based scenarios, under equivalent conditions. The comparison is argued to be useful in compensating for SDB in future research.
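
    One common detection approach from the SDB literature the abstract cites is to check whether self-reports covary with a social desirability (impression-management) scale. The sketch below illustrates that idea on hypothetical data; the variable names, effect size, and scale are assumptions, not the article's proposed study.

```python
# Minimal sketch (hypothetical data): one "detect" approach from the SDB
# literature -- test whether self-reported piracy intent correlates with an
# impression-management (social desirability) scale score. A negative
# correlation would suggest under-reporting by image-conscious respondents.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 300
impression_mgmt = rng.normal(0, 1, n)                  # hypothetical SD scale score
true_intent = rng.normal(0, 1, n)                      # unobserved "true" intent
reported_intent = true_intent - 0.4 * impression_mgmt + rng.normal(0, 0.5, n)

r, p = stats.pearsonr(reported_intent, impression_mgmt)
print(f"r = {r:.2f}, p = {p:.4f}")  # r < 0 flags possible socially desirable reporting
```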

    Eye-tracking Social Desirability Bias

    Eye tracking is now a common technique for studying the moment-by-moment cognition of those processing visual information. Yet this technique has rarely been applied to different survey modes. Our paper uses an innovative method of real-world eye tracking to look at attention to sensitive questions and response scale points in Web, face-to-face and paper-and-pencil self-administered (SAQ) modes. We link gaze duration to responses in order to understand how respondents arrive at socially desirable or undesirable answers. Our novel technique sheds light on how social desirability biases arise from deliberate misreporting and/or satisficing, and how these vary across modes.
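
    Linking gaze duration to responses typically means aggregating fixation time on a question's areas of interest (AOIs) per respondent and relating it to the answer given. The sketch below is a generic illustration on a simulated fixation log, not the authors' pipeline; the AOI labels, column names, and test choice are assumptions.

```python
# Minimal sketch (simulated fixation log, not the authors' pipeline): total gaze
# duration on each question's areas of interest (AOIs) is aggregated per
# respondent and question, then compared between socially desirable and other
# answers.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
fixations = pd.DataFrame({
    "respondent": rng.integers(0, 200, 5000),
    "question": rng.integers(0, 5, 5000),
    "aoi": rng.choice(["question_text", "scale", "elsewhere"], 5000),
    "duration_ms": rng.gamma(2.0, 120.0, 5000),
})
answers = pd.DataFrame({
    "respondent": np.repeat(np.arange(200), 5),
    "question": np.tile(np.arange(5), 200),
    "desirable_answer": rng.integers(0, 2, 1000),   # 1 = socially desirable option chosen
})

gaze = (fixations[fixations["aoi"] != "elsewhere"]
        .groupby(["respondent", "question"], as_index=False)["duration_ms"].sum())
linked = answers.merge(gaze, on=["respondent", "question"], how="left").fillna(0)

t, p = stats.ttest_ind(linked.loc[linked["desirable_answer"] == 1, "duration_ms"],
                       linked.loc[linked["desirable_answer"] == 0, "duration_ms"])
print(f"gaze duration, desirable vs. other answers: t = {t:.2f}, p = {p:.3f}")
```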

    Validity of Chatbot Use for Mental Health Assessment: Experimental Study

    BACKGROUND: Mental disorders in adolescence and young adulthood are major public health concerns. Digital tools such as text-based conversational agents (ie, chatbots) are a promising technology for facilitating mental health assessment. However, the human-like interaction style of chatbots may induce potential biases, such as socially desirable responding (SDR), and may require further effort to complete assessments. OBJECTIVE: This study aimed to investigate the convergent and discriminant validity of chatbots for mental health assessments, the effect of assessment mode on SDR, and the effort required by participants for assessments using chatbots compared with established modes. METHODS: In a counterbalanced within-subject design, we assessed 2 different constructs—psychological distress (Kessler Psychological Distress Scale and Brief Symptom Inventory-18) and problematic alcohol use (Alcohol Use Disorders Identification Test-3)—in 3 modes (chatbot, paper-and-pencil, and web-based), and examined convergent and discriminant validity. In addition, we investigated the effect of mode on SDR, controlling for perceived sensitivity of items and individuals’ tendency to respond in a socially desirable way, and we also assessed the perceived social presence of modes. Including a between-subject condition, we further investigated whether SDR is increased in chatbot assessments when applied in a self-report setting versus when human interaction may be expected. Finally, the effort (ie, complexity, difficulty, burden, and time) required to complete the assessments was investigated. RESULTS: A total of 146 young adults (mean age 24, SD 6.42 years; n=67, 45.9% female) were recruited from a research panel for laboratory experiments. The results revealed high positive correlations (all P<.001) of measures of the same construct across different modes, indicating the convergent validity of chatbot assessments. Furthermore, there were no correlations between the distinct constructs, indicating discriminant validity. Moreover, there were no differences in SDR between modes and whether human interaction was expected, although the perceived social presence of the chatbot mode was higher than that of the established modes (P<.001). Finally, greater effort (all P<.05) and more time were needed to complete chatbot assessments than to complete the established modes (P<.001). CONCLUSIONS: Our findings suggest that chatbots may yield valid results. Furthermore, an understanding of chatbot design trade-offs in terms of potential strengths (ie, increased social presence) and limitations (ie, increased effort) when assessing mental health was established.
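
    Convergent and discriminant validity in this design come down to a correlation pattern: high same-construct correlations across modes, near-zero cross-construct correlations. The sketch below simulates that pattern; the scores are made up, and only the mode/construct structure and sample size mirror the abstract.

```python
# Minimal sketch (simulated scores, not the study data): convergent validity is
# indicated by high correlations between scores on the same construct measured
# in different modes; discriminant validity by low correlations between
# different constructs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 146                                   # sample size taken from the abstract
distress = rng.normal(0, 1, n)            # latent psychological distress
alcohol = rng.normal(0, 1, n)             # latent problematic alcohol use

scores = pd.DataFrame({
    "distress_chatbot": distress + rng.normal(0, 0.3, n),
    "distress_paper":   distress + rng.normal(0, 0.3, n),
    "distress_web":     distress + rng.normal(0, 0.3, n),
    "alcohol_chatbot":  alcohol + rng.normal(0, 0.3, n),
    "alcohol_paper":    alcohol + rng.normal(0, 0.3, n),
    "alcohol_web":      alcohol + rng.normal(0, 0.3, n),
})
# Same-construct blocks should be high, cross-construct blocks near zero.
print(scores.corr().round(2))
```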

    Collecting Data from Children Ages 9-13

    Provides a summary of literature on common methods used to collect data, such as diaries, interviews, observational methods, and surveys. Analyzes age group-specific considerations, advantages, and drawbacks, with tips for improving data quality.

    A good mix? Mixed mode data collection and cross-national surveys

    Can cross-national surveys benefit from mixed mode data collection? This article provides a classification of the different ways in which modes of data collection may be mixed within a cross-national survey, and investigates the methodological consequences of such designs. Mixed mode designs have the potential to lower survey costs relative to single-mode face-to-face surveys, while maintaining higher response rates than cheaper modes alone could achieve. Yet since responses to survey questions are not always independent of the survey mode, mixed mode designs endanger cross-national measurement equivalence (as well as, in the case of time series surveys, diachronic equivalence), so that cross-national comparisons (and analyses of change over time) lose internal validity. These problems can be mitigated by careful questionnaire and survey design, but will not be entirely overcome in many cases. The use of mixed mode designs in cross-national surveys therefore needs to be accompanied by methodological research to establish the likely consequences for measurement.
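
    A first, crude diagnostic for the mode effects the article warns about (not the article's own method) is to compare the response distribution of an item across modes within a country before pooling. The mode labels, category counts, and probabilities below are assumptions for illustration.

```python
# Minimal sketch (assumed data, one crude diagnostic rather than a full
# measurement-equivalence test): compare an item's response distribution across
# modes within one country with a chi-square test; a significant difference
# flags a possible mode effect that would threaten comparability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
face_to_face = rng.multinomial(800, [0.10, 0.15, 0.30, 0.30, 0.15])  # 5-point item
web          = rng.multinomial(600, [0.15, 0.20, 0.30, 0.25, 0.10])

table = np.vstack([face_to_face, web])            # modes x response categories
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}")   # small p => distributions differ by mode
```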

    Validating Earnings in the German National Educational Panel Study: Determinants of Measurement Accuracy of Survey Questions on Earnings

    Questions on earnings are counted among sensitive topics that often produce high rates of item nonresponse or measurement error. Both types of bias are well documented in the literature and are found to concentrate in the tails of the earnings distribution. In this paper, we explore whether measurement error on earnings could be explained by socially desirable reporting and whether the error is impacted by interviewer characteristics. Using the linked dataset NEPS-SC6-ADIAB, which contains survey data from the German National Educational Panel Study, Starting Cohort "Adults", linked to administrative earnings records from the German Federal Employment Agency, we analyze the extent of over- and underreporting and the influence of respondent and interviewer characteristics on these behaviors for different quartiles of the earnings distribution. Our results show that the average level of misreporting is relatively low (approximately 6% of median earnings). Our main logistic model reveals that female and more highly educated respondents report significantly more accurately, while those with higher earnings misreport to a significantly greater extent. Regarding the impact of personality traits on reporting accuracy, we find significant positive effects for more agreeable respondents and significant negative effects for extraverted respondents. When differentiating by the direction of misreporting, we find, for instance, that women are less likely to overreport across all earnings quartiles. However, the influence of interviewer characteristics is negligible.
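
    The paper's main model is a logistic regression of misreporting on respondent (and interviewer) characteristics over linked survey and administrative earnings. The sketch below shows the general shape of such a model on simulated data; the 10% misreporting tolerance, the covariates, and the data-generating process are assumptions, not NEPS-SC6-ADIAB.

```python
# Minimal sketch (simulated linked data, not NEPS-SC6-ADIAB): a logistic model of
# whether a respondent misreports earnings, defined here as the survey report
# deviating from the administrative record by more than an assumed 10% tolerance.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 2000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "high_education": rng.integers(0, 2, n),
    "admin_earnings": rng.lognormal(mean=7.8, sigma=0.5, size=n),
})
# Hypothetical reporting process: women and the highly educated report more accurately.
noise_sd = 0.20 - 0.05 * df["female"] - 0.05 * df["high_education"]
df["survey_earnings"] = df["admin_earnings"] * np.exp(rng.normal(0.0, noise_sd))
df["misreport"] = (np.abs(df["survey_earnings"] - df["admin_earnings"])
                   / df["admin_earnings"] > 0.10).astype(int)
df["earnings_quartile"] = pd.qcut(df["admin_earnings"], 4, labels=False)

model = smf.logit("misreport ~ female + high_education + C(earnings_quartile)",
                  data=df).fit(disp=0)
print(model.summary())
```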

    Access Grid Nodes in Field Research

    This article reports fieldwork with an Access Grid Node ('AGN') device, analogous to video teleconferencing but based on grid computing technology. The device enables research respondents to be interviewed at remote sites, with potential savings in travelling to conduct fieldwork. Practical, methodological and analytic aspects of the experimental fieldwork are reported. Findings include some distinctive features of AGN interviews relative to co-present interviews; overall, there were some benefits and some disadvantages for communication. The article concludes that this new research interview mode shows potential, particularly once the difficulties associated with a new research technology are resolved.
    Keywords: Social Research Methods, Interview Methods, New Technologies for Social Research, Access Grid Nodes, Interview Communication, Witnesses at Court