
    Chapter 20: What do interviewers learn? Changes in interview length and interviewer behaviors over the field period. Appendix 20

    Appendix 20A Full Model Coefficients and Standard Errors Predicting Count of Questions with Individual Interviewer Behaviors, Two-level Multilevel Poisson Models with Number of Questions Asked as Exposure Variable, WLT1 and WLT2
    Analytic strategy
    Table A20A.1 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Exact Question Reading, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.2 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Nondirective Probes, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.3 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Adequate Verification, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.4 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Appropriate Clarification, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.5 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Appropriate Feedback, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.6 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Stuttering During Question Reading, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.7 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Disfluencies, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.8 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Pleasant Talk, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.9 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Any Task-Related Feedback, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.10 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Laughter, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.11 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Minor Changes in Question Reading, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.12 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Major Changes in Question Reading, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.13 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Directive Probes, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.14 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Inadequate Verification, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Table A20A.15 Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Interruptions, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2
    Appendix 20B Full Model Coefficients and Standard Errors Predicting Interview Length with Sets of Interviewer Behaviors, Two-level Multilevel Linear Models, WLT1 and WLT2
    Table A20B.1 Coefficients and Standard Errors from Multilevel Linear Regression Models Predicting Total Duration, No Interviewer Behaviors, WLT1 and WLT2
    Table A20B.2 Coefficients and Standard Errors from Multilevel Linear Regression Models Predicting Total Duration, Including Standardized Interviewer Behaviors, WLT1 and WLT2
    Table A20B.3 Coefficients and Standard Errors from Multilevel Linear Regression Models Predicting Total Duration, Including Inefficiency Interviewer Behaviors, WLT1 and WLT2
    Table A20B.4 Coefficients and Standard Errors from Multilevel Linear Regression Models Predicting Total Duration, Including Nonstandardized Interviewer Behaviors, WLT1 and WLT2
    Table A20B.5 Coefficients and Standard Errors from Multilevel Linear Regression Models Predicting Total Duration, Including All Interviewer Behaviors, WLT1 and WLT2
    Appendix 20C Mediation Models for Each Individual Interviewer Behavior
    Table A20C.1 Indirect, Direct, and Total Effect of Each Interviewer Behavior on Interview Length through Interview Order, Work and Leisure Today 1
    Table A20C.2 Indirect, Direct, and Total Effect of Each Interviewer Behavior on Interview Length through Interview Order, Work and Leisure Today 2
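
    The appendix tables above all share one basic specification. As an informal sketch (the notation here is illustrative and not taken from the chapter), the two-level Poisson models with the number of questions asked as an exposure variable can be written as

        \log E[Y_{ij}] = \log N_{ij} + \mathbf{x}_{ij}'\boldsymbol{\beta} + u_j, \qquad u_j \sim N(0, \sigma_u^2),

    where Y_{ij} is the count of questions showing the behavior of interest in interview i conducted by interviewer j, N_{ij} is the total number of questions asked to that respondent (the exposure, entering as an offset), x_{ij} collects the predictors reported in the tables, and u_j is the interviewer-level random intercept. For the Appendix 20C tables, the standard mediation decomposition applies: the total effect of each behavior on interview length equals its direct effect plus its indirect effect operating through interview order.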

    What Do Interviewers Learn? Changes in Interview Length and Interviewer Behaviors Over the Field Period

    Interviewers systematically speed up over the field period of a survey as they conduct interviews (Olson and Peytchev 2007; Olson and Bilgen 2011; Kirchner and Olson 2017). Competing hypotheses for this increase in speed are that interviewers learn from previous interviews and change their behaviors accordingly, or that they change their behaviors in response to who the respondent is, including both the respondent’s fixed characteristics and their response propensity. Previous work (e.g., Kirchner and Olson 2017) has failed to completely explain this learning effect, even after accounting for a wide range of measures tied to each of these hypotheses. However, prior work has not examined how actual behaviors during an interview are related to interview length and whether differences in interview length can be explained by different types of interviewer behaviors. This paper uses data from two telephone surveys with extensive information on interviewer behaviors to attempt to explain the within-survey interviewer experience effect. Results indicate that interviewer behaviors do change over the field period (e.g., reductions in inefficient interviewer behaviors) and that interviewer behaviors are related to interview length, but that interviewer behaviors do not fully explain decreases in interview length over the field period. Implications for future research and for practice are provided.
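
    The interview-length models described above are two-level linear models with interviews nested within interviewers. As a minimal, hypothetical sketch only (the file name, variable names, and choice of behaviors are assumptions, not the authors' specification), such a model could be fit in Python with statsmodels:

        import pandas as pd
        import statsmodels.formula.api as smf

        # Assumed layout: one row per completed interview.
        df = pd.read_csv("wlt_interviews.csv")

        # Interview length predicted by the interview's position in the field
        # period plus rates of selected interviewer behaviors, with a random
        # intercept for each interviewer (the second level of the model).
        model = smf.mixedlm(
            "duration_minutes ~ interview_order + exact_reading_rate + disfluency_rate",
            data=df,
            groups=df["interviewer_id"],
        )
        print(model.fit().summary())

    Whether adding the behavior terms shrinks the coefficient on interview order is, in essence, the question the chapter asks: if it did, changing behaviors would account for the speed-up over the field period.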

    The Effects of Mismatches Between Survey Question Stems and Response Options on Data Quality and Responses

    Several questionnaire design texts emphasize a dual role of question wording: the wording needs to express what is being measured and tell respondents how to answer. Researchers tend to focus heavily on the first of these goals but sometimes overlook the second, resulting in question wording that does not match the response options provided (i.e., mismatches). Common examples are yes/no questions with ordinal or nominal response options, open-ended questions with closed-ended response options, and check-all-that-apply questions with forced-choice response options. A slightly different type of mismatch uses a question stem that can be read as asking for two different types of answers with no indication of which type should be provided. In this paper, we report the results of twenty-two experimental comparisons of data quality indicators (i.e., item nonresponse and response time) and response distributions across matched and mismatched versions of questions from a postal mail survey and a telephone survey. We find that mismatched items generally have lower data quality than matched items and that substantive results differ significantly across matched and mismatched designs, especially in the telephone survey. The results suggest that researchers should be wary of mismatches and should strive for holistic design.

    Within-Household Selection in Mail Surveys: Explicit Questions Are Better Than Cover Letter Instructions

    Randomly selecting a single adult within a household is one of the biggest challenges facing mail surveys, yet obtaining a probability sample of adults within households is critical to having a probability sample of the US adult population. In this paper, we experimentally test three alternative placements of the within-household selection instructions in the National Health, Wellbeing, and Perspectives study (sample n = 6,000; respondent n = 998): (1) a standard cover letter informing the household to ask the person with the next birthday to complete the survey (control); (2) the control cover letter plus an instruction on the front cover of the questionnaire itself to have the adult with the next birthday complete the survey; and (3) the control cover letter plus an explicit yes/no question asking whether the individual is the adult in the household who will have the next birthday. Although the version with an explicit question had a two-point decrease in response rates relative to not having any instruction, the explicit question significantly improves selection accuracy relative to the other two designs, yields a sample composition closer to national benchmarks, and does not affect item nonresponse rates. Accurately selected respondents also differ from inaccurately selected respondents on questions related to household tasks. Survey practitioners are encouraged to use active tasks such as explicit questions rather than passive tasks such as embedded instructions as part of the within-household selection process.

    A Comparison of Fully Labeled and Top-Labeled Grid Question Formats

    The grid question format is common in mail and web surveys. In this format, a single question stem introduces a set of items, which are listed in rows of a table underneath the question stem. The table’s columns contain the response options, usually listed only at the top, with answer spaces arrayed below and aligned with the items (Dillman et al. 2014). This format is efficient for respondents; they do not have to read the full question stem and full set of response options for every item in the grid. Likewise, it is space efficient for the survey researcher, which reduces printing and shipping costs in mail surveys and scrolling in web surveys. However, grids also complicate the response task by introducing fairly complex groupings of information. To answer grid items, respondents have to connect disparate pieces of information in space by locating the position on the page or screen where the proper row (the item prompt) intersects with the proper column (the response option). The difficulty of this task increases when the respondent has to traverse the largest distances to connect items to response option labels (down and to the right in the grid) (Couper 2008; Kaczmirek 2011). This spatial connection task has to be carried out while remembering the shared question stem, perhaps after reading and answering multiple items. As a result, grid items are prone to high rates of item nonresponse, straightlining, and breakoffs (Couper et al. 2013; Tourangeau et al. 2004). One way to ease the burdens of grids in mail surveys may be to repeat the response option labels in each row next to their corresponding answer spaces (Dillman 1978). Including response option labels near the answer spaces eliminates the need for vertical processing, allowing respondents to focus only on processing horizontally. However, fully labeling the answer spaces yields a busier, denser display overall, which may intimidate or overwhelm some respondents, leading them to skip the grid entirely.

    In this chapter we report the results of a series of experimental comparisons of fully labeled versus top-labeled grid formats from a national probability mail survey, a convenience sample of students in a paper-and-pencil survey, and a convenience sample in a web-based eye-tracking laboratory study. For each experiment we compare mean responses, inter-item correlations, item nonresponse rates, and straightlining. In addition, for the eye-tracking experiment we examine whether the different grid designs affected how respondents visually processed the grid items. For two of the experiments, we conduct subgroup analyses to assess whether the effects of the grids differed for high and low cognitive ability respondents. Our experiments use both attitude and behavior questions covering a wide variety of question topics and a variety of types of response scales.
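
    Because two of the data-quality indicators named above, item nonresponse and straightlining, recur throughout this section, a minimal illustrative sketch may help; the file and column names here are assumptions, not the chapter's data:

        import pandas as pd

        grid_items = ["grid_a", "grid_b", "grid_c", "grid_d"]  # hypothetical grid item columns
        df = pd.read_csv("grid_experiment.csv")  # assumed file name

        # Item nonresponse: share of the grid's items a respondent left blank.
        item_nonresponse = df[grid_items].isna().mean(axis=1)

        # Straightlining: the respondent answered every item in the grid and
        # gave the identical response to all of them.
        answered_all = df[grid_items].notna().all(axis=1)
        straightlined = answered_all & (df[grid_items].nunique(axis=1) == 1)

        print("mean item nonresponse rate:", item_nonresponse.mean())
        print("straightlining rate:", straightlined.mean())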

    The effect of emphasis in telephone survey questions on survey measurement quality

    Questionnaire design texts commonly recommend emphasizing important words, for example through capitalization or underlining, to promote their processing by the respondent. In self-administered surveys, respondents can see the emphasis, but in an interviewer-administered survey, emphasis has to be communicated to respondents through audible signals. We report the results of experiments in two US telephone surveys in which telephone survey questions were presented to interviewers either with or without emphasis. We examine whether emphasis changes substantive answers to survey questions, whether interviewers actually engage in verbal emphasis behaviors, and whether emphasis changes the interviewer-respondent interaction. We find surprisingly little effect of question emphasis on any outcome; the primary effects are on vocal intonation and the interviewer-respondent interaction. Thus, there is no evidence here to suggest that questionnaire designers should use emphasis in interviewer-administered questionnaires to improve data quality. As this is the first study on the topic, we suggest many opportunities for future research.

    Women’s Work? The Relationship between Farmwork and Gender Self-Perception

    Women have long been involved in agricultural production, yet farming and ranching have been associated with masculinity and men. In recent years, women have become more involved and more likely to take active and equal roles on farms and ranches, and thus increasingly perform tasks that have been associated with masculinity. Prior work indicates that women are perceived by others as more masculine when they do these tasks, but less work has focused on the association between women’s involvement in farming and women’s own perceptions of their gender (i.e., how masculine or feminine they feel). Using 2006 survey data from a random sample of women in livestock and grain operations in Washington State, we find that women’s involvement in farm and ranch tasks is associated with their gender self-perception, with more involvement being associated with a more masculine self-perception. Women who view their primary role as independent agricultural producers or full partners also perceive themselves as more masculine than women who view their primary role as homemaker. We discuss the implications of these findings for women’s experiences in agriculture.

    Design Effects in the Transition to Web-Based Surveys

    Innovation within survey modes should always be tempered by concerns about survey quality, in particular sampling, coverage, nonresponse, and measurement error. This is as true today with the development of web surveying as it was in the 1970s when telephone surveying was being developed. This paper focuses on measurement error in web surveys. Although Internet technology provides significant opportunities for innovation in survey design, systematic research has yet to be conducted on how most of the possible innovations might affect measurement error, leaving many survey designers “out in the cold.” This paper summarizes recent research to provide an overview of how choosing the web mode affects the asking and answering of questions. It starts with examples of how question formats used in other survey modes perform differently in the web mode. It then provides examples of how the visual design of web surveys can influence answers in unexpected ways and how researchers can strategically use visual design to get respondents to provide their answers in a desired format. Finally, the paper concludes with suggested guidelines for web survey design.

    Why Do Cell Phone Interviews Last Longer? A Behavior Coding Perspective

    Why do telephone interviews last longer on cell phones than on landline phones? Common explanations for this phenomenon include differential selection into subsets of questions, activities outside the question-answer sequence (such as collecting contact information for cell-minute reimbursement), respondent characteristics, behaviors indicating disruption to respondents’ perception and comprehension, and behaviors indicating interviewer reactions to disruption. We find that the time difference persists even when we focus only on the question-answer portion of the interview and only on shared questions (i.e., eliminating the first two explanations above). To learn why the difference persists, we use behavior codes from the U.S./Japan Newspaper Opinion Poll, a dual-frame telephone survey of US adults, to examine indicators of satisficing, line-quality issues, and distraction. Overall, we find that respondents on cell phones are more disrupted and that the difference in interview duration occurs because cell phone respondents take longer to provide acceptable answers. Interviewers also slow their speed of speech when asking questions. A slower speaking rate from both actors results in a longer and more expensive interview when respondents use cell phones.
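
    As a purely illustrative sketch (the field names are assumptions, not the study's behavior-coding scheme), the comparisons described above, namely interviewer speaking rate while asking questions and the length of the question-answer sequence by phone type, could be computed from question-level behavior-code records:

        import pandas as pd

        # Assumed layout: one row per question administration, with phone type,
        # the word count of the question as read, and timings in seconds.
        df = pd.read_csv("behavior_codes.csv")  # hypothetical file name

        # Interviewer speaking rate while reading the question (words per second).
        df["asking_words_per_sec"] = df["question_words"] / df["question_reading_sec"]

        # Compare cell and landline respondents on speaking rate and on the
        # length of the full question-answer sequence.
        print(df.groupby("phone_type")[["asking_words_per_sec", "qa_sequence_sec"]].mean())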