
    Chapter 20: What do interviewers learn? Changes in interview length and interviewer behaviors over the field period. Appendix 20

    Appendix 20A Full Model Coefficients and Standard Errors Predicting Count of Questions with Individual Interviewer Behaviors, Two-level Multilevel Poisson Models with Number of Questions Asked as Exposure Variable, WLT1 and WLT2
    Analytic strategy
    Tables A20A.1–A20A.15: Coefficients and Standard Errors from Multilevel Poisson Regression Models Predicting Number of Questions with Each Behavior, with Total Number of Questions Asked to Each Respondent as an Exposure Variable, WLT1 and WLT2:
        A20A.1 Exact Question Reading
        A20A.2 Nondirective Probes
        A20A.3 Adequate Verification
        A20A.4 Appropriate Clarification
        A20A.5 Appropriate Feedback
        A20A.6 Stuttering During Question Reading
        A20A.7 Disfluencies
        A20A.8 Pleasant Talk
        A20A.9 Any Task-Related Feedback
        A20A.10 Laughter
        A20A.11 Minor Changes in Question Reading
        A20A.12 Major Changes in Question Reading
        A20A.13 Directive Probes
        A20A.14 Inadequate Verification
        A20A.15 Interruptions
    Appendix 20B Full Model Coefficients and Standard Errors Predicting Interview Length with Sets of Interviewer Behaviors, Two-level Multilevel Linear Models, WLT1 and WLT2
    Tables A20B.1–A20B.5: Coefficients and Standard Errors from Multilevel Linear Regression Models Predicting Total Duration, WLT1 and WLT2:
        A20B.1 No Interviewer Behaviors
        A20B.2 Including Standardized Interviewer Behaviors
        A20B.3 Including Inefficiency Interviewer Behaviors
        A20B.4 Including Nonstandardized Interviewer Behaviors
        A20B.5 Including All Interviewer Behaviors
    Appendix 20C Mediation Models for Each Individual Interviewer Behavior
        Table A20C.1 Indirect, Direct, and Total Effect of Each Interviewer Behavior on Interview Length through Interview Order, Work and Leisure Today 1
        Table A20C.2 Indirect, Direct, and Total Effect of Each Interviewer Behavior on Interview Length through Interview Order, Work and Leisure Today
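    The Appendix 20A models treat each respondent's total number of questions asked as an exposure variable, so the count of questions exhibiting a given interviewer behavior is modeled as a rate per question asked. As a rough illustration of that setup (not the authors' code, and a single-level simplification of their two-level multilevel Poisson models), the sketch below fits a Poisson GLM with an exposure term on simulated data; all variable names and values are hypothetical.

```python
# Hedged sketch, not the authors' code: a single-level simplification of the
# Appendix 20A setup, where the count of questions showing a behavior (e.g.,
# exact question reading) is modeled with the total number of questions asked
# as an exposure variable. Variable names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(20)
n = 300
df = pd.DataFrame({
    "questions_asked": rng.integers(40, 61, size=n),   # exposure: questions per respondent
    "interview_order": rng.integers(1, 81, size=n),    # interviewer's within-study experience
})
# Simulated count of questions read exactly as worded; the rate per question
# drifts slightly with interview order.
rate = np.exp(-0.4 + 0.003 * df["interview_order"])
df["exact_reading"] = rng.poisson(rate * df["questions_asked"])

X = sm.add_constant(df[["interview_order"]])
fit = sm.GLM(
    df["exact_reading"],
    X,
    family=sm.families.Poisson(),
    exposure=df["questions_asked"],  # enters the linear predictor as log(exposure)
).fit()
print(fit.summary())
```

    The appendix's actual models additionally include a second (interviewer) level; reproducing that structure would require a mixed-effects Poisson estimator rather than the plain GLM used in this sketch.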

    Book Review: Improving Survey Methods: Lessons from Recent Research, Uwe Engel, Ben Jann, Peter Lynn, Annette Scherpenzeel, and Patrick Sturgis, eds.

    Improving Survey Methods: Lessons from Recent Research is a compilation of research by survey methodology leaders across Europe. The book is organized into eight sections – modes, interviewers, sensitive questions, web surveys, access panels, nonsurvey data collection, nonresponse, and missing data. Each section starts with a brief overview chapter followed by three or four (generally) empirical chapters. The chapters themselves vary in approach, with some being literature reviews, others reporting the results of a simple 2 × 2 experiment, and still others conducting extensive observational analyses. This volume is a clear indication that survey methodological research is strong in Europe. As with all edited volumes, different chapters in this book have different audiences. Some present new empirical findings, while others are overviews of existing literature. The preface (pp. xi-xii) describes the volume as arising from a series of conferences. It is a valuable resource for researchers who were not able to attend these meetings, and for understanding some of the innovations occurring in survey methods in Germany and beyond.

    Unpacking the black box of survey costs

    Survey costs are a critically important input to, and constraint on, the quality of data collected from surveys. Much about survey costs is unknown, leading to a poor understanding of what drives them, to uncertainty about the relationship between survey costs and survey errors, and to difficulty in justifying the value of survey data relative to available administrative or organic data. This commentary outlines a recently developed typology for survey costs, illustrates the typology using methodological articles that report on costs in pharmacy surveys, and identifies the relationship between fixed and variable costs, as well as the relationship between costs and errors, as major areas for further reporting and research.

    Comments on “How Errors Cumulate: Two Examples” by Roger Tourangeau

    This paper provides a discussion of the Tourangeau (2019) Morris Hansen Lecture paper. I address issues related to compounding errors in web surveys and the relationship between nonresponse and measurement errors. I provide a potential model for understanding when error sources in nonprobability web surveys may compound or counteract one another. I also provide three conceptual models that help explicate the joint relationship between nonresponse and measurement errors. Tourangeau’s paper provides two interesting case studies about the role of multiple error sources in survey data. The first concerns errors that occur at different stages of the representation process: errors first occur when creating a potential sample frame, may then be amplified when selecting sampled persons, possibly because of self-selection, and are then exacerbated by an individual’s decision to participate. The second concerns situations where different error sources may influence each other, in particular the relationship between nonresponse error and various measurement error outcomes.

    “I’m Helping to Put a Man on the Moon”: Communicating Higher Purpose in the Workplace

    Higher purpose in one’s work can be defined as a driving force that extends beyond oneself, fulfills some larger need, goal, or hope, and perhaps benefits others. This construct may have important implications for workplace motivation and engagement. A survey by Calling Brands (2012) found that 65% of workers would put in more effort for an organization with a higher purpose. Furthermore, a joint study by Net Impact and Rutgers University found that for 24% of the workforce and 45% of college students, “a job that seeks to make a social or environmental difference in” (Zukin & Szeltner, 2012, p. 12) or impact on the world – in other words, a job with higher purpose – would be worth a 15% pay cut. The same study also found that individuals working in jobs where they felt this sense of higher purpose were twice as likely to be satisfied with their jobs. Thus, examining employee narratives about the higher purpose of their work may offer insight into how these individuals view their work, whether they are motivated by the higher purpose, and whether they find their jobs to be meaningful. These narratives may also guide the individuals’ own thinking about the work they do.
    This study sought to gain increased knowledge about narratives of higher purpose in the workplace and to better understand how these narratives relate to motivation, supervisor communication of higher purpose, and organizational identification. The researcher collected narratives of higher purpose through an online questionnaire administered to 131 full-time working adults. These participants were contacted using a referral method in order to obtain a quota sample representative of the United States workforce based on U.S. Census occupational categories. A literature review led to four main research questions. Research question one concerned motivation and related themes: What motivators and subsequent themes are associated with work motivation? Research questions two through four concerned narratives of higher purpose: What themes exist in narratives of higher purpose? How are narratives of higher purpose different when superiors communicate about higher purpose? And what forms of identification exist in narratives of higher purpose?
    Preliminary analysis of the sample and conceptualizations of terms revealed that participants largely differentiated between motivation, purpose, higher purpose, inspiration, and calling, which helped to conceptualize the term higher purpose. The data indicated that one can feel inspired to work without higher purpose, highlighting a difference between these terms. Additionally, participants who reported a narrative of higher purpose were more likely to consider their work their calling, though reporting a higher purpose did not guarantee that one’s work was one’s calling; thus, calling was also differentiated from higher purpose. Thematic analysis of responses from open-ended survey questions revealed findings related to each of the four research questions. In regard to research question one, employees reported being motivated to work by both intrinsic and extrinsic factors, though intrinsic factors were more often listed and elaborated upon, especially by participants reporting narratives of higher purpose. The data also suggested a new categorization of motivators using the following division: intrinsic-internal/external and extrinsic-internal/external. Themes of narratives of higher purpose were identified in response to research question two and focused on a concern for benefiting others. However, the findings concerning supervisor communication of higher purpose, research question three, indicated that supervisor communication may have little to no influence on the content of individuals’ narratives of higher purpose. Additionally, the findings concerning organizational identification, research question four, were tentative, but they indicated that most participants holding narratives of higher purpose did not evidence organizational identification.
    These findings offer further conceptualization of a term that as yet hardly appears in the academic literature. Higher purpose was differentiated from other, similar terms, allowing it to emerge as a distinct construct meriting future research. Importantly, themes of higher purpose were revealed and analyzed, giving further nuance to the construct and offering practical implications for employers hoping to create workplace engagement initiatives built around higher purpose. Another contribution of this work concerns the analysis of work motivation, which suggested an expansion of the standard division of intrinsic and extrinsic motivation; this division may yield more precise findings in future research on work motivation. This study provided insight into what motivators and subsequent themes are associated with work motivation, what themes exist in narratives of higher purpose, what influence supervisor communication has on the content of these narratives, and what forms of identification are present in these narratives. This area offers ample room for related research with potential to impact both employees and employers through practical application.

    An Analysis of Interviewer Travel and Field Outcomes in Two Field Surveys

    In this article, we investigate the relationship between interviewer travel behavior and field outcomes, such as contact rates, response rates, and contact attempts, in two studies, the National Survey of Family Growth and the Health and Retirement Study. Using call record paradata that have been aggregated to interviewer-day levels, we examine two important cost drivers as measures of interviewer travel behavior: the distance that interviewers travel to segments and the number of segments visited on an interviewer-day. We explore several predictors of these measures of travel – the geographic size of the sampled areas, measures of urbanicity, and other sample and interviewer characteristics. We also explore the relationship between travel and field outcomes, such as the number of contact attempts made and response rates. We find that the number of segments visited on each interviewer-day has a strong association with field outcomes, but the number of miles traveled does not. These findings suggest that survey organizations should routinely monitor the number of segments that interviewers visit, and that more direct measurement of interviewer travel behavior is needed.
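    The unit of analysis described above is the interviewer-day, built by rolling up call records. The following sketch shows one plausible way to perform that aggregation with pandas; the column names and example rows are hypothetical and are not taken from the NSFG or HRS paradata.

```python
# Hedged sketch with hypothetical columns: aggregate call-record paradata to the
# interviewer-day level and compute the two travel measures discussed in the
# article (segments visited and distance traveled) plus simple field outcomes.
import pandas as pd

calls = pd.DataFrame({
    "interviewer_id":        [101, 101, 101, 102, 102],
    "call_date":             pd.to_datetime(["2012-06-04"] * 3 + ["2012-06-04"] * 2),
    "segment_id":            ["A", "A", "B", "C", "C"],
    "miles_from_prior_stop": [5.0, 0.0, 3.2, 12.4, 0.0],
    "contact_made":          [0, 1, 0, 1, 1],
})

interviewer_day = (
    calls.groupby(["interviewer_id", "call_date"])
         .agg(
             segments_visited=("segment_id", "nunique"),
             miles_traveled=("miles_from_prior_stop", "sum"),
             contact_attempts=("segment_id", "size"),
             contacts=("contact_made", "sum"),
         )
         .reset_index()
)
print(interviewer_day)
```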

    Examining Changes of Interview Length Over the Course of the Field Period

    It is well established that interviewers learn behaviors both during training and on the job. How this learning occurs has received surprisingly little empirical attention: Is it driven by the interviewer herself or by the respondents she interviews? There are two competing hypotheses about what happens during field data collection: (1) interviewers learn behaviors from their previous interviews, and thus change their behavior in reaction to the behaviors previously encountered; and (2) interviewers encounter different types of respondents and, especially, less cooperative respondents (i.e., nonresponse propensity affecting the measurement error situation), leading to changes in interview behaviors over the course of the field period. We refer to these hypotheses as the experience and response propensity hypotheses, respectively. This paper examines the relationship between proxy indicators for the experience and response propensity hypotheses and interview length, using data and paradata from two telephone surveys. Our results indicate that both interviewer-driven experience and respondent-driven response propensity are associated with the length of interview. While general interviewing experience is nonsignificant, within-study experience significantly decreases interview length, even when accounting for changes in sample composition. Interviewers with higher cooperation rates have significantly shorter interviews in study one; however, this effect is mediated by the number of words spoken by the interviewer. We find that older respondents and male respondents have longer interviews even after controlling for the number of words spoken, as do respondents who complete the survey at first contact. Not surprisingly, interviews are significantly longer the more words interviewers and respondents speak.
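    Interview length here is analyzed with interviews nested within interviewers. As a rough, hedged sketch of that kind of two-level structure (simulated data and hypothetical variable names, not the paper's actual specification), a linear mixed model with a random intercept per interviewer can be fit with statsmodels:

```python
# Hedged sketch, not the paper's model: interview duration regressed on a
# within-study experience proxy, with a random intercept for each interviewer.
# Data are simulated and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_interviewers, per_interviewer = 30, 20
interviewer_id = np.repeat(np.arange(n_interviewers), per_interviewer)
interviewer_effect = rng.normal(0, 2, size=n_interviewers)[interviewer_id]
within_study_order = np.tile(np.arange(1, per_interviewer + 1), n_interviewers)

df = pd.DataFrame({
    "interviewer_id": interviewer_id,
    "within_study_order": within_study_order,
    # Duration (minutes) drifts downward as the interviewer completes more interviews.
    "duration": 30 - 0.15 * within_study_order + interviewer_effect
                + rng.normal(0, 3, size=n_interviewers * per_interviewer),
})

model = smf.mixedlm("duration ~ within_study_order", data=df, groups=df["interviewer_id"])
print(model.fit().summary())
```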

    A framework for anti-racist information literacy instruction: exemplar, process, and structure

    Are instructional librarians having needed conversations with patrons about how research can perpetuate systemic discrimination and racism? A framework developed collaboratively by UND librarians and focused on exemplar, process, and structure provides a starting point. Learn how you can interrogate the conceptual processes and information architecture behind academic knowledge dissemination systems in order to foster a more anti-racist, equitable, and critical form of information literacy.
