53 research outputs found

    Design Effects in the Transition to Web-Based Surveys

    Innovation within survey modes should always be tempered by concerns about survey quality, in particular sampling, coverage, nonresponse, and measurement error. This is as true today with the development of web surveying as it was in the 1970s when telephone surveying was being developed. This paper focuses on measurement error in web surveys. Although Internet technology provides significant opportunities for innovation in survey design, systematic research has yet to be conducted on how most of the possible innovations might affect measurement error, leaving many survey designers “out in the cold.” This paper summarizes recent research to provide an overview of how choosing the web mode affects the asking and answering of questions. It starts with examples of how question formats used in other survey modes perform differently in the web mode. It then provides examples of how the visual design of web surveys can influence answers in unexpected ways and how researchers can strategically use visual design to get respondents to provide their answers in a desired format. Finally, the paper concludes with suggested guidelines for web survey design.

    Your Best Estimate is Fine. Or is It?

    Providing an exact answer to open-ended numeric questions can be a burdensome task for respondents. Researchers often assume that adding an invitation to estimate (e.g., “Your best estimate is fine”) to these questions reduces cognitive burden, and in turn, reduces rates of undesirable response behaviors like item nonresponse, nonsubstantive answers, and answers that must be processed into a final response (e.g., qualified answers like “about 12” and ranges). Yet there is little research investigating this claim. Additionally, explicitly inviting estimation may lead respondents to round their answers, which may affect survey estimates. In this study, we investigate the effect of adding an invitation to estimate to 22 open-ended numeric questions in a mail survey and three questions in a separate telephone survey. Generally, we find that explicitly inviting estimation does not significantly change rates of item nonresponse, rounding, or qualified/range answers in either mode, though it does slightly reduce nonsubstantive answers for mail respondents. In the telephone survey, an invitation to estimate results in fewer conversational turns and shorter response times. Our results indicate that an invitation to estimate may simplify the interaction between interviewers and respondents in telephone surveys, and neither hurts nor helps data quality in mail surveys.

    “Are You …”: An Examination of Incomplete Question Stems in Self-administered Surveys

    Questionnaire designers are encouraged to write questions as complete sentences. In self-administered surveys, incomplete question stems may reduce visual clutter but may also increase burden when respondents need to scan the response options to fully complete the question. We experimentally examine the effects of three categories of incomplete question stems (incomplete conversational, incomplete ordinal, and incomplete nominal questions) versus complete question stems on 53 items in a probability web–mail survey. We examine item nonresponse, response time, selection of the first and last response options, and response distributions. We find that incomplete question stems take slightly longer to answer and slightly reduce the selection of the last response option but have no effect on item nonresponse rates or selection of the first response option. We conclude that questionnaire designers should follow current best practices to write complete questions, but deviations from complete questions will likely have limited effects. Includes supplementary materials.

    Drawing on LGB Identity to Encourage Participation and Disclosure of Sexual Orientation in Surveys

    This paper reports an experiment that tested how three survey cover designs—images of traditional families and individuals presenting themselves in gender-typical ways; images of lesbian, gay, and bisexual (LGB) and heterosexual individuals and families; and no cover images—affected LGB people’s participation and disclosure of LGB identity and non-LGB people’s participation. Analyses showed the LGB-inclusive cover led to significantly more LGB respondents than the other designs, without significantly affecting the demographic, political, and religious makeup of the completed sample. We discuss what these findings mean for addressing two challenges: getting LGB people to respond to surveys and to disclose their LGB identity.

    The self-assessed literacy index: Reliability and validity

    Literacy is associated with many outcomes of research interest as well as with respondents’ ability to even participate in surveys, yet very few surveys attempt to measure it because doing so is often complex, requiring extensive tests. The central goal of this paper is to develop a parsimonious measure of respondents’ reading ability that does not require a complex literacy test. We use data from the 2003 National Assessment of Adult Literacy to identify correlates of reading ability to form a literacy index. These correlates include self-assessments of one’s ability to understand, read, and write English, and literacy practices at home. Our literacy index reliably predicts literacy test scores above and beyond educational attainment, and the index shows high internal consistency (coefficient alpha = 0.78) and validity. The paper concludes with implications of these findings for survey research practitioners and suggestions for future research.
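
    For context on the reported reliability figure, coefficient alpha (Cronbach’s alpha) is the standard internal-consistency statistic for a multi-item index; the formula below is a general sketch of how such a value is computed and is not taken from the paper itself.

    \[
    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
    \]

    Here k is the number of items in the index, \sigma^{2}_{Y_i} is the variance of item i, and \sigma^{2}_{X} is the variance of the total index score; a value around 0.78 indicates that the items covary strongly enough to be treated as a single scale.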

    Open-Ended Questions in Web Surveys: Can Increasing the Size of Answer Boxes and Providing Extra Verbal Instructions Improve Response Quality?

    Previous research has revealed techniques to improve response quality in open-ended questions in both paper and interviewer-administered survey modes. The purpose of this paper is to test the effectiveness of similar techniques in web surveys. Using data from a series of three random sample web surveys of Washington State University undergraduates, we examine the effects of visual and verbal answer-box manipulations (i.e., altering the size of the answer box and including an explanation that answers could exceed the size of the box) and the inclusion of clarifying and motivating introductions in the question stem. We gauge response quality by the amount and type of information contained in responses as well as response time and item nonresponse. The results indicate that increasing the size of the answer box had little effect on early responders to the survey but substantially improved response quality among late responders. Including any sort of explanation or introduction that made response quality and length salient also improved response quality for both early and late responders. In addition to discussing these techniques, we also address the potential of the web survey mode to revitalize the use of open-ended questions in self-administered surveys.

    Effects of Stem and Response Order on Response Patterns in Satisfaction Ratings

    Considerable research has examined the effect of response option order in ordinal bipolar questions such as satisfaction questions. However, no research we know of has examined the effect of the order of presentation of concepts in the question stem or whether stem order moderates response option order. In this article, we experimentally test the main and interaction effects of both stem and response option order for items in self-administered surveys on response distributions and answer changes in eight satisfied/dissatisfied questions. We find consistent evidence that response option order affects answers. We also find that the order of “satisfied” or “dissatisfied” in the question stem affects response distributions for four of our eight items but does not moderate the effect of response option order. We discuss the implications of our findings for questionnaire design and secondary data analyses.

    Comparing Check-All and Forced-Choice Question Formats in Web Surveys

    For survey researchers, it is common practice to use the check-all question format in Web and mail surveys but to convert to the forced-choice question format in telephone surveys. The assumption underlying this practice is that respondents will answer the two formats similarly. In this research note we report results from 16 experimental comparisons in two Web surveys and a paper survey conducted in 2002 and 2003 that test whether the check-all and forced-choice formats produce similar results. In all 16 comparisons, we find that the two question formats do not perform similarly; respondents endorse more options and take longer to answer in the forced-choice format than in the check-all format. These findings suggest that the forced-choice question format encourages deeper processing of response options and, as such, is preferable to the check-all format, which may encourage a weak satisficing response strategy. Additional analyses show that neither acquiescence bias nor item nonresponse seems to pose substantial problems for use of the forced-choice question format in Web surveys.