    Open-Ended Questions in Web Surveys: Can Increasing the Size of Answer Boxes and Providing Extra Verbal Instructions Improve Response Quality?

    Previous research has revealed techniques to improve response quality in open-ended questions in both paper and interviewer-administered survey modes. The purpose of this paper is to test the effectiveness of similar techniques in web surveys. Using data from a series of three random sample web surveys of Washington State University undergraduates, we examine the effects of visual and verbal answer-box manipulations (i.e., altering the size of the answer box and including an explanation that answers could exceed the size of the box) and the inclusion of clarifying and motivating introductions in the question stem. We gauge response quality by the amount and type of information contained in responses as well as by response time and item nonresponse. The results indicate that increasing the size of the answer box has little effect on early responders to the survey but substantially improves response quality among late responders. Including any sort of explanation or introduction that makes response quality and length salient also improves response quality for both early and late responders. In addition to discussing these techniques, we also address the potential of the web survey mode to revitalize the use of open-ended questions in self-administered surveys.

    Comparing Check-All and Forced-Choice Question Formats in Web Surveys

    For survey researchers, it is common practice to use the check-all question format in Web and mail surveys but to convert to the forced-choice question format in telephone surveys. The assumption underlying this practice is that respondents will answer the two formats similarly. In this research note we report results from 16 experimental comparisons in two Web surveys and a paper survey conducted in 2002 and 2003 that test whether the check-all and forced-choice formats produce similar results. In all 16 comparisons, we find that the two question formats do not perform similarly; respondents endorse more options and take longer to answer in the forced-choice format than in the check-all format. These findings suggest that the forced-choice question format encourages deeper processing of response options and, as such, is preferable to the check-all format, which may encourage a weak satisficing response strategy. Additional analyses show that neither acquiescence bias nor item nonresponse seems to pose substantial problems for use of the forced-choice question format in Web surveys.

    The influence of visual layout on scalar questions in web surveys

    Ordinal scale questions are frequently used by sociologists. This thesis examines how the visual presentation and layout of response choices influence answers to ordinal scale questions in web surveys. This research extends previous experimentation on paper surveys to this new visual survey mode of the Internet to determine whether the results of varying scale layouts are similar or different. Two sets of comparisons were included in a web survey of a random sample of Washington State University students during the Spring 2003 semester; 1,591 completed surveys were submitted from the 3,004 requested, a response rate of 53%. One experiment included a response scale with all categories labeled and compared a vertical linear layout using 4-5 categories to three nonlinear layouts where categories were double- or triple-banked in columns across the page. A second set of experiments compared a 5-point fully labeled scale to a polar-point scale, where the verbal labels were removed for the middle three categories, and to an answer-box format where respondents reported a number in an answer space. Multiple replications within each experiment indicate significant differences in the means between formats as well as significantly different response distributions. These findings tend to confirm previous findings on paper questionnaires and suggest that respondents to both paper and web surveys are similarly affected by visual layout and presentation (Christian and Dillman, in press). Strong evidence now exists that the visual presentation and layout of response scales influence respondent answers to self-administered questionnaires and need to be considered when designing surveys that use ordinal scale questions. These construction differences appear to be important in helping surveyors understand why responses may vary across modes. The increasing use of mixed-mode surveys suggests the need to understand how the mode of communication, visual or aural, can influence how surveyors design questions and how respondents answer those questions.
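
    A quick arithmetic check of the response rate reported above, using only the two counts stated in the abstract (1,591 completed surveys out of 3,004 requested):

    \[ \text{response rate} = \frac{1591}{3004} \approx 0.53 \;\text{(53\%)} \]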

    Does “Yes or No” on the Telephone Mean the Same as “Check-All-That-Apply” on the Web?

    Recent experimental research has shown that respondents to forced-choice questions endorse significantly more options than respondents to check-all questions. This research has challenged the common assumption that these two question formats can be used interchangeably but has been limited to comparisons within a single survey mode. In this paper we use data from a 2004 random sample survey of university students to compare the forced-choice and check-all question formats across web self-administered and telephone interviewer-administered surveys as they are commonly used in survey practice. We find that the within-mode question format effects revealed by previous research and reaffirmed in the current study appear to persist across modes as well; the telephone forced-choice format produces higher endorsement than the web check-all format. These results provide further support for the argument that the check-all and forced-choice question formats do not produce comparable results and are not interchangeable formats. Additional comparisons show that the forced-choice format performs similarly across telephone and web modes.

    Using the Internet to Survey Small Towns and Communities: Limitations and Possibilities in the Early 21st Century

    Researchers who are interested in small towns and rural communities in the United States often find that they need to conduct their own sample surveys because many large national surveys, such as the American Community Survey, do not collect enough representative responses to make precise estimates. In collecting their own survey data, researchers face a number of challenges, such as sampling and coverage limitations. This article summarizes those challenges and tests mail and Internet methodologies for collecting data in small towns and rural communities using the U.S. Postal Service’s Delivery Sequence File as a sample frame. Findings indicate that the Delivery Sequence File can be used to sample households in rural locations by sending them invitations via postal mail to respond to either paper-and-pencil or Internet surveys. Although the mail methodology is quite successful, the results for the Internet suggest that Web surveys alone exclude potentially important segments of the population of small towns and rural communities. However, Web surveys supplemented with postal questionnaires produce results quite similar to those of mail-only surveys, representing a possible cost savings for researchers who have access to Web survey capabilities.