
    Words, Numbers and Visual Heuristics in Web Surveys: Is there a Hierarchy of Importance?

    In interpreting questions, respondents extract meaning from how the information in a questionnaire is shaped, spaced, and shaded. This makes it important to pay close attention to the arrangement of visual information on a questionnaire. Respondents follow simple heuristics in interpreting the visual features of questions. We carried out five experiments to investigate how visual heuristics affect the answers to survey questions, varying verbal, numerical, and other visual cues such as color. In some instances the use of words helps overcome visual layout effects. In at least one instance, a fundamental difference in visual layout (violating the 'left and top means first' heuristic) influenced answers over and above word labels. This suggests that both visual and verbal languages are important, yet sometimes one can override the other. To reduce the effect of visual cues, it is better to use fully labeled scales in survey questions.
    Keywords: questionnaire design; layout; visual language; response effects; visual cues

    A Closer Look at Web Questionnaire Design.

    This dissertation deals with the design of online questionnaires and its consequences for data quality: what is the effect of the number of items placed on a screen, the choice of response categories, layout decisions, and so on? It also focuses on attrition and panel conditioning: what do people learn from taking surveys, both in content and in the response process? In short: A Closer Look at Web Questionnaire Design.

    Design of Web Questionnaires: The Effect of Layout in Rating Scales

    This article shows that respondents derive meaning from visual cues in a web survey as well as from verbal cues (words). We manipulated the layout of a five-point rating scale using verbal, graphical, numerical, and symbolic language. The paper extends the existing literature in four directions: (1) all languages (verbal, graphical, numeric, and symbolic) are individually manipulated on the same rating scale, (2) a heterogeneous sample is used, (3) we analyse the extent to which personal characteristics and a respondent's need to think and to evaluate account for variance in survey responding, and (4) a web survey is used. Our experiments show differences due to verbal and graphical language, but no effects of numeric or symbolic language are found. Respondents with a high need for cognition and a high need to evaluate are affected more by layout than respondents with a low need to think or evaluate. Furthermore, men, the elderly, and the highly educated are the most sensitive to layout effects.
    Keywords: web survey; questionnaire layout; context effects; need for cognition; need to evaluate

    Relating Question Type to Panel Conditioning: A Comparison between Trained and Fresh Respondents

    Panel conditioning arises if respondents are influenced by participation in previous surveys, such that their answers differ significantly from the answers of individuals who are interviewed for the first time. Having two panels, a trained one and a completely fresh one, created a unique opportunity for analysing panel conditioning effects. To determine which type of question is sensitive to panel conditioning, 981 trained respondents and 2809 fresh respondents answered nine questions of different types. The results in this paper show that panel conditioning arises only in knowledge questions; questions on attitudes, actual behaviour, or facts were not sensitive to panel conditioning. Panel conditioning in knowledge questions was restricted to less well-known subjects (more difficult questions), suggesting a relation between panel conditioning and cognition.
    Keywords: panel conditioning; re-interviewing; measurement error; panel surveys

    Design Effects in Web Surveys: Comparing Trained and Fresh Respondents

    In this paper we investigate whether there are differences in design effects between trained and fresh respondents. In three experiments, we varied the number of items on a screen, the choice of response categories, and the layout of a five-point rating scale. We find that trained respondents are more prone to satisficing and select the first acceptable response option more often than fresh respondents. Fresh respondents show stronger effects of verbal and nonverbal cues than trained respondents, suggesting that fresh respondents find it more difficult to answer the questions and pay more attention to the details of the response scale in interpreting them.
    Keywords: professional respondents; questionnaire design; items per screen; response categories; layout

    Can I use a Panel? Panel Conditioning and Attrition Bias in Panel Surveys

    Over the past decades there has been increasing use of panel surveys at the household or individual level instead of independent cross-sections. Panel data have important advantages, but there are also two potential drawbacks: attrition bias and panel conditioning effects. Attrition bias can arise if respondents drop out of the panel non-randomly, i.e., when attrition is correlated with a variable of interest. Panel conditioning arises if responses in one wave are influenced by participation in the previous wave(s): the experience of the previous interview(s) may affect respondents' answers in a subsequent interview on the same topic, such that their answers differ systematically from the answers of individuals who are interviewed for the first time. The literature has mainly focused on estimating attrition bias; less is known about panel conditioning effects. In this study we discuss how to disentangle the total bias in panel surveys into a panel conditioning and an attrition effect, and we develop a test for panel conditioning that allows for non-random attrition. First, we consider a fully nonparametric approach without any assumptions other than those on the sample design, leading to interval identification of the measures for the attrition and panel conditioning effects. Second, we analyse the proposed measures under additional assumptions concerning the attrition process, making it possible to obtain point estimates and standard errors for both the attrition bias and the panel conditioning effect. We illustrate our method on a variety of questions from two-wave surveys conducted in a Dutch household panel. We find a significant bias due to panel conditioning in knowledge questions, but not in other types of questions. The examples show that the bounds can be informative if the attrition rate is not too high, and the point estimates of the panel conditioning effect do not vary much with the different assumptions on the attrition process.
    Keywords: panel conditioning; attrition bias; measurement error; panel surveys
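    A minimal sketch of the decomposition this abstract describes, with notation assumed here rather than taken from the paper: let $\mu_T$ be the mean answer of the trained respondents in wave 2, $\mu_F$ the mean answer of the fresh respondents, and $\mu_S$ the counterfactual mean the wave-2 stayers would have reported had they not been interviewed before. The total bias then splits as

    $$\mu_T - \mu_F = \underbrace{(\mu_T - \mu_S)}_{\text{panel conditioning}} + \underbrace{(\mu_S - \mu_F)}_{\text{attrition}}.$$

    Because $\mu_S$ is unobserved, without further assumptions it can only be bounded (hence the interval identification); assumptions on the attrition process pin it down and yield the point estimates and standard errors the abstract mentions.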
