
    Design Effects in Web Surveys: Comparing Trained and Fresh Respondents

    In this paper we investigate whether there are differences in design effects between trained and fresh respondents. In three experiments, we varied the number of items on a screen, the choice of response categories, and the layout of a five-point rating scale. We find that trained respondents are more prone to satisficing and select the first acceptable response option more often than fresh respondents. Fresh respondents show stronger effects of verbal and nonverbal cues than trained respondents, suggesting that fresh respondents find it more difficult to answer the questions and pay more attention to the details of the response scale when interpreting them.
    Keywords: professional respondents; questionnaire design; items per screen; response categories; layout

    Relating Question Type to Panel Conditioning: A Comparison between Trained and Fresh Respondents

    Panel conditioning arises if respondents are influenced by participation in previous surveys, such that their answers differ significantly from the answers of individuals who are interviewed for the first time. Having two panels—a trained one and a completely fresh one—created a unique opportunity for analysing panel conditioning effects. To determine which types of question are sensitive to panel conditioning, 981 trained respondents and 2809 fresh respondents answered nine questions of different types. The results in this paper show that panel conditioning arises only in knowledge questions. Questions on attitudes, actual behaviour, or facts were not sensitive to panel conditioning. Panel conditioning in knowledge questions was restricted to less-known subjects (more difficult questions), suggesting a relation between panel conditioning and cognition.
    Keywords: panel conditioning; re-interviewing; measurement error; panel surveys
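The trained-versus-fresh contrast above is, at its simplest, a comparison of answer distributions between two independent samples. A minimal sketch of a two-proportion test for a single knowledge question, using the reported group sizes but invented counts of correct answers (the counts and the function are illustrative, not the study's data or method):

```python
from statistics import NormalDist

def two_proportion_z(correct_a, n_a, correct_b, n_b):
    """Two-sided two-proportion z-test for a difference in the
    share of correct answers between two independent groups."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: trained panel members answer a difficult
# knowledge question correctly more often than fresh respondents.
z, p = two_proportion_z(correct_a=540, n_a=981, correct_b=1270, n_b=2809)
print(round(z, 2), round(p, 4))
```

A significant positive z here would be consistent with conditioning on knowledge questions; the same test applied to an attitude or behaviour question would, per the abstract, show no difference.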

    Can I use a Panel? Panel Conditioning and Attrition Bias in Panel Surveys

    Over the past decades there has been an increasing use of panel surveys at the household or individual level, instead of independent cross-sections. Panel data have important advantages, but there are also two potential drawbacks: attrition bias and panel conditioning effects. Attrition bias can arise if respondents drop out of the panel non-randomly, i.e., when attrition is correlated with a variable of interest. Panel conditioning arises if responses in one wave are influenced by participation in the previous wave(s). The experience of the previous interview(s) may affect the answers of respondents in a next interview on the same topic, such that their answers differ systematically from the answers of individuals who are interviewed for the first time. The literature has mainly focused on estimating attrition bias; less is known about panel conditioning effects. In this study we discuss how to disentangle the total bias in panel surveys into a panel conditioning and an attrition effect, and develop a test for panel conditioning that allows for non-random attrition. First, we consider a fully nonparametric approach without any assumptions other than those on the sample design, leading to interval identification of the measures for the attrition and panel conditioning effects. Second, we analyse the proposed measures under additional assumptions concerning the attrition process, making it possible to obtain point estimates and standard errors for both the attrition bias and the panel conditioning effect. We illustrate our method on a variety of questions from two-wave surveys conducted in a Dutch household panel. We find a significant bias due to panel conditioning in knowledge questions, but not in other types of questions. The examples show that the bounds can be informative if the attrition rate is not too high. Point estimates of the panel conditioning effect vary little with the different assumptions on the attrition process.
    Keywords: panel conditioning; attrition bias; measurement error; panel surveys
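The decomposition described in the abstract can be illustrated with a toy calculation. Under the strong simplifying assumption that the wave-1 gap between stayers and the full wave-1 sample identifies the attrition effect (a stand-in for the paper's point-identifying assumptions, not the paper's actual estimator), total bias splits into an attrition part and a conditioning remainder. All numbers below are fabricated for illustration:

```python
def decompose_bias(mean_fresh, mean_stayers_w2, mean_stayers_w1, mean_all_w1):
    """Split the total bias of a wave-2 panel mean, relative to a fresh
    cross-section, into an attrition part and a conditioning part.

    attrition effect: how much the stayers already differed at wave 1
    conditioning effect: the remainder, attributed to the earlier interview
    """
    total_bias = mean_stayers_w2 - mean_fresh
    attrition_effect = mean_stayers_w1 - mean_all_w1
    conditioning_effect = total_bias - attrition_effect
    return total_bias, attrition_effect, conditioning_effect

# Illustrative means for a knowledge score (0-1 scale):
total, attr, cond = decompose_bias(
    mean_fresh=0.45,       # fresh respondents, first interview
    mean_stayers_w2=0.55,  # panel members at wave 2
    mean_stayers_w1=0.48,  # the same stayers, back at wave 1
    mean_all_w1=0.46,      # full wave-1 sample, incl. later dropouts
)
print(total, attr, cond)
```

In this fabricated example most of the total gap is a conditioning effect; the paper's nonparametric approach instead brackets these quantities with interval bounds when no such assumption is imposed.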

    Design of Web Questionnaires: The Effect of Layout in Rating Scales

    This article shows that respondents gain meaning from visual cues in a web survey as well as from verbal cues (words). We manipulated the layout of a five-point rating scale using verbal, graphical, numerical, and symbolic language. This paper extends the existing literature in four directions: (1) all languages (verbal, graphical, numeric, and symbolic) are individually manipulated on the same rating scale, (2) a heterogeneous sample is used, (3) we analyse how personal characteristics and a respondent's need to think and to evaluate account for variance in survey responding, and (4) a web survey is used. Our experiments show differences due to verbal and graphical language, but no effects of numeric or symbolic language are found. Respondents with a high need for cognition and a high need to evaluate are affected more by layout than respondents with a low need to think or evaluate. Furthermore, men, the elderly, and the highly educated are the most sensitive to layout effects.
    Keywords: web survey; questionnaire layout; context effects; need for cognition; need to evaluate

    Design of Web Questionnaires: An Information Processing Perspective for the Effect of Response Categories

    In this study we use an information-processing perspective to explore the impact of response scales on respondents' answers in a web survey. This paper has four innovations compared to the existing literature: the research is based on a different mode of administration (the web), we use an open-ended format as a benchmark, four different question types are used, and the study is conducted on a representative sample of the population. We find strong effects of response scales. Questions requiring estimation strategies are more affected by the choice of response format than questions in which direct recall is used. Respondents with a low need for cognition and respondents with a low need to form opinions are more affected by the response categories than respondents with a high need for cognition and a high need to evaluate. The sensitivity to contextual cues is also significantly related to gender, age, and education.
    Keywords: web survey; questionnaire design; measurement error; context effects; response categories; need for cognition; need to evaluate

    Design of web questionnaires: The effect of layout in rating scales

    This article shows that respondents gain meaning from verbal cues (words) as well as nonverbal cues (layout and numbers) in a web survey. We manipulated the layout of a five-point rating scale in two experiments. In the first experiment, we compared answers for different presentations of the response options: in one column with a separate row for each answer (“linear”), in three columns and two rows (“nonlinear”) in various orders, and after adding numerical labels to each response option. Our results show significant differences between a linear and a nonlinear layout of response options. In the second experiment we looked at effects of verbal, graphical, and numerical language. We compared two linear vertical layouts with reverse orderings (from positive to negative and from negative to positive), a horizontal layout, and layouts with various numerical labels (1 to 5, 5 to 1, and −2 to +2). We found effects of verbal and graphical language. The effect of numerical language was only apparent when the numbers −2 to +2 were added to the verbal labels. We also examined whether the effects of design vary with personal characteristics. Elderly respondents appeared to be more sensitive to verbal, graphical, and numerical language.
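Comparisons like the linear-versus-nonlinear contrast above come down to testing whether the distribution of answers over the five response options differs between layouts. A minimal sketch of such a homogeneity test, with invented counts (the data and the critical-value check are illustrative, not the article's analysis):

```python
def chi_square_stat(counts_a, counts_b):
    """Pearson chi-square statistic for homogeneity of two observed
    frequency distributions over the same response categories."""
    n_a, n_b = sum(counts_a), sum(counts_b)
    total = n_a + n_b
    stat = 0.0
    for a, b in zip(counts_a, counts_b):
        col = a + b
        exp_a = n_a * col / total  # expected count, group A
        exp_b = n_b * col / total  # expected count, group B
        stat += (a - exp_a) ** 2 / exp_a + (b - exp_b) ** 2 / exp_b
    return stat

# Hypothetical counts over a five-point scale: the nonlinear layout
# shifts answers toward the options placed in the first column.
linear = [50, 90, 120, 90, 50]
nonlinear = [85, 105, 105, 70, 35]
stat = chi_square_stat(linear, nonlinear)
print(round(stat, 2), stat > 9.488)  # 9.488: 5% critical value, df = 4
```

With five categories and two groups there are four degrees of freedom, so a statistic above 9.49 signals a layout effect at the 5% level in this toy example.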