Words, Numbers and Visual Heuristics in Web Surveys: Is there a Hierarchy of Importance?
In interpreting questions, respondents extract meaning from how the information in a questionnaire is shaped, spaced, and shaded. This makes it important to pay close attention to the arrangement of visual information on a questionnaire. Respondents follow simple heuristics in interpreting the visual features of questions. We carried out five experiments to investigate how visual heuristics affect answers to survey questions, varying verbal, numerical, and other visual cues such as color. In some instances the use of words helps overcome visual layout effects. In at least one instance, a fundamental difference in visual layout (violating the 'left and top means first' heuristic) influenced answers over and above word labels. This suggests that both visual and verbal languages are important, yet sometimes one can override the other. To reduce the effect of visual cues, it is better to use fully labeled scales in survey questions.
Keywords: questionnaire design; layout; visual language; response effects; visual cues
Surveying the Left-Right Dimension: The Choice of a Response Format
Although left-right items are a standard tool of public opinion research, there remains some difference of opinion on the optimal response format. Two disputes can be identified in the literature: (a) whether to provide respondents with a small or large number of answer categories and (b) whether or not to administer the response scale including a midpoint. This study evaluates the performance of the 101-, 11-, and 10-point left-right scales. These scales not only speak to the two disputed aspects of measuring the left-right dimension but are also common instruments in public opinion research. Drawing on data from a split-ballot multitrait-multimethod experiment carried out in a methodological pretest to the German Socio-Economic Panel (SOEP), the analysis shows that the choice of a response format makes a difference in terms of data quality: the 101- and 10-point scales are plagued by method effects. Moreover, an application from electoral research illustrates that the choice of response format affects substantive interpretations about the nature of the left-right dimension. Since all three scales perform about equally well in terms of ease of administration, the findings suggest that the 11-point left-right scale should be used in survey research.
Evidence-Based Survey Design: The Use of Continuous Rating Scales in Surveys
When practitioners and researchers develop structured surveys, they may use Likert-type discrete rating scales or continuous rating scales. When administering surveys via the web, it is important to assess the value of using continuous rating scales such as visual analogue scales (VASs) or sliders. Our close examination of the literature on the effectiveness of the two types of rating scales showed benefits and drawbacks. Many studies recommended against using sliders due to functional difficulties causing low response rates.
Development of an instrument for measuring different types of cognitive load
According to cognitive load theory, instructions can impose three types of cognitive load on the learner: intrinsic load, extraneous load, and germane load. Proper measurement of the different types of cognitive load can help us understand why the effectiveness and efficiency of learning environments may differ as a function of instructional formats and learner characteristics. In this article, we present a ten-item instrument for the measurement of the three types of cognitive load. Principal component analysis on data from a lecture in statistics for PhD students (n = 56) in psychology and health sciences revealed a three-component solution, consistent with the types of load that the different items were intended to measure. This solution was confirmed by a confirmatory factor analysis of data from three lectures in statistics for different cohorts of bachelor students in the social and health sciences (ns = 171, 136, and 148), and received further support from a randomized experiment with university freshmen in the health sciences (n = 58).
Product quality risk perceptions and decisions: contaminated pet food and lead-painted toys.
In the context of the recent recalls of contaminated pet food and lead-painted toys in the United States, we examine patterns of risk perceptions and decisions when facing consumer product-caused quality risks. Two approaches were used to explore risk perceptions of the product recalls. In the first approach, we elicited judged probabilities and found that people appear to have greatly overestimated the actual risks for both product scenarios. In the second approach, we applied the psychometric paradigm to examine risk perception dimensions concerning these two specific products through factor analysis. There was a similar risk perception pattern for both products: they are seen as unknown risks and are relatively not dread risks. This pattern was also similar to what prior research found for lead paint. Further, we studied people's potential actions to deal with the recalls of these two products. Several factors were found to be significant predictors of respondents' cautious actions for both product scenarios. Policy considerations regarding product quality risks are discussed. For example, risk communicators could reframe information messages to prompt people to consider total risks aggregated across different causes, even when the risk message has been initiated by a specific recall event.
Extremity in horizontal and vertical Likert scale format responses. Some evidence on how visual distance between response categories influences extreme responding
In four survey experiments we show that people generally answer more extremely to survey items presented in vertical versus horizontal Likert formats. Our findings suggest that this effect may be at least partly driven by differences in the visual range spanned by the response scale (i.e., the visual distance between endpoint response categories is larger in a horizontal than in a vertical format). In addition, compared to traditional horizontal Likert data, vertical Likert data contain more variance, which is mainly non-substantive. As a result, data obtained with scale formats that have different distances between response categories (as is typically the case for vertical vs. horizontal formats) may lead to differences in measurement model parameter estimates such as residual terms, and in some cases factor loadings and construct correlations. Based on these results, we provide recommendations on the use of response scale formats in online surveys, bearing in mind that several online survey tool providers promote the use of vertical Likert formats and even automatically change traditional horizontal formats of Likert-type items to vertical Likert formats when viewed on small screens (e.g., on mobile phones).
Marketers' Use of Alternative Front-of-Package Nutrition Symbols: An Examination of Effects on Product Evaluations
How front-of-package (FOP) nutrition icon systems affect product evaluations for more and less healthful objective nutrition profiles is a critical question facing food marketers, consumers, and the public health community. We propose a conceptually-based hierarchical continuum to guide predictions regarding the effectiveness of several FOP systems currently used in the marketplace. In Studies 1a and 1b, we compare the effects of a broad set of FOP icons on nutrition evaluations linked to health, accuracy of evaluations, and purchase intentions for a single product. Based on these findings, Studies 2 and 3 test the effects of two conceptually-different FOP icon systems in a retail laboratory in which consumers make comparative evaluations of multiple products at the retail shelf. While there are favorable effects of each system beyond control conditions with no FOP icons, results show that icons with an evaluative component that aid consumers' interpretations generally provide greater benefits (particularly in product comparison contexts). We offer implications for consumer packaged goods marketers, retailers, and the public policy and consumer health communities.