Change blindness: eradication of gestalt strategies
Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al, 2003 Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task
Engineering data compendium. Human perception and performance. User's guide
The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use
The multisensory attentional consequences of tool use: a functional magnetic resonance imaging study
Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used.
Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: the BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position.
Conclusions/Significance: These results show that using a simple tool to locate and perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use
Should learners use their hands for learning? Results from an eye-tracking study
Given the widespread use of touch-screen devices, the effect of users' fingers on information processing and learning is of growing interest. The present study drew on cognitive load theory and embodied cognition perspectives to investigate the effects of pointing and tracing gestures on the surface of a multimedia learning instruction. Learning performance, cognitive load and visual attention were examined in a one-factorial experimental design with pointing and tracing gestures as the between-subject factor. The pointing and tracing group was instructed to use their fingers during the learning phase to make connections between corresponding text and picture information, whereas the control group was instructed not to use their hands for learning. The results showed a beneficial effect of pointing and tracing gestures on learning performance, a significant shift in visual attention, and deeper processing of information by the pointing and tracing group, but no effect on subjective ratings of cognitive load. Implications for future research and practice are discussed
Haptic feedback in eye typing
Proper feedback is essential in gaze-based interfaces, where the same modality is used for both perception and control. We measured how vibrotactile feedback, a form of haptic feedback, compares with the commonly used visual and auditory feedback in eye typing. Haptic feedback was found to produce results close to those of auditory feedback; both were easy to perceive, and participants liked both the auditory 'click' and the tactile 'tap' of the selected key. Implementation details (such as the placement of the haptic actuator) were also found to be important
Digital haptics improve speed of visual search performance in a dual-task setting.
Dashboard-mounted touchscreen tablets are now common in vehicles. Screen/phone use in cars likely shifts drivers' attention away from the road and contributes to the risk of accidents. Nevertheless, vision is subject to multisensory influences from the other senses. Haptics may help maintain or even increase visual attention to the road while still allowing for reliable dashboard control. Here, we provide a proof-of-concept for the effectiveness of digital haptic technologies (hereafter digital haptics), which use ultrasonic vibrations on a tablet screen to render haptic perceptions. Healthy human participants (N = 25) completed a divided-attention paradigm. The primary task was a centrally presented visual conjunction search task, and the secondary task entailed control of laterally presented sliders on the tablet. Sliders were presented visually, haptically, or visuo-haptically and were vertical, horizontal or circular. We reasoned that the primary task would be performed best when the secondary task was haptic-only. Reaction times (RTs) on the visual search task were indeed fastest when the tablet task was haptic-only. This was not due to a speed-accuracy trade-off; there was no evidence that visual search accuracy was modulated by the modality of the tablet task. These results provide the first quantitative support for introducing digital haptics into vehicles and similar contexts