    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) the result gives further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
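
    To make the spoke-shift manipulation concrete, the sketch below perturbs each rectangle's position radially about fixation. It is a hypothetical reconstruction: the ring layout, the 5-degree eccentricity, and all names are illustrative assumptions; the abstract specifies only the ±1-degree shift along a spoke.

        import math
        import random

        def shift_along_spoke(x, y, delta_deg=1.0):
            # Shift a point radially along the imaginary spoke running
            # from central fixation (the origin) through the point.
            # Coordinates are in degrees of visual angle; the sign of
            # the shift is chosen at random (+delta or -delta).
            r = math.hypot(x, y)        # eccentricity of the rectangle
            theta = math.atan2(y, x)    # angular position of its spoke
            r += random.choice((-1, 1)) * delta_deg
            return r * math.cos(theta), r * math.sin(theta)

        # Hypothetical first-presentation layout: eight rectangles
        # equally spaced on a ring 5 degrees from fixation.
        ring = [(5 * math.cos(2 * math.pi * i / 8),
                 5 * math.sin(2 * math.pi * i / 8)) for i in range(8)]
        shifted = [shift_along_spoke(x, y) for x, y in ring]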

    Enhancing the use of Haptic Devices in Education and Entertainment

    This research was part of the two-year Horizon 2020 European project "weDRAW". The project was built on the premise that "specific sensory systems have specific roles to learn specific concepts". This work explores the use of the haptic modality, stimulated by means of force-feedback devices, to convey abstract concepts inside virtual reality. After a review of the current use of haptic devices in education and of the available haptic software and game engines, we focus on the implementation of a haptic plugin for game engines (HPGE, based on the state-of-the-art rendering library CHAI3D) and on its evaluation in experiments on human perception and multisensory integration.
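
    For readers unfamiliar with force-feedback rendering, the sketch below shows the kind of high-rate loop such plugins are built around: read the device state, compute a penalty force against a virtual surface, send the force back. This is a minimal sketch assuming a one-dimensional simulated device; it is not the HPGE or CHAI3D API, and all constants and names are illustrative.

        STIFFNESS = 500.0    # N/m, hypothetical virtual-wall stiffness
        DAMPING = 2.0        # N*s/m, hypothetical damping coefficient

        def wall_force(x, vx, wall_x=0.0):
            # Penalty force for a virtual wall at x = wall_x (metres):
            # inside the wall a spring pushes the tip out and a damper
            # removes energy; in free space the force is zero.
            if x >= wall_x:
                return 0.0
            return STIFFNESS * (wall_x - x) - DAMPING * vx

        class SimulatedDevice:
            # Stand-in for a real force-feedback device (hypothetical
            # API): a 100 g tip drifting toward the wall, integrated
            # with semi-implicit Euler for numerical stability.
            def __init__(self):
                self.x, self.vx, self.mass = 0.01, -0.05, 0.1

            def step(self, force, dt):
                self.vx += (force / self.mass) * dt
                self.x += self.vx * dt

        dev, dt = SimulatedDevice(), 0.001   # 1 kHz, a typical haptic rate
        for _ in range(1000):                # one simulated second
            dev.step(wall_force(dev.x, dev.vx), dt)
        print(f"tip position after 1 s: {dev.x * 1000:.1f} mm")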

    Impaired Visual Size-Discrimination in Children with Movement Disorders.

    Multisensory integration of spatial information occurs late in childhood, at around eight years (Gori, Del Viva, Sandini, & Burr, 2008). In younger children, the haptic system dominates size discrimination and vision dominates orientation discrimination: this dominance may reflect sensory calibration, and could have direct consequences for children born with specific sensory disabilities. Here we measure thresholds for visual discrimination of orientation and size in children with movement disorders of the upper limbs. Visual orientation discrimination was very similar to that of age-matched typical children, but visual size-discrimination thresholds were far worse in all eight individuals with early-onset movement disorder. This surprising and counterintuitive result is readily explained by the cross-sensory calibration hypothesis: when the haptic sense is unavailable for manipulation, it cannot be used to estimate size, and hence to calibrate the visual experience of size; visual discrimination is subsequently impaired. This complements a previous study showing that non-sighted children have reduced acuity for haptic orientation, but not haptic size, discrimination (Gori, Sandini, Martinoli, & Burr, 2010). Together these studies show that when either vision or haptic manipulation is impaired, the impairment also affects the complementary sensory functions that are calibrated by it.
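
    As a sketch of how such discrimination thresholds are typically estimated, the code below fits a cumulative Gaussian psychometric function and reads the threshold off as the fitted sigma. The response proportions are invented for illustration and are not the study's data.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Hypothetical size-discrimination data: proportion of "test
        # looked bigger" responses vs test-minus-standard size (mm).
        deltas = np.array([-4, -2, -1, 0, 1, 2, 4], dtype=float)
        p_bigger = np.array([0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95])

        def psychometric(delta, mu, sigma):
            # Cumulative Gaussian: mu is the point of subjective
            # equality, sigma the discrimination threshold.
            return norm.cdf(delta, loc=mu, scale=sigma)

        (mu, sigma), _ = curve_fit(psychometric, deltas, p_bigger, p0=(0.0, 2.0))
        print(f"PSE = {mu:.2f} mm, threshold = {sigma:.2f} mm")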

    Multisensory Oddity Detection as Bayesian Inference

    A key goal for the perceptual system is to optimally combine information from all the senses that may be available, in order to develop the most accurate and unified picture possible of the outside world. The contemporary theoretical framework of ideal-observer maximum likelihood integration (MLI) has been highly successful in modelling how the human brain combines information from a variety of different sensory modalities. However, in various recent experiments involving multisensory stimuli of uncertain correspondence, MLI breaks down as a successful model of sensory combination. Within the paradigm of direct stimulus estimation, perceptual models which use Bayesian inference to resolve correspondence have recently been shown to generalize successfully to these cases where MLI fails. This approach has been known variously as model inference, causal inference, or structure inference. In this paper, we examine causal uncertainty in another important class of multisensory perception paradigm, that of oddity detection, and demonstrate how a Bayesian ideal observer also treats oddity detection as a structure-inference problem. We validate this approach by showing that it provides an intuitive and quantitative explanation of an important pair of multisensory oddity-detection experiments, involving cues across and within modalities, for which MLI previously failed dramatically, allowing a novel unifying treatment of within- and cross-modal multisensory perception. Our successful application of structure-inference models to the new 'oddity detection' paradigm, and the resultant unified explanation of across- and within-modality cases, provide further evidence that structure inference may be a common evolved principle for combining perceptual information in the brain.
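
    The contrast between MLI and structure inference can be sketched numerically. Under assumed Gaussian noise and invented numbers (none taken from the paper), MLI always fuses the two cues into a reliability-weighted average, whereas a structure-inference observer first weighs the evidence that the cues share a common cause:

        import numpy as np
        from scipy.stats import norm

        # Hypothetical visual and haptic size measurements (cm) and
        # their single-cue noise levels; all values are illustrative.
        x_v, sigma_v = 10.2, 0.4
        x_h, sigma_h = 11.0, 0.8

        # MLI: mandatory fusion into a reliability-weighted average.
        w_v, w_h = 1 / sigma_v**2, 1 / sigma_h**2
        s_mli = (w_v * x_v + w_h * x_h) / (w_v + w_h)
        sigma_mli = np.sqrt(1 / (w_v + w_h))  # fused estimate is less noisy

        # Structure inference: compare the hypothesis that both cues
        # come from one latent source (C=1) against independent
        # sources (C=2), under a broad Gaussian prior over sizes.
        mu_p, sigma_p, p_common = 10.0, 2.0, 0.5
        s = np.linspace(0.0, 20.0, 2001)      # grid over latent size
        ds = s[1] - s[0]
        prior = norm.pdf(s, mu_p, sigma_p)
        like_v = norm.pdf(x_v, s, sigma_v)
        like_h = norm.pdf(x_h, s, sigma_h)

        # Marginal likelihoods, integrated numerically over the grid.
        ev_common = (like_v * like_h * prior).sum() * ds
        ev_indep = ((like_v * prior).sum() * ds) * ((like_h * prior).sum() * ds)
        post_common = (p_common * ev_common
                       / (p_common * ev_common + (1 - p_common) * ev_indep))

        print(f"MLI estimate: {s_mli:.2f} cm (sigma {sigma_mli:.2f})")
        print(f"P(common cause | measurements) = {post_common:.2f}")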

    Doctor of Philosophy

    The study of haptic interfaces focuses on the use of the sense of touch in human-machine interaction. This dissertation presents a detailed investigation of lateral skin stretch at the fingertip as a means of communicating direction. Such tactile communication has applications in a variety of situations where the traditional audio and visual channels are inconvenient, unsafe, or already saturated. Examples include handheld consumer electronics, where tactile communication would allow a user to control a device without having to look at it, and in-car navigation systems, where the audio and visual directions provided by existing GPS devices can distract the driver's attention away from the road. Lateral skin stretch, the displacement of the skin of the fingerpad in a plane tangent to the fingerpad, is a highly effective means of communicating directional information. Users are able to correctly identify the direction of skin-stretch stimuli with skin displacements as small as 0.1 mm at rates as slow as 2 mm/s. Such stimuli can be rendered by a small, portable device suitable for integration into handheld devices. The design of the device-finger interface affects the ability of the user to perceive the stimuli accurately; a properly designed conical aperture effectively constrains the motion of the finger and provides an interface that is practical for use in handheld devices. When a handheld device renders directional tactile cues on the fingerpad, the user must often mentally rotate those cues from the reference frame of the finger to the world-centered reference frame in which those cues are to be applied. Such mental rotation incurs a cognitive cost, requiring additional time to mentally process the stimuli, and the magnitude of this cost is a function of the angle of rotation and of the specific orientations of the arm, wrist, and finger. Even with the difficulties imposed by the required mental rotations, lateral skin stretch is a promising means of communicating information through the sense of touch, with the potential to substantially improve certain types of human-machine interaction.
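
    The frame-of-reference problem can be made concrete with a small sketch: a direction cue specified in the world frame must be rotated into the finger's frame before the actuator renders it. The single-axis yaw model and all names are simplifying assumptions, not the dissertation's implementation; the 0.1 mm displacement and 2 mm/s rate defaults are the perceptual limits quoted in the abstract.

        import math

        def plan_stretch_cue(world_dir_deg, finger_yaw_deg,
                             displacement_mm=0.1, speed_mm_s=2.0):
            # Rotate the desired world-frame direction into the finger
            # frame (hypothetical single-axis simplification of the
            # arm/wrist/finger posture), then split the displacement
            # into per-axis actuator travel and compute its duration.
            finger_dir = math.radians(world_dir_deg - finger_yaw_deg)
            dx = displacement_mm * math.cos(finger_dir)
            dy = displacement_mm * math.sin(finger_dir)
            return dx, dy, displacement_mm / speed_mm_s

        # A world "north" cue (90 deg) while the finger is yawed 30 deg:
        # the actuator must stretch at 60 deg in the finger frame,
        # taking 0.05 s at the slowest perceivable rate.
        print(plan_stretch_cue(90.0, 30.0))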

    The interaction between motion and texture in the sense of touch

    Besides providing information on elementary properties of objects, such as texture, roughness, and softness, the sense of touch is also important in building a representation of the movement of objects and of our own hands. Neural and behavioral studies have shed light on the mechanisms and limits of our sense of touch in the perception of texture and motion, and on its role in the control of hand movement. The interplay between the geometrical and mechanical properties of touched objects, such as shape and texture, the movement of the hand exploring the object, and the motion felt by touch will be discussed in this article. Interestingly, the interaction between motion and texture can generate perceptual illusions in touch: for example, the orientation and spacing of the texture elements on a static surface induce the illusion of surface motion when we move our hand over it, or can elicit the perception of a curved trajectory during straight, sliding hand movements. In this work we present a multiperspective view that encompasses both the perceptual and the motor aspects, as well as the responses of peripheral and central neural structures, in order to analyze and better understand the complex mechanisms underpinning the tactile representation of texture and motion. Such a better understanding of the spatiotemporal features of the tactile stimulus may reveal novel transdisciplinary applications in neuroscience and haptics.