158 research outputs found

    Tactual perception: a review of experimental variables and procedures

    This paper reviews the literature on tactual perception. Throughout this review we highlight some of the most relevant variables in the touch literature: the interaction between touch and other senses; the type of stimuli, from abstract stimuli such as vibrations to two- and three-dimensional stimuli, also considering concrete stimuli such as the relation between familiar and unfamiliar stimuli or the haptic perception of faces; the type of participants, separating studies with blind participants, studies with children and adults, and an analysis of sex differences in performance; and finally, the type of tactile exploration, considering conditions of active and passive touch, the relevance of movement in touch, and the relation between exploration and time. This review intends to present an organised overview of the main variables in touch experiments, attending to the main findings described in the literature, in order to guide the design of future work on tactual perception and memory. This work was funded by the Portuguese "Foundation for Science and Technology" through PhD scholarship SFRH/BD/35918/2007.

    Unimodal and crossmodal processing of visual and kinesthetic stimuli in working memory

    The processing of (object) information in working memory has been intensively investigated in the visual modality (e.g., D'Esposito, 2007; Ranganath, 2006). In comparison, research on kinesthetic/haptic or crossmodal processing in working memory is still sparse. During recognition and comparison of object information across modalities, representations built from one sensory modality have to be matched with representations obtained from other senses. The present thesis addresses three questions: how object information is represented in unimodal and crossmodal working memory, which processes enable unimodal and crossmodal comparisons, and which neuronal correlates are associated with these processes. In particular, unimodal and crossmodal processing of visually and kinesthetically perceived object features was systematically investigated in the distinct working memory phases of encoding, maintenance, and recognition. Here, the kinesthetic modality refers to the sensory perception of movement direction and spatial position, e.g. of one's own hand, and is part of the haptic sense. Overall, the results of the present thesis suggest that modality-specific representations and modality-specific processes play a role during unimodal and crossmodal processing of object features in working memory.
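
    The delayed match-to-sample structure implied above (encoding, maintenance, recognition) can be made concrete with a small sketch. The trial fields, modalities and feature values below are illustrative assumptions, not details taken from the thesis.

        # Minimal sketch of a crossmodal delayed match-to-sample trial,
        # a common paradigm for probing working memory across encoding,
        # maintenance, and recognition phases. All names and values are
        # illustrative, not taken from the thesis.
        from dataclasses import dataclass
        import random

        @dataclass
        class Trial:
            encode_modality: str  # modality of the sample ("visual" or "kinesthetic")
            probe_modality: str   # modality of the probe; differs on crossmodal trials
            sample: float         # encoded feature value, e.g. a movement amplitude
            probe: float          # probe feature value compared against the sample
            is_match: bool        # ground truth: does the probe match the sample?

        def make_trial(crossmodal: bool) -> Trial:
            modalities = ["visual", "kinesthetic"]
            enc = random.choice(modalities)
            prb = [m for m in modalities if m != enc][0] if crossmodal else enc
            sample = random.uniform(0.0, 1.0)
            match = random.random() < 0.5
            probe = sample if match else sample + random.choice([-0.2, 0.2])
            return Trial(enc, prb, sample, probe, match)

        print(make_trial(crossmodal=True))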

    The Neural Development of Visuohaptic Object Processing

    Thesis (Ph.D.) - Indiana University, Cognitive Science, 2015. Object recognition is ubiquitous and essential for interacting with, as well as learning about, the surrounding multisensory environment. The inputs from multiple sensory modalities converge quickly and efficiently to guide this interaction. Vision and haptics are two modalities in particular that offer redundant and complementary information regarding the geometrical (i.e., shape) properties of objects for recognition and perception. While the systems supporting visuohaptic object recognition in the brain, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS), are well studied in adults, there is currently a paucity of research on the neural development of visuohaptic processing in children. Little is known about how and when vision converges with haptics for object recognition. In this dissertation, I investigate the development of the neural mechanisms involved in multisensory processing. Using functional magnetic resonance imaging (fMRI) and generalized psychophysiological interaction (gPPI) methods of functional connectivity analysis in children (4 to 5.5 years, 7 to 8.5 years) and adults, I examine the developmental changes of the brain regions underlying the convergence of visual and haptic object perception, the neural substrates supporting crossmodal processing, and the interactions and functional connections between visuohaptic systems and other neural regions. Results suggest that the complexity of sensory inputs impacts the development of neural substrates. The more complicated forms of multisensory and crossmodal object processing show protracted developmental trajectories compared to the processing of simple, unimodal shapes. Additionally, the functional connections between visuohaptic areas weaken over time, which may facilitate the fine-tuning of other perceptual systems later in development. Overall, the findings indicate that multisensory object recognition cannot be described as a unitary process. Rather, it comprises several distinct sub-processes that follow different developmental timelines throughout childhood and into adulthood.
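
    The gPPI analysis mentioned above tests whether the coupling between a seed region and a target region changes with task condition. The sketch below shows only the core interaction regression, on synthetic data; a real gPPI model includes one interaction regressor per task condition and convolution with a hemodynamic response function.

        # Core regression behind a (g)PPI analysis: a target region's
        # timecourse is modeled from the task regressor, the seed region's
        # timecourse, and their interaction. The interaction beta indexes
        # task-dependent connectivity. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        task = (np.arange(n) // 20) % 2           # boxcar task regressor (0/1)
        seed = rng.standard_normal(n)             # seed region timecourse
        # Target couples with the seed only during task blocks, plus noise.
        target = 0.8 * task * seed + 0.2 * seed + 0.5 * rng.standard_normal(n)

        X = np.column_stack([np.ones(n), task, seed, task * seed])
        betas, *_ = np.linalg.lstsq(X, target, rcond=None)
        print("interaction (PPI) beta:", betas[3])  # ~0.8: coupling is task-dependent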

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. No significant difference in performance was found between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
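
    The reported statistic can be checked directly: under an F distribution with 1 and 4 degrees of freedom, the survival function at F = 2.565 reproduces the quoted p value.

        # Verify the reported test: p-value for F(1,4) = 2.565.
        from scipy.stats import f

        p = f.sf(2.565, dfn=1, dfd=4)  # survival function = 1 - CDF
        print(round(p, 3))             # 0.185, matching the abstract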

    Amplitude and direction errors in kinesthetic pointing

    We investigated the accuracy with which, in the absence of vision, one can return to a 2D target location that had previously been identified by a guided movement. A robotic arm guided the participant's hand to a target (locating motion) and away from it (homing motion). Then, the participant pointed freely toward the remembered target position. Two experiments manipulated separately the kinematics of the locating and homing motions. Some robot motions followed a straight path with the bell-shaped velocity profile that is typical of natural movements. Other motions followed curved paths, or had strong acceleration and deceleration peaks. Current motor theories of perception suggest that pointing should be more accurate when the homing and locating motions mimic natural movements. This expectation was not borne out by the results: amplitude and direction errors were almost independent of the kinematics of the locating and homing phases. In both experiments, participants tended to overshoot the target positions along the lateral directions. In addition, pointing movements towards oblique targets were attracted by the closest diagonal (oblique effect). This error pattern was robust not only with respect to the manner in which participants located the target position (perceptual equivalence), but also with respect to the manner in which they executed the pointing movements (motor equivalence). Because of the similarity of these results with those of previous studies on visual pointing, it is argued that the observed error pattern is essentially determined by the idiosyncratic properties of the mechanisms whereby space is represented internally.
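
    The "bell-shaped velocity profile that is typical of natural movements" is commonly formalized as a minimum-jerk trajectory (Flash & Hogan, 1985); the abstract does not specify the exact profile used, so the sketch below is an assumption based on that standard model.

        # Minimum-jerk position and velocity for a point-to-point movement:
        # velocity is bell-shaped, zero at the endpoints, peaking mid-movement.
        import numpy as np

        def minimum_jerk(x0, xf, duration, t):
            """Position and velocity at time(s) t of a minimum-jerk movement."""
            tau = np.clip(t / duration, 0.0, 1.0)  # normalized time in [0, 1]
            pos = x0 + (xf - x0) * (10*tau**3 - 15*tau**4 + 6*tau**5)
            vel = (xf - x0) / duration * (30*tau**2 - 60*tau**3 + 30*tau**4)
            return pos, vel

        t = np.linspace(0.0, 1.0, 5)
        pos, vel = minimum_jerk(0.0, 0.2, 1.0, t)  # a 20 cm reach lasting 1 s
        print(vel)  # rises from 0 to a peak at t = 0.5 s, then falls back to 0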

    Comparison of the haptic and visual deviations in a parallelity task

    Deviations in both haptic and visual spatial experiments are thought to be caused by a biasing influence of an egocentric reference frame. The strength of this influence is strongly participant-dependent. Using a parallelity test, we studied whether this strength is modality-independent. In both the haptic and the visual condition, large, systematic and participant-dependent deviations were found. However, although the correlation between the haptic and visual deviations was significant, the variance explained by a common factor was only 20%. Therefore, the degree to which a participant is "egocentric" depends on the modality and possibly, even more generally, on the experimental condition.
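
    Since explained variance is the squared correlation coefficient, the 20% figure above corresponds to a haptic-visual correlation of roughly r = 0.45:

        # Shared variance is r**2, so r**2 = 0.20 implies r ~ 0.45.
        import math

        r = math.sqrt(0.20)
        print(round(r, 2))  # 0.45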

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception emerges not from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. This interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Therefore, spatial perception emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential for developing spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The main aim of this thesis is to increase knowledge about the impact of visual deprivation on spatial development and consolidation, and to evaluate the effects of novel technological systems designed to quantitatively improve perceptual and cognitive spatial abilities in case of visual impairment. Chapter 1 summarizes the main research findings on the role of vision and multisensory experience in spatial development. Overall, these findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space from a perspective different from one's own body. It might therefore be stated that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of body perspective. Chapter 2 presents original studies I carried out as a Ph.D. student to investigate the developmental mechanisms underpinning spatial development and to compare the spatial performance of individuals with affected and typical visual experience, i.e., visually impaired and sighted participants. Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in congenitally blind or blindfolded individuals respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter validate novel experimental paradigms to assess the role of haptic and auditory experience in spatial representations based on external (i.e., allocentric) frames of reference. Chapter 3 describes the validation of new technological systems based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in case of visual impairment. Overall, the technological validation of these new devices provides the opportunity to develop an interactive platform to rehabilitate spatial impairments following visual deprivation.
    Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing on the consequences of visual impairment for the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential of the novel experimental tools to assess spatial competencies in response to unisensory and multisensory events and to train the residual sensory modalities within a multisensory rehabilitation framework.
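
    The egocentric/allocentric distinction that runs through the thesis can be illustrated with a minimal coordinate transform: a target sensed in body-centered (egocentric) coordinates is mapped to world-centered (allocentric) coordinates given the observer's position and heading. This is purely illustrative, not a method from the thesis.

        # Convert a 2D egocentric (body-centered) target position into
        # allocentric (world-centered) coordinates: rotate by the observer's
        # heading, then translate by the observer's position.
        import numpy as np

        def egocentric_to_allocentric(target_ego, observer_pos, heading_rad):
            c, s = np.cos(heading_rad), np.sin(heading_rad)
            rotation = np.array([[c, -s], [s, c]])
            return np.asarray(observer_pos) + rotation @ np.asarray(target_ego)

        # A target 1 m straight ahead of an observer standing at (2, 3) and
        # heading 0 rad has allocentric position (3, 3).
        print(egocentric_to_allocentric([1.0, 0.0], [2.0, 3.0], 0.0))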

    Object exploration using vision and active touch


    The contributions of vision and haptics to reaching and grasping

    This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, in normal and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to using the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shapes hand preference.
