
    Natural and Artificial Object Recognition: The Superiority of Natural Shape Features

    The purpose of this study was to evaluate the ability of 44 young adults (mean age = 21.7 years) to compare natural and artificial three-dimensional (3-D) objects using their senses of vision and touch. Previous research has indicated that the information content provided by a stimulus set can have a significant effect on a participant’s ability to perform cross-modal object recognition tasks. A primary goal of the present study was to understand which shape features are transferable between the visual and haptic modalities. Participants haptically manipulated objects from one of two stimulus sets: bell peppers (Capsicum annuum) and sinusoidally-modulated spheres (SIMS). They then indicated which of 12 simultaneously visible objects possessed the same shape. The participants’ shape-matching performance was significantly higher in the bell pepper condition than in the SIMS condition (t(42) = 11.8, p < 0.000001). These results demonstrate that while young adults can reliably match the solid shape of objects across the sensory modalities of vision and touch, the obtained performance depends critically upon the mathematical characteristics of the solid shapes that are utilized.
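
The reported statistic is consistent with an independent-samples comparison: with 44 participants split evenly into two groups of 22, a pooled-variance t-test has n1 + n2 − 2 = 42 degrees of freedom, matching the reported t(42). A minimal sketch of that arithmetic with hypothetical proportion-correct scores (the study's actual data are not reproduced here):

```python
import random
import statistics as st

def independent_t(a, b):
    """Student's two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    t = (st.mean(a) - st.mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2  # t statistic and degrees of freedom

# Hypothetical scores for two groups of 22 participants each
random.seed(1)
peppers = [random.gauss(0.85, 0.06) for _ in range(22)]  # natural shapes
sims = [random.gauss(0.45, 0.08) for _ in range(22)]     # artificial shapes
t, df = independent_t(peppers, sims)
print(f"t({df}) = {t:.1f}")  # df comes out to 42, as in the abstract
```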

    Using Multivariate Pattern Analysis to Investigate the Neural Representation of Concepts With Visual and Haptic Features

    A fundamental debate in cognitive neuroscience concerns how conceptual knowledge is represented in the brain. Over the past decade, cognitive theorists have adopted explanations that suggest cognition is rooted in perception and action. This is called the embodiment hypothesis. Theories of conceptual representation differ in the degree to which representations are embodied, from those which suggest conceptual representation requires no involvement of sensory and motor systems to those which suggest it is entirely dependent upon them. This work investigated how the brain represents concepts that are defined by their visual and haptic features using novel multivariate approaches to the analysis of functional magnetic resonance imaging (fMRI) data. A behavioral study replicated a perceptual phenomenon, known as the tactile disadvantage, demonstrating that verifying the properties of concepts with haptic features takes significantly longer than verifying the properties of concepts with visual features. This study suggested that processing the perceptual properties of concepts likely recruits the same processes involved in perception. A neuroimaging study using the same paradigm showed that processing concepts with visual and haptic features elicits activity in bimodal object-selective regions, such as the fusiform gyrus (FG) and the lateral occipitotemporal cortex (LOC). Multivariate pattern analysis (MVPA) was successful at identifying whether a concept had perceptual or abstract features from patterns of brain activity located in functionally-defined object-selective and general perceptual regions, in addition to the whole brain. The conceptual representation was also consistent across participants. Finally, the functional networks for verifying the properties of concepts with visual and haptic features were highly overlapping but showed differing patterns of connectivity with the occipitotemporal cortex across people.
Several conclusions can be drawn from this work, which provide insight into the nature of the neural representation of concepts with perceptual features. The neural representation of concepts with visual and haptic features involves brain regions which underlie general visual and haptic perception as well as visual and haptic perception of objects. These brain regions interact differently based on the type of perceptual feature a concept possesses. Additionally, the neural representation of concepts with visual and haptic features is distributed across the whole brain and is consistent across people. The results of this work provide partial support for weak and strong embodiment theories, but further studies are necessary to determine whether sensory systems are required for conceptual representation.

    Tactual perception: a review of experimental variables and procedures

    This paper reviews the literature on tactual perception. Throughout this review we highlight some of the most relevant variables in the touch literature: interaction between touch and other senses; type of stimuli, from abstract stimuli such as vibrations, to two- and three-dimensional stimuli, also considering concrete stimuli such as the relation between familiar and unfamiliar stimuli or the haptic perception of faces; type of participants, separating studies with blind participants, studies with children and adults, and an analysis of sex differences in performance; and finally, type of tactile exploration, considering conditions of active and passive touch, the relevance of movement in touch, and the relation between exploration and time. This review intends to present an organised overview of the main variables in touch experiments, attending to the main findings described in the literature, to guide the design of future work on tactual perception and memory. This work was funded by the Portuguese “Foundation for Science and Technology” through PhD scholarship SFRH/BD/35918/2007.

    Visuohaptic convergence in a corticocerebellar network

    The processing of visual and haptic inputs, occurring either separately or jointly, is crucial for everyday-life object recognition, and has been a focus of recent neuroimaging research. Previously, visuohaptic convergence has been mostly investigated with matching-task paradigms. However, much less is known about visuohaptic convergence in the absence of additional task demands. We conducted two functional magnetic resonance imaging experiments in which subjects actively touched and/or viewed unfamiliar object stimuli without any additional task demands. In addition, we performed two control experiments with audiovisual and audiohaptic stimulation to examine the specificity of the observed visuohaptic convergence effects. We found robust visuohaptic convergence in bilateral lateral occipital cortex and anterior cerebellum. In contrast, neither the anterior cerebellum nor the lateral occipital cortex showed any involvement in audiovisual or audiohaptic convergence, indicating that multisensory convergence in these regions is specifically geared to visual and haptic inputs. These data suggest that in humans the lateral occipital cortex and the anterior cerebellum play an important role in visuohaptic processing even in the absence of additional task demands.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that objects may be stored in and retrieved from a pre-attentional store during this task.
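
The reported F(1,4) implies five participants: the two-condition within-subject factor contributes numerator df = 1, and the denominator df is n − 1 = 4. For such a two-level design, the F statistic equals the square of a paired-samples t. A sketch of that relationship with hypothetical per-participant accuracies (not the study's data):

```python
import statistics as st

def paired_t(x, y):
    """Paired-samples t statistic and its degrees of freedom."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return st.mean(d) / (st.stdev(d) / n ** 0.5), n - 1

# Hypothetical accuracies for 5 participants: standard vs. position-shifted task
standard = [0.71, 0.65, 0.80, 0.74, 0.69]
shifted = [0.66, 0.64, 0.72, 0.70, 0.68]
t, df = paired_t(standard, shifted)
F = t ** 2  # for a two-level within-subject factor, F(1, n-1) == t**2
print(f"F(1,{df}) = {F:.3f}")  # df comes out to 4, as in the abstract
```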

    The Neural Development of Visuohaptic Object Processing

    Thesis (Ph.D.) - Indiana University, Cognitive Science, 2015. Object recognition is ubiquitous and essential for interacting with, as well as learning about, the surrounding multisensory environment. The inputs from multiple sensory modalities converge quickly and efficiently to guide this interaction. Vision and haptics are two modalities in particular that offer redundant and complementary information regarding the geometrical (i.e., shape) properties of objects for recognition and perception. While the systems supporting visuohaptic object recognition in the brain, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS), are well-studied in adults, there is currently a paucity of research surrounding the neural development of visuohaptic processing in children. Little is known about how and when vision converges with haptics for object recognition. In this dissertation, I investigate the development of neural mechanisms involved in multisensory processing. Using functional magnetic resonance imaging (fMRI) and general psychophysiological interaction (gPPI) methods of functional connectivity analysis in children (4 to 5.5 years, 7 to 8.5 years) and adults, I examine the developmental changes of the brain regions underlying the convergence of visual and haptic object perception, the neural substrates supporting crossmodal processing, and the interactions and functional connections between visuohaptic systems and other neural regions. Results suggest that the complexity of sensory inputs impacts the development of neural substrates. The more complicated forms of multisensory and crossmodal object processing show protracted developmental trajectories as compared to the processing of simple, unimodal shapes. Additionally, the functional connections between visuohaptic areas weaken over time, which may facilitate the fine-tuning of other perceptual systems that occur later in development.
Overall, the findings indicate that multisensory object recognition cannot be described as a unitary process. Rather, it comprises several distinct sub-processes that follow different developmental timelines throughout childhood and into adulthood.

    Seeing shapes and hearing textures: Two neural categories of touch

    Touching for shape recognition has been shown to activate occipital areas in addition to somatosensory areas. In this study we asked whether this combination of somatosensory and other sensory processing areas also exists in other kinds of touch recognition. In particular, does touch for texture-roughness matching activate other sensory processing areas apart from somatosensory areas? We addressed this question with functional magnetic resonance imaging (fMRI), using wooden abstract stimulus objects whose shape or texture was to be identified. The participants judged whether pairs of objects had the same shape or the same texture. We found that the activated brain areas for texture and shape matching have similar underlying structures: a combination of the primary motor area and somatosensory areas. Areas associated with object-shape processing were activated between stimuli during shape matching and not texture-roughness matching, while auditory areas were activated during encoding of texture and not shape stimuli. Matching of textures also involves left BA47, an area associated with retrieval of relational information. We suggest that texture roughness is recognized in a framework of ordering. Left-lateralized activations favoring texture might reflect semantic processing associated with grading roughness quantitatively, as opposed to the more qualitative distinctions between shapes.

    The role of visual processing in haptic representation - Recognition tasks with novel 3D objects

    In perceiving and recognizing everyday objects we combine information from different senses (a multisensory process). In the past, however, authors concentrated almost exclusively on vision, even though we also touch objects to acquire a whole series of information, and the combination of these two sensory modalities provides more complete information about the explored object. I therefore first analyzed the available literature on visual and haptic object representation and recognition separately; then I concentrated on crossmodal visuo-haptic object representation. Finally, I presented and discussed the results of the three experiments I conducted during my Ph.D. studies. These results are in line with the existence of a supramodal object representation, unhooked from the encoding sensory modality, as previously proposed by several authors (Newell et al., 2005; Cattaneo et al., 2008; Lacey et al., 2009).