
    The Neural Development of Visuohaptic Object Processing

    Thesis (Ph.D.) - Indiana University, Cognitive Science, 2015
    Object recognition is ubiquitous and essential for interacting with, as well as learning about, the surrounding multisensory environment. The inputs from multiple sensory modalities converge quickly and efficiently to guide this interaction. Vision and haptics are two modalities in particular that offer redundant and complementary information regarding the geometrical (i.e., shape) properties of objects for recognition and perception. While the systems supporting visuohaptic object recognition in the brain, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS), are well-studied in adults, there is currently a paucity of research surrounding the neural development of visuohaptic processing in children. Little is known about how and when vision converges with haptics for object recognition. In this dissertation, I investigate the development of neural mechanisms involved in multisensory processing. Using functional magnetic resonance imaging (fMRI) and general psychophysiological interaction (gPPI) methods of functional connectivity analysis in children (4 to 5.5 years, 7 to 8.5 years) and adults, I examine the developmental changes of the brain regions underlying the convergence of visual and haptic object perception, the neural substrates supporting crossmodal processing, and the interactions and functional connections between visuohaptic systems and other neural regions. Results suggest that the complexity of sensory inputs impacts the development of neural substrates. The more complicated forms of multisensory and crossmodal object processing show protracted developmental trajectories as compared to the processing of simple, unimodal shapes. Additionally, the functional connections between visuohaptic areas weaken over time, which may facilitate the fine-tuning of other perceptual systems that occurs later in development.
Overall, the findings indicate that multisensory object recognition cannot be described as a unitary process. Rather, it comprises several distinct sub-processes that follow different developmental timelines throughout childhood and into adulthood.
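
The gPPI analysis mentioned in this abstract models condition-specific changes in coupling between a seed region and a target region by adding seed-by-condition interaction regressors to the fMRI design matrix. Below is a minimal sketch with synthetic data; all timecourses, block lengths, and weights are hypothetical, and real gPPI pipelines additionally deconvolve the seed signal and convolve regressors with a haemodynamic response function:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # number of fMRI volumes (synthetic)

# Physiological regressor: timecourse of a seed region (e.g. LOC)
seed = rng.standard_normal(n)

# Psychological regressors: boxcars for two task conditions plus rest
cycle = np.arange(n) % 60
visual = (cycle < 20).astype(float)                     # volumes 0-19 of each cycle
haptic = ((cycle >= 20) & (cycle < 40)).astype(float)   # volumes 20-39

# gPPI design matrix: intercept, seed timecourse, one boxcar and one
# seed-by-condition interaction regressor per condition
X = np.column_stack([
    np.ones(n), seed, visual, haptic,
    seed * visual, seed * haptic,
])

# Simulate a target region whose coupling with the seed is stronger
# during haptic blocks than visual blocks (hypothetical weights)
y = (0.2 * seed + 0.1 * seed * visual + 0.5 * seed * haptic
     + 0.05 * rng.standard_normal(n))

# Ordinary least squares fit of the gPPI model
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The interaction weights estimate condition-specific coupling; a
# connectivity difference appears as beta[5] (haptic) vs beta[4] (visual)
print(beta[4], beta[5])
```

The rest blocks are what keep the design identifiable: if the two conditions tiled the whole run, the intercept would equal the sum of the boxcars and the seed column would equal the sum of the interaction columns, leaving the interaction weights undetermined.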

    Bodily awareness and novel multisensory features

    According to the decomposition thesis, perceptual experiences resolve without remainder into their different modality-specific components. Contrary to this view, I argue that certain cases of multisensory integration give rise to experiences representing features of a novel type. Through the coordinated use of bodily awareness—understood here as encompassing both proprioception and kinaesthesis—and the exteroceptive sensory modalities, one becomes perceptually responsive to spatial features whose instances couldn’t be represented by any of the contributing modalities functioning in isolation. I develop an argument for this conclusion focusing on two cases: 3D shape perception in haptic touch and experiencing an object’s egocentric location in crossmodally accessible, environmental space.

    Tactual perception: a review of experimental variables and procedures

    This paper reviews literature on tactual perception. Throughout this review we will highlight some of the most relevant variables in touch literature: interaction between touch and other senses; type of stimuli, from abstract stimuli such as vibrations, to two- and three-dimensional stimuli, also considering concrete stimuli such as the relation between familiar and unfamiliar stimuli or the haptic perception of faces; type of participants, separating studies with blind participants, studies with children and adults, and an analysis of sex differences in performance; and finally, type of tactile exploration, considering conditions of active and passive touch, the relevance of movement in touch and the relation between exploration and time. This review intends to present an organised overview of the main variables in touch experiments, attending to the main findings described in literature, to guide the design of future works on tactual perception and memory.
    This work was funded by the Portuguese “Foundation for Science and Technology” through PhD scholarship SFRH/BD/35918/2007.

    Low-level Modality Specific and Higher-order Amodal Processing in the Haptic and Visual Domains

    The aim of the current study is to further investigate cross- and multi-modal object processing with the intent of increasing our understanding of the differential contributions of modal and amodal object processing in the visual and haptic domains. The project is an identification and information extraction study. The main factors are modality (vision or haptics), stimulus type (tools or animals) and level (naming and output). Each participant went through four different trial types: visual naming, visual size, haptic naming, and haptic size. Naming consisted of verbally naming the item; size (size comparison) consisted of verbally indicating whether the current item is larger or smaller than a reference object. Stimuli consisted of plastic animals and tools. All stimuli are readily recognizable and can easily be manipulated with one hand. The actual figurines and tools were used for haptic trials, and digital photographs were used for visual trials (appendix 1 and 2). The main aim was to investigate modal and amodal processing in visual and haptic domains. The results suggest a strong effect of modality type, with visual object recognition being faster than haptic object recognition, indicating a modality-specific (visual-haptic) effect. It was also observed that tools were processed faster than animals regardless of the modality type. An interaction between the factors was also reported, supporting the notion that once naming is accomplished, similar reaction times for subsequent size processing in the visual and haptic domains would indicate non-modality-specific, or amodal, processing. Thus, through using animal and tool figurines, we investigated modal and amodal processing in visual and haptic domains.

    Diagnostic Palpation in Osteopathic Medicine: A Putative Neurocognitive Model of Expertise

    This thesis examines the extent to which the development of expertise in diagnostic palpation in osteopathic medicine is associated with changes in cognitive processing. Chapter 2 and Chapter 3 review, respectively, the literature on the role of analytical and non-analytical processing in osteopathic and medical clinical decision making; and the relevant research on the use of vision and haptics and the development of expertise within the context of an osteopathic clinical examination. The two studies reported in Chapter 4 examined the mental representation of knowledge and the role of analogical reasoning in osteopathic clinical decision making. The results reported there demonstrate that the development of expertise in osteopathic medicine is associated with the processes of knowledge encapsulation and script formation. The four studies reported in Chapters 5 and 6 investigate the way in which expert osteopaths use their visual and haptic systems in the diagnosis of somatic dysfunction. The results suggest that ongoing clinical practice enables osteopaths to combine visual and haptic sensory signals in a more efficient manner. Such visuo-haptic sensory integration is likely to be facilitated by top-down processing associated with visual, tactile, and kinaesthetic mental imagery. Taken together, the results of the six studies reported in this thesis indicate that the development of expertise in diagnostic palpation in osteopathic medicine is associated with changes in cognitive processing. Whereas the experts’ diagnostic judgments are heavily influenced by top-down, non-analytical processing, students rely primarily on bottom-up sensory processing from vision and haptics. Ongoing training and clinical practice are likely to lead to changes in the clinician’s neurocognitive architecture. This thesis proposes an original model of expertise in diagnostic palpation which has implications for osteopathic education.
Students and clinicians should be encouraged to appraise the reliability of different sensory cues in the context of clinical examination, combine sensory data from different channels, and consider using both analytical and non-analytical reasoning in their decision making. Importantly, they should develop their skills of criticality and their ability to reflect on, and analyse, their practice experiences in and on action.

    Look but don't touch: Visual cues to surface structure drive somatosensory cortex.

    When planning interactions with nearby objects, our brain uses visual information to estimate shape, material composition, and surface structure before we come into contact with them. Here we analyse brain activations elicited by different types of visual appearance, measuring fMRI responses to objects that are glossy, matte, rough, or textured. In addition to activation in visual areas, we found that fMRI responses are evoked in the secondary somatosensory area (S2) when looking at glossy and rough surfaces. This activity could be reliably discriminated on the basis of tactile-related visual properties (gloss, rough, and matte), but importantly, other visual properties (i.e., coloured texture) did not substantially change fMRI activity. The activity could not be solely due to tactile imagination, as asking explicitly to imagine such surface properties did not lead to the same results. These findings suggest that visual cues to an object's surface properties evoke activity in neural circuits associated with tactile stimulation. This activation may reflect the a priori probability of the physics of the interaction (i.e., the expectation of upcoming friction) that can be used to plan finger placement and grasp force.
    This project was supported by the Wellcome Trust (095183/Z/10/Z). This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.neuroimage.2015.12.05

    Unimodal and crossmodal processing of visual and kinesthetic stimuli in working memory

    The processing of (object) information in working memory has been intensively investigated in the visual modality (e.g. D’Esposito, 2007; Ranganath, 2006). In comparison, research on kinesthetic/haptic or crossmodal processing in working memory is still sparse. During recognition and comparison of object information across modalities, representations built from one sensory modality have to be matched with representations obtained from other senses. The present thesis addressed the questions of how object information is represented in unimodal and crossmodal working memory, which processes enable unimodal and crossmodal comparisons, and which neuronal correlates are associated with these processes. In particular, unimodal and crossmodal processing of visually and kinesthetically perceived object features were systematically investigated in the distinct working memory phases of encoding, maintenance, and recognition. Here, the kinesthetic modality refers to the sensory perception of movement direction and spatial position, e.g. of one’s own hand, and is part of the haptic sense. Overall, the results of the present thesis suggest that modality-specific representations and modality-specific processes play a role during unimodal and crossmodal processing of object features in working memory.

    The role of visual experience in the emergence of cross-modal correspondences

    Cross-modal correspondences describe the widespread tendency for attributes in one sensory modality to be consistently matched to those in another modality. For example, high-pitched sounds tend to be matched to spiky shapes, small sizes, and high elevations. However, the extent to which these correspondences depend on sensory experience (e.g. regularities in the perceived environment) remains controversial. Two recent studies involving blind participants have argued that visual experience is necessary for the emergence of correspondences, wherein such correspondences were present (although attenuated) in late blind individuals but absent in the early blind. Here, using a similar approach and a large sample of early and late blind participants (N=59) and sighted controls (N=63), we challenge this view. Examining five auditory-tactile correspondences, we show that only one requires visual experience to emerge (pitch-shape), two are independent of visual experience (pitch-size, pitch-weight), and two appear to emerge in response to blindness (pitch-texture, pitch-softness). These effects tended to be more pronounced in the early blind than late blind group, and the duration of vision loss among the late blind did not mediate the strength of these correspondences. Our results suggest that altered sensory input can affect cross-modal correspondences in a more complex manner than previously thought and cannot solely be explained by a reduction in visually-mediated environmental correlations. We propose roles of visual calibration, neuroplasticity and structurally-innate associations in accounting for our findings.

    Multisensory Approaches to Restore Visual Functions
