
    Functional heterogeneity in the left lateral posterior parietal cortex during visual and haptic crossmodal dot-surface matching

    Background: Vision and touch are thought to contribute information to object perception in an independent but complementary manner. The left lateral posterior parietal cortex (LPPC) has long been associated with multisensory information processing and plays an important role in visual and haptic crossmodal information retrieval. However, it remains unclear how LPPC subregions are involved in visuo-haptic crossmodal retrieval processing.
    Methods: In the present study, we used an fMRI experiment with a crossmodal delayed match-to-sample paradigm to reveal the functional role of LPPC subregions in unimodal and crossmodal dot-surface retrieval.
    Results: The visual-to-haptic condition enhanced activity in the left inferior parietal lobule relative to the haptic unimodal condition, whereas the inverse condition enhanced activity in the left superior parietal lobule. By contrast, activation of the left intraparietal sulcus did not differ significantly between the crossmodal and unimodal conditions. Seed-based resting-state connectivity analysis revealed that these three left LPPC subregions engage distinct networks, confirming their different functions in crossmodal retrieval processing.
    Conclusion: Taken together, the findings suggest that the functional heterogeneity of the left LPPC during visuo-haptic crossmodal dot-surface retrieval reflects that the left LPPC does not simply contribute to the retrieval of past information; rather, each subregion has a specific functional role in resolving different task requirements.

    The Neural Development of Visuohaptic Object Processing

    Thesis (Ph.D.) - Indiana University, Cognitive Science, 2015. Object recognition is ubiquitous and essential for interacting with, as well as learning about, the surrounding multisensory environment. The inputs from multiple sensory modalities converge quickly and efficiently to guide this interaction. Vision and haptics are two modalities in particular that offer redundant and complementary information regarding the geometrical (i.e., shape) properties of objects for recognition and perception. While the systems supporting visuohaptic object recognition in the brain, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS), are well studied in adults, there is currently a paucity of research surrounding the neural development of visuohaptic processing in children. Little is known about how and when vision converges with haptics for object recognition. In this dissertation, I investigate the development of neural mechanisms involved in multisensory processing. Using functional magnetic resonance imaging (fMRI) and generalized psychophysiological interaction (gPPI) methods of functional connectivity analysis in children (4 to 5.5 years, 7 to 8.5 years) and adults, I examine the developmental changes of the brain regions underlying the convergence of visual and haptic object perception, the neural substrates supporting crossmodal processing, and the interactions and functional connections between visuohaptic systems and other neural regions. Results suggest that the complexity of sensory inputs impacts the development of neural substrates. The more complicated forms of multisensory and crossmodal object processing show protracted developmental trajectories compared to the processing of simple, unimodal shapes. Additionally, the functional connections between visuohaptic areas weaken over time, which may facilitate the fine-tuning of other perceptual systems later in development.
Overall, the findings indicate that multisensory object recognition cannot be described as a unitary process. Rather, it comprises several distinct sub-processes that follow different developmental timelines throughout childhood and into adulthood.

    Diagnostic Palpation in Osteopathic Medicine: A Putative Neurocognitive Model of Expertise

    This thesis examines the extent to which the development of expertise in diagnostic palpation in osteopathic medicine is associated with changes in cognitive processing. Chapters 2 and 3 review, respectively, the literature on the role of analytical and non-analytical processing in osteopathic and medical clinical decision making, and the relevant research on the use of vision and haptics and the development of expertise within the context of an osteopathic clinical examination. The two studies reported in Chapter 4 examined the mental representation of knowledge and the role of analogical reasoning in osteopathic clinical decision making. The results reported there demonstrate that the development of expertise in osteopathic medicine is associated with the processes of knowledge encapsulation and script formation. The four studies reported in Chapters 5 and 6 investigate the way in which expert osteopaths use their visual and haptic systems in the diagnosis of somatic dysfunction. The results suggest that ongoing clinical practice enables osteopaths to combine visual and haptic sensory signals in a more efficient manner. Such visuo-haptic sensory integration is likely to be facilitated by top-down processing associated with visual, tactile, and kinaesthetic mental imagery. Taken together, the results of the six studies reported in this thesis indicate that the development of expertise in diagnostic palpation in osteopathic medicine is associated with changes in cognitive processing. Whereas experts' diagnostic judgments are heavily influenced by top-down, non-analytical processing, students rely primarily on bottom-up sensory processing from vision and haptics. Ongoing training and clinical practice are likely to lead to changes in the clinician's neurocognitive architecture. This thesis proposes an original model of expertise in diagnostic palpation which has implications for osteopathic education.
Students and clinicians should be encouraged to appraise the reliability of different sensory cues in the context of clinical examination, combine sensory data from different channels, and consider using both analytical and non-analytical reasoning in their decision making. Importantly, they should develop their skills of criticality and their ability to reflect on, and analyse, their practice experiences in and on action.

    The role of visual processing in haptic representation - Recognition tasks with novel 3D objects

    In perceiving and recognizing everyday objects we combine different senses (a multisensory process). In the past, however, authors concentrated almost exclusively on vision, even though we can also touch objects to acquire a whole series of information, and the combination of these two sensory modalities provides more complete information about the explored object. I therefore first analyzed the available literature on visual and haptic object representation and recognition separately; then I concentrated on crossmodal visuo-haptic object representation. Finally, I presented and discussed the results obtained in the three experiments I conducted during my Ph.D. studies. These appear to be in line, as previously proposed by several authors (Newell et al., 2005; Cattaneo et al., 2008; Lacey et al., 2009), with the existence of a supramodal object representation that is independent of the encoding sensory modality.

    Unimodal and crossmodal processing of visual and kinesthetic stimuli in working memory

    The processing of (object) information in working memory has been intensively investigated in the visual modality (e.g. D’Esposito, 2007; Ranganath, 2006). In comparison, research on kinesthetic/haptic or crossmodal processing in working memory is still sparse. During recognition and comparison of object information across modalities, representations built from one sensory modality have to be matched with representations obtained from other senses. The present thesis addressed how object information is represented in unimodal and crossmodal working memory, which processes enable unimodal and crossmodal comparisons, and which neuronal correlates are associated with these processes. In particular, unimodal and crossmodal processing of visually and kinesthetically perceived object features was systematically investigated in the distinct working memory phases of encoding, maintenance, and recognition. Here, the kinesthetic modality refers to the sensory perception of movement direction and spatial position, e.g. of one’s own hand, and is part of the haptic sense. Overall, the results of the present thesis suggest that modality-specific representations and modality-specific processes play a role during unimodal and crossmodal processing of object features in working memory.

    Getting the point: tracing worked examples enhances learning

    Embodied cognition perspectives suggest that pointing and tracing with the index finger may support learning, with basic laboratory research indicating such gestures have considerable effects on information processing in working memory. The present thesis examined whether tracing worked examples could enhance learning through decreased intrinsic cognitive load. In Experiment 1, 56 Year 6 students (mean age = 11.20, SD = .44) were presented with either tracing or no-tracing instructions on parallel lines relationships. The tracing group solved more acquisition phase practice questions and made fewer test phase errors, but otherwise test results were limited by ceiling effects. In Experiment 2, 42 Year 5 students (mean age = 10.50, SD = .51) were recruited to better align the materials with students’ knowledge levels. The tracing group outperformed the non-tracing group at the test and reported lower levels of test difficulty, interpreted as lower levels of intrinsic cognitive load. Experiment 3 recruited 52 Year 6 and Year 7 students (mean age = 12.04, SD = .59) presented with materials on the angle relationships of a triangle; the tracing effect was replicated on test scores and errors, but not test difficulty. Experiment 4 used the parallel lines materials to test hypothesized gradients across experimental conditions with 72 Year 5 students (mean age = 9.94, SD = .33), predicting that the tracing-on-the-paper group would outperform the tracing-above-the-paper group, who in turn would outperform the non-tracing group. The hypothesized gradient was established across practice questions correctly answered, practice question errors, test questions correctly answered, test question time to solution, and test difficulty self-reports.
The results establish that incorporating haptic input into worked-example-based instructional design enhances the worked example effect, and that tracing worked examples is a natural, simple, yet effective way to enhance novices’ mathematics learning.
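The hypothesized ordering across the three conditions of Experiment 4 is the kind of prediction usually tested with a planned linear contrast on group means. The sketch below illustrates that analysis with invented scores (not data from the thesis); the group labels, weights, and numbers are assumptions for illustration only.

```python
from statistics import mean, variance
from math import sqrt

# Invented scores for three ordered conditions (tracing on paper >
# tracing above paper > no tracing); hypothetical data, not the thesis's.
groups = {
    "trace_on_paper":    [9, 8, 9, 8, 9],
    "trace_above_paper": [7, 6, 7, 6, 7],
    "no_tracing":        [5, 4, 5, 4, 5],
}
# Contrast weights encoding the predicted linear gradient.
weights = [1, 0, -1]

means = [mean(g) for g in groups.values()]
ns = [len(g) for g in groups.values()]
# Pooled within-group variance (sample variances weighted by df).
pooled = sum((n - 1) * variance(g)
             for n, g in zip(ns, groups.values())) / (sum(ns) - len(ns))

L = sum(w * m for w, m in zip(weights, means))   # contrast estimate
se = sqrt(pooled * sum(w * w / n for w, n in zip(weights, ns)))
t = L / se
df = sum(ns) - len(ns)
print(f"contrast = {L:.2f}, t({df}) = {t:.2f}")
```

A large positive t supports the predicted gradient; with real data one would compare it against the t distribution with the pooled degrees of freedom.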

    Using Multivariate Pattern Analysis to Investigate the Neural Representation of Concepts With Visual and Haptic Features

    A fundamental debate in cognitive neuroscience concerns how conceptual knowledge is represented in the brain. Over the past decade, cognitive theorists have adopted explanations that suggest cognition is rooted in perception and action. This is called the embodiment hypothesis. Theories of conceptual representation differ in the degree to which representations are embodied, from those which suggest conceptual representation requires no involvement of sensory and motor systems to those which suggest it is entirely dependent upon them. This work investigated how the brain represents concepts that are defined by their visual and haptic features, using novel multivariate approaches to the analysis of functional magnetic resonance imaging (fMRI) data. A behavioral study replicated a perceptual phenomenon known as the tactile disadvantage, demonstrating that verifying the properties of concepts with haptic features takes significantly longer than verifying the properties of concepts with visual features. This study suggested that processing the perceptual properties of concepts likely recruits the same processes involved in perception. A neuroimaging study using the same paradigm showed that processing concepts with visual and haptic features elicits activity in bimodal object-selective regions, such as the fusiform gyrus (FG) and the lateral occipitotemporal cortex (LOC). Multivariate pattern analysis (MVPA) was successful at identifying whether a concept had perceptual or abstract features from patterns of brain activity located in functionally defined object-selective and general perceptual regions, in addition to the whole brain. The conceptual representation was also consistent across participants. Finally, the functional networks for verifying the properties of concepts with visual and haptic features were highly overlapping but showed differing patterns of connectivity with the occipitotemporal cortex across people.
Several conclusions can be drawn from this work, which provide insight into the nature of the neural representation of concepts with perceptual features. The neural representation of concepts with visual and haptic features involves brain regions which underlie general visual and haptic perception as well as visual and haptic perception of objects. These brain regions interact differently based on the type of perceptual feature a concept possesses. Additionally, the neural representation of concepts with visual and haptic features is distributed across the whole brain and is consistent across people. The results of this work provide partial support for weak and strong embodiment theories, but further studies are necessary to determine whether sensory systems are required for conceptual representation.
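The core logic of MVPA as used above, decoding a stimulus category from distributed activity patterns rather than from single-region amplitudes, can be sketched on synthetic data. This is a minimal illustration under stated assumptions, not the thesis's actual pipeline: a leave-one-out nearest-centroid classifier on simulated voxel patterns, with all variable names and parameters invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 20, 50  # hypothetical design: 20 trials per condition

# Two conditions, each with its own underlying multivoxel pattern.
pattern_a = rng.normal(0.0, 1.0, n_voxels)
pattern_b = rng.normal(0.5, 1.0, n_voxels)
X = np.vstack([pattern_a + 0.5 * rng.normal(size=(n_trials, n_voxels)),
               pattern_b + 0.5 * rng.normal(size=(n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

def loo_nearest_centroid(X, y):
    """Leave-one-out decoding: classify each trial by the nearer
    class centroid computed from the remaining trials."""
    correct = 0
    idx = np.arange(len(y))
    for i in idx:
        mask = idx != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += int(pred == y[i])
    return correct / len(y)

acc = loo_nearest_centroid(X, y)
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```

Above-chance cross-validated accuracy is the evidence that the pattern, not the overall activity level, carries the category information; real analyses typically use regularized linear classifiers and permutation tests for significance.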

    How do humans mediate with the external physical world? From perception to control of articulated objects

    Many actions in our daily life involve the operation of articulated tools. Despite the ubiquity of articulated objects in daily life, the human ability to perceive their properties and to control them has scarcely been studied. Articulated objects are composed of links connected by revolute or prismatic joints; moving one part of the linkage results in the movement of the others. Reaching a position with the tip of a tool requires adapting the motor commands to the changed position of the end-effector, unlike reaching the same position with the hand. The dynamic properties of articulated bodies in movement are complex and configuration-dependent. For instance, the apparent mass, a quantity that measures the dynamic interaction of the articulated object, varies as a function of changes in configuration. An actuated articulated system can generate a static but position-dependent force field with constant torques about its joints. There is evidence that internal models are involved in the perception and control of tools. In the present work, we aim to investigate several aspects of the perception and control of articulated objects and address two questions. First, how do people perceive kinematic and dynamic properties during haptic interaction with articulated objects? Second, what effect does seeing the tool have on the planning and execution of reaching movements with a complex tool? Does the visual representation of the mechanism's structure help in the reaching movement, and how? To address these questions, 3D-printed physical articulated objects and robotic systems were designed and developed for the psychophysical studies. The present work comprises three studies on different aspects of the perception and control of articulated objects.
We first conducted haptic size discrimination tasks using three different types of objects, namely wooden boxes, an actuated apparatus with two movable flat surfaces, and large pliers, in unimanual, bimanual grounded, and bimanual free conditions. We found that bimanual integration occurred in particular in the free manipulation of objects. The second study concerned visuo-motor reaching with complex tools. We found that seeing the mechanism of the tool, even briefly at the beginning of the trial, improved reaching performance. The last study concerned force perception: the evidence showed that people could use the force field at the end-effector to infer the torques about the joints generated by the articulated system.
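The claim that constant joint torques produce a static but position-dependent endpoint force field follows from linkage statics: at equilibrium, tau = J(q)^T f, so with fixed joint torques tau the end-effector force f = (J^T)^-1 tau changes as the configuration-dependent Jacobian J changes. A minimal sketch for a planar two-link arm; the link lengths, torques, and configurations are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def jacobian(q1, q2, l1=0.3, l2=0.3):
    """Geometric Jacobian of a planar 2R linkage (endpoint velocity
    as a function of joint velocities)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def endpoint_force(q1, q2, tau):
    """Static equilibrium: tau = J^T f, so f = (J^T)^-1 tau
    (valid away from singular configurations, i.e. sin(q2) != 0)."""
    return np.linalg.solve(jacobian(q1, q2).T, tau)

tau = np.array([1.0, 0.5])          # constant joint torques (N*m)
f1 = endpoint_force(0.2, 1.0, tau)  # force at one configuration
f2 = endpoint_force(0.8, 0.4, tau)  # same torques, different posture
print("endpoint forces:", f1, f2)
```

The same torque vector yields different endpoint forces at the two postures, which is the position-dependent force field the abstract refers to; a perceiver who knows the mechanism could in principle invert this map to recover the torques from the felt force.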