
    Multimodal Representation of Space in the Posterior Parietal Cortex and its use in Planning Movements

    Recent experiments are reviewed that indicate that sensory signals from many modalities, as well as efference copy signals from motor structures, converge in the posterior parietal cortex in order to code the spatial locations of goals for movement. These signals are combined using a specific gain mechanism that enables the different coordinate frames of the various input signals to be combined into common, distributed spatial representations. These distributed representations can be used to convert the sensory locations of stimuli into the appropriate motor coordinates required for making directed movements. Within these spatial representations of the posterior parietal cortex are neural activities related to higher cognitive functions, including attention. We review recent studies showing that the encoding of intentions to make movements is also among the cognitive functions of this area.
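    A minimal toy sketch of the gain-field idea described above (the function name and all parameter values are illustrative assumptions, not taken from the paper): a neuron's retinotopic tuning is multiplicatively scaled by eye position, so a population of such neurons can jointly carry both retinal and head-centred target locations.

    import numpy as np

    def gain_field_response(target_retinal, eye_position,
                            preferred_retinal=10.0, tuning_width=15.0,
                            gain_slope=0.02, baseline_gain=1.0):
        """Firing rate = retinotopic Gaussian tuning x linear eye-position gain (angles in deg)."""
        retinal_tuning = np.exp(-0.5 * ((target_retinal - preferred_retinal) / tuning_width) ** 2)
        eye_gain = baseline_gain + gain_slope * eye_position
        return retinal_tuning * eye_gain

    # Same retinal stimulus, two eye positions: the response amplitude changes even though
    # the retinal input is identical, which is what lets a downstream readout recover the
    # head-centred location (retinal position + eye position) from the population activity.
    for eye_deg in (-20.0, 20.0):
        print(eye_deg, round(float(gain_field_response(10.0, eye_deg)), 3))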

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception does not emerge from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. The interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Therefore, spatial perception emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential to develop spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The thesis's main aim is to increase knowledge about the impact of visual deprivation on spatial development and consolidation, and to evaluate the effects of novel technological systems designed to quantitatively improve perceptual and cognitive spatial abilities in case of visual impairment. Chapter 1 summarizes the main research findings related to the role of vision and multisensory experience in spatial development. Overall, such findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space according to a perspective different from our body's. Therefore, it might be stated that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of body perspective. Chapter 2 presents original studies carried out by me as a Ph.D. student to investigate the developmental mechanisms underpinning spatial development and to compare the spatial performance of individuals with affected and typical visual experience, respectively visually impaired and sighted. Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in the case of congenitally blind or blindfolded individuals, respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter will validate novel experimental paradigms to assess the role of haptic and auditory experience on spatial representation based on external (i.e., allocentric) frames of reference. Chapter 3 describes the validation process of new technological systems based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in case of visual impairment. Overall, the technological validation of new devices will provide the opportunity to develop an interactive platform to rehabilitate spatial impairments following visual deprivation.
    Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing on the consequences of visual impairment on the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential of the novel experimental tools, validating their use to assess spatial competencies in response to unisensory and multisensory events and to train the residual sensory modalities within a multisensory rehabilitation framework.

    Intentional maps in posterior parietal cortex

    The posterior parietal cortex (PPC), historically believed to be a sensory structure, is now viewed as an area important for sensory-motor integration. Among its functions is the forming of intentions, that is, high-level cognitive plans for movement. There is a map of intentions within the PPC, with different subregions dedicated to the planning of eye movements, reaching movements, and grasping movements. These areas appear to be specialized for the multisensory integration and coordinate transformations required to convert sensory input to motor output. In several subregions of the PPC, these operations are facilitated by the use of a common distributed space representation that is independent of both sensory input and motor output. Attention and learning effects are also evident in the PPC. However, these effects may be general to the cortex and operate in the PPC in the context of sensory-motor transformations.

    Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially-Aware Language Acquisition

    This paper presents a self-supervised method for visual detection of the active speaker in a multi-person spoken interaction scenario. Active speaker detection is a fundamental prerequisite for any artificial cognitive system attempting to acquire language in social settings. The proposed method is intended to complement the acoustic detection of the active speaker, thus improving the system's robustness in noisy conditions. The method can detect an arbitrary number of possibly overlapping active speakers based exclusively on visual information about their faces. Furthermore, the method does not rely on external annotations, thus remaining compatible with a cognitive-developmental scenario; instead, it uses information from the auditory modality to support learning in the visual domain. This paper reports an extensive evaluation of the proposed method using a large multi-person face-to-face interaction dataset. The results show good performance in a speaker-dependent setting, whereas in a speaker-independent setting the proposed method yields significantly lower performance. We believe that the proposed method represents an essential component of any artificial cognitive system or robotic platform engaging in social interactions. (Comment: 10 pages, IEEE Transactions on Cognitive and Developmental Systems)
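    As a rough illustration of the kind of self-supervised setup described above (a sketch under assumed inputs and thresholds, not the paper's implementation), the auditory modality can supply noisy pseudo-labels that are used to train a purely visual active-speaker classifier, which then operates from face information alone:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Assumed per-frame data for one tracked face (random stand-ins for real features):
    #   audio_energy[t]  -- speech energy attributed to this person, e.g., by a microphone array
    #   face_features[t] -- visual features of the face/mouth region
    T = 2000
    speaking = rng.random(T) < 0.4                                     # hidden ground truth, never used for training
    audio_energy = speaking * rng.normal(1.0, 0.3, T) + rng.normal(0.1, 0.1, T)
    face_features = np.column_stack([
        speaking * rng.normal(1.0, 0.5, T) + rng.normal(0.0, 0.5, T),  # mouth-motion proxy
        rng.normal(0.0, 1.0, T),                                       # uninformative channel
    ])

    # Self-supervision: threshold the audio to obtain noisy pseudo-labels,
    # then fit the visual classifier on those pseudo-labels only.
    pseudo_labels = (audio_energy > 0.5).astype(int)
    clf = LogisticRegression().fit(face_features, pseudo_labels)

    # At test time the detector is vision-only, e.g., under noisy acoustic conditions.
    visual_pred = clf.predict(face_features)
    print("agreement with hidden ground truth:", (visual_pred == speaking).mean())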

    Haptic Touch and Hand Ability


    Multisensory self-motion processing in humans

    Humans obtain and process sensory information from various modalities to ensure successful navigation through the environment. While visual, vestibular, and auditory self-motion perception have been extensively investigated, studies on tactile self-motion perception are comparably rare. In my thesis, I investigated tactile self-motion perception and its interaction with the visual modality. In the first of two behavioral studies, I analyzed the influence of a tactile heading stimulus, introduced as a distractor, on visual heading perception. In the second behavioral study, I analyzed visuo-tactile perception of self-motion direction (heading). In both studies, visual self-motion was simulated as forward motion over a 2D ground plane, and tactile self-motion was simulated by airflow towards the subjects' forehead, mimicking the experience of travel wind, e.g., during a bike ride. In the analysis of the subjects' perceptual reports, I focused on possible visuo-tactile interactions and applied different models to describe the integration of visuo-tactile heading stimuli. Lastly, in a functional magnetic resonance imaging (fMRI) study, I investigated neural correlates of visual and tactile perception of traveled distance (path integration) and its modulation by prediction and cognitive task demands. In my first behavioral study, subjects indicated perceived heading from unimodal visual (optic flow), unimodal tactile (tactile flow), or a combination of stimuli from both modalities simulating either congruent or incongruent heading (bimodal condition). In the bimodal condition, the subjects' task was to indicate visually perceived heading; hence, the tactile stimuli were behaviorally irrelevant. In bimodal trials, I found a significant interaction of stimuli from both modalities: visually perceived heading was biased towards the tactile heading direction for offsets of up to 10° between the two heading directions. The relative weighting of stimuli from both modalities in the visuo-tactile interaction was examined in my second behavioral study. Subjects indicated perceived heading in unimodal visual, unimodal tactile, and bimodal trials; here, in bimodal trials, stimuli from both modalities were presented as behaviorally relevant. By varying eye position relative to head position during stimulus presentation, possible influences of the different reference frames of the visual and tactile modalities were investigated: in different sensory modalities, incoming information is encoded relative to the reference system of the receiving sensory organ (e.g., relative to the retina in vision or relative to the skin in somatosensation). In unimodal tactile trials, heading perception was shifted towards eye position. In bimodal trials, varying head and eye position had no significant effect on perceived heading: subjects indicated perceived heading based on both the visual and the tactile stimulus, independently of the behavioral relevance of the tactile stimulus. In sum, the results of both studies suggest that the tactile modality plays a greater role in self-motion perception than previously thought. Besides the perception of travel direction (heading), information about traveled speed and duration is integrated to achieve a measure of the distance traveled (path integration). One previous behavioral study has shown that tactile flow can be used for the reproduction of travel distance (Churan et al., 2017). However, studies on neural correlates of tactile distance encoding in humans are lacking entirely.
    In my third study, subjects solved two path-integration tasks from unimodal visual and unimodal tactile self-motion stimuli while brain activity was measured by means of fMRI. The two tasks differed in their cognitive task demands. In the first task, subjects replicated (Active trial) a previously observed traveled distance (Passive trial) (Reproduction task). In the second task, subjects traveled a self-chosen distance (Active trial), which was then recorded and played back to them (Passive trial) (Self task). Predictive-coding theory postulates an internal model that creates predictions about sensory outcomes; mismatches between predictions and sensory input enable the system to sharpen future predictions (Teufel et al., 2018). Recent studies suggested a synergistic interaction between prediction and cognitive demands, thereby reversing the attenuating effect of prediction. In my study, this hypothesis was tested by manipulating cognitive demands between the two tasks. For both tasks, Active trials compared to Passive trials showed BOLD enhancement in early sensory cortices and suppression in higher-order areas (e.g., the intraparietal lobule, IPL). For both modalities, enhancement of early sensory areas might facilitate the task-solving processes at hand, thereby reversing the hypothesized attenuating effect of prediction. Suppression of the IPL indicates this area as an amodal comparator of predictions and incoming self-motion signals. In conclusion, I was able to show that tactile self-motion information, i.e., tactile flow, provides significant information for the processing of two key features of self-motion perception: heading and path integration. Neural correlates of tactile path integration were investigated by means of fMRI, showing similarities between visual and tactile path integration at early processing stages as well as shared neural substrates in higher-order areas located in the IPL. Future studies should further investigate the perception of different self-motion parameters in the tactile modality to extend the understanding of this less researched, but important, modality.
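    One standard family of models for the bimodal heading condition described above is reliability-weighted (maximum-likelihood) cue combination; the sketch below illustrates it with assumed headings and variances, not values from the thesis.

    def mle_combine(heading_vis, var_vis, heading_tac, var_tac):
        """Combine two heading estimates (deg) with weights proportional to their reliabilities (1/variance)."""
        w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_tac)
        w_tac = 1.0 - w_vis
        combined = w_vis * heading_vis + w_tac * heading_tac
        combined_var = 1.0 / (1.0 / var_vis + 1.0 / var_tac)   # predicted bimodal variance
        return combined, combined_var

    # Example: visual heading straight ahead (0 deg), tactile heading offset by 10 deg.
    # With the assumed variances, the model predicts a modest bias toward the tactile cue.
    print(mle_combine(heading_vis=0.0, var_vis=4.0, heading_tac=10.0, var_tac=25.0))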

    Contribution of the idiothetic and the allothetic information to the hippocampal place code

    Hippocampal cells are preferentially active at specific places in a familiar environment, enabling them to encode a representation of space within the brain at the population level (J. O'Keefe and Dostrovsky 1971). These cells rely on external sensory inputs and self-motion cues; however, it is still not known exactly how these inputs interact to build a stable representation of a certain location (a "place field"). Existing studies suggest that proprioceptive and other idiothetic types of information are continuously integrated to update self-position (e.g., implementing "path integration"), while stable sensory cues provide references to update the allocentric position of the self and correct it for accumulated integration errors. It has been shown that both allocentric and idiothetic types of information influence positional cell firing; however, in most studies these inputs were firmly coupled. The use of virtual reality setups (Thurley and Ayaz 2016) made it possible to separate the influence of vision and proprioception at the price of abandoning natural conditions: the animal is usually head- or body-fixed (Hölscher et al. 2005; Ravassard et al. 2013; Jayakumar et al. 2018a; Haas et al. 2019), which introduces vestibular, motor, and visual conflicts and thereby biases space encoding. Here we use the novel CAVE virtual reality system for freely moving rodents (Del Grosso 2018), which allows us to investigate the effect of visual and positional (vestibular) manipulations on the hippocampal space code while keeping natural behaving conditions. In this study, we focus on the dynamic representation of space when the visual-cue-defined and physical-boundary-defined reference frames are in conflict. We confirm that, at the level of place fields, one reference frame dominates over the other when the information about one of the reference frames is absent (Gothard et al. 2001). We show that hippocampal cells form distinct categories according to their input preference: surprisingly, they are driven not only by visual/allocentric information or by the distance to the physical boundaries and path integration, but also by specific combinations of both. We found a large category of units integrating inputs from both allocentric and idiothetic pathways that are able to represent an intermediate position between the two reference frames when they are in conflict. This experimental evidence suggests that most place cells are involved in representing both reference frames using a weighted combination of sensory inputs. In line with studies showing dominance of the more reliable sensory modality (Kathryn J. Jeffery and J. M. O'Keefe 1999; Gothard et al. 2001), our data are consistent with (although do not prove) CA1 cells implementing optimal Bayesian coding given the idiothetic and allocentric inputs, with weights inversely proportional to the availability of each input, as proposed for other sensory systems (Kate J. Jeffery, Page, and Simon M. Stringer 2016). This mechanism of weighted sensory integration, consistent with recent dynamic loop models of the hippocampal-entorhinal network (Li, Arleo, and Sheynikhovich 2020), can contribute to the physiological explanation of Bayesian inference and the optimal combination of spatial cues for localization (Cheng et al. 2007).
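    The weighted-combination account above can be illustrated with a toy place-field model (all parameters are assumptions for illustration, not the study's analysis code): when the visual-cue-defined and boundary-defined reference frames conflict, the field centre sits at an intermediate, reliability-weighted position between them.

    import numpy as np

    def place_field_rate(pos, centre, width=0.1, peak_rate=10.0):
        """Gaussian place-field firing rate (Hz) at position `pos` (metres, 1D track)."""
        return peak_rate * np.exp(-0.5 * ((pos - centre) / width) ** 2)

    # Reference frames in conflict: visual cues place the field at 0.50 m,
    # while the boundary / path-integration estimate places it at 0.70 m.
    centre_visual, centre_boundary = 0.50, 0.70
    w_visual = 0.6                     # e.g., larger when the visual cues are more reliable
    centre_conflict = w_visual * centre_visual + (1 - w_visual) * centre_boundary

    track = np.linspace(0.0, 1.0, 11)
    print("intermediate field centre:", centre_conflict)
    print("firing rate along the track:", np.round(place_field_rate(track, centre_conflict), 2))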

    Decoding odor quality and intensity in the Drosophila brain

    To internally reflect the sensory environment, animals create neural maps encoding the external stimulus space. From that primary neural code, relevant information has to be extracted for accurate navigation. We analyzed how different odor features, such as hedonic valence and intensity, are functionally integrated in the lateral horn (LH) of the vinegar fly, Drosophila melanogaster. We characterized, at the morphological, functional and behavioral levels, an olfactory-processing pathway comprised of inhibitory projection neurons (iPNs) that target the LH exclusively. We demonstrate that iPNs are subdivided into two morphological groups encoding positive hedonic valence or intensity information and conveying these features into separate domains in the LH. Silencing iPNs severely diminished the flies' attraction behavior. Moreover, functional imaging disclosed an LH region tuned to repulsive odors comprised exclusively of third-order neurons. We provide evidence for a feature-based map in the LH and elucidate its role as the center for integrating behaviorally relevant olfactory information.

    From sensory perception to spatial cognition

    To interact with the environment, it is crucial to have a clear representation of space. Several findings have shown that the space around our body is split into several portions, which are differentially coded by the brain. Evidence of such a subdivision has been reported by studies on people affected by neglect, on space near to (peripersonal) and far from (extrapersonal) the body, and on the space around specific portions of the body. Moreover, recent studies have shown that the sensory modalities are at the basis of important cognitive skills. However, it is still unclear whether each sensory modality has a different role in the development of cognitive skills in the several portions of space around the body. Recent work has shown that the visual modality is crucial for the development of spatial representation. This idea is supported by studies on blind individuals showing that visual information is fundamental for the development of auditory spatial representation. For example, blind individuals are not able to perform the spatial bisection task, a task that requires building an auditory spatial metric, a skill that sighted children acquire around 6 years of age. Based on this prior research, we hypothesize that if different sensory modalities have a role in the development of different cognitive skills, then we should be able to find a clear correlation between the availability of a sensory modality and the associated cognitive skill. In particular, we hypothesize that visual information is crucial for the development of auditory space representation; if this is true, we should find different spatial skills between the front and back spaces. In this thesis, I provide evidence that the spaces around our body are differently influenced by the sensory modalities. Our results suggest that visual input has a pivotal role in the development of auditory spatial representation and that this applies only to the frontal space. Indeed, sighted people are less accurate in spatial tasks only in the space where vision is not present (i.e., the back), while blind people show no differences between the front and back spaces. On the other hand, people tend to report sounds in the back space, suggesting that the role of hearing in alertness could be more important in the back than in the frontal space. Finally, we show that a natural training stressing the integration of audio-motor stimuli can restore spatial cognition, opening new possibilities for rehabilitation programs. Spatial cognition is a well-studied topic; however, we think our findings fill the gap regarding how the different availability of sensory information across spaces causes the development of different cognitive skills in these spaces. This work is the starting point for understanding the strategies that the brain adopts to maximize its resources by processing, in the most efficient way, as much information as possible.