
    Visual enumeration: A bi-directional mapping process between symbolic and non-symbolic representations of number?

    No full text
    Over the last 30 years, numerical estimation has been extensively studied. Recently, Castronovo and Seron (2007) proposed the bi-directional mapping hypothesis to account for the finding that opposite patterns of performance arise depending on the type of estimation task (perception vs. production of numerosities), namely under- and over-estimation, respectively. Here, we further investigated this hypothesis by submitting adult participants to three types of numerical estimation task: 1) a perception task, in which participants had to estimate the numerosity of a non-symbolic collection; 2) a production task, in which participants had to approximately produce the numerosity of a symbolic numerical input; 3) a reproduction task, in which participants had to reproduce the numerosity of a non-symbolic numerical input. Our results further support the finding that different patterns of performance arise according to the type of estimation task: 1) under-estimation in the perception task; 2) over-estimation in the production task; 3) accurate estimation in the reproduction task. Moreover, correlation analyses revealed that the more a participant under-estimated in the perception task, the more he or she over-estimated in the production task. We discuss these empirical data by showing how they can be accounted for by the bi-directional mapping hypothesis (Castronovo & Seron, 2007).
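
    A minimal sketch of how such a bias correlation could be computed (all data and variable names here are hypothetical; random data will not reproduce the negative relation reported in the abstract, the snippet only illustrates the computation):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-trial data: rows = participants, columns = trials.
# true_n holds the actual numerosities; est_* hold the participants' estimates.
rng = np.random.default_rng(0)
true_n = rng.integers(10, 100, size=(20, 50))
est_perception = true_n * 0.85 + rng.normal(0, 5, true_n.shape)  # under-estimation
est_production = true_n * 1.15 + rng.normal(0, 5, true_n.shape)  # over-estimation

def mean_bias(estimates, targets):
    """Mean signed error, normalized by the target numerosity."""
    return ((estimates - targets) / targets).mean(axis=1)

bias_perc = mean_bias(est_perception, true_n)
bias_prod = mean_bias(est_production, true_n)

# Correlate the two biases across participants: a negative r would mirror
# the reported link between under- and over-estimation.
r, p = pearsonr(bias_perc, bias_prod)
print(f"r = {r:.2f}, p = {p:.3f}")
```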

    Transfer of visuomotor adaptation between eye and hand tracking

    No full text
    Prediction turns motor commands into expected sensory consequences, whereas control turns desired consequences into motor commands. Flanagan and colleagues (2003) showed that subjects can learn to predict before they can learn to control, an observation interpreted as evidence that the updating of prediction precedes that of control in motor learning. Here we investigated the transfer of learning between two visuomotor tasks, both requiring adaptation to a 90° rotation. In the first task, participants had to track with their eyes a self-moved target whose displacement was driven by random hand motion (see also Landelle et al., 2016). In the second task, participants also had to move the hand, but this time to drive a cursor so as to track an externally moving target (see also Ogawa & Imamizu, 2013). The first task was designed to test participants' ability to predict novel visual consequences arising from their hand actions (eye tracking task); the second was designed to monitor their ability to control a cursor along a desired trajectory (hand tracking task). Our preliminary results suggest an asymmetrical transfer of learning between the two tasks: although prior experience in the hand tracking task enhanced performance in the eye tracking task, prior experience in the eye tracking task did not improve performance in the hand tracking task. A possible scheme to account for these results is that visuomotor adaptation in our hand tracking task requires updating both a forward and an inverse model (Wolpert & Kawato, 1998), whereas adaptation in our eye tracking task relies solely on updating a forward model. More generally, these results emphasize that our ability to predict the sensory consequences of hand movements can be improved without necessarily improving our ability to control hand movements.
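
    As a generic illustration of the perturbation used in both tasks (the abstract does not give implementation details, so the function and names below are assumptions), a 90° visuomotor rotation remaps hand displacement onto visual displacement through a fixed rotation matrix:

```python
import numpy as np

def rotate(displacement_xy, angle_deg=90.0):
    """Apply a visuomotor rotation to a 2-D hand displacement.

    Under a 90° rotation, a rightward hand movement drives the cursor
    (or the self-moved target) upward, so participants must learn a
    remapped relation between hand motion and visual motion.
    """
    a = np.deg2rad(angle_deg)
    rotation = np.array([[np.cos(a), -np.sin(a)],
                         [np.sin(a),  np.cos(a)]])
    return rotation @ displacement_xy

hand_step = np.array([1.0, 0.0])  # rightward hand movement
print(rotate(hand_step))          # -> approximately [0., 1.] (upward)
```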

    The impact of blindness onset on the connectivity profile of the occipital cortex

    No full text
    Resting-state functional connectivity (rs-FC) has been widely used to investigate the functional (re)organization of the “visual” cortex in blind people. However, discrepant results have emerged, with some studies pointing to massive changes in the connectivity profile of occipital regions in blind individuals while other studies show similar patterns of occipital connectivity in the blind and the sighted. Moreover, the impact of the onset of blindness on these measures remains poorly understood. This question is, however, crucial for understanding whether there is a sensitive period in development for reorganizing occipital networks. In this study we investigated functional connectivity changes between occipital regions and the rest of the brain in early blind (EB) and late blind (LB) individuals and their matched controls, using fMRI data acquired at rest. We relied on a Bootstrap Analysis of Stable Clusters (BASC) to subdivide the brain into meaningful functional parcels. Whereas both blind groups showed reorganization of the connectivity profile of the occipital cortex, these connectivity changes were much more extensive in early blind individuals. We demonstrate that certain connectivity changes occur regardless of blindness onset, while others are specific to early or late blindness. These results were further supported by multivariate pattern classification, which showed highly significant rates of classifying individuals into their respective groups based on the connectivity fingerprint of their occipital regions. However, we also observed certain sub-regions within the ventral occipital cortex (e.g., the PPA) that show a similar connectivity profile across all groups. Altogether, our data suggest region-specific impacts of blindness onset on the connectivity architecture of the occipital cortex.
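
    A sketch of the multivariate classification step, under assumed data shapes (one occipital connectivity fingerprint per subject, one group label per subject); the classifier, cross-validation scheme, and array names below are illustrative choices, not necessarily those used in the study:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Hypothetical input: fingerprints[i] is the vector of correlations between
# subject i's occipital parcels and every other parcel; labels[i] is the
# group ("EB", "LB", or "SC").
rng = np.random.default_rng(0)
fingerprints = rng.normal(size=(45, 300))  # 45 subjects x 300 connectivity edges
labels = np.repeat(["EB", "LB", "SC"], 15)

# Linear SVM with standardized features, evaluated with stratified 5-fold CV.
clf = make_pipeline(StandardScaler(), LinearSVC())
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, fingerprints, labels, cv=cv)
print(f"mean accuracy: {scores.mean():.2f} (chance = 0.33)")
```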

    Multi-modal representation of visual and auditory motion directions in hMT+/V5.

    No full text
    The human middle temporal area hMT+/V5 is a region of the extrastriate occipital cortex that has long been known to code for the direction of visual motion trajectories. Although this region has traditionally been considered purely visual, recent studies suggested that the hMT+/V5 complex could also selectively code for auditory motion. However, the nature of this cross-modal response in hMT+/V5 remains unresolved. In this study, we used functional magnetic resonance imaging (fMRI) to comprehensively investigate the representational format of visual and auditory motion directions in hMT+/V5. Using multivariate pattern analysis, we demonstrate that visual and auditory motion directions can be reliably decoded inside individually localized hMT+/V5. Moreover, we could predict the motion directions in one modality by training the classifier on patterns from the other modality. Such successful cross-modal decoding indicates the presence of shared motion information across modalities. Previous studies have used successful cross-modal decoding as a proxy for abstracted representation in a brain region. However, relying on a series of complementary multivariate analyses, we unambiguously show that the brain responses underlying auditory and visual motion directions in hMT+/V5 are highly dissimilar. For instance, auditory motion direction patterns were strongly anti-correlated with the visual motion patterns, and the two modalities could be reliably discriminated based on their activity patterns. Moreover, representational similarity analyses demonstrated that modality-invariant models fitted our data poorly, whereas models assuming separate pattern geometries for audition and vision correlated strongly with the observed data. Our results demonstrate that hMT+/V5 is a multi-modal region containing motion information from different modalities. However, while shared information exists across modalities, hMT+/V5 maintains highly separate response geometries for each modality. These results also serve as a timely reminder that observing significant cross-modal decoding is not a proxy for abstracted representation in the brain.
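
    A minimal sketch of the cross-modal decoding logic (train on patterns from one modality, test on the other); the array names, classifier, and random data are assumptions for illustration, whereas the study itself used trial patterns from individually localized hMT+/V5:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical inputs: one row per trial, one column per voxel in hMT+/V5.
rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
visual_patterns = rng.normal(size=(n_trials, n_voxels))
auditory_patterns = rng.normal(size=(n_trials, n_voxels))
directions = np.tile(["left", "right", "up", "down"], n_trials // 4)

# Within-modality decoding would cross-validate inside one modality;
# cross-modal decoding instead trains on all trials of one modality
# and tests on all trials of the other.
clf = SVC(kernel="linear")
clf.fit(visual_patterns, directions)
acc_vis_to_aud = clf.score(auditory_patterns, directions)

clf.fit(auditory_patterns, directions)
acc_aud_to_vis = clf.score(visual_patterns, directions)

print(f"visual->auditory: {acc_vis_to_aud:.2f}, "
      f"auditory->visual: {acc_aud_to_vis:.2f} (chance = 0.25)")
```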

    Seeing faces with your ears activates the left fusiform face area, especially when you're blind.

    No full text
    Restoring vision in blind people is an important goal and can be achieved in certain cases, for instance by performing cataract surgery in children. However, reconnecting the visual system alone is not sufficient; the visual cortex needs to be rewired, since a mental representation of the world must be created in order to fully appreciate visual information. Here we present fMRI data from the visual cortex of blind people while they perceived faces, houses, and geometric shapes encoded into sounds by means of a sensory substitution device (SSD). Specifically, we focused on selective visual brain areas related to this perception: the fusiform face area (FFA), the lateral occipital complex (LOC), and the parahippocampal place area (PPA). Each area was identified in sighted subjects under visual conditions using a functional localizer consisting of pictures of famous persons, visual 2-D geometric shapes, and real houses. Region-of-interest analyses were then performed on the data acquired in both congenitally blind (CB) and sighted control (SC) subjects while the SSD was used to discriminate schematic drawings of faces, geometric shapes, and houses. Our results indicate that the left LOC was activated under all three conditions in both groups, while the left FFA was activated in CB subjects selectively during the SSD-face discrimination condition. No significant brain activity was found in the PPA in CB or SC subjects at the group level. The specific recruitment of the FFA during the perception of sound-encoded faces in CB subjects shows that they can extract visual information from sound-encoded objects and that such perception activates the appropriate module in the visual cortex. Our study also provides new evidence about the developmental constraints on functional specialization in the absence of visual input. Meeting abstract presented at VSS 2015.
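
    A sketch of the region-of-interest step, with hypothetical arrays and shapes: given one beta map per SSD condition and a binary ROI mask (e.g., a localizer-defined FFA), the ROI response is the mean beta over the mask voxels, compared across conditions:

```python
import numpy as np

def roi_mean(beta_map, roi_mask):
    """Mean beta value inside a binary ROI mask (both 3-D arrays)."""
    return beta_map[roi_mask.astype(bool)].mean()

# Hypothetical data: one beta map per SSD condition and a localizer-defined mask.
rng = np.random.default_rng(0)
shape = (40, 48, 40)
betas = {cond: rng.normal(size=shape) for cond in ("faces", "houses", "shapes")}
ffa_mask = np.zeros(shape)
ffa_mask[20:24, 30:34, 18:22] = 1  # illustrative FFA location

for cond, beta_map in betas.items():
    print(cond, f"{roi_mean(beta_map, ffa_mask):.3f}")
```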

    The balanced act of crossmodal and intramodal plasticity: Enhanced representation of auditory categories in the occipital cortex of early blind people links to reduced temporal coding

    No full text
    Early visual deprivation triggers an enhanced representation of auditory information in the occipital cortex. How does this crossmodal plasticity mechanism affect the temporal cortex that is typically involved in such auditory coding? To address this question, we used fMRI to characterize the brain responses of early blind (EB) and sighted control (SC) individuals listening to sounds from four different categories (humans, animals, objects, and places). Multivariate pattern analysis was used to decode these four classes of stimuli within individually defined occipital and temporal anatomical parcels. We observed opposite effects of early visual deprivation on auditory decoding in occipital and temporal regions: while occipital regions contained more information about sound categories in the blind, the temporal cortex showed higher decoding in the sighted. Moreover, we observed a negative correlation between occipital and temporal decoding of sound categories in EB, suggesting that these intramodal and crossmodal reorganizations might be interconnected. Interestingly, we also found that this reorganization process arises mostly in the right hemisphere, which is also the hemisphere most recruited during the task. We therefore suggest that the extension of non-visual functions into the occipital cortex of EB triggers a network-level reorganization that may reduce the computational load of the regions typically coding for the remaining senses.

    Decoding auditory motion direction and location in hMT+/V5 and Planum Temporale of sighted and blind individuals

    No full text
    In sighted individuals, a portion of the middle occipito-temporal cortex (hMT+/V5) responds preferentially to visual motion, whereas the planum temporale (PT) responds preferentially to auditory motion. In the case of early visual deprivation, hMT+/V5 enhances its response tuning toward moving sounds, but the impact of early blindness on the PT remains elusive. Moreover, it is equivocal whether hMT+/V5 contains sound direction selectivity and whether the reorganization observed in the blind is motion-specific or also involves auditory localization. We used fMRI to characterize the brain activity of sighted and early blind individuals listening to static sounds and to sounds moving leftward, rightward, upward, and downward. To create a vivid and ecological sensation of sound location and motion, we used individual in-ear stereo recordings made outside the scanner and replayed to the participants inside the scanner. Whole-brain univariate analysis revealed preferential responses to auditory motion in both sighted and blind participants in a dorsal fronto-temporo-parietal network including PT, as well as in a region overlapping with the most anterior portion of hMT+/V5. Blind participants showed an additional preferential response in the more posterior region of hMT+/V5. Multivariate pattern analysis revealed significant decoding of auditory motion direction in independently localized PT and hMT+/V5 in both blind and sighted participants. However, compared to sighted participants, classification accuracies in the blind were significantly higher in hMT+/V5 and lower in PT. Interestingly, decoding sound location showed a similar pattern of results, although the accuracies were lower than those obtained for motion directions. Together, these results suggest that early blindness triggers enhanced tuning for auditory motion direction and auditory location in hMT+/V5, in conjunction with a reduced computational involvement of PT. These results shed important light on how sensory deprivation triggers a network-level reorganization between occipital and temporal regions typically dedicated to a specific function.
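
    The abstract does not specify how the group difference in classification accuracy was tested; as one illustrative option, assuming one decoding accuracy per participant per region has already been computed, a permutation test on the difference in means could look like this (all numbers below are made up):

```python
import numpy as np

def permutation_test(acc_a, acc_b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference in mean accuracy."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([acc_a, acc_b])
    n = len(acc_a)
    observed = acc_a.mean() - acc_b.mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of group membership
        null[i] = pooled[:n].mean() - pooled[n:].mean()
    return observed, (np.abs(null) >= abs(observed)).mean()

# Hypothetical per-subject accuracies for decoding motion direction in hMT+/V5.
acc_blind = np.array([0.42, 0.38, 0.45, 0.40, 0.37, 0.44])
acc_sighted = np.array([0.33, 0.30, 0.36, 0.31, 0.35, 0.29])
diff, p = permutation_test(acc_blind, acc_sighted)
print(f"difference = {diff:.2f}, p = {p:.4f}")
```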

    The emergence of cognitive hearing science.

    No full text
    Cognitive Hearing Science, or Auditory Cognitive Science, is an emerging field of interdisciplinary research concerning the interactions between hearing and cognition. It follows a trend over the last half century for interdisciplinary fields to develop, beginning with Neuroscience, then Cognitive Science, then Cognitive Neuroscience, and then Cognitive Vision Science. A common theme is that an interdisciplinary approach is necessary to understand complex human behaviors, to develop technologies incorporating knowledge of these behaviors, and to find solutions for individuals with impairments that undermine typical behaviors. Accordingly, researchers in traditional academic disciplines, such as Psychology, Physiology, Linguistics, Philosophy, Anthropology, and Sociology, benefit from collaborations with each other, with researchers in Computer Science and Engineering working on the design of technologies, and with health professionals working with individuals who have impairments. The factors that triggered the emergence of Cognitive Hearing Science include the maturation of the component disciplines of Hearing Science and Cognitive Science, new opportunities to use complex digital signal processing to design technologies suited to performance in challenging everyday environments, and increasing social imperatives to help people whose communication problems span hearing and cognition. Cognitive Hearing Science is illustrated in research on three general topics: (1) language processing in challenging listening conditions; (2) use of auditory communication technologies or the visual modality to boost performance; (3) changes in performance with development, aging, and rehabilitative training. Future directions for modeling and the translation of research into practice are suggested. The definitive version is available at www.blackwell-synergy.com: Arlinger, S., Lunner, T., Lyxell, B., & Pichora-Fuller, M. K. (2009). The emergence of cognitive hearing science. Scandinavian Journal of Psychology, 50(5), 371-384. http://dx.doi.org/10.1111/j.1467-9450.2009.00753.x. Copyright: Blackwell Publishing.
