10 research outputs found

    AI reflections in 2020

    Get PDF
    We invited authors of selected Comments and Perspectives published in Nature Machine Intelligence in the latter half of 2019 and first half of 2020 to describe how their topic has developed, what their thoughts are about the challenges of 2020, and what they look forward to in 2021.
    Postprint (author's final draft)

    Man et al. 2015 Brain Images

    No full text
    Brain Images from Man et al. 2015.

    Seeing objects helps us better hear the sounds they make

    No full text
    It has been established that lip-reading improves the perception of auditory speech stimuli. But does the visual enhancement of auditory sensitivity extend to “objects” other than speech? In other words, does seeing an object help one hear it better? Here we report a series of psychophysical experiments in humans showing that the visual enhancement of auditory sensitivity generalizes to material objects. We further show that the crossmodal enhancement was modulated by the conscious visualization of the stimulus: we can better hear the sounds an object makes when we are conscious of seeing that object. Our work extends an intriguing crossmodal effect, previously circumscribed to speech, to a wider domain of real-world objects. We also connect the phenomenon of consciousness with functional consequences on the ability of one sensory modality to enhance the sensitivity of another.
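    Auditory sensitivity in psychophysical detection tasks of this kind is commonly quantified with signal detection theory's d'. The sketch below is a generic illustration, not the authors' analysis: it assumes the sensitivity measure follows standard signal detection theory, and the hit and false-alarm rates are made-up placeholders showing how a crossmodal enhancement would appear as a higher d' when the object is visible.

    # Generic d' computation; the rates are illustrative placeholders, not data from this study.
    from scipy.stats import norm

    def d_prime(hit_rate, false_alarm_rate):
        """d' = Z(hit rate) - Z(false-alarm rate), with Z the inverse normal CDF."""
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    # Hypothetical detection of the same faint sound, without and with a view of the object making it.
    auditory_only = d_prime(hit_rate=0.65, false_alarm_rate=0.20)
    audio_visual = d_prime(hit_rate=0.80, false_alarm_rate=0.20)
    print(f"auditory only d' = {auditory_only:.2f}, with vision d' = {audio_visual:.2f}")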

    Visual enhancement of auditory object perception

    No full text

    Convergent and invariant object representations for sight, sound, and touch

    Full text link
    We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities (sight, sound, and touch) and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts.
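    The crossmodal classification step described above (train a decoder on response patterns evoked by objects in one modality, then test it on patterns evoked by the same objects in another modality) can be sketched as follows. This is a minimal illustration with random placeholder arrays and a scikit-learn linear classifier, not the authors' actual analysis pipeline; the array names, shapes, and classifier choice are assumptions.

    # Minimal crossmodal decoding sketch: fit on "seen" patterns, score on "heard" patterns.
    # All data here are random placeholders, so accuracy should hover around chance (~0.33).
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 60, 200                               # illustrative trial and voxel counts
    visual_patterns = rng.standard_normal((n_trials, n_voxels))
    auditory_patterns = rng.standard_normal((n_trials, n_voxels))
    object_labels = rng.integers(0, 3, size=n_trials)          # three objects

    clf = LinearSVC(C=1.0)
    clf.fit(visual_patterns, object_labels)                    # train on one modality
    crossmodal_accuracy = clf.score(auditory_patterns, object_labels)  # test on another
    print(f"crossmodal decoding accuracy: {crossmodal_accuracy:.2f}")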

    Decoding the Neural Representation of Story Meanings across Languages

    No full text
    Drawing from a common lexicon of semantic units, humans fashion narratives whose meaning transcends that of their individual utterances. However, while brain regions that represent lower-level semantic units, such as words and sentences, have been identified, questions remain about the neural representation of narrative comprehension, which involves inferring cumulative meaning. To address these questions, we exposed English, Mandarin and Farsi native speakers to native language translations of the same stories during fMRI scanning. Using a new technique in natural language processing, we calculated the distributed representations of these stories (capturing the meaning of the stories in high-dimensional semantic space), and demonstrate that using these representations we can identify the specific story a participant was reading from the neural data. Notably, this was possible even when the distributed representations were calculated using stories in a different language than the participant was reading. Relying on over 44 billion classifications, our results reveal that identification relied on a collection of brain regions most prominently located in the default mode network. These results demonstrate that neuro-semantic encoding of narratives happens at levels higher than individual semantic units and that this encoding is systematic across both individuals and languages.
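    The identification step described above (matching a semantic vector decoded from neural data to the distributed representation of one of the candidate stories) can be sketched as a nearest-neighbor lookup by cosine similarity. The embeddings and the decoded vector below are random placeholders, not the paper's representations or decoding model, and the dimensions are assumptions.

    # Story identification by cosine similarity; all vectors are synthetic placeholders.
    import numpy as np

    def identify_story(decoded_vector, story_embeddings):
        """Return the index of the story embedding most similar (cosine) to the decoded vector."""
        sims = story_embeddings @ decoded_vector
        sims = sims / (np.linalg.norm(story_embeddings, axis=1) * np.linalg.norm(decoded_vector))
        return int(np.argmax(sims))

    rng = np.random.default_rng(1)
    story_embeddings = rng.standard_normal((40, 300))    # e.g. 40 stories in a 300-d semantic space
    true_story = 7
    decoded = story_embeddings[true_story] + 0.5 * rng.standard_normal(300)  # noisy decoded vector
    print("identified story:", identify_story(decoded, story_embeddings), "| true story:", true_story)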