2,891 research outputs found

    Drawing enhances cross-modal memory plasticity in the human brain: a case study in a totally blind adult

    In a memory-guided drawing task under blindfolded conditions, we recently used functional magnetic resonance imaging (fMRI) to demonstrate that the primary visual cortex (V1) may operate as the visuo-spatial buffer, or “sketchpad,” for working memory. The results implied, however, a modality-independent or amodal form of its operation. In the present study, to validate the role of V1 in non-visual memory, we eliminated not only the visual input but all levels of visual processing by replicating the paradigm in a congenitally blind individual. Our novel Cognitive-Kinesthetic method was used to train this totally blind subject to draw complex images guided solely by tactile memory. Control tasks of tactile exploration and memorization of the image to be drawn, and of memory-free scribbling, were also included. fMRI was run before and after training. Remarkably, V1 of this congenitally blind individual, which before training exhibited noisy, immature, and non-specific responses, after training produced full-fledged response time-courses specific to the tactile-memory drawing task. The results reveal the operation of a rapid training-based plasticity mechanism that recruits the resources of V1 in the process of learning to draw. The learning paradigm allowed us to investigate for the first time the evolution of plastic re-assignment in V1 in a congenitally blind subject. These findings are consistent with a non-visual memory involvement of V1, and specifically imply that the observed cortical reorganization can be empowered by the process of learning to draw.

    The Nature of Consciousness in the Visually Deprived Brain

    Vision plays a central role in how we represent and interact with the world around us. The primacy of vision is structurally embedded in cortical organization, as about one-third of the cortical surface in primates is involved in visual processes. Consequently, the loss of vision, either at birth or later in life, affects brain organization and the way the world is perceived and acted upon. In this paper, we address a number of issues on the nature of consciousness in people deprived of vision. Do brains from sighted and blind individuals differ, and how? How does the brain of someone who has never had any visual perception form an image of the external world? What is the subjective correlate of activity in the visual cortex of a subject who has never seen in life? More generally, what can we learn about the functional development of the human brain in physiological conditions by studying blindness? We discuss findings from animal research, as well as from recent psychophysical and functional brain imaging studies in sighted and blind individuals, that shed some new light on the answers to these questions.

    Time- but not sleep-dependent consolidation promotes the emergence of cross-modal conceptual representations

    Conceptual knowledge about objects comprises a diverse set of multi-modal and generalisable information, which allows us to bring meaning to the stimuli in our environment. The formation of conceptual representations requires solving two key computational challenges: integrating information from different sensory modalities and abstracting statistical regularities across exemplars. Although these processes are thought to be facilitated by offline memory consolidation, investigations into how cross-modal concepts evolve offline, over time, rather than with continuous category exposure, are still missing. Here, we aimed to mimic the formation of new conceptual representations by reducing this process to its two key computational challenges and exploring its evolution over an offline retention period. Participants learned to distinguish between members of two abstract categories based on a simple one-dimensional visual rule. Underlying the task was a more complex hidden indicator of category structure, which required the integration of information across two sensory modalities. In two experiments we investigated the impact of time- and sleep-dependent consolidation on category learning. Our results show that offline memory consolidation facilitated cross-modal category learning. Surprisingly, consolidation across wake, but not across sleep, showed this beneficial effect. By demonstrating the importance of offline consolidation, the current study provides further insights into the processes that underlie the formation of conceptual representations.

    Mental Imagery Follows Similar Cortical Reorganization as Perception: Intra-Modal and Cross-Modal Plasticity in Congenitally Blind

    Cortical plasticity in congenitally blind individuals leads to cross-modal activation of the visual cortex and may lead to superior perceptual processing in the intact sensory domains. Although mental imagery is often defined as a quasi-perceptual experience, it is unknown whether it follows cortical reorganization similar to that of perception in blind individuals. In this study, we show that auditory versus tactile perception evokes similar intra-modal discriminative patterns in congenitally blind and sighted participants. These results indicate that cortical plasticity following visual deprivation does not influence the broad intra-modal organization of auditory and tactile perception as measured by our task. Furthermore, not only the blind but also the sighted participants showed cross-modal discriminative patterns for perception modality in the visual cortex. During mental imagery, both groups showed similar decoding accuracies for imagery modality in the intra-modal primary sensory cortices. However, no cross-modal discriminative information for imagery modality was found in the early visual cortex of blind participants, in contrast to the sighted participants. We did find evidence of cross-modal activation of higher visual areas in blind participants, including the representation of specific imagined auditory features in visual area V4.

    Multisensory Processes: A Balancing Act across the Lifespan.

    Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes. We illustrate the dynamism in multisensory function across two timescales: one long-term timescale that operates across the lifespan, and one short-term timescale that operates during the learning of new multisensory relations. In addition, we highlight the importance of task contingencies. We conclude that these highly dynamic multisensory processes, based on the relative weighting of stimulus characteristics and learned associations, provide both stability and flexibility to brain functions over a wide range of temporal scales.

    Hearing faces: how the infant brain matches the face it sees with the speech it hears

    Speech is not a purely auditory signal. From around 2 months of age, infants are able to correctly match the vowel they hear with the appropriate articulating face. However, there is no behavioral evidence of integrated audiovisual perception until 4 months of age at the earliest, when an illusory percept can be created by the fusion of the auditory stimulus and the facial cues (the McGurk effect). To understand how infants initially match the articulatory movements they see with the sounds they hear, we recorded high-density ERPs in response to auditory vowels that followed a congruent or incongruent silently articulating face in 10-week-old infants. In a first experiment, we determined that auditory–visual integration occurs during the early stages of perception, as in adults. The mismatch response was similar in timing and in topography whether the preceding vowels were presented visually or aurally. In the second experiment, we studied audiovisual integration in the linguistic (vowel perception) and nonlinguistic (gender perception) domains. We observed a mismatch response for both types of change at similar latencies. Their topographies were significantly different, demonstrating that cross-modal integration of these features is computed in parallel by two different networks. Indeed, brain source modeling revealed that phoneme and gender computations were lateralized toward the left and right hemispheres, respectively, suggesting that each hemisphere possesses an early processing bias. We also observed repetition suppression in temporal regions and repetition enhancement in frontal regions. These results underscore how complex and structured the human cortical organization that sustains communication is from the first weeks of life onward.

    The Origins and Development of Visual Categorization

    Forming categories is a core part of human cognition, allowing us to quickly make inferences about our environment. This thesis investigated some of the major theoretical interpretations surrounding the neural basis of visual category development. In adults, there are category-selective regions (e.g. in ventral temporal cortex) and networks (which include regions outside traditional visual regions, e.g. the amygdala) that support visual categorization. While there has been extensive behavioural work investigating visual categorization in infants, the neural sequence of development remains poorly understood. Based on behavioural experiments, one view holds that infants initially use subcortical structures to recognize faces. Indeed, it has been proposed that the subcortical pathway remains active for rapid face detection in adults. In order to test this in adults, I exploited the nasal-temporal asymmetry of the proposed retinocollicular pathway to see if preferentially presenting stimuli to the nasal hemiretina resulted in a fast face detection advantage when contrasted with presentations to the temporal hemiretina. Across four experiments, I failed to find any evidence of a subcortical advantage, but still found that a rapid, coarse pathway exists. Therefore, I moved to investigate the development of the cortical visual categorization regions in the ventral temporal cortex (VTC). I characterised the maturity of the face, place, and tool regions found in the VTC, examining long-range connectivity in 1- to 9-month-old infants using MRI tractography and a linear discriminant classifier. The face and place regions showed adult-like connectivity throughout infancy, but the tool network underwent significant maturation until 9 months. Finally, given this maturity of face and place regions in early infancy, I decided to test whether the organization of the VTC was related to the sequence in which infants acquire categories. I used language age-of-acquisition measurements, determining that infants produce significantly more animate than inanimate words up until 29 months, in line with the animacy distinction in the VTC. My work demonstrates the surprising role and maturity of the cortical regions and networks involved in visual categorization. My thesis develops new methods for studying the infant brain and underscores the utility of publicly available data when studying development.

    Cognitive, social and affective neuroscience of patients with spinal cord injury

    A successful human-environment interaction requires a continuous integration of information concerning body parts, object features, and affective dynamics. Multiple neuropsychological studies show that tools can be integrated into the representation of one's own body. In particular, a tool that participates in the conscious movement of the person is added to the dynamic representation of the body – often called the “body schema” – and may even affect social interaction. In light of this, the wheelchair is treated as an extension of the disabled body, essentially replacing limbs that don't function properly, but it can also be a symbol of frailty and weakness. In a series of experiments, I studied plastic changes of action, tool, and body representation in individuals with spinal cord injury (SCI). Owing to their peripheral loss of sensorimotor functions, in the absence of brain lesions and with higher-order cognitive functions spared, these patients represent an excellent model for studying this topic in a multi-faceted way, investigating both fundamental mechanisms and possible therapeutic interventions. Across these experiments, I developed new behavioral methods to measure the phenomenological aspects of tool embodiment (Chapter 3), to study its functional and neural correlates (Chapter 4), and to assess a possible computational model underpinning these phenomena (Chapter 5). These tasks have been used to describe changes in tool, action, and body representation following the injury (Chapters 3 and 4), but also in social interactions (Chapter 7), with the aim of giving a complete portrait of change following such damage. I found that changes in the function (wheelchair use) and the structure (body-brain disconnection) of the physical body plastically modulate tool, action, and body representation. Social context and social interaction are also shaped by the new configuration of bodily representations. Such a high degree of plasticity suggests that our sense of body is not given at once, but rather is constantly constructed and adapted through experience.

    The cognitive neuroscience of visual working memory

    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner's (1957) seminal discoveries with amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally been fairly disjointed: human neuroimaging, electrophysiological, behavioural, and animal lesion studies, investigating both the developing and the adult brain.
