46 research outputs found

    Impact of perceptual learning on resting-state fMRI connectivity: A supervised classification study

    Perceptual learning sculpts ongoing brain activity [1]. This has been observed by statistically comparing the functional connectivity (FC) patterns computed from resting-state functional MRI (rs-fMRI) data recorded before and after intensive training on a visual attention task. Functional connectivity thus serves a dynamic role in brain function, supporting the consolidation of previous experience. Following this line of research, we trained three groups of individuals on a visual discrimination task during a magnetoencephalography (MEG) experiment [2]. The same individuals were then scanned in rs-fMRI. Here, in a supervised classification framework, we demonstrate that FC metrics computed on rs-fMRI data can predict the type of training the participants received. Moreover, we show that prediction accuracies based on the tangent-embedding FC measure outperform those based on our recently developed multivariate wavelet-based Hurst exponent estimator [3], which also captures low-frequency fluctuations in ongoing brain activity.
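The tangent-embedding FC measure mentioned above projects covariance (connectivity) matrices into a common Euclidean tangent space before classification. A minimal sketch of the idea in plain NumPy, assuming a fixed reference matrix (the study's actual pipeline, e.g. nilearn's tangent-space connectivity, is not reproduced here):

```python
# Sketch of tangent-space embedding of functional connectivity matrices.
# All names and the toy data are illustrative assumptions.
import numpy as np

def _sqrtm_inv(mat):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def _logm(mat):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def tangent_embedding(covs, ref):
    """Project covariance matrices to the tangent space at `ref`."""
    ref_isqrt = _sqrtm_inv(ref)
    return [_logm(ref_isqrt @ c @ ref_isqrt) for c in covs]

# Example: two toy 3x3 covariance matrices, reference = identity
rng = np.random.default_rng(0)
covs = [np.cov(rng.standard_normal((3, 50))) for _ in range(2)]
embedded = tangent_embedding(covs, np.eye(3))
```

The vectorized upper triangles of the embedded matrices would then feed a standard linear classifier; in practice the reference is taken as a geometric mean of the group covariances rather than the identity used here.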

    An Efficient Threshold-Driven Aggregate-Label Learning Algorithm for Multimodal Information Processing

    The aggregate-label learning paradigm tackles the long-standing temporal credit assignment (TCA) problem in neuroscience and machine learning, enabling spiking neural networks to learn multimodal sensory clues with delayed feedback signals. However, existing aggregate-label learning algorithms work only for single spiking neurons and suffer from low learning efficiency, which limits their real-world applicability. To address these limitations, we first propose an efficient threshold-driven plasticity algorithm for spiking neurons, namely ETDP. It enables spiking neurons to generate the desired number of spikes matching the magnitude of delayed feedback signals and to learn useful multimodal sensory clues embedded within spontaneous spiking activities. Furthermore, we extend the ETDP algorithm to support multi-layer spiking neural networks (SNNs), which significantly improves the applicability of aggregate-label learning algorithms. We also validate the multi-layer ETDP learning algorithm in a multimodal computation framework for audio-visual pattern recognition. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and model capacity over existing aggregate-label learning algorithms. It therefore provides many opportunities for solving real-world multimodal pattern recognition tasks with spiking neural networks.
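The core idea, generating a desired number of output spikes to match a delayed aggregate feedback signal, can be illustrated with a toy leaky integrate-and-fire neuron whose weights are nudged until its spike count matches a target. The update rule and all parameters below are illustrative assumptions, not the published ETDP algorithm:

```python
# Toy sketch of threshold-driven aggregate-label learning with a single
# leaky integrate-and-fire (LIF) neuron. Hypothetical parameters throughout.
import numpy as np

def lif_spike_count(weights, inputs, threshold=1.0, decay=0.9):
    """Simulate a current-based LIF neuron; return its output spike count."""
    v, count = 0.0, 0
    for t in range(inputs.shape[1]):
        v = decay * v + weights @ inputs[:, t]
        if v >= threshold:
            count += 1
            v = 0.0  # reset membrane potential after a spike
    return count

def train_to_count(weights, inputs, target, lr=0.01, max_iter=2000):
    """Nudge weights up/down until the spike count matches `target`."""
    for _ in range(max_iter):
        c = lif_spike_count(weights, inputs)
        if c == target:
            break
        # Potentiate active synapses if under-spiking, depress if over-spiking
        weights = weights + lr * np.sign(target - c) * inputs.mean(axis=1)
    return weights

rng = np.random.default_rng(1)
x = (rng.random((20, 100)) < 0.1).astype(float)  # Poisson-like input spikes
w = rng.normal(0.05, 0.01, 20)
w = train_to_count(w, x, target=5)               # aggregate label = 5 spikes
```

The aggregate label here is just the desired spike count; the published algorithm instead derives gradient-like weight updates from threshold-crossing events, which this scalar nudging only caricatures.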

    Enriched learning: Behavior, brain, and computation

    Open Access via the Elsevier Agreement. Funder: German Research Foundation (KR 3735/3-1, MA 9552/1-1). Acknowledgments: We thank Agnieszka Konopka, Antje Proske, Joost Rommers, and Anna Zamm for providing useful comments on an earlier version of the manuscript; Mingyuan Chu for feedback on Figure 1; and Stefan Kiebel for feedback on Box 3. This work was supported by the German Research Foundation (grants KR 3735/3-1, KR 3735/3-2, and MA 9552/1-1). Peer reviewed. Publisher PDF.

    Multivariate Hurst Exponent Estimation in fMRI: Application to Brain Decoding of Perceptual Learning

    So far considered as noise in neuroscience, irregular arrhythmic field potential activity accounts for the majority of the signal power recorded in EEG or MEG [1, 2]. This brain activity follows a power-law spectrum P(f) ∼ 1/f^β in the limit of low frequencies, which is a hallmark of scale invariance. Recently, several studies [1, 3–6] have shown that the slope β (or equivalently the Hurst exponent H) tends to be modulated by task performance or cognitive state (e.g., sleep vs. awake). These observations were confirmed in fMRI [7–9], although the short length of fMRI time series makes these findings less reliable. In this paper, to compensate for the slower sampling rate in fMRI, we extend the univariate wavelet-based Hurst exponent estimator to a multivariate setting using spatial regularization. Next, we demonstrate the relevance of the proposed tools on resting-state fMRI data recorded in three groups of individuals after they were specifically trained on a visual discrimination task during a MEG experiment [10]. In a supervised classification framework, our multivariate approach better predicts the type of training the participants received than its univariate counterpart.
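For intuition, the univariate wavelet-based estimator regresses the log-variance of wavelet detail coefficients against scale. A minimal Haar-wavelet sketch (the paper's multivariate, spatially regularized estimator is not reproduced here, and the scaling relation assumes fractional-Gaussian-noise-like signals):

```python
# Univariate wavelet-based Hurst exponent sketch using the Haar wavelet.
import numpy as np

def haar_detail(x):
    """One Haar transform level: return detail and approximation coeffs."""
    n = len(x) // 2 * 2
    even, odd = x[0:n:2], x[1:n:2]
    return (even - odd) / np.sqrt(2), (even + odd) / np.sqrt(2)

def hurst_wavelet(x, n_scales=6):
    """Estimate H from the slope of log2 detail variance across scales."""
    log_var, approx = [], np.asarray(x, float)
    for _ in range(n_scales):
        detail, approx = haar_detail(approx)
        log_var.append(np.log2(np.var(detail)))
    slope = np.polyfit(np.arange(1, n_scales + 1), log_var, 1)[0]
    # For fractional Gaussian noise, Var(d_j) ~ 2^{j(2H-1)} => H = (slope+1)/2
    return (slope + 1) / 2

rng = np.random.default_rng(42)
h_white = hurst_wavelet(rng.standard_normal(2**14))  # white noise: H near 0.5
```

White noise has a flat wavelet variance across scales, so the fitted slope is near zero and the estimate lands near H = 0.5, which serves as a quick sanity check.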

    Convergence of neural activity toward localized multifractal attractors predicts learning capacity

    In this paper, we provide evidence for scale-invariance properties (self-similarity H, multifractality M) in human brain activity recorded with magnetoencephalography (MEG). We demonstrate the functional relevance of these scaling properties during the learning of a complex visual discrimination task by contrasting recordings before and after training. The analysis of scale-free dynamics shows two complementary strategies for efficient learning and plasticity: in those regions showing plasticity, we report a decrease of H and a convergence of M toward asymptotic values.

    Coherence and recurrency: maintenance, control and integration in working memory

    Working memory (WM), including a ‘central executive’, is used to guide behavior by internal goals or intentions. We suggest that WM is best described as a set of three interdependent functions that are implemented in the prefrontal cortex (PFC). These functions are maintenance, control of attention, and integration. A model for the maintenance function is presented, and we argue that this model can be extended to incorporate the other functions as well. Maintenance is the capacity to briefly maintain information in the absence of corresponding input, and even in the face of distracting information. We argue that maintenance is based on recurrent loops between PFC and posterior parts of the brain, and probably within PFC as well. In these loops information can be held temporarily in an active form. We show that a model based on these structural ideas is capable of maintaining a limited number of neural patterns. Not the size, but the coherence of patterns (i.e., a chunking principle based on synchronous firing of interconnected cell assemblies) determines the maintenance capacity. A mechanism that optimizes coherent pattern segregation also poses a limit on the number of assemblies (about four) that can concurrently reverberate. Top-down attentional control (in perception, action, and memory retrieval) can be modelled by the modulation and re-entry of top-down information to posterior parts of the brain. Hierarchically organized modules in PFC create the possibility for information integration. We argue that large-scale multimodal integration of information creates an ‘episodic buffer’, and may even suffice for implementing a central executive.

    Enhanced Recognition of Vocal Emotions in Individuals With Naturally Good Musical Abilities

    Music training is widely assumed to enhance several nonmusical abilities, including speech perception, executive functions, reading, and emotion recognition. This assumption is based primarily on cross-sectional comparisons between musicians and nonmusicians. It remains unclear, however, whether training itself is necessary to explain the musician advantages, or whether factors such as innate predispositions and informal musical experience could produce similar effects. Here, we sought to clarify this issue by examining the association between music training, music perception abilities, and vocal emotion recognition. The sample (N = 169) comprised musically trained and untrained listeners who varied widely in their musical skills, as assessed through self-report and performance-based measures. The emotion recognition tasks required listeners to categorize emotions in nonverbal vocalizations (e.g., laughter, crying) and in speech prosody. Music training was associated positively with emotion recognition across tasks, but the effect was small. We also found a positive association between music perception abilities and emotion recognition in the entire sample, even with music training held constant. In fact, untrained participants with good musical abilities were as good as highly trained musicians at recognizing vocal emotions. Moreover, the association between music training and emotion recognition was fully mediated by auditory and music perception skills. Thus, in the absence of formal music training, individuals who were “naturally” musical showed musician-like performance at recognizing vocal emotions. These findings highlight an important role for factors other than music training (e.g., predispositions and informal musical experience) in associations between musical and nonmusical domains.
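The reported full mediation can be illustrated with a simple regression-based (Baron & Kenny style) sketch on synthetic data, where the training-emotion association vanishes once perception skill is controlled for. The variable names and effect sizes below are invented for illustration, not the study's data or analysis:

```python
# Regression-based mediation sketch on synthetic, fully mediated data.
import numpy as np

def slope(x, y, covar=None):
    """OLS slope of y on x, optionally controlling for a covariate."""
    cols = [np.ones_like(x), x] + ([covar] if covar is not None else [])
    beta = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]
    return beta[1]

rng = np.random.default_rng(7)
n = 5000
training = rng.standard_normal(n)                     # X: music training
perception = 0.8 * training + rng.standard_normal(n)  # M: perception skill
emotion = 0.6 * perception + rng.standard_normal(n)   # Y: emotion recognition
                                                      # (no direct X -> Y path)
total = slope(training, emotion)               # c : total effect of X on Y
direct = slope(training, emotion, perception)  # c': effect controlling for M
```

Because Y depends on X only through M here, the total effect is sizeable (about 0.8 × 0.6) while the direct effect shrinks toward zero once M enters the regression, which is the signature of full mediation.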

    Space, time and motion in a multisensory world

    When interacting with environmental events, humans acquire information from different senses and combine these inputs into a coherent representation of the world. The present doctoral thesis investigates how humans represent space, time, and motion through the auditory and visual sensory modalities. A predisposition of different sensory systems towards the processing of different domains of representation has been widely demonstrated, with hearing prevailing in representing the time domain and vision being the most reliable sense for processing the space domain. Given this strong link between sensory modality and domain of representation, one objective of this thesis is to deepen knowledge of the neural organization of multisensory spatial and temporal skills in healthy adults. In addition, using blindness as a model to unravel the role of vision in the development of spatio-temporal abilities, this thesis explores the interaction of the spatial and temporal domains in the acoustic motion perception of early blind individuals. The interplay between space and time has also been explained as the result of humans performing actions in the surrounding environment, since to carry out goal-directed motor behaviors it is useful to associate the spatial and temporal information of one’s target within a shared mental map. In this regard, the present project also asks how the brain processes spatio-temporal cues of external events when manually intercepting moving objects with one hand. Finally, in light of the above results, this dissertation incorporates the development of a novel portable device, named MultiTab, for the behavioral evaluation of the processing of space, time, and motor responses through the visual and acoustic sensory modalities.
For the purposes of this thesis, four methodological approaches were employed: i) the electroencephalogram (EEG) technique, to explore the cortical activation associated with multisensory spatial and temporal tasks; ii) psychophysical methods, to measure the relationship between stimuli in motion and the acoustic speed perception of blind and sighted individuals; iii) motion-capture techniques, to measure indices of movement during an object-interception task; iv) the design and technical-behavioral validation of a new portable device. The studies in this dissertation indicate the following results. First, this thesis highlights an early cortical gain modulation of sensory areas that depends on the domain of representation to be processed, with auditory areas mainly involved in the multisensory processing of temporal inputs, and visual areas of spatial inputs. Moreover, for the spatial domain specifically, the neural modulation of visual areas is also influenced by the kind of spatial layout representing the multisensory stimuli. Second, this project shows that lack of vision influences the ability to process the speed of moving sounds by altering how blind individuals make use of the sounds’ temporal features. This result suggests that visual experience in the first years of life is a crucial factor when dealing with combined spatio-temporal information. Third, data from this thesis demonstrate that typically developing individuals manually intercepting a moving object with one hand take the item’s spatio-temporal cues into account, adjusting their interceptive movements according to the object’s speed. Finally, the design and validation of MultiTab show its utility in the evaluation of multisensory processing, such as the manual localization of audiovisual spatialized stimuli. Overall, findings from this thesis contribute to a more in-depth picture of how the human brain represents space, time, and motion through different senses. Moreover, they provide promising implications for novel technological methods for the assessment and training of these dimensions in typical and atypical populations.
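As an illustration of the psychophysical methods mentioned under ii), a psychometric function can be fit to simulated "faster/slower" speed judgments by maximum likelihood. The logistic form, the grid search, and every parameter below are assumptions for illustration, not the thesis pipeline:

```python
# Maximum-likelihood fit of a logistic psychometric function by grid search.
import numpy as np

def fit_psychometric(speeds, n_faster, n_trials):
    """Grid-search ML fit of P(faster) = 1 / (1 + exp(-(s - pse) / slope))."""
    best, best_ll = None, -np.inf
    for pse in np.linspace(speeds.min(), speeds.max(), 101):
        for sl in np.linspace(0.1, 5.0, 50):
            p = 1.0 / (1.0 + np.exp(-(speeds - pse) / sl))
            p = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0)
            ll = np.sum(n_faster * np.log(p)
                        + (n_trials - n_faster) * np.log(1 - p))
            if ll > best_ll:
                best, best_ll = (pse, sl), ll
    return best

# Simulated observer: true point of subjective equality (PSE) = 10 deg/s
rng = np.random.default_rng(3)
speeds = np.linspace(6, 14, 9)
p_true = 1.0 / (1.0 + np.exp(-(speeds - 10.0) / 1.5))
n_faster = rng.binomial(60, p_true)          # 60 trials per speed level
pse, sl = fit_psychometric(speeds, n_faster, 60)
```

The fitted PSE marks the speed judged equal to the reference, and the slope parameter indexes discrimination sensitivity; comparing these parameters between blind and sighted groups is the kind of contrast such methods support.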

    The Impact of Feedback on the Different Time Courses of Multisensory Temporal Recalibration


    Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search

    Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this question, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed from three homogeneous distractors in orientation; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing 'contextual cueing'. This effect was enhanced in the multisensory session, notably even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated and decreased the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
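The two drift-diffusion parameters named above, the evidence-accumulation (drift) rate and the decision boundary, can be illustrated with a toy simulation showing that a higher drift rate shortens mean decision time at a fixed boundary. This is a sketch with invented parameters, not the fitted model from the study:

```python
# Toy drift-diffusion simulation: mean first-passage time of a symmetric
# two-boundary accumulator, for two drift rates at a fixed boundary.
import numpy as np

def simulate_ddm(drift, boundary, n_trials=1000, n_steps=3000, dt=0.001, seed=0):
    """Return the mean decision time over trials that reach a boundary."""
    rng = np.random.default_rng(seed)
    increments = drift * dt + np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    path = np.cumsum(increments, axis=1)     # accumulated evidence per trial
    crossed = np.abs(path) >= boundary
    reached = crossed.any(axis=1)            # trials deciding within the window
    first = np.argmax(crossed, axis=1)       # index of first boundary crossing
    return float(np.mean((first[reached] + 1) * dt))

slow = simulate_ddm(drift=1.0, boundary=1.0)  # lower accumulation rate
fast = simulate_ddm(drift=2.0, boundary=1.0)  # higher accumulation rate
```

Doubling the drift rate roughly halves the mean decision time here, which mirrors the abstract's claim that multisensory context learning speeds responses by raising the accumulation rate rather than by lowering the boundary.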