
    PyMVPA: A Unifying Approach to the Analysis of Neuroscientific Data

    The Python programming language is steadily increasing in popularity as the language of choice for scientific computing. The ability of this scripting environment to access a huge code base in various languages, combined with its syntactical simplicity, makes it an ideal tool for implementing and sharing ideas among scientists from numerous fields and with heterogeneous methodological backgrounds. The recent rise of reciprocal interest between the machine learning (ML) and neuroscience communities is an example of the desire for an interdisciplinary transfer of computational methods that can benefit from a Python-based framework. For many years, a large fraction of both research communities has addressed, almost independently, very high-dimensional problems with almost completely non-overlapping methods. However, a number of recently published studies that applied ML methods to neuroscience research questions attracted considerable attention from researchers in both fields, as well as from the general public, and showed that this approach can provide novel and fruitful insights into the functioning of the brain. In this article we show how PyMVPA, a specialized Python framework for machine-learning-based data analysis, can help to facilitate this interdisciplinary technology transfer by providing a single interface to a wide array of machine learning libraries and neural data-processing methods. We demonstrate the general applicability and power of PyMVPA via analyses of a number of neural data modalities, including fMRI, EEG, MEG, and extracellular recordings.
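The style of analysis that PyMVPA streamlines — cross-validated classification of high-dimensional neural patterns — can be illustrated with a minimal numpy-only sketch. The function name and the nearest-centroid classifier below are illustrative stand-ins, not PyMVPA's actual API; the framework itself wraps external classifier libraries behind a single interface.

```python
import numpy as np

def leave_one_run_out_accuracy(X, y, runs):
    """Cross-validated nearest-centroid classification.

    X: (n_samples, n_features) patterns, y: labels, runs: run index per sample.
    Each held-out run serves as the test set for one fold, mirroring the
    leave-one-run-out scheme typical of fMRI/MEG decoding analyses.
    """
    accuracies = []
    for run in np.unique(runs):
        train, test = runs != run, runs == run
        # One mean pattern (centroid) per class, estimated from training runs only.
        classes = np.unique(y[train])
        centroids = np.stack(
            [X[train][y[train] == c].mean(axis=0) for c in classes]
        )
        # Assign each test pattern to the nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(dists, axis=1)]
        accuracies.append(np.mean(pred == y[test]))
    return float(np.mean(accuracies))
```

In practice a toolbox like PyMVPA would substitute an SVM or other classifier from an external library for the toy centroid rule, while keeping the same partition-train-test structure.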

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Hippocampus dependent and independent theta-networks of working memory maintenance

    Working memory is the ability to briefly maintain and manipulate information beyond its transient availability to our senses. This process of short-term stimulus retention has often been proposed to be anatomically distinct from long-term forms of memory. Although it has been well established that the medial temporal lobe (MTL) is critical to long-term declarative memory, recent evidence suggests that MTL regions, such as the hippocampus, may also be involved in the working memory maintenance of configural visual relationships. I investigate this possibility in a series of experiments using magnetoencephalography to record cortical oscillatory activity within the theta frequency band in patients with bilateral hippocampal sclerosis and in normal controls. The results demonstrate that working memory maintenance of configural-relational information is supported by a theta-synchronous network coupling frontal, temporal, and occipital visual areas, and furthermore that this theta synchrony is critically dependent on the integrity of the hippocampus. Alternative forms of working memory maintenance that do not require the relational binding of visual configurations engage dissociable theta-synchronous networks functioning independently of the hippocampus. In closing, I explore the interactions between long-term and short-term forms of memory and demonstrate that, through these interactions, memory performance can effectively be improved.
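Theta synchrony between regions is commonly quantified with a phase-locking value (PLV): the magnitude of the mean phase-difference vector between two signals. The sketch below is a minimal numpy illustration of that statistic, assuming the inputs have already been band-pass filtered into the theta band; it illustrates the measure, not the thesis's actual analysis pipeline.

```python
import numpy as np

def phase_locking_value(x, y):
    """Phase synchrony between two band-limited signals.

    Returns |mean(exp(i * (phi_x - phi_y)))|: 1 for a constant phase lag,
    near 0 when the phase difference drifts uniformly over time.
    """
    def analytic(s):
        # FFT-based Hilbert transform: zero out negative frequencies,
        # double positive ones, to build the analytic signal.
        n = len(s)
        spectrum = np.fft.fft(s)
        h = np.zeros(n)
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
        if n % 2 == 0:
            h[n // 2] = 1.0
        return np.fft.ifft(spectrum * h)

    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))
```

Two signals oscillating at the same theta frequency with a fixed lag yield a PLV near 1; signals at different frequencies yield a PLV near 0.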

    Interpreting EEG and MEG signal modulation in response to facial features: the influence of top-down task demands on visual processing strategies

    The visual processing of faces is a fast and efficient feat that our visual system usually accomplishes many times a day. The N170 (an Event-Related Potential) and the M170 (an Event-Related Magnetic Field) are thought to be prominent markers of the face perception process in the ventral stream of visual processing that occur ~170 ms after stimulus onset. The question of whether face processing at the time window of the N170 and M170 is automatically driven by bottom-up visual processing only, or whether it is also modulated by top-down control, is still debated in the literature. However, it is known from research on general visual processing that top-down control can be exerted much earlier along the visual processing stream than the N170 and M170 take place. I conducted two studies, each consisting of two face categorization tasks. In order to examine the influence of top-down control on the processing of faces, I changed the task demands from one task to the next, while presenting the same set of face stimuli. In the first study, I recorded participants’ EEG signal in response to faces while they performed both a Gender task and an Expression task on a set of expressive face stimuli. Analyses using Bubbles (Gosselin & Schyns, 2001) and Classification Image techniques revealed significant task modulations of the N170 ERPs (peaks and amplitudes) and the peak latency of maximum information sensitivity to key facial features. However, task demands did not change the information processing during the N170 with respect to behaviourally diagnostic information. Rather, the N170 seemed to integrate gender and expression diagnostic information equally in both tasks. In the second study, participants completed the same behavioural tasks as in the first study (Gender and Expression), but this time their MEG signal was recorded in order to allow for precise source localisation.
After determining the active sources during the M170 time window, a Mutual Information analysis in connection with Bubbles was used to examine voxel sensitivity to both the task-relevant and the task-irrelevant face category. When a face category was relevant for the task, sensitivity to it was usually higher and peaked in different voxels than sensitivity to the task-irrelevant face category. In addition, voxels predictive of categorization accuracy were shown to be sensitive only to task-relevant, behaviourally diagnostic facial features. I conclude that facial feature integration during both the N170 and M170 is subject to top-down control. The results are discussed against the background of known face processing models and current research findings on visual processing.
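The mutual-information statistic at the heart of Bubbles-style analyses can be estimated with a simple joint histogram. The sketch below is a generic, numpy-only version of that estimator, assuming continuous variables discretized into equal-width bins; it is not the study's actual implementation.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI (in bits) between two variables via a discretized joint histogram.

    In a Bubbles-type analysis, x might be the visibility of a facial
    feature across trials and y a neural or behavioural response; high MI
    marks features the response is sensitive to.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()              # joint probability table
    px = pxy.sum(axis=1, keepdims=True)    # marginal over y
    py = pxy.sum(axis=0, keepdims=True)    # marginal over x
    nz = pxy > 0                           # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

Note that histogram estimators carry a small positive bias for independent variables, which real analyses correct for (e.g. by permutation testing).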

    Representational geometry: integrating cognition, computation, and the brain

    The cognitive concept of representation plays a key role in theories of brain information processing. However, linking neuronal activity to representational content and cognitive theory remains challenging. Recent studies have characterized the representational geometry of neural population codes by means of representational distance matrices, enabling researchers to compare representations across stages of processing and to test cognitive and computational theories. Representational geometry provides a useful intermediate level of description, capturing both the information represented in a neuronal population code and the format in which it is represented. We review recent insights gained with this approach in perception, memory, cognition, and action. Analyses of representational geometry can compare representations between models and the brain, and promise to explain brain computation as transformation of representational similarity structure.
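A representational distance matrix (RDM) of the kind described here takes only a few lines to compute. This numpy sketch uses correlation distance between condition patterns and a rank-based comparison of RDMs, one common set of choices rather than the only ones.

```python
import numpy as np

def representational_distance_matrix(patterns):
    """Correlation-distance RDM from patterns of shape (n_conditions, n_channels).

    Entry (i, j) is 1 - Pearson correlation between the response patterns
    evoked by conditions i and j; the matrix captures representational
    geometry independently of which particular units carry the code.
    """
    return 1.0 - np.corrcoef(patterns)

def compare_geometries(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs,
    a standard way to compare representations across models, regions,
    or processing stages (assumes no tied distances)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    # Rank-transform both vectors, then Pearson on ranks = Spearman rho.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])
```

Because the comparison operates on distances rather than raw activity, a model's unit space and a brain region's voxel space never need to be aligned directly.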

    Temporal coding of speech in human auditory cortex

    Human listeners can reliably recognize speech in complex listening environments. The underlying neural mechanisms, however, remain unclear and cannot yet be emulated by any artificial system. In this dissertation, we study how speech is represented in the human auditory cortex and how the neural representation contributes to reliable speech recognition. Cortical activity from normal-hearing human subjects is recorded noninvasively using magnetoencephalography during natural speech listening. It is first demonstrated that neural activity from auditory cortex is precisely synchronized to the slow temporal modulations of speech when the speech signal is presented in a quiet listening environment. How this neural representation is affected by acoustic interference is then investigated. Acoustic interference degrades speech perception via two mechanisms, informational masking and energetic masking, which are addressed by using, respectively, a competing speech stream and stationary noise as the interfering sound. When two speech streams are presented simultaneously, cortical activity is predominantly synchronized to the speech stream the listener attends to, even if the unattended, competing speech stream is 8 dB more intense. When speech is presented together with spectrally matched stationary noise, cortical activity remains precisely synchronized to the temporal modulations of speech until the noise is 9 dB more intense. Critically, the accuracy of neural synchronization to speech predicts how well individual listeners can understand speech in noise. Further analysis reveals that two neural sources contribute to speech-synchronized cortical activity, one with a shorter response latency of about 50 ms and the other with a longer response latency of about 100 ms.
The longer-latency component, but not the shorter-latency component, shows selectivity to the attended speech and invariance to background noise, indicating a transition from encoding the acoustic scene to encoding the behaviorally important auditory object in auditory cortex. Taken together, we have demonstrated that during natural speech comprehension, neural activity in the human auditory cortex is precisely synchronized to the slow temporal modulations of speech. This neural synchronization is robust to acoustic interference, whether speech or noise, and therefore provides a strong candidate for the neural basis of acoustic-background-invariant speech recognition.
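Response latencies of envelope-tracking components, like the ~50 ms and ~100 ms sources described above, are often estimated from the lag at which the cross-correlation between the speech envelope and the neural response peaks. The sketch below illustrates that estimator on synthetic data; it is a generic illustration, not the dissertation's actual method.

```python
import numpy as np

def neural_delay_ms(envelope, response, fs):
    """Estimate the latency (ms) at which a response tracks a stimulus envelope.

    Both signals are mean-centered, then the cross-correlation is evaluated
    at non-negative lags (the response is assumed to follow the stimulus);
    the lag of the peak, converted to milliseconds, is the latency estimate.
    """
    env = envelope - envelope.mean()
    resp = response - response.mean()
    n = len(env)
    lags = np.arange(n)
    # Dot product of the envelope with the response shifted back by each lag.
    xcorr = np.array([np.dot(env[: n - k], resp[k:]) for k in lags])
    return 1000.0 * lags[np.argmax(xcorr)] / fs
```

Real analyses would use band-limited envelopes and average over many trials, but the lag-of-peak logic is the same.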

    Novel methods to evaluate blindsight and develop rehabilitation strategies for patients with cortical blindness

    20 to 57% of victims of a cerebrovascular accident (CVA) develop visual deficits that considerably reduce their quality of life. Among the extreme cases of visual deficits, we find cortical blindness (CB), which manifests when the primary visual region (V1) is affected.
Until now, no approach has induced restoration of visual function, and in most cases plasticity is insufficient to allow spontaneous recovery. Therefore, while sight loss is considered permanent, unconscious yet important functions, known as blindsight, could be of use for visual rehabilitation strategies, raising strong interest in cognitive neuroscience. Blindsight is a rare phenomenon that reflects a dissociation between performance and consciousness, mainly investigated in case reports. In the first chapter of this thesis, we addressed multiple issues concerning our comprehension of blindsight and conscious perception. As we argue, such understanding might have a significant influence on the clinical rehabilitation of patients suffering from CB. Therefore, we propose a unique strategy for visual rehabilitation that uses video game principles to target and potentiate neural mechanisms within the global neuronal workspace framework, which is theoretically explained in study 1 and methodologically described in study 5. In other words, we propose that case reports, in conjunction with improved methodological criteria, might identify the neural substrates that support blindsight and unconscious processing. Thus, this Ph.D. work provided three empirical experiments (studies 2, 3, and 4) that used new standards in electrophysiological analysis to describe the cases of patient SJ, presenting blindsight for affective natural complex scenes, and patient ML, presenting blindsight for motion stimuli. In studies 2 and 3, we probed the subcortical and cortical neural substrates supporting SJ’s affective blindsight using MEG and compared these unconscious correlates to his conscious perception. Study 4 characterizes the substrates of automatic change detection in the absence of visual awareness, as measured by the visual mismatch negativity (vMMN), in ML and in a neurotypical group.
We conclude by proposing the vMMN as a neural biomarker of unconscious processing in normal and altered vision, independent of behavioral assessments. As a result of these procedures, we were able to address certain open debates in the blindsight literature and probe the existence of secondary neural pathways supporting unconscious behavior. In conclusion, this thesis proposes to combine empirical and clinical perspectives by using methodological advances and novel methods to understand and target the neurophysiological substrates underlying blindsight. Importantly, the framework offered by this doctoral dissertation might help future studies build efficient targeted therapeutic tools and multimodal rehabilitation training.

    The time course of language production as revealed by pattern classification of MEG sensor data

    Language production involves a complex set of computations, from conceptualization to articulation, which are thought to engage cascading neural events in the language network. However, recent neuromagnetic evidence suggests simultaneous meaning-to-speech mapping in picture naming tasks, as indexed by early parallel activation of frontotemporal regions to lexical semantic, phonological, and articulatory information. Here we investigate the time course of word production, asking to what extent such “earliness” is a distinctive property of the associated spatiotemporal dynamics. Using MEG, we recorded the neural signals of 34 human subjects (26 males) overtly naming 134 images from four semantic object categories (animals, foods, tools, clothes). Within each category, we covaried word length, quantified as the number of syllables in a word, and phonological neighborhood density to target lexical and post-lexical phonological/phonetic processes. Multivariate pattern analysis searchlights in sensor space distinguished the stimulus-locked spatiotemporal responses to object categories early on, from 150 to 250 ms after picture onset, whereas word length was decoded in left frontotemporal sensors at 250–350 ms, followed by phonological neighborhood density at 350–450 ms. Our results suggest a progression of neural activity from posterior to anterior language regions for the semantic and phonological/phonetic computations preparing overt speech, thus supporting serial cascading models of word production.
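The logic behind the decoding latencies reported here — classify the stimulus property from sensor data at each time point and read off when accuracy rises above chance — can be sketched in plain numpy. The split-half nearest-centroid classifier below is a toy stand-in for the study's actual searchlight pipeline, which it is not.

```python
import numpy as np

def time_resolved_decoding(epochs, y, win=5):
    """Decode a binary category from sensor data at each time point.

    epochs: (n_trials, n_sensors, n_times). Each sensor is averaged over a
    small temporal window around the current time point, a classifier is
    trained on the first half of trials and scored on the second half, and
    the resulting accuracy time course shows when the signal starts to
    discriminate the two categories.
    """
    n_trials, _, n_times = epochs.shape
    half = n_trials // 2  # toy split: first half trains, second half tests
    scores = np.zeros(n_times)
    for t in range(n_times):
        lo, hi = max(0, t - win // 2), min(n_times, t + win // 2 + 1)
        X = epochs[:, :, lo:hi].mean(axis=2)  # window-averaged sensor patterns
        # Class centroids from the training half only.
        c0 = X[:half][y[:half] == 0].mean(axis=0)
        c1 = X[:half][y[:half] == 1].mean(axis=0)
        # Nearest-centroid prediction for each test trial.
        d0 = np.linalg.norm(X[half:] - c0, axis=1)
        d1 = np.linalg.norm(X[half:] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        scores[t] = np.mean(pred == y[half:])
    return scores
```

A full analysis would additionally restrict the classifier to spatial neighbourhoods of sensors (the "searchlight") and cross-validate rather than use a single split, but the time-resolved structure is the same.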