11 research outputs found

    Second order scattering descriptors predict fMRI activity due to visual textures

    Second-layer scattering descriptors are known to provide good classification performance on natural quasi-stationary processes such as visual textures, owing to their sensitivity to higher-order moments and their continuity with respect to small deformations. In a functional Magnetic Resonance Imaging (fMRI) experiment, we present visual textures to subjects and evaluate the predictive power of these descriptors relative to that of simple contour energy, the first scattering layer. We conclude not only that invariant second-layer scattering coefficients better encode voxel activity, but also that well-predicted voxels need not lie in known retinotopic regions.
    Comment: 3rd International Workshop on Pattern Recognition in NeuroImaging (2013)
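    The predictive-power comparison described above can be sketched as a cross-validated voxel-wise encoding model. Everything below is synthetic and illustrative: the feature matrices, the noise level, and the choice of closed-form ridge regression are assumptions for the sketch, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli = 80

# Hypothetical stimulus features: 1st-layer (contour energy) vs. 2nd-layer scattering
first_layer = rng.standard_normal((n_stimuli, 20))
second_layer = rng.standard_normal((n_stimuli, 50))

# Synthetic voxel response driven by a few second-layer coefficients plus noise
voxel = second_layer[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n_stimuli)

def ridge_cv_r2(X, y, alpha=1.0, n_folds=5):
    """Cross-validated R^2 of a closed-form ridge encoding model."""
    scores = []
    for test in np.array_split(np.arange(len(y)), n_folds):
        train = np.setdiff1d(np.arange(len(y)), test)
        Xt, yt = X[train], y[train]
        w = np.linalg.solve(Xt.T @ Xt + alpha * np.eye(X.shape[1]), Xt.T @ yt)
        resid = y[test] - X[test] @ w
        ss_res = (resid ** 2).sum()
        ss_tot = ((y[test] - y[test].mean()) ** 2).sum()
        scores.append(1 - ss_res / ss_tot)
    return float(np.mean(scores))

score_l1 = ridge_cv_r2(first_layer, voxel)
score_l2 = ridge_cv_r2(second_layer, voxel)
print(f"layer-1 R^2: {score_l1:.2f}, layer-2 R^2: {score_l2:.2f}")
```

    Because the synthetic voxel is driven by second-layer features, the second model predicts held-out responses better, mirroring the comparison the abstract describes.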

    Hippocampal activity patterns carry information about objects in temporal context

    The hippocampus is critical for human episodic memory, but its role remains controversial. One fundamental question concerns whether the hippocampus represents specific objects or assigns context-dependent representations to objects. Here, we used multivoxel pattern similarity analysis of fMRI data during retrieval of learned object sequences to systematically investigate hippocampal coding of object and temporal context information. Hippocampal activity patterns carried information about the temporal positions of objects in learned sequences, but not about objects or temporal positions in random sequences. Hippocampal activity patterns differentiated between overlapping object sequences and between temporally adjacent objects that belonged to distinct sequence contexts. Parahippocampal and perirhinal cortex showed different pattern information profiles consistent with coding of temporal position and object information, respectively. These findings are consistent with models proposing that the hippocampus represents objects within specific temporal contexts, a capability that might explain its critical role in episodic memory.
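    The pattern-similarity logic can be sketched as correlating multivoxel patterns for the same versus different temporal positions across repetitions; the data below are synthetic and the variable names hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_positions = 100, 5

# Synthetic hippocampal patterns: each temporal position has a stable signature
position_signatures = rng.standard_normal((n_positions, n_voxels))
run1 = position_signatures + 0.5 * rng.standard_normal((n_positions, n_voxels))
run2 = position_signatures + 0.5 * rng.standard_normal((n_positions, n_voxels))

def corr(a, b):
    """Pearson correlation between two multivoxel patterns."""
    return np.corrcoef(a, b)[0, 1]

# Pattern similarity: same temporal position across runs vs. different positions
same = np.mean([corr(run1[i], run2[i]) for i in range(n_positions)])
diff = np.mean([corr(run1[i], run2[j])
                for i in range(n_positions) for j in range(n_positions) if i != j])
print(f"same-position r = {same:.2f}, different-position r = {diff:.2f}")
```

    A same-position similarity exceeding the different-position baseline is the kind of evidence for temporal-position coding the abstract reports.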

    Atlas-based classification algorithms for identification of informative brain regions in fMRI data

    Multi-voxel pattern analysis (MVPA) has been successfully applied to neuroimaging data due to its greater sensitivity compared to traditional univariate techniques. Although a Searchlight strategy that locally sweeps all voxels in the brain is the most widespread approach to assigning functional value to different regions of the brain, this method offers no information about the directionality of the results and does not allow studying the combined patterns of more distant voxels. In the current study, we examined two alternatives to Searchlight. First, an atlas-based local averaging (ABLA; Schrouff et al., 2013a) method, which computes the relevance of each region of an atlas from the weights obtained by a whole-brain analysis. Second, a Multiple-Kernel Learning (MKL; Rakotomamonjy et al., 2008) approach, which combines different brain regions from an atlas to build a classification model. We evaluated their performance in two scenarios in which differential neural activity between conditions was large vs. small, and employed nine different atlases to assess the influence of diverse brain parcellations. Results show that all methods are able to localize informative regions when differences are large, demonstrating stability in the identification of regions across atlases. Moreover, the sign of the weights reported by these methods combines the sensitivity of multivariate approaches with the directionality of univariate methods. However, in the second scenario only ABLA localizes informative regions, which indicates that MKL performs worse when differences between conditions are small. Future studies could improve these results by employing machine learning algorithms to compute individual atlases fitted to the brain organization of each participant.
    Funding: Spanish Ministry of Science and Innovation, grant PSI2016-78236-P; Spanish Ministry of Economy and Competitiveness, grant BES-2014-06960
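    The ABLA idea of summarizing whole-brain classifier weights per atlas region can be sketched as follows. The weight map and atlas labels are synthetic, and averaging signed weights per region is one plausible relevance summary for the sketch, not necessarily the exact ABLA computation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels, n_regions = 1000, 10

# Hypothetical whole-brain classifier weight map and an atlas label per voxel
weights = 0.1 * rng.standard_normal(n_voxels)
atlas = rng.integers(0, n_regions, size=n_voxels)

# Make region 3 "informative": large, consistently signed weights
weights[atlas == 3] += 1.0

# ABLA-style summary: mean weight per region (the sign keeps directionality)
region_relevance = np.array([weights[atlas == r].mean() for r in range(n_regions)])
most_informative = int(np.argmax(np.abs(region_relevance)))
print("most informative region:", most_informative)
```

    Averaging signed rather than absolute weights is what preserves the univariate-style directionality the abstract highlights.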

    "Task-relevant autoencoding" enhances machine learning for human neuroscience

    In human neuroscience, machine learning can help reveal lower-dimensional neural representations relevant to subjects' behavior. However, state-of-the-art models typically require large datasets to train, and so are prone to overfitting on human neuroimaging data, which often possess few samples but many input dimensions. Here, we capitalized on the fact that the features we seek in human neuroscience are precisely those relevant to subjects' behavior. We thus developed a Task-Relevant Autoencoder via Classifier Enhancement (TRACE) and tested its ability to extract behaviorally relevant, separable representations, compared against a standard autoencoder, a variational autoencoder, and principal component analysis, on two severely truncated machine learning datasets. We then evaluated all models on fMRI data from 59 subjects who observed animals and objects. TRACE outperformed all other models almost across the board, showing up to 12% higher classification accuracy and up to 56% improvement in discovering "cleaner", task-relevant representations. These results showcase TRACE's potential for a wide variety of data related to human behavior.
    Comment: 41 pages, 11 figures, 5 tables including supplemental material
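    The core idea of a task-relevant autoencoder, a latent code trained both to reconstruct the input and to classify behavior-relevant labels, can be illustrated by its combined objective. This is a minimal linear numpy forward pass with an assumed trade-off weight `alpha`, not the authors' actual TRACE architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_input, n_latent, n_classes = 8, 20, 4, 3

x = rng.standard_normal((n_samples, n_input))
labels = rng.integers(0, n_classes, size=n_samples)

# Hypothetical linear encoder, decoder, and classifier head on the latent code
W_enc = 0.1 * rng.standard_normal((n_input, n_latent))
W_dec = 0.1 * rng.standard_normal((n_latent, n_input))
W_clf = 0.1 * rng.standard_normal((n_latent, n_classes))

z = x @ W_enc        # latent code
x_hat = z @ W_dec    # reconstruction from the latent code
logits = z @ W_clf   # classifier head keeps the code task-relevant

# Combined objective: reconstruction loss + weighted classification loss
recon_loss = np.mean((x - x_hat) ** 2)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
clf_loss = -np.mean(log_probs[np.arange(n_samples), labels])
alpha = 1.0          # assumed trade-off weight between the two terms
total_loss = recon_loss + alpha * clf_loss
print(f"reconstruction: {recon_loss:.3f}, classification: {clf_loss:.3f}")
```

    Training would minimize `total_loss` over the three weight matrices; the classification term is what pushes the latent space toward behaviorally relevant, separable representations.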

    Optimizing cognitive neuroscience experiments for separating event-related fMRI BOLD responses in non-randomized alternating designs

    Functional magnetic resonance imaging (fMRI) has revolutionized human brain research, but there is a fundamental mismatch between the rapid time course of neural events and the sluggish nature of the fMRI blood oxygen level-dependent (BOLD) signal, which presents special challenges for cognitive neuroscience research. This limitation in the temporal resolution of fMRI constrains the information about brain function that can be obtained with fMRI and also presents methodological challenges. Most notably, when fMRI is used to measure neural events occurring closely in time, the BOLD signals may temporally overlap one another. This overlap problem can be exacerbated in complex experimental paradigms (stimuli and tasks) designed to manipulate and isolate specific cognitive-neural processes involved in perception, cognition, and action. Optimization strategies to deconvolve overlapping BOLD signals have proven effective in providing separate estimates of BOLD signals from temporally overlapping brain activity, but such approaches remain less effective in many cases, for example when stimulus events necessarily follow a non-random order, as in trial-by-trial cued attention or working memory paradigms. Our goal is to provide guidance on improving the efficiency with which the underlying responses evoked by one event type can be detected, estimated, and distinguished from other events in designs common in cognitive neuroscience research. We pursue this goal using simulations that model the nonlinear and transient properties of fMRI signals and use more realistic models of noise. Our simulations manipulated: (i) the inter-stimulus interval (ISI), (ii) the proportion of so-called null events, and (iii) nonlinearities in the BOLD signal due to both cognitive and design parameters.
    We offer a theoretical framework, along with a Python toolbox called deconvolve, to provide guidance on the optimal design parameters that will be of particular utility when using non-random, alternating event sequences in experimental designs. We also highlight the challenges and limitations of simultaneously optimizing both detection and estimation efficiency of BOLD signals in these common, but complex, cognitive neuroscience designs.
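    The overlap problem can be illustrated by convolving an alternating event sequence with a canonical double-gamma HRF. The HRF parameters are standard SPM-like defaults used here as an assumption, and the design timings are arbitrary; strongly correlated (or anticorrelated) regressors are the signature of events that are hard to separate.

```python
import math
import numpy as np

def double_gamma_hrf(t, p1=6.0, p2=16.0, ratio=6.0):
    """Canonical double-gamma HRF (SPM-like default parameters, an assumption)."""
    g = lambda t, a: t ** (a - 1) * np.exp(-t) / math.gamma(a)
    return g(t, p1) - g(t, p2) / ratio

tr = 1.0
hrf = double_gamma_hrf(np.arange(0, 32, tr))

# Strictly alternating A/B events every 4 s: a non-randomized design
n_scans = 120
onsets_a = np.arange(0, n_scans, 8)
onsets_b = np.arange(4, n_scans, 8)

def regressor(onsets):
    """Stick function at the event onsets convolved with the HRF."""
    stick = np.zeros(n_scans)
    stick[onsets] = 1.0
    return np.convolve(stick, hrf)[:n_scans]

reg_a, reg_b = regressor(onsets_a), regressor(onsets_b)

# With a 4 s lag and a ~30 s HRF, the two regressors overlap heavily
r = np.corrcoef(reg_a, reg_b)[0, 1]
print(f"A/B regressor correlation: {r:.2f}")
```

    Lengthening the ISI or inserting null events decorrelates the regressors, which is exactly the design trade-off the abstract's simulations explore.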

    Unbiased Analysis of Item-Specific Multi-Voxel Activation Patterns Across Learning

    Recent work has highlighted that multi-voxel pattern analysis (MVPA) can be severely biased when BOLD response estimation involves a systematic imbalance in model regressor correlations. This problem occurs in situations where trial types of interest are temporally dependent and the associated BOLD activity overlaps. For example, in learning paradigms, early and late learning-stage trials are inherently ordered. It has been shown empirically that MVPAs assessing consecutive learning stages can be substantially biased, especially when stages are closely spaced. Here, we propose a simple technique that ensures zero bias in item-specific multi-voxel activation patterns for consecutive learning stages, with stage defined by the incremental number of individual item occurrences. For the simpler problem, when MVPA is computed irrespective of learning stage over all item occurrences within a trial sequence, our results confirm that a sufficiently large, randomly selected subset of all possible trial sequence permutations ensures convergence to zero bias – but only when different trial sequences are generated for different subjects. However, this does not solve the harder problem of obtaining bias-free results for learning-related activation patterns across consecutive learning stages. Randomization over all item occurrences fails to ensure zero bias when the full trial sequence is retrospectively divided into item occurrences confined to early and late learning stages. To ensure bias-free MVPA of consecutive learning stages, trial-sequence randomization needs to be done separately for each consecutive learning stage.
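    The proposed fix, randomizing trial order separately within each learning stage rather than once over the whole sequence, can be sketched as follows; the item labels and stage count are hypothetical.

```python
import random

items = ["A", "B", "C", "D"]
n_stages = 3                      # each stage = one more occurrence of every item
rng = random.Random(42)

# Problematic approach: one global shuffle of all occurrences, later divided
# retrospectively into early/late stages (this is where bias can arise)
global_sequence = items * n_stages
rng.shuffle(global_sequence)

# Proposed fix: shuffle the item order independently within each learning stage,
# so stage n contains exactly the n-th occurrence of every item
staged_sequence = []
for stage in range(n_stages):
    block = items[:]
    rng.shuffle(block)
    staged_sequence.extend(block)

print(staged_sequence)
```

    In the staged sequence, every consecutive block of four trials contains each item exactly once, so each learning stage has its own independent randomization.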

    Dissociating the brain regions involved in processing objective and subjective performance

    When making perceptual decisions, easier tasks produce higher task accuracy and, naturally, higher confidence levels. We recognize these as two distinct cognitive processes, but it is challenging to judge exactly how the decision and confidence processes affect different brain regions. In the current study, I aimed to reveal which brain regions are activated by objective and subjective performance, respectively. The experiment used a 2 x 2 factorial design in which one factor was task difficulty (i.e., Easy and Difficult conditions) and the other was the number of dots presented in the visual stimulus (i.e., High and Low conditions). Unlike what was observed in the pilot test, the main experiment did not dissociate task performance and confidence level in the High and Low conditions. Contrast tests revealed different patterns of activation for the Easy > Difficult and Low > High comparisons. However, because behavioral responses for decision and confidence were not clearly separated, it is hard to interpret what those activated regions are associated with. Moreover, none of the regions were able to distinguish either task difficulty or confidence level in the planned MVPA analysis. Meanwhile, I found a weak dissociation in the behavioral responses between the Difficult-High and Difficult-Low conditions. When contrasting these two conditions against each other, I found that the left middle temporal gyrus (MTG) and the right superior parietal lobule (SPL) were activated more in the Difficult-Low condition than in the Difficult-High condition. Importantly, the right SPL cluster was similar to the right SPL cluster observed in the Low > High contrast test. The current study was unfortunately not able to draw a strong conclusion about task performance, confidence level, and the brain regions associated with those cognitive processes. Nevertheless, part of the data showed a weak dissociation effect.
    To understand how the brain computes two cognitive processes that are seemingly separable, it is important to create stable conditions in which task performance and confidence level are dissociated, and to investigate how the brain is differentially engaged by those processes. (Ph.D. thesis)

    An object's smell in the multisensory brain : how our senses interact during olfactory object processing

    Object perception is a remarkable and fundamental cognitive ability that allows us to interpret and interact with the world we live in. In our everyday life, we constantly perceive objects, mostly without being aware of it, and through several senses at the same time. Although it might seem that object perception is accomplished without any effort, the underlying neural mechanisms are anything but simple. How we perceive objects in the world surrounding us is the result of a complex interplay of our senses. The aim of the present thesis was to explore, by means of functional magnetic resonance imaging, how our senses interact when we perceive an object's smell both in a multisensory setting, where the amount of sensory stimulation increases, and in a unisensory setting, where we perceive an object's smell in isolation. In Study I, we sought to determine whether and how multisensory object information influences the processing of olfactory object information in the posterior piriform cortex (PPC), a region linked to olfactory object encoding. In Study II, we expanded our search for integration effects during multisensory object perception to the whole brain, because previous research has demonstrated that multisensory integration is accomplished by a network of early sensory cortices and higher-order multisensory integration sites. We specifically aimed to determine whether there exist cortical regions that process multisensory object information independently of which, and how many, senses the information arises from. In Study III, we sought to unveil how our senses interact during olfactory object perception in a unisensory setting. Previous studies have shown that even in such unisensory settings, olfactory object processing is not exclusively accomplished by regions within the olfactory system but instead engages a more widespread network of brain regions, such as regions belonging to the visual system.
    We aimed to determine what this visual engagement represents; that is, whether areas of the brain that are principally concerned with processing visual object information also hold neural representations of olfactory object information, and if so, whether these representations are similar for smells and pictures of the same objects. In Study I, we demonstrated that assisting inputs from our senses of vision and hearing increase the processing of olfactory object information in the PPC, and that the more assisting input we receive, the more the processing is enhanced. As this enhancement occurred only for matching inputs, it likely reflects integration of multisensory object information. Study II provided evidence for convergence of multisensory object information in the form of a non-linear response enhancement in the inferior parietal cortex: activation increased for bimodal compared to unimodal stimulation, and increased even further for trimodal compared to bimodal stimulation. As this multisensory response enhancement occurred independently of the congruency of the incoming signals, it likely reflects a process of relating the incoming sensory information streams to each other. Finally, Study III revealed that regions of the ventral visual object stream are engaged in the recognition of an object's smell and represent olfactory object information in the form of distinct neural activation patterns. While the visual system encodes information about both visual and olfactory objects, it appears to keep information from the two sensory modalities separate by representing smells and pictures of objects differently. Taken together, the studies included in this thesis reveal that olfactory object perception is a multisensory process that engages a widespread network of early sensory as well as higher-order cortical regions, even when we do not find ourselves in a multisensory setting but exclusively perceive an object's smell.

    Isolating Item and Subject Contributions to the Subsequent Memory Effect

    The subsequent memory effect (SME) refers to greater brain activation during the encoding of subsequently recognized items compared to subsequently forgotten items. Previous literature on the SME has focused primarily on identifying the role of specific regions during encoding or factors that potentially modulate the phenomenon. The current dissertation examines the degree to which this phenomenon can be explained by item selection effects; that is, the tendency of some items to be inherently more memorable than others. To estimate the potential contribution of items to the SME, I gave participants a fixed set of items during encoding, which allowed me to model item-specific contributions to recognition memory strength ratings using a linear mixed-effects (LME) model. Using these item-based estimates, I was able to isolate two distinct item-related activations during encoding that were linked to item distinctiveness and general item memorability, respectively. However, the residual of the LME model, which reflects recognition strength unaccounted for by the items, recovered the majority of the original areas linked to subsequent recognition. Thus, I conclude that SMEs are largely attributable to encoding-related processes unique to each subject. Nevertheless, proper modeling and statistical control of item-driven effects afforded detection of originally missed encoding activations and resulted in an SME more robust than the original. Taken together, these findings suggest that the SME reported in the literature is largely independent of the specific items encoded and demonstrate the need for different functional interpretations of item- versus subject-driven SMEs.
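    The item-versus-subject decomposition can be sketched by estimating each item's mean memorability across subjects and residualizing the ratings. This simple mean-centering is a stand-in for the full LME model described above, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n_subjects, n_items = 30, 40

# Synthetic memory-strength ratings: item memorability + subject offsets + noise
item_effect = rng.standard_normal(n_items)          # some items are simply more memorable
subject_effect = rng.standard_normal((n_subjects, 1))
ratings = item_effect + subject_effect + rng.standard_normal((n_subjects, n_items))

# A fixed item set across subjects lets us estimate per-item memorability...
item_memorability = ratings.mean(axis=0)

# ...and isolate a subject-specific residual (cf. the LME residual in the abstract)
residual = ratings - item_memorability

r = np.corrcoef(item_memorability, item_effect)[0, 1]
print(f"recovered item memorability correlates r = {r:.2f} with ground truth")
```

    The per-item estimates model the item-driven component, while the residual carries the subject-specific recognition strength that the dissertation links to the bulk of the SME.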