Multi-scale Mining of fMRI data with Hierarchical Structured Sparsity
Inverse inference, or "brain reading", is a recent paradigm for analyzing functional magnetic resonance imaging (fMRI) data, based on pattern recognition and statistical learning. By predicting cognitive variables related to brain activation maps, this approach aims at decoding brain activity. Inverse inference takes into account the multivariate information between voxels and is currently the only way to assess how precisely some cognitive information is encoded by the activity of neural populations within the whole brain. However, it relies on a prediction function that is plagued by the curse of dimensionality, since there are far more features than samples, i.e., more voxels than fMRI volumes. To address this problem, different methods have been proposed, including univariate feature selection, feature agglomeration, and regularization techniques. In this paper, we consider a sparse hierarchical structured regularization. Specifically, the penalization we use is constructed from a tree that is obtained by spatially-constrained agglomerative clustering. This approach encodes the spatial structure of the data at different scales into the regularization, which makes the overall prediction procedure more robust to inter-subject variability. The regularization induces the selection of spatially coherent predictive brain regions simultaneously at different scales. We test our algorithm on real data acquired to study the mental representation of objects, and we show that the proposed algorithm not only delineates meaningful brain regions but also yields better prediction accuracy than reference methods.
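The spatially-constrained agglomerative clustering that builds the tree can be sketched roughly as follows (the toy grid, sample counts, and cluster number are assumptions for illustration, not the paper's actual fMRI setup):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.image import grid_to_graph

# Toy stand-in for fMRI data: 20 "volumes" over a 6x6x1 voxel grid.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 36))

# Connectivity graph: only neighbouring voxels may be merged, so every
# cluster (and every node of the resulting tree) is a spatially
# contiguous region.
connectivity = grid_to_graph(6, 6, 1)

# Ward clustering of voxels (features), constrained by the grid.
ward = AgglomerativeClustering(n_clusters=4, connectivity=connectivity,
                               linkage="ward")
labels = ward.fit_predict(X.T)   # transpose: cluster voxels, not volumes
print(labels.shape)              # one parcel label per voxel
```

The full merge tree (rather than a single cut at four clusters) is what a tree-structured sparsity penalty would be defined over.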
Mapping the time-varying functional brain networks in response to naturalistic movie stimuli
One of the human brain’s remarkable traits lies in its capacity to dynamically coordinate the activities of multiple brain regions or networks, adapting to an externally changing environment. Studying dynamic functional brain networks (DFNs) and their role in perception, assessment, and action can significantly advance our comprehension of how the brain responds to patterns of sensory input. Movies provide a valuable tool for studying DFNs, as they offer a naturalistic paradigm that can evoke complex cognitive and emotional experiences through rich multimodal and dynamic stimuli. However, most previous research on DFNs has concentrated on the resting-state paradigm, investigating the topological structure of temporal dynamic brain networks generated via chosen templates. The dynamic spatial configurations of the functional networks elicited by naturalistic stimuli demand further exploration. In this study, we employed an unsupervised dictionary learning and sparse coding method combined with a sliding window strategy to map and quantify the dynamic spatial patterns of functional brain networks (FBNs) present in naturalistic functional magnetic resonance imaging (NfMRI) data, and further evaluated whether the temporal dynamics of distinct FBNs are aligned to the sensory, cognitive, and affective processes involved in the subjective perception of the movie. The results revealed that movie viewing can evoke complex FBNs, and that these FBNs were time-varying with the movie storylines and were correlated with the movie annotations and the subjective ratings of viewing experience. The reliability of DFNs was also validated by assessing the intra-class correlation coefficient (ICC) across two scanning sessions under the same naturalistic paradigm with a three-month interval.
Our findings offer novel insight into the dynamic properties of FBNs in response to naturalistic stimuli and could deepen our understanding of the neural mechanisms underlying the brain’s dynamic changes during the processing of visual and auditory stimuli.
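The sliding-window decomposition idea can be sketched as follows (synthetic data, window length, and network count are assumptions for illustration, not the study's NfMRI pipeline):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Synthetic stand-in for NfMRI data: 200 time points x 50 voxels.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 50))

win, step, n_networks = 60, 20, 5
windows = [data[s:s + win] for s in range(0, len(data) - win + 1, step)]

spatial_maps = []
for w in windows:
    # Dictionary atoms act as temporal bases within the window; the
    # sparse codes give each voxel's loading on every network.
    dl = DictionaryLearning(n_components=n_networks, alpha=1.0,
                            max_iter=20, random_state=0)
    codes = dl.fit_transform(w.T)    # voxels x networks
    spatial_maps.append(codes)

print(len(windows), spatial_maps[0].shape)
```

Stacking the per-window spatial maps over time is what yields a time-varying picture of each network's configuration.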
Metaheuristic design of feedforward neural networks: a review of two decades of research
Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain a well-generalizing FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect various research directions that emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolutionary NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
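The core contrast the review draws, metaheuristic weight search versus gradient descent, can be sketched with a minimal (1+1)-style random-search optimizer on a tiny network (the architecture, task, and mutation scale are illustrative assumptions, not a method from the review):

```python
import numpy as np

# A tiny 2-2-1 feedforward net whose 9 weights are optimized by a
# simple mutate-and-select metaheuristic instead of backpropagation.
rng = np.random.default_rng(0)

def forward(w, X):
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8].reshape(2, 1), w[8]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])   # XOR, not linearly separable

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

best = rng.normal(size=9)
for _ in range(3000):
    cand = best + 0.3 * rng.normal(size=9)   # mutate
    if loss(cand) < loss(best):              # select the fitter weights
        best = cand

print(round(loss(best), 3))
```

No gradients are computed at any point, which is what lets such methods handle non-differentiable activations or architectures; the price is far more function evaluations than backpropagation would need.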
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Synergizing human-machine intelligence: Visualizing, labeling, and mining the electronic health record
We live in a world where data surround us in every aspect of our lives. The key challenge for humans and machines is how we can make better use of such data. Imagine what would happen if you were to have intelligent machines that could give you insight into the data. Insight that will enable you to better 1) reason about, 2) learn, and 3) understand the underlying phenomena that produced the data. The possibilities of combined human-machine intelligence are endless and will impact our lives in ways we cannot even imagine today.
Synergistic human-machine intelligence aims to facilitate the analytical reasoning and inference process of humans by creating machines that maximize a human's ability to 1) reason about, 2) learn, and 3) understand large, complex, and heterogeneous data. Combined human-machine intelligence is a powerful symbiosis of mutual benefit, in which we depend on the computational capabilities of the machine for the tasks we are not good at, and the machine requires human intervention for the tasks it performs poorly on.
This relationship provides a compelling alternative to either approach in isolation for solving today's and tomorrow's data challenges. In this regard, this dissertation proposes a diverse analytical framework that leverages synergistic human-machine intelligence to maximize a human's ability to better 1) reason about, 2) learn, and 3) understand different biomedical imaging and healthcare data present in the patient's electronic health record (EHR). Correspondingly, we approach the data analysis problem from the 1) visualization, 2) labeling, and 3) mining perspective and demonstrate the efficacy of our analytics on specific application scenarios and various data domains.
In the first part of this dissertation we explore the question of how we can build intelligent imaging analytics that are commensurate with human capabilities and constraints, specifically for optimizing data visualization and automated labeling workflows. Our journey starts with heuristic rule-based analytical models that are derived from task-specific human knowledge. From this experience, we move on to data-driven analytics, where we adapt and combine the intelligence of the model based on prior information provided by the human and synthetic knowledge learned from partial data observations. Within this realm, we propose a novel Bayesian transductive Markov random field model that requires minimal human intervention and is able to cope with scarce label information to learn and infer object shapes in complex spatial, multimodal, spatio-temporal, and longitudinal data. We then study the question of how machines can learn discriminative object representations from dense, human-provided label information by investigating learning and inference mechanisms that make use of deep learning architectures. The developed analytics can aid visualization and labeling tasks, which enables the interpretation and quantification of clinically relevant image information.
The second part explores the question of how we can build data-driven analytics for exploratory analysis in longitudinal event data that are commensurate with human capabilities and constraints. We propose human-intuitive analytics that enable the representation and discovery of interpretable event patterns to ease knowledge absorption and comprehension of the employed analytics model and the underlying data. We propose a novel doubly-constrained convolutional sparse-coding framework that learns interpretable and shift-invariant latent temporal event patterns. We apply the model to mine complex event data in EHRs. By mapping the event space to heterogeneous patient encounters in the EHR, we explore the linkage between healthcare resource utilization (HRU) and disease severity. This linkage may help to better understand how disease-specific co-morbidities and their clinical attributes incur different HRU patterns. Such insight helps to characterize the patient's care history, which then enables the comparison against clinical practice guidelines, the discovery of prevailing practices based on common HRU group patterns, and the identification of outliers that might indicate poor patient management.
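The shift-invariant pattern idea behind convolutional sparse coding can be sketched with a greedy, toy version of the encoding step (the motifs and event series are invented for illustration; the dissertation's doubly-constrained model learns its patterns from real EHR data):

```python
import numpy as np

# Explain an event-count series as shifted, scaled copies of short
# temporal motifs, picked greedily by matching pursuit over all shifts.
motifs = np.array([[1., 2., 1.],      # "burst" pattern
                   [1., 0., 1.]])     # "gap" pattern
norms = np.sqrt((motifs ** 2).sum(axis=1))

signal = np.zeros(30)
signal[5:8] += 3 * motifs[0]          # plant a scaled burst at t=5
signal[20:23] += 2 * motifs[1]        # and a gap pattern at t=20

residual = signal.copy()
events = []
for _ in range(2):                    # two greedy matching-pursuit steps
    # normalized correlation of every motif at every shift
    corr = np.array([[residual[t:t + 3] @ m for t in range(28)]
                     for m in motifs])
    k, t = np.unravel_index(np.argmax(corr / norms[:, None]), corr.shape)
    amp = corr[k, t] / norms[k] ** 2
    residual[t:t + 3] -= amp * motifs[k]
    events.append((int(k), int(t), round(amp, 2)))

print(events)  # [(0, 5, 3.0), (1, 20, 2.0)] - both plants recovered
```

Because the same motif is correlated at every shift, the code identifies a pattern no matter when it occurs, which is the "shift-invariant" property the abstract refers to.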
From sequences to cognitive structures : neurocomputational mechanisms
Ph.D. Thesis. Understanding how the brain forms representations of structured information distributed in time is
a challenging neuroscientific endeavour, necessitating computationally and neurobiologically
informed study. Human neuroimaging evidence demonstrates engagement of a fronto-temporal
network, including ventrolateral prefrontal cortex (vlPFC), during language comprehension.
Corresponding regions are engaged when processing dependencies between word-like items in
Artificial Grammar (AG) paradigms. However, the neurocomputations supporting dependency
processing and sequential structure-building are poorly understood. This work aimed to clarify these
processes in humans, integrating behavioural, electrophysiological and computational evidence.
I devised a novel auditory AG task to assess simultaneous learning of dependencies between adjacent
and non-adjacent items, incorporating learning aids including prosody, feedback, delineated
sequence boundaries, staged pre-exposure, and variable intervening items. Behavioural data obtained
in 50 healthy adults revealed strongly bimodal performance despite these cues. Notably, however,
reaction times revealed sensitivity to the grammar even in low performers. Behavioural and
intracranial electrode data were subsequently obtained in 12 neurosurgical patients performing this
task. Despite chance behavioural performance, time- and time-frequency domain
electrophysiological analysis revealed selective responsiveness to sequence grammaticality in regions
including vlPFC. I developed a novel neurocomputational model (VS-BIND: “Vector-symbolic
Sequencing of Binding INstantiating Dependencies”), triangulating evidence to clarify putative
mechanisms in the fronto-temporal language network. I then undertook multivariate analyses on the
AG task neural data, revealing responses compatible with the presence of ordinal codes in vlPFC,
consistent with VS-BIND. I also developed a novel method of causal analysis on multivariate
patterns, representational Granger causality, capable of detecting flow of distinct representations
within the brain. This pointed to top-down transmission of syntactic predictions during the AG task,
from vlPFC to auditory cortex, largely in the opposite direction to stimulus encodings, consistent
with predictive coding accounts. It finally suggested roles for the temporoparietal junction and
frontal operculum during grammaticality processing, congruent with prior literature.
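As background for the causal-analysis idea, classic bivariate Granger causality on synthetic scalar series can be sketched as below; the representational variant developed in the thesis generalizes this to multivariate neural patterns and is not reproduced here:

```python
import numpy as np

# x "Granger-causes" y if the past of x improves prediction of y
# beyond what y's own past achieves.
rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()   # y driven by past x

def ar_residual_var(target, predictors):
    # least-squares fit of target[t] on the predictors at t-1
    X = np.column_stack([p[:-1] for p in predictors])
    b, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    return (target[1:] - X @ b).var()

restricted = ar_residual_var(y, [y])       # y's own past only
full = ar_residual_var(y, [y, x])          # add the past of x
print(full < restricted)                   # True: past x explains y
```

A formal test would compare the two residual variances with an F-statistic; the sketch only shows the directional logic of the comparison.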
This work provides novel insights into the neurocomputational basis of cognitive structure-building,
generating hypotheses for future study, and potentially contributing to AI and translational efforts.
Funded by the Wellcome Trust and the European Research Council.