Medical imaging analysis with artificial neural networks
Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, in medical image segmentation and edge detection for visual content analysis, and in medical image registration for pre-processing and post-processing, with the aims of raising awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure can be applied to solve a medical imaging problem; (ii) how medical images can be analysed, processed, and characterised by neural networks; and (iii) how neural networks can be extended to solve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.
Nonparametric Modeling of Dynamic Functional Connectivity in fMRI Data
Dynamic functional connectivity (FC) has in recent years become a topic of
interest in the neuroimaging community. Several models and methods exist for
both functional magnetic resonance imaging (fMRI) and electroencephalography
(EEG), and the results point towards the conclusion that FC exhibits dynamic
changes. The existing approaches modeling dynamic connectivity have primarily
been based on time-windowing the data and k-means clustering. We propose a
non-parametric generative model for dynamic FC in fMRI that does not rely on
specifying window lengths and number of dynamic states. Rooted in Bayesian
statistical modeling we use the predictive likelihood to investigate if the
model can discriminate between a motor task and rest both within and across
subjects. We further investigate what drives the dynamic states by applying the
model to the entire data set collated across subjects and task/rest. We find
that the number of states extracted is driven by subject variability and
preprocessing differences, while the individual states are almost purely
defined by either task or rest. This calls into question how dynamic FC is
generally interpreted and points to the need for more research on what drives
dynamic FC.
Comment: 8 pages, 1 figure. Presented at the Machine Learning and
Interpretation in Neuroimaging Workshop (MLINI-2015), 2015 (arXiv:1605.04435).
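As a point of reference, the time-windowing plus k-means approach that this abstract argues against can be sketched in a few lines. Everything here is illustrative: the data are synthetic stand-ins for region-averaged fMRI, and the window length, stride, and number of states K are exactly the quantities the proposed nonparametric model avoids having to fix in advance.

```python
import numpy as np

rng = np.random.default_rng(0)
T, R = 300, 5                     # time points, brain regions (synthetic)
ts = rng.standard_normal((T, R))  # stand-in for region-averaged fMRI series

# sliding-window FC: vectorized upper triangle of each windowed correlation
win, stride = 50, 10              # assumed values; the choice is arbitrary
iu = np.triu_indices(R, k=1)
fc = np.array([np.corrcoef(ts[s:s + win].T)[iu]
               for s in range(0, T - win + 1, stride)])

# minimal k-means (Lloyd's algorithm) over the windowed FC vectors
K = 3                             # number of dynamic states, fixed a priori
centers = fc[rng.choice(len(fc), K, replace=False)]
for _ in range(20):
    d = ((fc[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = d.argmin(1)          # assign each window to nearest center
    centers = np.array([fc[labels == k].mean(0) if (labels == k).any()
                        else centers[k] for k in range(K)])
print(fc.shape, labels.shape)
```

Each window's label is then read as the "state" active at that time, which is precisely where the dependence on win, stride, and K enters.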
Tensor Analysis and Fusion of Multimodal Brain Images
Current high-throughput data acquisition technologies probe dynamical systems
with different imaging modalities, generating massive data sets at different
spatial and temporal resolutions posing challenging problems in multimodal data
fusion. A case in point is the attempt to parse out the brain structures and
networks that underpin human cognitive processes by analysis of different
neuroimaging modalities (functional MRI, EEG, NIRS etc.). We emphasize that the
multimodal, multi-scale nature of neuroimaging data is well reflected by a
multi-way (tensor) structure where the underlying processes can be summarized
by a relatively small number of components or "atoms". We introduce
Markov-Penrose diagrams, an integration of Bayesian DAG and tensor network
notation, in order to analyze these models. These diagrams not only clarify
matrix and tensor EEG and fMRI time/frequency analysis and inverse problems,
but also help understand multimodal fusion via Multiway Partial Least Squares
and Coupled Matrix-Tensor Factorization. We show here, for the first time, that
Granger causal analysis of brain networks is a tensor regression problem, thus
allowing the atomic decomposition of brain networks. Analysis of EEG and fMRI
recordings shows the potential of the methods and suggests their use in other
scientific domains.
Comment: 23 pages, 15 figures, submitted to Proceedings of the IEE
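The claim that Granger causal analysis reduces to a regression problem can be seen already in the bivariate case: one compares the residual error of predicting a signal from its own past against prediction from its own past plus another signal's past. The sketch below uses synthetic data and ordinary least squares; it is a toy stand-in, not the tensor-regression formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):                  # y drives x with a one-step lag
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.standard_normal()

def residual_msq(target, predictors):
    """Least-squares lag-1 fit of target on the given past signals;
    returns the mean squared residual."""
    X = np.column_stack([p[:-1] for p in predictors])
    b, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    r = target[1:] - X @ b
    return (r ** 2).mean()

full = residual_msq(x, [x, y])         # past of x and past of y
restricted = residual_msq(x, [x])      # past of x only
gc_y_to_x = np.log(restricted / full)  # > 0 means y Granger-causes x
print(gc_y_to_x)
```

Because the restricted model is nested in the full one, the log-ratio is nonnegative in-sample; a clearly positive value here reflects the planted y-to-x coupling.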
Estimating Time-Varying Effective Connectivity in High-Dimensional fMRI Data Using Regime-Switching Factor Models
Recent studies on analyzing dynamic brain connectivity rely on sliding-window
analysis or time-varying coefficient models which are unable to capture both
smooth and abrupt changes simultaneously. Emerging evidence suggests
state-related changes in brain connectivity where dependence structure
alternates between a finite number of latent states or regimes. Another
challenge is the inference of full-brain networks with a large number of nodes.
We
employ a Markov-switching dynamic factor model in which the state-driven
time-varying connectivity regimes of high-dimensional fMRI data are
characterized by lower-dimensional common latent factors, following a
regime-switching process. It enables a reliable, data-adaptive estimation of
change-points of connectivity regimes and the massive dependencies associated
with each regime. We consider the switching VAR to quantify the dynamic
effective connectivity. We propose a three-step estimation procedure: (1)
extracting the factors using principal component analysis (PCA); (2)
identifying dynamic connectivity states using the factor-based switching vector
autoregressive (VAR) models in a state-space formulation via the Kalman filter
and the expectation-maximization (EM) algorithm; and (3) constructing the
high-dimensional connectivity metrics for each state based on subspace
estimates. Simulation results show that our proposed estimator outperforms the
K-means clustering of time-windowed coefficients, providing more accurate
estimation of regime dynamics and connectivity metrics in high-dimensional
settings. Applications to analyzing resting-state fMRI data identify dynamic
changes in brain states during rest, and reveal distinct directed connectivity
patterns and modular organization in resting-state networks across different
states.
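Step (1) of the procedure above is plain PCA on the high-dimensional series. The sketch below extracts factors via an SVD and then substitutes a crude windowed-VAR fit for steps (2)-(3), since the full switching state-space model with Kalman filtering and EM is beyond a short example; the dimensions, window length, and factor count q are all assumed values on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, q = 400, 60, 3                 # time points, ROIs, factors (assumed)
Y = rng.standard_normal((T, N))      # stand-in for high-dimensional fMRI

# (1) PCA via SVD of the demeaned data: factor scores and loadings
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
factors = U[:, :q] * s[:q]           # T x q factor time series
loadings = Vt[:q].T                  # N x q loading matrix

# crude stand-in for (2): fit a lag-1 VAR on the factors in sliding
# windows; clustering these coefficient vectors would give candidate
# connectivity states (the paper instead uses a switching VAR with EM)
win = 80
coefs = []
for start in range(0, T - win, 40):
    F = factors[start:start + win]
    A, *_ = np.linalg.lstsq(F[:-1], F[1:], rcond=None)  # q x q VAR matrix
    coefs.append(A.ravel())
coefs = np.array(coefs)
print(factors.shape, loadings.shape, coefs.shape)
```

Step (3) would then map each state's small q x q factor dynamics back to an N x N connectivity matrix through the loadings, which is what makes the high-dimensional problem tractable.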
Neural Encoding and Decoding with Deep Learning for Natural Vision
The overarching objective of this work is to bridge neuroscience and artificial intelligence to ultimately build machines that learn, act, and think like humans. In the context of vision, the brain enables humans to readily make sense of the visual world, e.g., recognizing visual objects. Developing human-like machines requires understanding the working principles underlying human vision. In this dissertation, I ask how the brain encodes and represents dynamic visual information from the outside world, whether brain activity can be directly decoded to reconstruct and categorize what a person is seeing, and whether neuroscience theory can be applied to artificial models to advance computer vision. To address these questions, I used deep neural networks (DNN) to establish encoding and decoding models for describing the relationships between the brain and the visual stimuli. Using the DNN, the encoding models were able to predict the functional magnetic resonance imaging (fMRI) responses throughout the visual cortex given video stimuli; the decoding models were able to reconstruct and categorize the visual stimuli based on fMRI activity. To further advance the DNN model, I implemented a new bidirectional and recurrent neural network based on the predictive coding theory. As a theory in neuroscience, predictive coding explains the interaction among feedforward, feedback, and recurrent connections. The results showed that this brain-inspired model significantly outperforms feedforward-only DNNs in object recognition. These studies have a positive impact on understanding the neural computations underlying human vision and on improving computer vision with knowledge from neuroscience.
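The encoding-model idea can be shown in miniature as a ridge regression from stimulus features to per-voxel responses. The feature matrix below is a random stand-in for DNN activations, and the feature dimension, voxel count, and ridge penalty are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_stim, n_feat, n_vox = 200, 50, 30
X = rng.standard_normal((n_stim, n_feat))          # stand-in DNN features
W_true = rng.standard_normal((n_feat, n_vox))      # planted encoding weights
Y = X @ W_true + 0.1 * rng.standard_normal((n_stim, n_vox))  # voxel responses

# ridge regression: closed-form solve of (X'X + lam*I) W = X'Y
lam = 1.0                                          # ridge penalty (assumed)
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# per-voxel prediction accuracy: correlation of predicted vs measured
pred = X @ W
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(n_vox)]
print(min(r))
```

In an actual encoding analysis X would hold layer activations of a DNN shown the same video stimuli as the subject, and the per-voxel correlations would be evaluated on held-out data rather than in-sample.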
Categories and functional units: An infinite hierarchical model for brain activations
We present a model that describes the structure in the responses of different brain areas to a set of stimuli in terms of stimulus categories (clusters of stimuli) and functional units (clusters of voxels). We assume that voxels within a unit respond similarly to all stimuli from the same category, and design a nonparametric hierarchical model to capture inter-subject variability among the units. The model explicitly encodes the relationship between brain activations and fMRI time courses. A variational inference algorithm derived based on the model learns categories, units, and a set of unit-category activation probabilities from data. When applied to data from an fMRI study of object recognition, the method finds meaningful and consistent clusterings of stimuli into categories and voxels into units.
Functional Connectome of the Human Brain with Total Correlation
Recent studies proposed the use of Total Correlation to describe functional connectivity
among brain regions as a multivariate alternative to conventional pairwise measures such as correlation or mutual information. In this work, we build on this idea to infer a large-scale (whole-brain)
connectivity network based on Total Correlation and show the possibility of using this kind of
network as biomarkers of brain alterations. In particular, this work uses Correlation Explanation
(CorEx) to estimate Total Correlation. First, we verify that CorEx estimates of
Total Correlation and the resulting clusterings are reliable when compared
against ground-truth values. Second, the large-scale connectivity network
inferred from large open fMRI datasets is consistent with existing neuroscience
studies and, interestingly, captures additional relations beyond pairwise
interactions between regions. Finally, we show how connectivity graphs based on
Total Correlation can also be an effective tool to aid in the discovery of
brain diseases.
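Total Correlation has a closed form in the Gaussian case, TC = sum_i H(X_i) - H(X) = 0.5 * (sum_i log Sigma_ii - log det Sigma), which makes the quantity easy to inspect. CorEx estimates it from samples; the sketch below just evaluates the Gaussian formula on two hand-built covariance matrices to show what TC measures (zero for independent variables, large for strongly coupled ones).

```python
import numpy as np

def gaussian_total_correlation(Sigma):
    """TC of a zero-mean Gaussian with covariance Sigma (in nats):
    0.5 * (sum of log marginal variances - log det Sigma)."""
    Sigma = np.asarray(Sigma, dtype=float)
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (np.sum(np.log(np.diag(Sigma))) - logdet)

ident = np.eye(3)                     # independent variables: TC = 0
rho = 0.8
coupled = np.array([[1.0, rho, rho],  # strongly coupled triple: TC > 0
                    [rho, 1.0, rho],
                    [rho, rho, 1.0]])
print(gaussian_total_correlation(ident),
      gaussian_total_correlation(coupled))
```

Unlike a pairwise correlation matrix, a single TC value summarizes the total multivariate dependence of a whole group of regions, which is what makes it usable as a group-level edge weight in a connectivity graph.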