878 research outputs found
Applications of brain imaging methods in driving behaviour research
Applications of neuroimaging methods have substantially contributed to the
scientific understanding of human factors during driving by providing a deeper
insight into the neuro-cognitive aspects of the driver's brain. This has been
achieved by conducting simulated (and occasionally, field) driving experiments
while collecting certain types of driver brain signals. Here, this body of
work is comprehensively reviewed at both macro and micro scales. Different
themes of neuroimaging driving behaviour research are identified and the
findings within each theme are synthesised. The surveyed literature has
reported on applications of four major brain imaging methods. These include
Functional Magnetic Resonance Imaging (fMRI), Electroencephalography (EEG),
Functional Near-Infrared Spectroscopy (fNIRS) and Magnetoencephalography (MEG),
with the first two being the most common methods in this domain. While
collecting driver fMRI signals has been particularly instrumental in studying
neural correlates of intoxicated driving (e.g. alcohol or cannabis) or
distracted driving, the EEG method has predominantly been utilised in efforts
to develop automatic fatigue/drowsiness detection systems, a topic in which
the literature on the neuro-ergonomics of driving has shown a particular
spike of interest within the last few years. The
survey also reveals that topics such as driver brain activity in semi-automated
settings or the brain activity of drivers with brain injuries or chronic
neurological conditions have by contrast been investigated to a very limited
extent. Further, potential driving behaviour topics are identified that could
benefit from the adoption of neuroimaging methods in future studies.
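The EEG-based fatigue/drowsiness detection systems mentioned above typically rely on spectral features such as theta and alpha band power. A minimal, self-contained sketch of that feature extraction (synthetic one-channel signal and a naive DFT for illustration; real systems use windowed FFTs over multi-channel recordings):

```python
import math

def band_power(signal, fs, lo, hi):
    """Power in the [lo, hi) Hz band via a naive DFT (illustration only)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq < hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

# Synthetic 1-second "EEG" trace: strong 6 Hz theta plus weak 10 Hz alpha.
fs = 128
sig = [math.sin(2 * math.pi * 6 * t / fs) + 0.3 * math.sin(2 * math.pi * 10 * t / fs)
       for t in range(fs)]

theta = band_power(sig, fs, 4, 8)   # theta band, 4-8 Hz
alpha = band_power(sig, fs, 8, 13)  # alpha band, 8-13 Hz
ratio = theta / alpha               # a rising theta/alpha ratio is a common drowsiness proxy
```

A detector would threshold or classify such band-power features over sliding windows; the specific bands and ratios vary across the surveyed studies.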
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis and adapting them for this application requires
substantial implementation effort. Thus, there has been substantial duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications, including segmentation, regression, image generation and
representation learning. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
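The modular decomposition described above (data loading, augmentation, network, loss) can be sketched abstractly; the component names and types below are illustrative and are not NiftyNet's actual API:

```python
from typing import Callable, Iterable, Tuple, List

# Hypothetical component types mirroring the pipeline stages named above.
Sample = Tuple[List[float], List[float]]          # (image, label), toy 1-D "images"
Loader = Callable[[], Iterable[Sample]]           # yields (image, label) pairs
Augment = Callable[[List[float]], List[float]]    # random transform of a sample
Network = Callable[[List[float]], List[float]]    # prediction from an image
Loss = Callable[[List[float], List[float]], float]

def run_epoch(loader: Loader, augment: Augment,
              network: Network, loss: Loss) -> float:
    """One pass over the data; each stage is an independently swappable component."""
    total = 0.0
    for image, label in loader():
        prediction = network(augment(image))
        total += loss(prediction, label)
    return total

# Toy instantiation: no-op augmentation, identity "network", squared error.
data = [([1.0, 2.0], [1.0, 2.0]), ([3.0, 0.0], [3.0, 1.0])]
total = run_epoch(lambda: data,
                  lambda x: x,
                  lambda x: x,
                  lambda p, y: sum((a - b) ** 2 for a, b in zip(p, y)))
```

The point of such a decomposition is that, say, a new loss function or augmentation can be dropped in without touching the rest of the pipeline, which is the duplication the platform aims to eliminate.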
We present three illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge
Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6
figures; update includes additional applications, updated author list and
formatting for journal submission.
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Deep Active Inference for Autonomous Robot Navigation
Active inference is a theory that underpins the way biological agents
perceive and act in the real world. At its core, active inference is based on
the principle that the brain is an approximate Bayesian inference engine,
building an internal generative model to drive agents towards minimal surprise.
Although this theory has shown interesting results with grounding in cognitive
neuroscience, its application remains limited to simulations with small,
predefined sensor and state spaces.
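The core claim above, that perception approximates Bayesian inference by minimising variational free energy (an upper bound on surprise), can be illustrated with a toy discrete model; the numbers below are made up for the example:

```python
import math

# Toy generative model: two hidden states, one observation already made.
prior = [0.5, 0.5]       # p(s)
likelihood = [0.9, 0.2]  # p(o | s) for the observed o

def free_energy(q):
    """F = E_q[ln q(s) - ln p(o, s)]; an upper bound on surprise -ln p(o)."""
    return sum(q[s] * (math.log(q[s]) - math.log(likelihood[s] * prior[s]))
               for s in range(2) if q[s] > 0)

# The exact posterior p(s | o) minimises F, at which point F = -ln p(o).
evidence = sum(likelihood[s] * prior[s] for s in range(2))          # p(o) = 0.55
posterior = [likelihood[s] * prior[s] / evidence for s in range(2)]

f_post = free_energy(posterior)    # -ln(0.55), about 0.598
f_other = free_energy([0.5, 0.5])  # any other q yields a strictly larger F
```

Deep active inference replaces the small enumerable state space here with a learned generative model, which is what allows camera frames to serve as observations.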
In this paper, we leverage recent advances in deep learning to build more
complex generative models that can work without a predefined state space.
State representations are learned end-to-end from real-world, high-dimensional
sensory data such as camera frames. We also show that these generative models
can be used to engage in active inference. To the best of our knowledge, this
is the first application of deep active inference to a real-world robot
navigation task.
Comment: workshop paper at BAICS at ICLR 202
Deep Interpretability Methods for Neuroimaging
Brain dynamics are highly complex and yet hold the key to understanding brain function and dysfunction. The dynamics captured by resting-state functional magnetic resonance imaging data are noisy, high-dimensional, and not readily interpretable. The typical approach of reducing this data to low-dimensional features and focusing on the most predictive features comes with strong assumptions and can miss essential aspects of the underlying dynamics. In contrast, introspection of discriminatively trained deep learning models may uncover disorder-relevant elements of the signal at the level of individual time points and spatial locations. Nevertheless, the difficulty of reliable training on high-dimensional but small-sample datasets and the unclear relevance of the resulting predictive markers prevent the widespread use of deep learning in functional neuroimaging. In this dissertation, we address these challenges by proposing a deep learning framework to learn from high-dimensional dynamical data while maintaining stable, ecologically valid interpretations. The developed model is pre-trainable and alleviates the need to collect an enormous number of neuroimaging samples to achieve optimal training.
We also provide a quantitative validation module, Retain and Retrain (RAR), that can objectively verify the higher predictability of the dynamics learned by the model. Results successfully demonstrate that the proposed framework enables learning the fMRI dynamics directly from small data and capturing compact, stable interpretations of features predictive of function and dysfunction. We also comprehensively reviewed deep interpretability literature in the neuroimaging domain. Our analysis reveals the ongoing trend of interpretability practices in neuroimaging studies and identifies the gaps that should be addressed for effective human-machine collaboration in this domain.
This dissertation also proposes a post hoc interpretability method, Geometrically Guided Integrated Gradients (GGIG), that leverages geometric properties of the functional space as learned by a deep learning model. With extensive experiments and quantitative validation on the MNIST and ImageNet datasets, we demonstrate that GGIG outperforms integrated gradients (IG), a popular interpretability method in the literature. As GGIG is able to identify the contours of the discriminative regions in the input space, it may be useful in various medical imaging tasks where fine-grained localization as an explanation is beneficial.
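For context, plain integrated gradients, the baseline GGIG is compared against, attributes a prediction F(x) by scaling each input difference by the path-averaged gradient from a baseline x' to x. A minimal sketch on a toy function with a known gradient (not the dissertation's GGIG itself):

```python
def f(x):
    """Toy 'model': f(x) = x0^2 + 3*x1."""
    return x[0] ** 2 + 3 * x[1]

def grad_f(x):
    """Analytic gradient of f; a real model would use autodiff here."""
    return [2 * x[0], 3.0]

def integrated_gradients(x, baseline, steps=1000):
    """IG_i = (x_i - x'_i) * average of dF/dx_i along the straight path x' -> x."""
    attrs = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint Riemann sum over the path
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(len(x)):
            attrs[i] += g[i] / steps
    return [(xi - b) * a for xi, b, a in zip(x, baseline, attrs)]

x, baseline = [2.0, 1.0], [0.0, 0.0]
attrs = integrated_gradients(x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
gap = f(x) - f(baseline)
```

The completeness property checked at the end is what makes IG a standard reference point; GGIG, per the abstract, additionally exploits the geometry of the learned functional space.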