Spatial Filtering Pipeline Evaluation of Cortically Coupled Computer Vision System for Rapid Serial Visual Presentation
Rapid Serial Visual Presentation (RSVP) is a paradigm that supports the
application of cortically coupled computer vision to rapid image search. In
RSVP, images are presented to participants in a rapid serial sequence, which can
evoke event-related potentials (ERPs) detectable in their electroencephalogram
(EEG). The contemporary approach to this problem involves supervised spatial
filtering techniques, applied to enhance the discriminative information in the
EEG data. In this paper we make two primary
contributions to that field: 1) We propose a novel spatial filtering method
which we call the Multiple Time Window LDA Beamformer (MTWLB) method; 2) we
provide a comprehensive comparison of nine spatial filtering pipelines using
three spatial filtering schemes, namely MTWLB, xDAWN and Common Spatial Pattern
(CSP), and three linear classification methods: Linear Discriminant Analysis
(LDA), Bayesian Linear Regression (BLR) and Logistic Regression (LR). Three
pipelines without spatial filtering are used as a baseline comparison. The Area
Under the Curve (AUC) is used as the evaluation metric in this paper. The results
reveal that MTWLB and xDAWN spatial filtering techniques enhance the
classification performance of the pipeline but CSP does not. The results also
support the conclusion that LR can be effective for RSVP-based BCI if
discriminative features are available.
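As a rough illustration of the kind of pipeline being compared, the sketch below fits a Fisher LDA projection on synthetic two-class "EEG feature" data and scores it with a pairwise AUC. It does not reproduce MTWLB, xDAWN, CSP or the paper's data; all sizes and names are illustrative, and the AUC is computed in-sample for brevity (a real evaluation would use held-out trials).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "EEG feature" data: 100 trials x 8 channels per class,
# with target trials (class 1) shifted in mean relative to non-targets.
n, d = 100, 8
X0 = rng.normal(0.0, 1.0, size=(n, d))          # non-target trials
X1 = rng.normal(0.0, 1.0, size=(n, d)) + 0.8    # target trials (shifted mean)

def fisher_lda_weights(Xa, Xb):
    """Fisher LDA direction w = Sw^{-1} (mu_b - mu_a), with a small ridge."""
    mu_a, mu_b = Xa.mean(0), Xb.mean(0)
    Sw = np.cov(Xa, rowvar=False) + np.cov(Xb, rowvar=False)
    return np.linalg.solve(Sw + 1e-6 * np.eye(d), mu_b - mu_a)

def auc(scores_neg, scores_pos):
    """Pairwise-comparison AUC: P(score_pos > score_neg), ties counted 0.5."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

w = fisher_lda_weights(X0, X1)
print(auc(X0 @ w, X1 @ w))   # well above the 0.5 chance level
```

The same AUC scoring applies unchanged whether or not a spatial filter precedes the projection, which is what makes it a convenient common metric across the nine pipelines.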
NiftyNet: a deep-learning platform for medical imaging
Medical image analysis and computer-assisted intervention problems are
increasingly being addressed with deep-learning-based solutions. Established
deep-learning platforms are flexible but do not provide specific functionality
for medical image analysis, and adapting them for this application requires
substantial implementation effort. Thus, there has been considerable duplication
of effort and incompatible infrastructure developed across many research
groups. This work presents the open-source NiftyNet platform for deep learning
in medical imaging. The ambition of NiftyNet is to accelerate and simplify the
development of these solutions, and to provide a common mechanism for
disseminating research outputs for the community to use, adapt and build upon.
NiftyNet provides a modular deep-learning pipeline for a range of medical
imaging applications including segmentation, regression, image generation and
representation learning applications. Components of the NiftyNet pipeline
including data loading, data augmentation, network architectures, loss
functions and evaluation metrics are tailored to, and take advantage of, the
idiosyncrasies of medical image analysis and computer-assisted intervention.
NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D
and 3D images and computational graphs by default.
We present three illustrative medical image analysis applications built using
NiftyNet: (1) segmentation of multiple abdominal organs from computed
tomography; (2) image regression to predict computed tomography attenuation
maps from brain magnetic resonance images; and (3) generation of simulated
ultrasound images for specified anatomical poses.
NiftyNet enables researchers to rapidly develop and distribute deep learning
solutions for segmentation, regression, image generation and representation
learning applications, or extend the platform to new applications.
Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge
Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6
figures; update includes additional applications, updated author list and
formatting for journal submission.
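To make the notion of a tailored pipeline component concrete, here is a minimal NumPy sketch of a soft Dice loss, a loss function commonly used for medical image segmentation of the kind NiftyNet bundles. This is an illustrative stand-in, not NiftyNet's actual TensorFlow implementation, which differs in detail (multi-class handling, smoothing, graph execution).

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    pred   : predicted foreground probabilities, any array shape
    target : binary ground-truth mask, same shape
    Returns 1 - Dice coefficient, so 0.0 means perfect overlap.
    """
    pred, target = pred.ravel(), target.ravel()
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# A toy 4x4 mask: perfect and fully wrong predictions bracket the loss range.
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
print(soft_dice_loss(mask, mask))        # ~0.0 (perfect prediction)
print(soft_dice_loss(1.0 - mask, mask))  # ~1.0 (fully wrong prediction)
```

Overlap-based losses like Dice are preferred over plain cross-entropy here because organ foregrounds typically occupy a tiny fraction of a CT or MR volume, and Dice is insensitive to that class imbalance.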
EEG source imaging assists decoding in a face recognition task
EEG-based brain state decoding has numerous applications. State-of-the-art
decoding is based on processing of the multivariate sensor-space signal;
however, evidence is mounting that EEG source reconstruction can assist
decoding. EEG source imaging leads to high-dimensional representations, so
rather strong a priori information must be invoked. Recent work by Edelman et
al. (2016) demonstrated that introducing a spatially focal source-space
representation can improve decoding of motor imagery. In this work we explore
the generality of the Edelman et al. hypothesis by considering decoding of face
recognition. This task concerns the differentiation of brain responses to
images of faces and scrambled faces, and poses a rather difficult decoding
problem at the single-trial level. We implement the pipeline using spatially
focused features and show that this approach is challenged: source imaging does
not lead to improved decoding. We then design a distributed pipeline in which
the classifier has access to brain-wide features, which does lead to a 15%
reduction in the error rate using source-space features. Hence, our work
presents supporting evidence for the hypothesis that source imaging improves
decoding.
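The focal-versus-distributed contrast can be illustrated on synthetic data: when the discriminative signal is spread weakly across many sources, a classifier restricted to one "focal" feature underperforms one that sees all of them. Everything below is made up for illustration (a nearest-class-mean classifier on Gaussian data), not the paper's pipeline or its 15% figure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "source-space" trials: a weak class-dependent shift on every
# one of 40 sources, mimicking a distributed brain response.
n, d = 400, 40
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X += 0.25 * y[:, None]

def nearest_mean_error(Xtr, ytr, Xte, yte):
    """Test error rate of a nearest-class-mean classifier."""
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - m1, axis=1)
            < np.linalg.norm(Xte - m0, axis=1)).astype(int)
    return (pred != yte).mean()

tr, te = slice(0, 200), slice(200, 400)
focal = nearest_mean_error(X[tr, :1], y[tr], X[te, :1], y[te])    # one source
distributed = nearest_mean_error(X[tr], y[tr], X[te], y[te])      # all sources
print(focal, distributed)   # distributed features give the lower error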
High-throughput Binding Affinity Calculations at Extreme Scales
Resistance to chemotherapy and molecularly targeted therapies is a major
factor in limiting the effectiveness of cancer treatment. In many cases,
resistance can be linked to genetic changes in target proteins, either
pre-existing or evolutionarily selected during treatment. Key to overcoming
this challenge is an understanding of the molecular determinants of drug
binding. Using multi-stage pipelines of molecular simulations we can gain
insights into the binding free energy and the residence time of a ligand, which
can inform both stratified and personal treatment regimes and drug development.
To support the scalable, adaptive and automated calculation of the binding free
energy on high-performance computing resources, we introduce the
High-throughput Binding Affinity Calculator (HTBAC). HTBAC uses a
building-block approach in order to attain both workflow flexibility and
performance. We demonstrate close to perfect weak scaling to hundreds of
concurrent multi-stage binding affinity calculation pipelines. This permits a
rapid time-to-solution that is essentially independent of the calculation
protocol, the size of candidate ligands and the number of ensemble simulations.
As such, HTBAC advances the state of the art of binding affinity calculations
and protocols.
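The execution pattern described, many independent multi-stage pipelines running concurrently, can be sketched with Python's standard concurrency tools. The stage functions below are hypothetical stand-ins; a real HTBAC pipeline dispatches MD simulation and free-energy analysis tasks to HPC resources rather than trivial string transforms.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the stages of one binding-affinity pipeline.
def minimize(ligand):   return f"{ligand}:min"
def equilibrate(state): return f"{state}:eq"
def production(state):  return f"{state}:prod"

def run_pipeline(ligand):
    """Run the multi-stage protocol for one candidate ligand, in order."""
    state = minimize(ligand)
    state = equilibrate(state)
    return production(state)

# Launch one pipeline per candidate ligand; pipelines are independent,
# which is what makes near-perfect weak scaling attainable.
ligands = [f"lig{i}" for i in range(8)]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_pipeline, ligands))
print(results[0])   # 'lig0:min:eq:prod'
```

Because each pipeline touches only its own ligand's state, adding more concurrent pipelines adds work without adding coordination, so time-to-solution stays roughly flat as the ligand count grows alongside resources.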
Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science
As the field of data science continues to grow, there will be an
ever-increasing demand for tools that make machine learning accessible to
non-experts. In this paper, we introduce the concept of tree-based pipeline
optimization for automating one of the most tedious parts of machine
learning---pipeline design. We implement an open source Tree-based Pipeline
Optimization Tool (TPOT) in Python and demonstrate its effectiveness on a
series of simulated and real-world benchmark data sets. In particular, we show
that TPOT can design machine learning pipelines that provide a significant
improvement over a basic machine learning analysis while requiring little to no
input or prior knowledge from the user. We also address the tendency for TPOT
to design overly complex pipelines by integrating Pareto optimization, which
produces compact pipelines without sacrificing classification accuracy. As
such, this work represents an important step toward fully automating machine
learning pipeline design.Comment: 8 pages, 5 figures, preprint to appear in GECCO 2016, edits not yet
made from reviewer comment
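The Pareto-optimization idea, keeping only pipelines that no other pipeline beats on both complexity and accuracy, can be sketched in a few lines. The candidate tuples below are made-up stand-ins for evolved pipelines (operator count, validation accuracy); this is not TPOT's actual API.

```python
def pareto_front(pipelines):
    """Return the pipelines not dominated on (fewer operators, higher accuracy).

    pipelines: list of (n_operators, accuracy) tuples.
    A pipeline is dominated if some other pipeline is at least as accurate
    with no more operators, and strictly better on one of the two axes.
    """
    front = []
    for i, (ci, ai) in enumerate(pipelines):
        dominated = any(
            (cj <= ci and aj >= ai) and (cj < ci or aj > ai)
            for j, (cj, aj) in enumerate(pipelines) if j != i
        )
        if not dominated:
            front.append((ci, ai))
    return sorted(front)

candidates = [(1, 0.80), (2, 0.85), (3, 0.85), (5, 0.90), (6, 0.88)]
print(pareto_front(candidates))   # [(1, 0.8), (2, 0.85), (5, 0.9)]
```

Here (3, 0.85) is dropped because (2, 0.85) matches its accuracy with fewer operators, and (6, 0.88) is dropped because (5, 0.90) beats it on both axes; the user then picks the accuracy/complexity trade-off they prefer from the surviving front.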