Data Augmentation of Wearable Sensor Data for Parkinson's Disease Monitoring using Convolutional Neural Networks
While convolutional neural networks (CNNs) have been successfully applied to
many challenging classification applications, they typically require large
datasets for training. When the availability of labeled data is limited, data
augmentation is a critical preprocessing step for CNNs. However, data
augmentation for wearable sensor data has not yet been thoroughly investigated.
In this paper, various data augmentation methods for wearable sensor data are
proposed. The proposed methods and CNNs are applied to the classification of
the motor state of Parkinson's Disease patients, which is challenging due to
small dataset size, noisy labels, and large intra-class variability.
Appropriate augmentation improves the classification performance from 77.54%
to 86.88%. Comment: ICMI 2017 (oral session)
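The kinds of time-series augmentations described above can be sketched for tri-axial wearable sensor data. The function names, parameters, and the specific transforms below (jittering, scaling, segment permutation) are illustrative choices, not the paper's implementation:

```python
import numpy as np

def jitter(x, sigma=0.05, rng=None):
    """Add Gaussian noise to every sample (simulates sensor noise)."""
    rng = rng or np.random.default_rng(0)
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    """Multiply each channel by a random factor (simulates gain variation)."""
    rng = rng or np.random.default_rng(0)
    factors = rng.normal(1.0, sigma, size=(1, x.shape[1]))
    return x * factors

def permute(x, n_segments=4, rng=None):
    """Split the series into segments and shuffle their order,
    perturbing the temporal structure while keeping local patterns."""
    rng = rng or np.random.default_rng(0)
    segments = np.array_split(x, n_segments, axis=0)
    rng.shuffle(segments)
    return np.concatenate(segments, axis=0)

# x: 128 time steps x 3 accelerometer axes
x = np.zeros((128, 3))
augmented = [jitter(x), scale(x), permute(x)]
```

Each transform preserves the input shape, so augmented copies can be mixed directly into the CNN training set.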
Coronary Artery Centerline Extraction in Cardiac CT Angiography Using a CNN-Based Orientation Classifier
Coronary artery centerline extraction in cardiac CT angiography (CCTA) images
is a prerequisite for evaluation of stenoses and atherosclerotic plaque. We
propose an algorithm that extracts coronary artery centerlines in CCTA using a
convolutional neural network (CNN).
A 3D dilated CNN is trained to predict the most likely direction and radius
of an artery at any given point in a CCTA image based on a local image patch.
Starting from a single seed point placed manually or automatically anywhere in
a coronary artery, a tracker follows the vessel centerline in two directions
using the predictions of the CNN. Tracking is terminated when no direction can
be identified with high certainty.
The CNN was trained using 32 manually annotated centerlines in a training set
consisting of 8 CCTA images provided in the MICCAI 2008 Coronary Artery
Tracking Challenge (CAT08). Evaluation using 24 test images of the CAT08
challenge showed that extracted centerlines had an average overlap of 93.7%
with 96 manually annotated reference centerlines. Extracted centerline points
were highly accurate, with an average distance of 0.21 mm to reference
centerline points. In a second test set consisting of 50 CCTA scans, 5,448
markers in the coronary arteries were used as seed points to extract single
centerlines. This showed strong correspondence between extracted centerlines
and manually placed markers. In a third test set containing 36 CCTA scans,
fully automatic seeding and centerline extraction led to extraction of on
average 92% of clinically relevant coronary artery segments.
The proposed method is able to accurately and efficiently determine the
direction and radius of coronary arteries. The method can be trained with
limited training data, and once trained allows fast automatic or interactive
extraction of coronary artery trees from CCTA images. Comment: Accepted in Medical Image Analysis
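The tracking loop described above (predict a direction and radius at the current point, step along the vessel, stop when no direction is certain) can be sketched as follows. Here `predict` is a hypothetical stand-in for the trained 3D dilated CNN, and the step size and confidence threshold are illustrative values, not the paper's:

```python
import numpy as np

def track_centerline(predict, seed, step=0.5, max_steps=1000, min_conf=0.5):
    """Iteratively follow a vessel from a seed point.

    predict(point) stands in for the trained CNN: given a local patch
    around `point` it returns a unit direction vector, a radius estimate,
    and a confidence score (hypothetical interface). Tracking stops when
    the confidence falls below `min_conf`.
    """
    path = [np.asarray(seed, dtype=float)]
    radii = []
    direction = None
    for _ in range(max_steps):
        d, r, conf = predict(path[-1])
        if conf < min_conf:
            break
        # Avoid doubling back: keep the step consistent with travel so far.
        if direction is not None and np.dot(d, direction) < 0:
            d = -d
        direction = d
        radii.append(r)
        path.append(path[-1] + step * d)
    return np.array(path), np.array(radii)
```

In the paper the tracker runs in both directions from the seed; the sketch above covers one direction, and the second pass would simply start with the negated initial direction.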
SSD: Single Shot MultiBox Detector
We present a method for detecting objects in images using a single deep
neural network. Our approach, named SSD, discretizes the output space of
bounding boxes into a set of default boxes over different aspect ratios and
scales per feature map location. At prediction time, the network generates
scores for the presence of each object category in each default box and
produces adjustments to the box to better match the object shape. Additionally,
the network combines predictions from multiple feature maps with different
resolutions to naturally handle objects of various sizes. Our SSD model is
simple relative to methods that require object proposals because it completely
eliminates proposal generation and subsequent pixel or feature resampling stages
and encapsulates all computation in a single network. This makes SSD easy to
train and straightforward to integrate into systems that require a detection
component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets
confirm that SSD has comparable accuracy to methods that utilize an additional
object proposal step and is much faster, while providing a unified framework
for both training and inference. Compared to other single stage methods, SSD
has much better accuracy, even with a smaller input image size. For a 300x300
input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on an Nvidia Titan
X, and for a 500x500 input, SSD achieves 75.1% mAP, outperforming a
comparable state of the art Faster R-CNN model. Code is available at
https://github.com/weiliu89/caffe/tree/ssd . Comment: ECCV 2016
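The default-box construction at the heart of SSD can be illustrated with a minimal sketch, assuming square feature maps and the width/height parameterisation w = s·sqrt(ar), h = s/sqrt(ar); the scales and aspect ratios below are illustrative, not the tuned values from the paper:

```python
import numpy as np

def default_boxes(fmap_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Generate SSD-style default boxes as (cx, cy, w, h) in [0, 1]
    coordinates for one square feature map of size fmap_size x fmap_size.
    One box per aspect ratio is anchored at every cell center."""
    boxes = []
    for i in range(fmap_size):
        for j in range(fmap_size):
            cx = (j + 0.5) / fmap_size  # centers sit on the cell centers
            cy = (i + 0.5) / fmap_size
            for ar in aspect_ratios:
                w = scale * np.sqrt(ar)
                h = scale / np.sqrt(ar)
                boxes.append((cx, cy, w, h))
    return np.array(boxes)

# Combining a coarse map with large boxes and a finer map with small
# boxes is how multiple resolutions cover objects of various sizes:
coarse = default_boxes(3, scale=0.6)
fine = default_boxes(8, scale=0.2)
```

At prediction time the network outputs, for each of these boxes, class scores plus four offsets that adjust the box to the object's actual shape.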
Oriented Response Networks
Deep Convolutional Neural Networks (DCNNs) are capable of learning
unprecedentedly effective image representations. However, their ability to
handle significant local and global image rotations remains limited. In this
paper, we propose Active Rotating Filters (ARFs) that actively rotate during
convolution and produce feature maps with location and orientation explicitly
encoded. An ARF acts as a virtual filter bank containing the filter itself and
its multiple unmaterialised rotated versions. During back-propagation, an ARF
is collectively updated using errors from all its rotated versions. DCNNs using
ARFs, referred to as Oriented Response Networks (ORNs), can produce
within-class rotation-invariant deep features while maintaining inter-class
discrimination for classification tasks. The oriented response produced by ORNs
can also be used for image and object orientation estimation tasks. Over
multiple state-of-the-art DCNN architectures, such as VGG, ResNet, and STN, we
consistently observe that replacing regular filters with the proposed ARFs
leads to significant reduction in the number of network parameters and
improvement in classification performance. We report the best results on
several commonly used benchmarks. Comment: Accepted in CVPR 2017. Source code available at http://yzhou.work/OR
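A minimal sketch of the oriented-response idea, restricted to 90-degree rotations via `np.rot90`: the filter bank holds rotated copies of one base filter, and the per-orientation responses expose the dominant orientation. The paper's ARFs use finer, interpolated rotations and update the single base filter from all rotated versions during back-propagation; this toy version only materialises the bank:

```python
import numpy as np

def rotated_bank(f, n=4):
    """Materialise n rotated copies of a 2D filter (90-degree steps here;
    ARFs in the paper use finer interpolated rotations)."""
    return np.stack([np.rot90(f, k) for k in range(n)])

def oriented_response(patch, f):
    """Correlate a patch with each rotated filter; return the
    per-orientation responses and the index of the strongest one."""
    bank = rotated_bank(f)
    responses = np.array([(patch * g).sum() for g in bank])
    return responses, int(responses.argmax())

# A gradient filter responds most strongly at the orientation matching
# the patch: here the patch is the filter itself rotated by 90 degrees,
# so orientation index 1 wins.
edge = np.array([[-1.0, 0.0, 1.0]] * 3)  # horizontal-gradient filter
patch = np.rot90(edge)                   # same pattern, rotated 90 degrees
resp, k = oriented_response(patch, edge)
```

Recording the responses for all orientations, rather than only the maximum, is what lets ORNs encode orientation explicitly in the feature maps.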