Scalable multimodal convolutional networks for brain tumour segmentation
Brain tumour segmentation plays a key role in computer-assisted surgery. Deep
neural networks have significantly increased the accuracy of automatic
segmentation; however, these models tend to generalise poorly to imaging
modalities other than those for which they were designed, thereby
limiting their applications. For example, a network architecture initially
designed for brain parcellation of monomodal T1 MRI cannot be easily
translated into an efficient tumour segmentation network that jointly utilises
T1, T1c, FLAIR and T2 MRI. To tackle this, we propose a novel scalable
multimodal deep learning architecture using new nested structures that
explicitly leverage deep features within or across modalities. This aims at
making the early layers of the architecture structured and sparse so that the
final architecture becomes scalable to the number of modalities. We evaluate
the scalable architecture for brain tumour segmentation and give evidence of
its regularisation effect compared to the conventional concatenation approach.
Comment: Paper accepted at MICCAI 201
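The scalability claim in this abstract can be illustrated with a toy first-layer parameter count. The following sketch assumes arbitrary channel and kernel sizes and is not the paper's actual architecture: it only contrasts the conventional approach, where all modalities are concatenated channel-wise before the first convolution, with a sketch of a modality-specific branch whose weights are shared across modalities, so the parameter count does not grow with the number of input modalities.

```python
def concat_params(n_modalities, c_in=1, c_out=32, k=3):
    # Conventional concatenation: the first 3D conv layer sees
    # n_modalities * c_in input channels, so its weight count
    # grows linearly with the number of modalities.
    return c_out * (n_modalities * c_in) * k * k * k


def shared_branch_params(n_modalities, c_in=1, c_out=32, k=3):
    # Scalable sketch: one per-modality branch with weights shared
    # across modalities, so the first-layer parameter count is
    # independent of n_modalities.
    return c_out * c_in * k * k * k


for m in (1, 2, 4):
    print(m, concat_params(m), shared_branch_params(m))
```

With these assumed sizes, concatenation quadruples the first-layer parameters when going from one modality (e.g. T1 only) to four (T1, T1c, FLAIR, T2), while the shared-branch count stays constant.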
An Evaluation of Deep CNN Baselines for Scene-Independent Person Re-Identification
In recent years, a variety of proposed methods based on deep convolutional
neural networks (CNNs) have improved the state of the art for large-scale
person re-identification (ReID). While a large number of optimizations and
network improvements have been proposed, there has been relatively little
evaluation of the influence of training data and baseline network architecture.
In particular, it is usually assumed either that networks are trained on
labeled data from the deployment location (scene-dependent), or else adapted
with unlabeled data, both of which complicate system deployment. In this paper,
we investigate the feasibility of achieving scene-independent person ReID by
forming a large composite dataset for training. We present an in-depth
comparison of several CNN baseline architectures for both scene-dependent and
scene-independent ReID, across a range of training dataset sizes. We show that
scene-independent ReID can produce leading-edge results, competitive with
unsupervised domain adaptation techniques. Finally, we introduce a new dataset
for comparing within-camera and across-camera person ReID.
Comment: To be published in 2018 15th Conference on Computer and Robot Vision (CRV)
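Forming a large composite dataset from several labeled ReID datasets, as this abstract describes, requires keeping person identities from different source datasets distinct. The helper below is a hypothetical sketch (names and data layout assumed, not the paper's code) that merges datasets while offsetting identity labels so IDs never collide:

```python
def build_composite(datasets):
    """Merge several labeled datasets into one composite training set.

    Each dataset is a list of (image_path, person_id) pairs; person IDs
    are remapped into a single global, collision-free label space.
    """
    composite, offset = [], 0
    for samples in datasets:
        ids = {pid for _, pid in samples}
        # Remap this dataset's local IDs to a contiguous global range.
        remap = {pid: offset + i for i, pid in enumerate(sorted(ids))}
        composite.extend((path, remap[pid]) for path, pid in samples)
        offset += len(ids)
    return composite


a = [("a/1.jpg", 0), ("a/2.jpg", 1)]
b = [("b/1.jpg", 0), ("b/2.jpg", 5)]
merged = build_composite([a, b])
# merged: [("a/1.jpg", 0), ("a/2.jpg", 1), ("b/1.jpg", 2), ("b/2.jpg", 3)]
```

The remapping matters because classification-style ReID training treats each identity as a class; person 0 in one dataset and person 0 in another are different people and must receive different labels.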
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power through the increase of
cores within a digital processor, neuromorphic engineers and scientists can
complement this need by building processor architectures where memory is
distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems that implement
more biologically realistic models of neurons and synapses, together with a suite of
adaptation and learning mechanisms analogous to the ones found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems.
Comment: Submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems