Convolutional Neural Fabrics
Despite the success of CNNs, selecting the optimal architecture for a given
task remains an open problem. Instead of aiming to select a single optimal
architecture, we propose a "fabric" that embeds an exponentially large number
of architectures. The fabric consists of a 3D trellis that connects response
maps at different layers, scales, and channels with a sparse homogeneous local
connectivity pattern. The only hyper-parameters of a fabric are the number of
channels and layers. While individual architectures can be recovered as paths,
the fabric can in addition ensemble all embedded architectures together,
sharing their weights where their paths overlap. Parameters can be learned
using standard methods based on back-propagation, at a cost that scales
linearly in the fabric size. We present benchmark results competitive with the
state of the art for image classification on MNIST and CIFAR10, and for
semantic segmentation on the Part Labels dataset.
Comment: Corrected typos. In proceedings of NIPS 2016.
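To make the trellis idea concrete, here is a minimal hypothetical sketch (not the authors' implementation) of a 2D slice of a fabric: response maps are indexed by (layer, scale), and each node aggregates only its local neighbours — the same, finer, and coarser scale in the previous layer — so individual architectures correspond to paths through the grid while weights are shared wherever paths overlap. The function name and weight layout are assumptions for illustration.

```python
import numpy as np

def fabric_forward(x, weights, n_layers, n_scales):
    """Sketch of a fabric trellis slice: node (l, s) sums contributions
    from nodes (l-1, s-1), (l-1, s), (l-1, s+1) -- a sparse, homogeneous
    local connectivity pattern. weights[l][s] holds three scalars, one
    per incoming neighbour offset."""
    act = [x] * n_scales                       # responses at each scale
    for l in range(n_layers):
        new_act = []
        for s in range(n_scales):
            total = np.zeros_like(x)
            for ds in (-1, 0, 1):              # local neighbourhood only
                sp = s + ds
                if 0 <= sp < n_scales:
                    total += weights[l][s][ds + 1] * act[sp]
            new_act.append(np.tanh(total))     # node nonlinearity
        act = new_act
    return act
```

Because every node touches only a constant number of neighbours, a forward/backward pass costs time linear in the number of nodes, matching the abstract's claim that learning scales linearly in fabric size.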
Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis
The availability of large-scale annotated image datasets and recent advances
in supervised deep learning methods enable the end-to-end derivation of
representative image features that can impact a variety of image analysis
problems. Such supervised approaches, however, are difficult to implement in
the medical domain where large volumes of labelled data are difficult to obtain
due to the complexity of manual annotation and inter- and intra-observer
variability in label assignment. We propose a new convolutional sparse kernel
network (CSKN), which is a hierarchical unsupervised feature learning framework
that addresses the challenge of learning representative visual features in
medical image analysis domains where there is a lack of annotated training
data. Our framework has three contributions: (i) We extend kernel learning to
identify and represent invariant features across image sub-patches in an
unsupervised manner. (ii) We initialise our kernel learning with a layer-wise
pre-training scheme that leverages the sparsity inherent in medical images to
extract initial discriminative features. (iii) We adapt a multi-scale spatial
pyramid pooling (SPP) framework to capture subtle geometric differences between
learned visual features. We evaluated our framework in medical image retrieval
and classification on three public datasets. Our results show that our CSKN had
better accuracy when compared to other conventional unsupervised methods and
comparable accuracy to methods that used state-of-the-art supervised
convolutional neural networks (CNNs). Our findings indicate that our
unsupervised CSKN provides an opportunity to leverage unannotated big data in
medical imaging repositories.
Comment: Accepted by Medical Image Analysis (with a new title 'Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis'). The manuscript is available from the following link: https://doi.org/10.1016/j.media.2019.06.005
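Contribution (iii) adapts multi-scale spatial pyramid pooling. As a rough single-channel sketch (assumed pyramid levels, not the paper's exact configuration): the feature map is max-pooled over successively finer grids and the cell maxima are concatenated, yielding a fixed-length descriptor that preserves coarse spatial geometry.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a single-channel feature map over pyramid grids of
    1x1, 2x2, and 4x4 cells and concatenate the cell maxima into a
    fixed-length vector (length 1 + 4 + 16 = 21 here)."""
    h, w = fmap.shape
    feats = []
    for n in levels:
        # integer cell boundaries for an n x n grid
        ys = np.linspace(0, h, n + 1).astype(int)
        xs = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                feats.append(cell.max())
    return np.array(feats)
```

The output length depends only on the pyramid levels, not on the input size, which is what lets SPP-style descriptors compare feature maps of different resolutions.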
Unsupervised Learning of Individuals and Categories from Images
Motivated by the existence of highly selective, sparsely firing cells observed in the human medial temporal lobe (MTL), we present an unsupervised method for learning and recognizing object categories from unlabeled images. In our model, a network of nonlinear neurons learns a sparse representation of its inputs through an unsupervised expectation-maximization process. We show that the application of this strategy to an invariant feature-based description of natural images leads to the development of units displaying sparse, invariant selectivity for particular individuals or image categories, much like those observed in the MTL data.
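A loose analogy to the EM-style sparse learning described above (a toy sketch under assumed winner-take-all dynamics, not the paper's model): the E-step assigns each input to its most active unit, and the M-step moves that unit's weight vector toward the inputs it won, so units become sparsely firing, selective prototypes.

```python
import numpy as np

def learn_sparse_units(X, n_units, n_iters=50, lr=0.1, seed=0):
    """Toy EM-style alternation. E-step: each input activates (at most)
    one winning unit, giving a maximally sparse code. M-step: each unit's
    weight vector moves toward the mean of the inputs it won, then is
    renormalized to unit length."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(n_iters):
        resp = X @ W.T                      # unit activations
        winners = resp.argmax(axis=1)       # E-step: 1-hot sparse assignment
        for k in range(n_units):            # M-step: update won units
            members = X[winners == k]
            if len(members):
                W[k] += lr * (members.mean(axis=0) - W[k])
                W[k] /= np.linalg.norm(W[k])
    return W
```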
Efficient Deep Feature Learning and Extraction via StochasticNets
Deep neural networks are a powerful tool for feature learning and extraction
given their ability to model high-level abstractions in highly complex data.
One area worth exploring in feature learning and extraction using deep neural
networks is efficient neural connectivity formation for faster feature learning
and extraction. Motivated by findings of stochastic synaptic connectivity
formation in the brain as well as the brain's uncanny ability to efficiently
represent information, we propose the efficient learning and extraction of
features via StochasticNets, where sparsely-connected deep neural networks can
be formed via stochastic connectivity between neurons. To evaluate the
feasibility of such a deep neural network architecture for feature learning and
extraction, we train deep convolutional StochasticNets to learn abstract
features using the CIFAR-10 dataset, and extract the learned features from
images to perform classification on the SVHN and STL-10 datasets. Experimental
results show that features learned using deep convolutional StochasticNets,
with fewer neural connections than conventional deep convolutional neural
networks, can allow for better or comparable classification accuracy than
conventional deep neural networks: relative test error decrease of ~4.5% for
classification on the STL-10 dataset and ~1% for classification on the SVHN
dataset. Furthermore, it was shown that the deep features extracted using deep
convolutional StochasticNets can provide comparable classification accuracy
even when only 10% of the training data is used for feature learning. Finally,
it was also shown that significant gains in feature extraction speed can be
achieved in embedded applications using StochasticNets. As such, StochasticNets
allow for faster feature learning and extraction while achieving better or comparable accuracy.
Comment: 10 pages. arXiv admin note: substantial text overlap with arXiv:1508.0546
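The stochastic connectivity formation described above can be sketched minimally as follows (an illustrative construction with assumed names, not the StochasticNets implementation): each potential synapse in a layer is realized independently with some probability, and weights for unrealized synapses are fixed at zero, yielding a sparsely connected layer with proportionally fewer parameters.

```python
import numpy as np

def stochastic_dense_layer(n_in, n_out, p_connect=0.5, seed=0):
    """Sample each potential synapse independently (Bernoulli with
    probability p_connect); absent synapses have weight fixed at zero."""
    rng = np.random.default_rng(seed)
    mask = rng.random((n_in, n_out)) < p_connect   # which synapses exist
    W = rng.normal(scale=0.1, size=(n_in, n_out)) * mask
    return W, mask

def forward(x, W):
    """ReLU activation over the sparse projection."""
    return np.maximum(x @ W, 0.0)
```

The mask is fixed at formation time, so both the forward pass and weight updates only ever touch the surviving connections — the source of the feature-extraction speedups reported for embedded settings.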
Analysis Dictionary Learning: An Efficient and Discriminative Solution
Discriminative Dictionary Learning (DL) methods have been widely advocated
for image classification problems. To further sharpen their discriminative
capabilities, most state-of-the-art DL methods have additional constraints
included in the learning stages. These various constraints, however, lead to
additional computational complexity. We hence propose an efficient
Discriminative Convolutional Analysis Dictionary Learning (DCADL) method, as a
lower cost Discriminative DL framework, to both characterize the image
structures and refine the interclass structure representations. The proposed
DCADL jointly learns a convolutional analysis dictionary and a universal
classifier, greatly reducing time complexity in both the training and testing
phases while achieving competitive accuracy in experiments on standard databases.
Comment: ICASSP 201
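One reason analysis dictionary learning is cheap at test time, sketched here under assumed notation (not the DCADL algorithm itself): in the analysis model the sparse code is obtained in closed form by projecting with the dictionary and hard-thresholding, whereas synthesis DL requires an iterative sparse solver per sample.

```python
import numpy as np

def analysis_code(X, Omega, k):
    """Analysis-model sparse coding: project each sample (rows of X)
    with the analysis dictionary Omega, then keep only the k
    largest-magnitude coefficients per sample (hard thresholding)."""
    Z = X @ Omega.T
    # indices of all but the top-k entries (by magnitude) per row
    idx = np.argsort(np.abs(Z), axis=1)[:, :-k]
    np.put_along_axis(Z, idx, 0.0, axis=1)   # zero the small entries
    return Z
```

A linear classifier applied to such codes costs only a matrix product plus a sort per sample, consistent with the reduced testing-phase complexity claimed in the abstract.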