Learning scale-variant and scale-invariant features for deep image classification
Convolutional Neural Networks (CNNs) require large image corpora to be
trained on classification tasks. The variation in image resolutions, in the sizes of
the objects and patterns depicted, and in image scales hampers CNN training and
performance, because the task-relevant information varies over spatial scales.
Previous work attempting to deal with such scale variations focused on
encouraging scale-invariant CNN representations. However, scale-invariant
representations are incomplete representations of images, because images
contain scale-variant information as well. This paper addresses the combined
development of scale-invariant and scale-variant representations. We propose a
multi-scale CNN method to encourage the recognition of both types of features
and evaluate it on a challenging image classification task involving
task-relevant characteristics at multiple scales. The results show that our
multi-scale CNN outperforms a single-scale CNN. This leads to the conclusion that
encouraging the combined development of a scale-invariant and scale-variant
representation in CNNs is beneficial to image recognition performance.
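The idea of combining scale-invariant and scale-variant features can be illustrated with a minimal numpy sketch: the same image is fed to the network at several scales, one convolution kernel is shared across all scales (encouraging scale invariance) while a separate kernel is learned per scale (capturing scale-variant structure), and the responses are concatenated into one feature vector. This is an illustrative sketch, not the paper's architecture; all function names, kernel sizes, and the choice of scales are assumptions.

```python
import numpy as np

def avg_pool(img, factor):
    """Downsample a 2-D image by averaging non-overlapping factor x factor blocks."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(img.shape[0] // factor, factor,
                       img.shape[1] // factor, factor).mean(axis=(1, 3))

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D correlation, sufficient for this sketch."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def multiscale_features(img, shared_kernel, scale_kernels, factors=(1, 2, 4)):
    """Two features per scale: the global-average response of the shared
    (scale-invariant) kernel and of the scale-specific (scale-variant) kernel."""
    feats = []
    for f, k_var in zip(factors, scale_kernels):
        x = avg_pool(img, f)
        feats.append(conv2d_valid(x, shared_kernel).mean())  # shared across scales
        feats.append(conv2d_valid(x, k_var).mean())          # specific to this scale
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
shared = rng.standard_normal((3, 3))
per_scale = [rng.standard_normal((3, 3)) for _ in range(3)]
feats = multiscale_features(img, shared, per_scale)
print(feats.shape)  # (6,): two features at each of three scales
```

In a trained multi-scale CNN the shared and per-scale kernels would be learned jointly; the point of the sketch is only the weight-sharing pattern across scales.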
Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz
This paper demonstrates a method for tensorizing neural networks based upon
an efficient way of approximating scale-invariant quantum states, the
Multi-scale Entanglement Renormalization Ansatz (MERA). We employ MERA as a
replacement for the fully connected layers in a convolutional neural network
and test this implementation on the CIFAR-10 and CIFAR-100 datasets. The
proposed method outperforms factorization using tensor trains, providing
greater compression for the same level of accuracy and greater accuracy for the
same level of compression. We demonstrate MERA layers with 14000 times fewer
parameters and a reduction in accuracy of less than 1% compared to the
equivalent fully connected layers, scaling like O(N).
Comment: 8 pages, 2 figures
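The parameter savings from such tensorized layers can be seen in a minimal numpy sketch of a tree tensor network, a simplified relative of MERA without the disentangler tensors (this is an illustrative construction, not the paper's implementation; all names and dimensions are assumptions). A dense map over an n-dimensional input is replaced by a logarithmic-depth stack of small 3-index tensors, each contracting adjacent pairs of d-dimensional blocks:

```python
import numpy as np

def tree_layer(blocks, w):
    """Contract adjacent pairs of d-dim blocks with a (d, d, d) tensor,
    halving the block count: out[p] = sum_ij blocks[2p, i] * blocks[2p+1, j] * w[i, j, :]."""
    pairs = blocks.reshape(-1, 2, blocks.shape[-1])
    return np.einsum('pi,pj,ijo->po', pairs[:, 0], pairs[:, 1], w)

def tree_network(x, tensors, d=4):
    """Replace a dense (n -> d) map with a binary tree of 3-index tensors."""
    blocks = x.reshape(-1, d)       # split input into d-dim blocks
    for w in tensors:               # each level halves the number of blocks
        blocks = tree_layer(blocks, w)
    return blocks[0]                # single d-dim output vector

d, levels = 4, 4
n = d * 2 ** levels                 # input dimension: 64
rng = np.random.default_rng(1)
tensors = [rng.standard_normal((d, d, d)) / d for _ in range(levels)]
x = rng.standard_normal(n)
y = tree_network(x, tensors, d)

dense_params = n * n                # 4096 weights in a dense n x n layer
tree_params = levels * d ** 3       # 256 weights in the tree
print(y.shape, dense_params, tree_params)
```

The gap between dense and tree parameter counts grows with input size, which is the kind of compression the MERA layers exploit; a full MERA additionally interleaves disentanglers between the coarse-graining levels.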