Rethinking the Inception Architecture for Computer Vision
Convolutional networks are at the core of most state-of-the-art computer
vision solutions for a wide variety of tasks. Since 2014 very deep
convolutional networks started to become mainstream, yielding substantial gains
in various benchmarks. Although increased model size and computational cost
tend to translate to immediate quality gains for most tasks (as long as enough
labeled data is provided for training), computational efficiency and low
parameter count are still enabling factors for various use cases such as mobile
vision and big-data scenarios. Here we explore ways to scale up networks in
ways that aim at utilizing the added computation as efficiently as possible by
suitably factorized convolutions and aggressive regularization. We benchmark
our methods on the ILSVRC 2012 classification challenge validation set,
demonstrating substantial gains over the state of the art: 21.2% top-1 and
5.6% top-5 error for single-frame evaluation using a network with a
computational cost of 5 billion multiply-adds per inference and fewer than 25
million parameters. With an ensemble of 4 models and multi-crop evaluation, we
report 3.5% top-5 error on the validation set (3.6% error on the test set) and
17.3% top-1 error on the validation set.
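The "suitably factorized convolutions" mentioned above can be illustrated by a
parameter-count comparison. The sketch below is not from the paper; the channel
count and layer shapes are assumptions chosen only to show why replacing a 5x5
convolution with stacked 3x3s, or a 3x3 with asymmetric 1x3/3x1 convolutions,
saves weights while preserving the receptive field.

```python
# Hypothetical sketch: weight counts for factorized convolutions.
# Channel count c is an assumption for illustration; biases are ignored.

def conv_params(k_h, k_w, c_in, c_out):
    """Weight count of a k_h x k_w convolution layer."""
    return k_h * k_w * c_in * c_out

c = 64  # assumed number of input/output channels

# A 5x5 convolution vs. two stacked 3x3 convolutions (same receptive field).
p_5x5 = conv_params(5, 5, c, c)
p_3x3_stack = 2 * conv_params(3, 3, c, c)

# A 3x3 convolution vs. an asymmetric 1x3 followed by a 3x1 convolution.
p_3x3 = conv_params(3, 3, c, c)
p_asym = conv_params(1, 3, c, c) + conv_params(3, 1, c, c)

print(p_3x3_stack / p_5x5)  # 18/25 = 0.72  -> ~28% fewer weights
print(p_asym / p_3x3)       # 6/9  = 0.666… -> ~33% fewer weights
```

The ratios are independent of the channel count, which is why such
factorizations scale to large networks.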
Flattening Singular Values of Factorized Convolution for Medical Images
Convolutional neural networks (CNNs) have long been the paradigm of choice
for robust medical image processing (MIP). Therefore, it is crucial to
effectively and efficiently deploy CNNs on devices with different computing
capabilities to support computer-aided diagnosis. Many methods employ
factorized convolutional layers to alleviate the burden of limited
computational resources, at the expense of expressiveness. To this end, we
propose a Singular-value-equalization generalizer-induced Factorized
Convolution (SFConv) to improve the expressive power of factorized
convolutions in MIP models under weak medical-image-driven CNN optimization.
We first decompose the weight matrix of the convolutional filters into two
low-rank matrices to achieve model reduction. We then minimize the KL
divergence between the singular-value distribution of the low-rank weight
matrices and the uniform distribution, thereby reducing the number of
singular-value directions with large variance. Extensive
experiments on fundus and OCTA datasets demonstrate that our SFConv achieves
expressiveness competitive with vanilla convolutions while reducing complexity.
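The two ingredients of the abstract above, a low-rank factorization of the
weight matrix and a KL penalty that flattens its singular-value spectrum, can
be sketched as follows. This is not the authors' code; the matrix sizes, the
rank, and the exact form of the penalty are assumptions for illustration.

```python
# Illustrative sketch (assumed shapes/rank, not the SFConv reference code).
import numpy as np

rng = np.random.default_rng(0)

# Flattened convolutional weights W (out_channels x in_channels*k*k),
# factorized as W ~= U @ V with rank r << min(m, n) for model reduction.
m, n, r = 64, 576, 8
U = rng.standard_normal((m, r))
V = rng.standard_normal((r, n))
W = U @ V  # stores m*r + r*n parameters instead of m*n

def singular_value_kl(M, eps=1e-12):
    """KL divergence between the normalized singular-value spectrum of M
    and the uniform distribution; 0 means a perfectly flat spectrum."""
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    u = np.full_like(p, 1.0 / len(p))
    return float(np.sum(p * np.log((p + eps) / u)))

# The rank-r product has at most r nonzero singular values; during training
# a penalty like singular_value_kl(W) would be minimized to flatten them.
print(np.linalg.matrix_rank(W) <= r)        # True
print(singular_value_kl(np.eye(4)) < 1e-6)  # flat spectrum -> KL near 0
```

A matrix with one dominant singular direction gets a large penalty, while an
orthogonal-like matrix with equal singular values gets a penalty near zero,
which is the "flattening" the title refers to.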
Deep SimNets
We present a deep layered architecture that generalizes convolutional neural
networks (ConvNets). The architecture, called SimNets, is driven by two
operators: (i) a similarity function that generalizes inner-product, and (ii) a
log-mean-exp function called MEX that generalizes maximum and average. The two
operators applied in succession give rise to a standard neuron but in "feature
space". The feature spaces realized by SimNets depend on the choice of the
similarity operator. The simplest setting, which corresponds to a convolution,
realizes the feature space of the Exponential kernel, while other settings
realize feature spaces of more powerful kernels (Generalized Gaussian, which
includes as special cases RBF and Laplacian), or even dynamically learned
feature spaces (Generalized Multiple Kernel Learning). As a result, a SimNet
operates at a higher abstraction level than a traditional ConvNet. We argue
that enhanced expressiveness is important when the networks are small due to
run-time constraints (such as those imposed by mobile applications). Empirical
evaluation validates the superior expressiveness of SimNets, showing a
significant gain in accuracy over ConvNets when computational resources at
run-time are limited. We also show that in large-scale settings, where
computational complexity is less of a concern, the additional capacity of
SimNets can be controlled with proper regularization, yielding accuracies
comparable to state-of-the-art ConvNets.
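The MEX operator described above, a log-mean-exp that generalizes maximum and
average, is concrete enough to sketch directly. The parameterization with a
temperature beta below is an assumption for illustration (the abstract does
not fix the exact form), but it shows the interpolation behavior: large
positive beta recovers the maximum, beta near zero the average, and large
negative beta the minimum.

```python
# Sketch of a log-mean-exp (MEX-style) operator; beta parameterization assumed.
import math

def mex(xs, beta):
    """MEX_beta(x) = (1/beta) * log(mean(exp(beta * x_i))).
    beta -> +inf approaches max(xs); beta -> 0 approaches mean(xs);
    beta -> -inf approaches min(xs)."""
    n = len(xs)
    return (1.0 / beta) * math.log(sum(math.exp(beta * x) for x in xs) / n)

xs = [0.1, 0.5, 2.0]

print(mex(xs, 100.0))   # ~1.99 : close to max(xs) = 2.0
print(mex(xs, 1e-6))    # ~0.867: close to mean(xs)
print(mex(xs, -100.0))  # ~0.11 : close to min(xs) = 0.1
```

The deviation from the exact max/min is log(n)/|beta|, so a single smooth
operator can stand in for both max pooling and average pooling depending on
the learned or chosen temperature.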
- …