Controlling Model Complexity in Probabilistic Model-Based Dynamic Optimization of Neural Network Structures
A method of simultaneously optimizing both the structure of neural networks
and the connection weights in a single training loop can reduce the enormous
computational cost of neural architecture search. We focus on probabilistic model-based dynamic neural network structure optimization, which places a probability distribution over the structure parameters and simultaneously optimizes both the distribution parameters and the connection weights with gradient methods. Since the existing algorithm searches only for structures that minimize the training loss, it may find overly complicated structures. In this paper, we propose introducing a penalty term to
control the model complexity of obtained structures. We formulate a penalty
term using the number of weights or units and derive its analytical natural
gradient. The proposed method minimizes the objective function augmented with the penalty term using stochastic gradient descent. We apply the proposed method to unit selection in a fully-connected neural network and to connection selection in a convolutional neural network. The experimental
results show that the proposed method can control model complexity while
maintaining performance.

Comment: Accepted as a conference paper at the 28th International Conference on Artificial Neural Networks (ICANN 2019). The final authenticated publication will be available in the Springer Lecture Notes in Computer Science (LNCS). 13 pages.
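To make the penalized objective concrete, here is a minimal sketch assuming independent Bernoulli structure parameters: each unit i is kept with probability p_i, so the expected number of active units is the sum of the p_i, and because the Fisher information of Bernoulli(p) is 1/(p(1-p)), the natural gradient of a penalty λ·Σ_i p_i with respect to p_i is λ·p_i(1-p_i). The function names, the plain-SGD update, and the toy loss gradient are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def natural_gradient_penalty(p, lam):
    """Analytical natural gradient of the penalty lam * sum(p).

    Assumes independent Bernoulli(p_i) structure parameters: the Fisher
    information of Bernoulli(p) is 1 / (p * (1 - p)), so the natural
    gradient F^{-1} * d/dp [lam * sum(p)] is lam * p * (1 - p).
    """
    return lam * p * (1.0 - p)

def update_structure_params(p, nat_grad_loss, lam, lr=0.01, eps=1e-3):
    """One SGD step on the penalized objective (illustrative only).

    nat_grad_loss: natural gradient of the training loss w.r.t. p,
    estimated elsewhere (e.g., by Monte Carlo sampling of structures).
    """
    p = p - lr * (nat_grad_loss + natural_gradient_penalty(p, lam))
    # Keep probabilities inside (0, 1) so the Bernoulli model stays valid.
    return np.clip(p, eps, 1.0 - eps)

# Toy usage: 8 candidate units, a random stand-in for the loss gradient.
rng = np.random.default_rng(0)
p = np.full(8, 0.5)
for _ in range(100):
    fake_loss_grad = rng.normal(scale=0.1, size=p.shape)
    p = update_structure_params(p, fake_loss_grad, lam=0.5)
print("expected number of active units:", p.sum())
```

Larger values of lam push the expected unit count down more aggressively, which is the complexity-control knob the abstract describes.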
Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search
Recent work in network quantization has substantially reduced the time and space complexity of neural network inference, enabling deployment of such networks on embedded and mobile devices with limited computational and memory resources.
However, existing quantization methods often represent all weights and
activations with the same precision (bit-width). In this paper, we explore a
new dimension of the design space: quantizing different layers with different
bit-widths. We formulate this problem as a neural architecture search problem
and propose a novel differentiable neural architecture search (DNAS) framework
to efficiently explore its exponential search space with gradient-based
optimization. Experiments show that we surpass state-of-the-art compression of ResNet on CIFAR-10 and ImageNet. Our quantized models, with a 21.1x smaller model size or a 103.9x lower computational cost, can still outperform the baseline quantized or even full-precision models.
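As an illustration of the differentiable relaxation, the sketch below assumes a Gumbel-softmax gate over candidate bit-widths and a simple symmetric uniform fake quantizer with a straight-through estimator; the layer structure, the candidate set, and the quantizer are assumptions for exposition, not the paper's exact DNAS formulation.

```python
import torch
import torch.nn.functional as F

def fake_quantize(w, bits):
    """Symmetric uniform quantizer with a straight-through estimator.

    An illustrative assumption, not necessarily the paper's quantizer.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Straight-through: forward uses q, backward passes gradients to w.
    return w + (q - w).detach()

class MixedPrecisionConv(torch.nn.Module):
    """Conv layer whose weight bit-width is searched differentiably."""

    def __init__(self, in_ch, out_ch, candidate_bits=(2, 4, 8)):
        super().__init__()
        self.conv = torch.nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.candidate_bits = candidate_bits
        # Architecture parameters: one logit per candidate bit-width.
        self.alpha = torch.nn.Parameter(torch.zeros(len(candidate_bits)))

    def forward(self, x, tau=1.0):
        # Gumbel-softmax gives a differentiable (soft) choice of bit-width.
        gate = F.gumbel_softmax(self.alpha, tau=tau)
        w = sum(g * fake_quantize(self.conv.weight, b)
                for g, b in zip(gate, self.candidate_bits))
        return F.conv2d(x, w, self.conv.bias, padding=1)

# Toy usage: both weights and architecture logits receive gradients.
layer = MixedPrecisionConv(3, 8)
out = layer(torch.randn(1, 3, 16, 16))
out.mean().backward()
print(layer.alpha.grad)
```

After search, the per-layer bit-width would be read off as the argmax of each layer's logits, so every layer can settle on a different precision.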