TensorQuant - A Simulation Toolbox for Deep Neural Network Quantization
Recent research implies that training and inference of deep neural networks
(DNNs) can be computed with low-precision numerical representations of the
training/test data, weights, and gradients without a general loss in accuracy.
The benefit of such compact representations is twofold: they allow a
significant reduction of the communication bottleneck in distributed DNN
training and faster neural network implementations on hardware accelerators
like FPGAs. Several quantization methods have been proposed to map the original
32-bit floating point problem to low-bit representations. While most related
publications validate the proposed approach on a single DNN topology, it
appears evident that the optimal choice of the quantization method and the
number of coding bits is topology dependent. To date, however, there is no
general theory available that would allow users to derive the optimal
quantization during the design of a DNN topology. In this paper, we present a
quantization toolbox for the TensorFlow framework. TensorQuant allows a transparent
quantization simulation of existing DNN topologies during training and
inference. TensorQuant supports generic quantization methods and allows
experimental evaluation of the impact of the quantization on single layers as
well as on the full topology. In a first series of experiments with
TensorQuant, we present an analysis of fixed-point quantizations of popular
CNN topologies.
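To make the notion of such a fixed-point simulation concrete, the following is a minimal sketch in plain NumPy. The function name, bit widths, and tensor shapes are illustrative assumptions and this is not the TensorQuant API; it only shows how a 32-bit float tensor can be rounded and saturated onto a signed fixed-point grid and how the resulting perturbation can be inspected.

```python
import numpy as np

def fixed_point_quantize(x, word_bits=8, frac_bits=4):
    """Round a float array to a signed fixed-point grid and saturate it.
    Hypothetical helper for illustration only, not the TensorQuant API."""
    scale = 2.0 ** frac_bits
    # Representable range of a signed fixed-point number with `word_bits` bits.
    max_val = (2.0 ** (word_bits - 1) - 1) / scale
    min_val = -(2.0 ** (word_bits - 1)) / scale
    return np.clip(np.round(x * scale) / scale, min_val, max_val)

# Example: quantize a weight tensor and inspect the perturbation it introduces.
weights = np.random.randn(64, 64).astype(np.float32)
q_weights = fixed_point_quantize(weights, word_bits=8, frac_bits=4)
print("max abs quantization error:", np.abs(weights - q_weights).max())
```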
Learning Frequency-Specific Quantization Scaling in VVC for Standard-Compliant Task-driven Image Coding
Today, visual data is often analyzed by a neural network without any human
being involved, which calls for specialized codecs. For standard-compliant
codec adaptations towards certain information sinks, HEVC or VVC provide the
possibility of frequency-specific quantization with scaling lists. This is a
well-known method for the human visual system, where scaling lists are derived
from psycho-visual models. In this work, we employ scaling lists when
performing VVC intra coding for neural networks as information sink. To this
end, we propose a novel data-driven method to obtain optimal scaling lists for
arbitrary neural networks. Experiments with Mask R-CNN as information sink
reveal that coding the Cityscapes dataset with the proposed scaling lists
results in peak bitrate savings of 8.9 % over VVC with constant quantization. By
that, our approach also outperforms scaling lists optimized for the human
visual system. The generated scaling lists can be found under
https://github.com/FAU-LMS/VCM_scaling_lists.
Comment: Originally submitted at IEEE ICIP 202
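To illustrate the idea behind frequency-specific quantization with scaling lists, the sketch below uses a simplified uniform quantizer in NumPy; the function and the example scaling-list values are assumptions, not the VVC quantization pipeline or the proposed data-driven optimization. It applies a per-frequency quantization step to an 8x8 block of transform coefficients, where 16 is the neutral scaling value, so a flat list reproduces constant quantization and larger entries quantize the corresponding frequencies more coarsely.

```python
import numpy as np

def quantize_block(coeffs, base_qstep, scaling_list):
    """Quantize and reconstruct a block of transform coefficients with a
    per-frequency quantization step. Simplified uniform quantizer for
    illustration; not the VVC quantization pipeline."""
    qsteps = base_qstep * scaling_list / 16.0   # 16 is the neutral scaling value
    levels = np.round(coeffs / qsteps)          # encoder-side quantization
    return levels * qsteps                      # decoder-side reconstruction

block = np.random.randn(8, 8) * 50.0    # stand-in for an 8x8 coefficient block
flat = np.full((8, 8), 16.0)            # flat list == constant quantization
tuned = np.full((8, 8), 16.0)
tuned[4:, 4:] = 32.0                    # assumed: coarser steps at high frequencies
print("flat list error: ", np.abs(block - quantize_block(block, 10.0, flat)).mean())
print("tuned list error:", np.abs(block - quantize_block(block, 10.0, tuned)).mean())
```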
Role of homeostasis in learning sparse representations
Neurons in the input layer of primary visual cortex in primates develop
edge-like receptive fields. One approach to understanding the emergence of this
response is to state that neural activity has to efficiently represent sensory
data with respect to the statistics of natural scenes. Furthermore, it is
believed that such efficient coding is achieved through competition across
neurons so as to generate a sparse representation, that is, one in which a
relatively small number of neurons are simultaneously active. Indeed, different models of
sparse coding, coupled with Hebbian learning and homeostasis, have been
proposed that successfully match the observed emergent response. However, the
specific role of homeostasis in learning such sparse representations is still
largely unknown. By quantitatively assessing the efficiency of the neural
representation during learning, we derive a cooperative homeostasis mechanism
that optimally tunes the competition between neurons within the sparse coding
algorithm. We apply this homeostasis while learning small patches taken from
natural images and compare its efficiency with state-of-the-art algorithms.
Results show that while different sparse coding algorithms give similar coding
results, this homeostasis mechanism provides an optimal balance for the
representation of natural images within the population of neurons. Competition
in sparse coding is optimized when it is fair. By helping to optimize the
statistical competition across neurons, homeostasis is crucial in providing a
more efficient solution to the emergence of independent components.
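As a rough illustration of how homeostasis can tune competition in sparse coding, the toy sketch below runs matching pursuit on random data and modulates atom selection by a homeostatic gain that boosts rarely selected atoms and damps over-used ones. The gain and usage update rules, the hyper-parameters, and the random "patches" are simplified assumptions for illustration, not the exact algorithm derived in the paper, and the Hebbian learning of the dictionary itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_atoms, n_patches = 64, 128, 500
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
gain = np.ones(n_atoms)                   # homeostatic gains, one per atom
usage = np.zeros(n_atoms)                 # running estimate of selection frequency
eta_h, k_active = 0.05, 5                 # assumed hyper-parameters

for t in range(n_patches):
    x = rng.standard_normal(n_features)   # stand-in for a whitened image patch
    residual, active = x.copy(), []
    for _ in range(k_active):             # greedy matching pursuit
        corr = D.T @ residual
        i = int(np.argmax(gain * np.abs(corr)))   # gain-modulated competition
        residual -= corr[i] * D[:, i]
        active.append(i)
    # Homeostasis: track how often each atom wins and set its gain so that
    # rarely selected atoms are boosted and over-used atoms are damped.
    usage *= 1.0 - eta_h
    usage[active] += eta_h
    gain = np.exp(-(usage - usage.mean()))

print("gain spread after coding:", gain.min(), gain.max())
```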