Iteratively Training Look-Up Tables for Network Quantization
Operating deep neural networks (DNNs) on devices with limited resources
requires the reduction of their memory as well as computational footprint.
Popular reduction methods are network quantization or pruning, which either
reduce the word length of the network parameters or remove weights from the
network if they are not needed. In this article we discuss a general framework for network reduction which we call "Look-Up Table Quantization" (LUT-Q). For
each layer, we learn a value dictionary and an assignment matrix to represent
the network weights. We propose a special solver which combines gradient
descent and a one-step k-means update to learn both the value dictionaries and
assignment matrices iteratively. This method is very flexible: by constraining
the value dictionary, many different reduction problems such as non-uniform
network quantization, training of multiplierless networks, network pruning or
simultaneous quantization and pruning can be implemented without changing the
solver. This flexibility allows us to train networks for different hardware capabilities with the same method.
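To make the iterative dictionary/assignment update concrete, here is a minimal sketch of one LUT-Q-style training step for a single layer, assuming the gradient step is applied to full-precision weights (straight-through style); the function and variable names are ours, and the paper's exact update order may differ:

```python
import numpy as np

def lutq_step(W, dictionary, assignments, grad_W, lr=1e-2):
    """One hypothetical LUT-Q-style iteration for a single weight matrix."""
    # Gradient-descent step on the underlying full-precision weights.
    W = W - lr * grad_W

    # One-step k-means, assignment update: snap each weight to the
    # nearest dictionary value.
    dists = np.abs(W.reshape(-1, 1) - dictionary.reshape(1, -1))
    assignments = dists.argmin(axis=1)

    # One-step k-means, centroid update: each dictionary value becomes
    # the mean of the weights currently assigned to it.
    for k in range(dictionary.size):
        mask = assignments == k
        if mask.any():
            dictionary[k] = W.reshape(-1)[mask].mean()

    # Quantized weights used in the next forward pass.
    W_q = dictionary[assignments].reshape(W.shape)
    return W, W_q, dictionary, assignments
```

Constraining the dictionary in this sketch (for example, restricting its entries to powers of two, or pinning one entry to zero) is what would turn the same solver into multiplierless training or pruning, as the abstract describes.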
Generalized residual vector quantization for large scale data
Vector quantization is an essential tool for tasks involving large scale
data, for example, large scale similarity search, which is crucial for
content-based information retrieval and analysis. In this paper, we propose a
novel vector quantization framework that iteratively minimizes quantization
error. First, we provide a detailed review of a relevant vector quantization method named residual vector quantization (RVQ). Next, we propose generalized residual vector quantization (GRVQ) to further improve
over RVQ. Many vector quantization methods can be viewed as special cases of our proposed framework. We evaluate GRVQ on several large scale benchmark datasets for large scale search, classification and object retrieval, and compare it with existing methods in detail. Extensive experiments demonstrate that our GRVQ framework substantially outperforms existing methods in terms of quantization accuracy and computational efficiency.
Comment: Published at the International Conference on Multimedia and Expo 2016.
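For orientation, here is a minimal sketch of the baseline RVQ encode/decode that GRVQ builds on; GRVQ additionally revisits and re-trains each codebook against the residual left by the others, which this sketch omits. All names are ours:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Encode vector x as one codeword index per codebook (plain RVQ)."""
    residual = x.copy()
    codes = []
    for C in codebooks:                  # C: (K, d) array of codewords
        idx = np.linalg.norm(residual - C, axis=1).argmin()
        codes.append(idx)
        residual = residual - C[idx]     # quantize what is left over
    return codes

def rvq_decode(codes, codebooks):
    """Reconstruct x as the sum of the selected codewords."""
    return sum(C[i] for C, i in zip(codebooks, codes))
```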
Domain-adaptive deep network compression
Deep Neural Networks trained on large datasets can be easily transferred to
new domains with far fewer labeled examples by a process called fine-tuning.
This has the advantage that representations learned in the large source domain
can be exploited on smaller target domains. However, networks designed to be
optimal for the source task are often prohibitively large for the target task.
In this work we address the compression of networks after domain transfer.
We focus on compression algorithms based on low-rank matrix decomposition.
Existing methods base compression solely on learned network weights and ignore
the statistics of network activations. We show that domain transfer leads to
large shifts in network activations and that it is desirable to take this into
account when compressing. We demonstrate that considering activation statistics
when compressing weights leads to a rank-constrained regression problem with a
closed-form solution. Because our method takes the target domain into account, it removes redundancy in the weights more effectively. Experiments show that our Domain Adaptive Low Rank (DALR) method significantly outperforms existing low-rank compression techniques. With our approach, the fc6 layer of VGG19 can be compressed more than 4x further than with truncated SVD alone, with little or no loss in accuracy. When applied to domain-transferred networks, it allows compression down to only 5-20% of the original number of parameters with only a minor drop in performance.
Comment: Accepted at ICCV 2017.
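A minimal sketch of activation-aware low-rank compression in the spirit of DALR, using the classical reduced-rank-regression solution to min over rank-r Wc of ||XW - XWc||_F; the names and the exact derivation are our assumptions, not the paper's code:

```python
import numpy as np

def dalr_compress(W, X, rank):
    """Rank-constrained regression sketch for activation-aware compression.

    W: (d_in, d_out) layer weights; X: (n, d_in) target-domain
    activations feeding the layer.
    """
    Y = X @ W                                  # outputs we want to preserve
    Y_fit = X @ np.linalg.pinv(X) @ Y          # projection onto col(X)
    U, S, Vt = np.linalg.svd(Y_fit, full_matrices=False)
    Y_r = U[:, :rank] * S[:rank] @ Vt[:rank]   # best rank-r fit of Y_fit
    Wc = np.linalg.pinv(X) @ Y_r               # rank <= r weight matrix
    # Factor Wc into two thin matrices to realize the compression.
    U2, S2, Vt2 = np.linalg.svd(Wc, full_matrices=False)
    A = U2[:, :rank] * S2[:rank]               # (d_in, rank)
    B = Vt2[:rank]                             # (rank, d_out)
    return A, B                                # layer becomes x @ A @ B
```

By contrast, truncated SVD of W alone ignores X entirely, which is exactly the activation statistic the paper argues matters after domain transfer.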
LUT-NN: Empower Efficient Neural Network Inference with Centroid Learning and Table Lookup
On-device Deep Neural Network (DNN) inference consumes significant computing resources and development effort. To alleviate this, we propose LUT-NN, the first system to perform inference by table lookup in order to reduce inference cost. LUT-NN learns the typical features of each operator's inputs, named centroids, and precomputes the results for these centroids into lookup tables. During inference, the results for the centroids closest to the inputs are read directly from the tables as approximate outputs, without computation.
LUT-NN integrates two major novel techniques: (1) differentiable centroid
learning through backpropagation, which adapts three levels of approximation to
minimize the accuracy impact of the centroids; (2) table lookup inference
execution, which comprehensively considers different levels of parallelism,
memory access reduction, and dedicated hardware units for optimal performance.
LUT-NN is evaluated on multiple real tasks, covering image recognition, speech recognition, and natural language processing. Compared to related work, LUT-NN improves accuracy by 66% to 92%, reaching a level similar to the original models. LUT-NN reduces cost across all dimensions, including FLOPs (up to 16x), model size (up to 7x), latency (up to 6.8x), memory (up to 6.5x), and power (up to 41.7%).
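To illustrate the idea, here is a minimal sketch of table-lookup inference for one linear operator, with centroids assumed to come from plain k-means over input sub-vectors (LUT-NN instead learns them by backpropagation, which this sketch omits); the class and names are ours:

```python
import numpy as np

class LUTLinear:
    """Table-lookup sketch of a linear layer y = x @ W."""

    def __init__(self, W, centroids):
        # W: (d_in, d_out); centroids: (n_sub, K, sub_len) with
        # n_sub * sub_len == d_in.
        self.centroids = centroids
        n_sub, K, sub_len = centroids.shape
        # Precompute each centroid's contribution to every output:
        # tables[s, k] = centroids[s, k] @ W[s*sub_len:(s+1)*sub_len].
        self.tables = np.einsum(
            'skl,slo->sko', centroids, W.reshape(n_sub, sub_len, -1)
        )

    def forward(self, x):
        n_sub, K, sub_len = self.centroids.shape
        xs = x.reshape(n_sub, 1, sub_len)
        # Nearest centroid per sub-vector: the only compute at inference.
        idx = ((xs - self.centroids) ** 2).sum(-1).argmin(-1)
        # Sum the precomputed partial results read from the tables.
        return self.tables[np.arange(n_sub), idx].sum(0)
```

A real implementation would batch many inputs and use SIMD gather instructions for the table reads; the point here is only that inner products are replaced by nearest-centroid search plus precomputed lookups.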
Deep Learning Techniques for Music Generation -- A Survey
This paper is a survey and an analysis of different ways of using deep
learning (deep artificial neural networks) to generate musical content. We
propose a methodology based on five dimensions for our analysis:
Objective - What musical content is to be generated? Examples are: melody, polyphony, accompaniment or counterpoint. - For what destination and for what use? To be performed by humans (in the case of a musical score) or by a machine (in the case of an audio file).
Representation - What are the concepts to be manipulated? Examples are: waveform, spectrogram, note, chord, meter and beat. - What format is to be used? Examples are: MIDI, piano roll or text. - How will the representation be encoded? Examples are: scalar, one-hot or many-hot (see the sketch after this list).
Architecture - What type(s) of deep neural network is (are) to be used?
Examples are: feedforward network, recurrent network, autoencoder or generative adversarial network.
Challenge - What are the limitations and open challenges? Examples are:
variability, interactivity and creativity.
Strategy - How do we model and control the process of generation? Examples
are: single-step feedforward, iterative feedforward, sampling or input
manipulation.
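As a concrete illustration of the encoding question under Representation, here is a minimal sketch of one-hot versus many-hot pitch encoding (the example is ours, not from the survey):

```python
import numpy as np

PITCHES = 128                      # MIDI pitch range

# One-hot: a single sounding pitch per time step (e.g. a melody note).
note = np.zeros(PITCHES, dtype=np.int8)
note[60] = 1                       # C4

# Many-hot: several simultaneous pitches, as in one piano-roll slice
# of a C major chord.
chord = np.zeros(PITCHES, dtype=np.int8)
chord[[60, 64, 67]] = 1            # C4, E4, G4
```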
For each dimension, we conduct a comparative analysis of various models and techniques, and we propose a tentative multidimensional typology. This
typology is bottom-up, based on the analysis of many existing deep-learning
based systems for music generation selected from the relevant literature. These
systems are described and are used to exemplify the various choices of
objective, representation, architecture, challenge and strategy. The last
section includes some discussion and some prospects.
Comment: 209 pages. This paper is a simplified version of the book: J.-P. Briot, G. Hadjeres and F.-D. Pachet, Deep Learning Techniques for Music Generation, Computational Synthesis and Creative Systems, Springer, 2019.