Efficient Information Theoretic Clustering on Discrete Lattices
We consider the problem of clustering data that reside on discrete, low
dimensional lattices. Canonical examples for this setting are found in image
segmentation and key point extraction. Our solution is based on a recent
approach to information theoretic clustering where clusters result from an
iterative procedure that minimizes a divergence measure. We replace costly
processing steps in the original algorithm with convolutions, which allow for
highly efficient implementations and thus significantly reduce runtime. This
paper therefore bridges a gap between machine learning and signal processing.
Comment: This paper has been presented at the workshop LWA 201
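The abstract's central idea can be illustrated with a hedged sketch (not the authors' code): on a regular lattice, a per-site sum over neighbors can be replaced by a single convolution over the whole grid. The function name and window choice below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def neighborhood_density(assignments, cluster_id, radius=2):
    """Fraction of nearby lattice sites assigned to `cluster_id`,
    computed for every site at once via one convolution.
    (Hypothetical helper; illustrates the convolution trick only.)"""
    mask = (assignments == cluster_id).astype(float)
    size = 2 * radius + 1
    kernel = np.ones((size, size)) / (size * size)
    # One convolution replaces an explicit loop over all sites and neighbors.
    return convolve2d(mask, kernel, mode="same", boundary="symm")

# Toy lattice: two clusters split down the middle.
assignments = np.zeros((8, 8), dtype=int)
assignments[:, 4:] = 1
density = neighborhood_density(assignments, cluster_id=1)
```

The same pattern applies to any neighborhood statistic the iterative procedure needs at every site: compute it once for the whole lattice with a convolution instead of revisiting each site's neighbors in a loop.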
Scalable Compression of Deep Neural Networks
Deep neural networks generally involve some layers with millions of
parameters, making them difficult to deploy and update on devices with
limited resources such as mobile phones and other smart embedded systems. In
this paper, we propose a scalable representation of the network parameters, so
that different applications can select the most suitable bit rate of the
network based on their own storage constraints. Moreover, when a device needs
to upgrade to a high-rate network, the existing low-rate network can be reused,
and only some incremental data need to be downloaded. We first
hierarchically quantize the weights of a pre-trained deep neural network to
enforce weight sharing. Next, we adaptively select the bits assigned to each
layer given the total bit budget. After that, we retrain the network to
fine-tune the quantized centroids. Experimental results show that our method
can achieve scalable compression with graceful degradation in performance.
Comment: 5 pages, 4 figures, ACM Multimedia 201
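A minimal sketch of the hierarchical idea, under assumptions of our own (this is not the paper's implementation): a coarse codebook stands in for the low-rate network, and refining each coarse cluster with a better centroid plays the role of the incremental data downloaded for an upgrade.

```python
import numpy as np

def quantize(weights, centroids):
    """Map each weight to its nearest centroid; return values and indices."""
    idx = np.argmin(np.abs(weights[:, None] - centroids[None, :]), axis=1)
    return centroids[idx], idx

rng = np.random.default_rng(0)
weights = rng.normal(size=1000)  # stand-in for one layer's weights

# Stage 1: coarse two-entry codebook (the low-rate representation).
coarse = np.array([-1.0, 1.0])
w_lo, idx = quantize(weights, coarse)

# Stage 2: refine each coarse cluster's centroid (the incremental payload),
# reusing the stage-1 assignments so the low-rate network is not re-sent.
w_hi = w_lo.copy()
for c in range(len(coarse)):
    members = idx == c
    w_hi[members] = weights[members].mean()  # refined centroid per cluster

err_lo = np.mean((weights - w_lo) ** 2)
err_hi = np.mean((weights - w_hi) ** 2)
```

Because stage 2 only ships corrections keyed to the stage-1 assignments, a device already holding the low-rate codebook upgrades without re-downloading the shared structure, which is the reuse property the abstract claims.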