Deep Networks for Compressed Image Sensing
The compressed sensing (CS) theory has been successfully applied to image compression in the past few years, as most image signals are sparse in a certain domain. Several CS reconstruction models have recently been proposed and have obtained superior performance. However, two important challenges remain within the CS theory. The first is how to design a sampling mechanism that achieves optimal sampling efficiency, and the second is how to perform the reconstruction so as to achieve optimal signal recovery. In this paper, we address these two problems with a deep network. First, we train the sampling matrix through network training instead of using a traditional, manually designed one, which is much better suited to our deep-network-based reconstruction process. Then, we propose a deep network that recovers the image by imitating traditional compressed sensing reconstruction processes. Experimental results demonstrate that our deep-network-based CS reconstruction method offers a very significant quality improvement over state-of-the-art methods.
Comment: This paper has been accepted by the IEEE International Conference on Multimedia and Expo (ICME) 201
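The idea of learning the sampling matrix jointly with the reconstruction can be illustrated with a deliberately tiny sketch. The paper trains a deep reconstruction network; here, purely for illustration, the "network" is collapsed to a single linear map, and both it and the sampling matrix are trained by gradient descent on synthetic sparse signals. All dimensions and names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 16          # signal dimension, number of measurements (m < n)
k = 4                  # sparsity of the synthetic training signals

def sparse_signals(batch):
    """Generate random k-sparse training signals."""
    X = np.zeros((batch, n))
    for i in range(batch):
        idx = rng.choice(n, size=k, replace=False)
        X[i, idx] = rng.standard_normal(k)
    return X

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # sampling matrix (learned)
W = Phi.T.copy()                                  # linear reconstruction map (learned)

lr = 0.05
for step in range(2000):
    X = sparse_signals(32)          # batch of training signals
    Y = X @ Phi.T                   # compressed measurements y = Phi x
    X_hat = Y @ W.T                 # linear "reconstruction network"
    E = X_hat - X                   # reconstruction error
    mse = np.mean(E ** 2)
    if step == 0:
        first_loss = mse
    # gradients of the mean squared error w.r.t. W and Phi
    grad_W = E.T @ Y / len(X)
    grad_Phi = W.T @ E.T @ X / len(X)
    W -= lr * grad_W
    Phi -= lr * grad_Phi
last_loss = mse
```

After training, `Phi` is adapted to the signal class rather than fixed in advance, which is the sampling-side idea the abstract describes; the paper's actual reconstruction is a deep network, not a single linear layer.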
Imaging via Compressive Sampling [Introduction to compressive sampling and recovery via convex programming]
There is an extensive body of literature on image compression, but the central concept is straightforward: we transform the image into an appropriate basis and then code only the important expansion coefficients. The crux is finding a good transform, a problem that has been studied extensively from both a theoretical [14] and practical [25] standpoint. The most notable product of this research is the wavelet transform [9], [16]; switching from sinusoid-based representations to wavelets marked a watershed in image compression and is the essential difference between the classical JPEG [18] and modern JPEG-2000 [22] standards.
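The transform-coding recipe described above can be sketched in a few lines: transform the signal into a basis where it is approximately sparse, keep only the largest coefficients, and invert. For brevity this sketch uses an orthonormal DCT-II matrix built by hand rather than a wavelet transform; the signal and the number of retained coefficients are illustrative choices.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: rows are cosine basis vectors."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    C = np.cos(np.pi * (t + 0.5) * k / n)
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

n = 256
C = dct_matrix(n)
t = np.arange(n)
# A smooth test signal: most of its energy lands in a few DCT coefficients.
x = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)

coeffs = C @ x                                # analysis transform
keep = 16                                     # code only the largest coefficients
idx = np.argsort(np.abs(coeffs))[-keep:]
approx = np.zeros(n)
approx[idx] = coeffs[idx]
x_hat = C.T @ approx                          # synthesis (C is orthonormal)

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

Keeping 16 of 256 coefficients reconstructs the smooth signal with small relative error, which is exactly why finding a basis in which the signal class is sparse is the crux of compression.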
Image compression algorithms convert high-resolution images into relatively small bit streams (while keeping the essential features intact), in effect turning a large digital data set into a substantially smaller one. But is there a way to avoid the large digital data set to begin with? Is there a way we can build the data compression directly into the acquisition? The answer is yes, and that is what compressive sampling (CS) is all about.
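Building the compression into the acquisition means taking far fewer measurements than signal samples and recovering the sparse signal afterward. A minimal sketch, not from the article: random Gaussian measurements, with recovery by ISTA (iterative soft-thresholding), a standard proximal-gradient method for the l1-regularized least-squares problem; all dimensions and the regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 40, 5                            # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix (m << n)

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)        # k-sparse ground truth
y = A @ x_true                                  # compressive measurements

# ISTA: proximal gradient descent on 0.5*||y - Ax||^2 + lam*||x||_1
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(3000):
    g = A.T @ (A @ x - y)                       # gradient of the quadratic term
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
```

With 40 measurements of a 5-sparse length-128 signal, the l1 recovery is close to the truth even though the system y = Ax is badly underdetermined; that gap between m and n is the acquisition-side compression.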