Spatially Directional Predictive Coding for Block-based Compressive Sensing of Natural Images
A novel coding strategy for block-based compressive sensing named spatially directional predictive coding (SDPC) is proposed, which efficiently utilizes the intrinsic spatial correlation of natural images. At the encoder, for each block of compressive sensing (CS) measurements, the optimal prediction is selected from a set of prediction candidates generated by four designed directional predictive modes. The resulting residual is then processed by scalar quantization (SQ). At the decoder, the same prediction is added to the de-quantized residuals to produce the quantized CS measurements, which are exploited for CS reconstruction. Experimental results substantiate significant improvements achieved by SDPC-plus-SQ in rate-distortion performance compared with SQ alone and DPCM-plus-SQ.

Comment: 5 pages, 3 tables, 3 figures, published at IEEE International Conference on Image Processing (ICIP) 2013. Code available:
http://idm.pku.edu.cn/staff/zhangjian/SDPC
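The encoder/decoder loop described in the abstract can be sketched minimally as follows. This is a hypothetical illustration, not the authors' implementation: the function names (`sdpc_encode_block`, `sdpc_decode_block`), the residual-energy selection criterion, and the uniform quantization step are all assumptions; the four directional prediction modes are abstracted into a generic list of candidate predictions.

```python
import numpy as np

def sdpc_encode_block(y, candidates, step):
    """Select the candidate prediction minimizing residual energy,
    then scalar-quantize the residual (hypothetical sketch)."""
    best = min(candidates, key=lambda p: np.sum((y - p) ** 2))
    residual = y - best
    q = np.round(residual / step).astype(int)  # uniform scalar quantization
    return q, best

def sdpc_decode_block(q, prediction, step):
    """De-quantize the residual and add the same prediction back,
    recovering the quantized CS measurements for reconstruction."""
    return prediction + q * step
```

In practice the decoder must regenerate the same prediction candidates from previously decoded neighboring blocks, so only the mode index and quantized residual need to be transmitted.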
Adaptive Image Compressive Sensing Using Texture Contrast
Traditional image Compressive Sensing (CS) conducts block-wise sampling at a uniform sampling rate. However, blocking artifacts often occur due to the varying sparsity across blocks, leading to poor rate-distortion performance. To suppress these blocking artifacts, we propose in this paper to adaptively sample each block according to its texture features. Using the maximum gradient in the 8-connected region of each pixel, we measure the texture variation of each pixel and then compute the texture contrast of each block. According to the distribution of texture contrast, we adaptively set the sampling rate of each block and finally build an image reconstruction model using these block texture contrasts. Experimental results show that our adaptive sampling scheme improves the rate-distortion performance of image CS compared with existing adaptive schemes, and the images reconstructed by our method achieve better visual quality.
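The rate-allocation idea above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the function names, the use of the mean per-pixel variation as the block contrast, and the proportional allocation around a hypothetical `base_rate` are all assumptions introduced for the sketch.

```python
import numpy as np

def texture_variation(img):
    """Per-pixel maximum absolute difference to the 8-connected
    neighbours (edge pixels use replicated borders)."""
    pad = np.pad(img.astype(float), 1, mode='edge')
    h, w = img.shape
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            diffs.append(np.abs(img - shifted))
    return np.max(diffs, axis=0)

def block_sampling_rates(img, block=8, base_rate=0.2):
    """Allocate a per-block sampling rate proportional to the block's
    texture contrast, keeping the mean rate near base_rate
    (hypothetical allocation rule)."""
    tv = texture_variation(img)
    h, w = img.shape
    contrasts = tv.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    weights = contrasts / (contrasts.sum() + 1e-12)  # normalize to sum ~1
    return base_rate * weights * weights.size
```

Textured blocks thus receive more measurements than smooth ones while the overall measurement budget stays fixed, which is the mechanism the abstract credits for suppressing blocking artifacts.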