Joint group and residual sparse coding for image compressive sensing
Nonlocal self-similarity and group sparsity have been widely utilized in
image compressive sensing (CS). However, when the sampling rate is low, the
internal prior information of degraded images may not be sufficient for accurate
restoration, resulting in loss of image edges and details. In this paper, we
propose a joint group and residual sparse coding method for CS image recovery
(JGRSC-CS). In JGRSC-CS, a patch group is treated as the basic unit
of sparse coding and two dictionaries (namely internal and external
dictionaries) are applied to exploit the sparse representation of each group
simultaneously. The internal self-adaptive dictionary is used to remove
artifacts, and an external Gaussian Mixture Model (GMM) dictionary, learned
from clean training images, is used to enhance details and texture. To make the
proposed method effective and robust, the split Bregman method is adopted to
reconstruct the whole image. Experimental results show that the proposed
JGRSC-CS algorithm outperforms existing state-of-the-art methods in both peak
signal-to-noise ratio (PSNR) and visual quality.
Comment: 27 pages, 7 figures
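The core step shared by split Bregman and related sparse-coding solvers is soft thresholding of the coefficients of a patch group over the combined internal/external dictionary. The following is a minimal illustrative sketch, not the paper's exact JGRSC-CS algorithm: it codes a patch group over the concatenation of two dictionaries with plain ISTA iterations, and all names (`group_sparse_code`, `lam`, `step`) are illustrative choices, not the paper's notation.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: the shrinkage step at the heart
    # of split Bregman / ISTA style sparse-coding solvers.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def group_sparse_code(Y, D_int, D_ext, lam=0.1, n_iter=200):
    """Sparse-code a patch group Y (d x n patches) jointly over an
    internal and an external dictionary (illustrative sketch only)."""
    D = np.hstack([D_int, D_ext])              # joint dictionary
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant
    A = np.zeros((D.shape[1], Y.shape[1]))     # coefficients, start at zero
    for _ in range(n_iter):
        grad = D.T @ (D @ A - Y)               # gradient of the data term
        A = soft_threshold(A - step * grad, step * lam)
    return A
```

Splitting `A` by rows then recovers the internal-dictionary and external-dictionary contributions separately, mirroring the paper's idea of using one dictionary to suppress artifacts and the other to restore detail.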
Structure-Preserving Progressive Low-rank Image Completion for Defending Adversarial Attacks
Deep neural networks recognize objects by analyzing local image details and
summarizing their information along the inference layers to derive the final
decision. Because of this, they are prone to adversarial attacks: small,
carefully crafted perturbations in the input images can accumulate along the
network inference path and produce wrong decisions at the network output. On the other
hand, human eyes recognize objects based on their global structure and semantic
cues, instead of local image textures. Because of this, human eyes can still
clearly recognize objects from images which have been heavily damaged by
adversarial attacks. This leads to a very interesting approach for defending
deep neural networks against adversarial attacks. In this work, we propose to
develop a structure-preserving progressive low-rank image completion (SPLIC)
method to remove unneeded texture details from the input images and shift the
bias of deep neural networks towards global object structures and semantic
cues. We formulate the problem into a low-rank matrix completion problem with
progressively smoothed rank functions to avoid local minima during the
optimization process. Our experimental results demonstrate that the proposed
method is able to successfully remove the insignificant local image details
while preserving important global object structures. On black-box, gray-box,
and white-box attacks, our method outperforms existing defense methods (by up
to 12.6%) and significantly improves the adversarial robustness of the network.
Comment: 10 pages, 12 figures, submitted to Journal of Visual Communication
and Image Representation
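A low-rank matrix completion of the kind described above can be sketched with singular-value soft thresholding and a progressively relaxed threshold. This is a hedged illustration of the general technique, not the paper's exact SPLIC method; the function name, `tau0`, and `decay` schedule are all illustrative assumptions.

```python
import numpy as np

def low_rank_complete(X, mask, tau0=5.0, decay=0.9, n_iter=100):
    """Fill in missing entries of X (mask == 1 where observed) by iterating
    singular-value soft thresholding with a shrinking threshold.
    Illustrative sketch of progressive low-rank completion, not SPLIC itself."""
    Z = X * mask                                   # start from zero-filled data
    tau = tau0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - tau, 0.0)               # soft-threshold singular values
        Z = (U * s) @ Vt                           # low-rank reconstruction
        Z = X * mask + Z * (1 - mask)              # keep observed entries fixed
        tau *= decay                               # progressively relax the penalty
    return Z
```

Shrinking `tau` over the iterations plays the role of the progressively smoothed rank surrogate: early iterations enforce strong low-rankness (global structure), while later iterations fit the observed entries more tightly.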