235 research outputs found
Improved Total Variation based Image Compressive Sensing Recovery by Nonlocal Regularization
Recently, total variation (TV) based minimization algorithms have achieved
great success in compressive sensing (CS) recovery for natural images due to
their virtue of preserving edges. However, TV alone cannot recover
fine details and textures, and often suffers from undesirable staircase
artifacts. To reduce these effects, this letter presents an improved TV-based
image CS recovery algorithm that introduces a new nonlocal regularization
constraint into the CS optimization problem. The nonlocal regularization is built
on the well-known nonlocal means (NLM) filter and exploits the
self-similarity of natural images, which helps suppress the staircase effect and
restore fine details. Furthermore, an efficient augmented-Lagrangian-based
algorithm is developed to solve the combined TV and nonlocal
regularization constrained problem. Experimental results demonstrate that the
proposed algorithm achieves significant performance improvements over the
state-of-the-art TV-based algorithm in both PSNR and visual perception.
Comment: 4 pages, 1 figure, 3 tables, to be published at IEEE Int. Symposium
of Circuits and Systems (ISCAS) 201
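As a rough illustration of the two regularizers described above (a minimal sketch, not the authors' algorithm; the function names, patch size, and bandwidth `h` are assumptions for a tiny image), anisotropic TV penalizes gradients while the nonlocal term ties each pixel to its NLM-weighted average of similar patches:

```python
import numpy as np

def tv(x):
    """Anisotropic total variation: sum of absolute forward differences."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def nlm_weights(x, patch=3, h=0.1):
    """Nonlocal-means weights between all pixel pairs, from patch similarity.
    Brute-force O(n^2) over pixels; intended only for tiny illustrative images."""
    pad = patch // 2
    xp = np.pad(x, pad, mode="reflect")
    patches = np.array([xp[i:i + patch, j:j + patch].ravel()
                        for i in range(x.shape[0]) for j in range(x.shape[1])])
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=2)
    w = np.exp(-d2 / h ** 2)
    return w / w.sum(axis=1, keepdims=True)  # each row sums to 1

def nonlocal_reg(x, w):
    """Quadratic nonlocal penalty: distance of x from its NLM-weighted average."""
    v = x.ravel()
    return float(((v - w @ v) ** 2).sum())
```

Both terms vanish on a flat image, and the nonlocal penalty stays small wherever similar patches repeat, which is the self-similarity the abstract exploits.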
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. These techniques share the objective of inferring a
latent sharp image from one or several corresponding blurry images, while
blind deblurring techniques must additionally estimate an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite this progress,
image deblurring, especially in the blind case, remains limited by complex
application conditions that make the blur kernel spatially variant and hard
to obtain. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
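To make the non-blind setting the survey starts from concrete, here is one classical baseline (a sketch under the assumption of a known kernel and circular blur, not any specific method from the review): Wiener deconvolution in the Fourier domain, where the `nsr` term regularizes the otherwise ill-posed division by the blur spectrum.

```python
import numpy as np

def wiener_deblur(blurred, kernel, nsr=1e-2):
    """Non-blind Wiener deconvolution assuming circular (periodic) blur.
    nsr is an assumed noise-to-signal power ratio acting as a regularizer."""
    H = np.fft.fft2(kernel, s=blurred.shape)   # blur spectrum
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + nsr), applied to the blurred spectrum
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

With no noise and a kernel whose spectrum has no zeros, a small `nsr` recovers the latent image almost exactly; the ill-posedness the survey organizes methods around appears precisely when `|H|` is near zero or the kernel is unknown.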
Distributed Deblurring of Large Images of Wide Field-Of-View
Image deblurring is an economical way to reduce certain degradations (blur and
noise) in acquired images. Thus, it has become an essential tool in high
resolution imaging in many applications, e.g., astronomy, microscopy, or
computational photography. In applications such as astronomy and satellite
imaging, the size of acquired images can be extremely large (up to gigapixels),
covering a wide field of view and suffering from shift-variant blur. Most of the
existing image deblurring techniques are designed and implemented to work
efficiently on a centralized computing system having multiple processors and a
shared memory. Thus, the largest image that can be handled is limited by the
size of the physical memory available on the system. In this paper, we propose
a distributed nonblind image deblurring algorithm in which several connected
processing nodes (with reasonable computational resources) process
simultaneously different portions of a large image while maintaining certain
coherency among them to finally obtain a single crisp image. Unlike the
existing centralized techniques, image deblurring in a distributed fashion raises
several issues. To tackle these issues, we consider certain approximations that
trade off the quality of the deblurred image against the computational
resources required to achieve it. The experimental results show that our
algorithm produces images of quality similar to the existing centralized
techniques while allowing distribution, and is thus cost-effective for
extremely large images.
Comment: 16 pages, 10 figures, submitted to IEEE Trans. on Image Processing
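The coherency requirement among nodes can be illustrated with an overlap-and-discard sketch (an assumption-laden toy, not the paper's algorithm: the strip split, halo width, and use of a single local filtering pass as a stand-in for one deconvolution step are all illustrative). Each node processes a strip extended by a halo as wide as the kernel support, then discards the halo before stitching; for a purely local operator the stitched result matches the global one exactly, and the paper's approximations concern exactly how much halo/communication a truly nonlocal deblurring step can afford.

```python
import numpy as np

def conv2_same(x, k):
    """Spatial 'same' cross-correlation with reflect padding (stand-in for
    one convolution-dominated deblurring iteration)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="reflect")
    out = np.zeros_like(x, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def tiled_apply(x, k, n_tiles=3):
    """Process horizontal strips independently, each extended by a halo that
    covers the kernel support, then discard halos and stitch the results."""
    halo = k.shape[0] // 2
    h = x.shape[0]
    bounds = np.linspace(0, h, n_tiles + 1).astype(int)
    strips = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        lo, hi = max(0, a - halo), min(h, b + halo)   # strip plus halo
        res = conv2_same(x[lo:hi], k)
        strips.append(res[a - lo: b - lo])            # drop the halo rows
    return np.vstack(strips)
```

Because each output row only needs input rows within the kernel radius, a halo of that radius makes the distributed result bit-identical to the centralized one here; deconvolution's effectively wider support is what forces the quality/resource trade-off the abstract describes.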