Weighted gradient domain image processing problems and their iterative solutions
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2012-0000913).
BM3D Frames and Variational Image Deblurring
A family of Block Matching 3-D (BM3D) algorithms for various imaging
problems has recently been proposed within the framework of nonlocal patch-wise
image modeling [1], [2]. In this paper we construct analysis and synthesis
frames that formalize the BM3D image model, and use these frames to develop
novel iterative deblurring algorithms. We consider two formulations of the
deblurring problem: one given by minimization of a single objective function,
and another based on the Nash equilibrium balance of two objective
functions. The latter results in an algorithm where the denoising and
deblurring operations are decoupled. The convergence of the developed
algorithms is proved. Simulation experiments show that the decoupled algorithm
derived from the Nash equilibrium formulation achieves the best numerical and
visual results, outperforming the state of the art and confirming the potential
of BM3D frames as an advanced image-modeling tool.
Comment: Submitted to IEEE Transactions on Image Processing on May 18, 2011. An
implementation of the proposed algorithm is available as part of the BM3D
package at http://www.cs.tut.fi/~foi/GCF-BM3
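The decoupled structure described above (alternating a deblurring step with a denoising step) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' algorithm: `simple_denoiser` is a toy 4-neighbour smoother standing in for the BM3D-frame denoiser, and all function names and parameter values are hypothetical.

```python
import numpy as np

def conv2(x, H):
    """Circular convolution of image x with a precomputed transfer function H."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * H))

def simple_denoiser(x, strength=0.05):
    # Stand-in for the BM3D-frame denoiser: mild 4-neighbour averaging.
    pad = np.pad(x, 1, mode="edge")
    avg = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    return (1.0 - strength) * x + strength * avg

def decoupled_deblur(y, psf, n_iter=30, step=1.0):
    """Alternate a data-fidelity gradient step (deblurring) with a denoising
    step -- the decoupled structure of the Nash-equilibrium formulation,
    with a toy smoother in place of the BM3D-frame denoiser."""
    H = np.fft.fft2(psf, y.shape)          # transfer function of the blur
    x = y.copy()
    for _ in range(n_iter):
        grad = conv2(conv2(x, H) - y, np.conj(H))  # gradient of 0.5*||Hx - y||^2
        x = simple_denoiser(x - step * grad)
    return x
```

With a normalized blur kernel (entries summing to one), the gradient step with `step=1.0` is stable, and the denoiser acts as the regularizer in place of an explicit penalty term.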
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, allowing increased
interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
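The convex relaxation of the group-sparse model mentioned above has a closed-form proximal operator (group soft-thresholding): each group is either set to zero jointly or shrunk as a whole. The sketch below is a standard illustration of that operator; the function name and the partition-of-index-sets representation are illustrative choices, not taken from the chapter.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||x_g||_2,
    the canonical convex relaxation of the discrete group-sparse model.
    `groups` is a list of index lists forming a partition of x."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]  # shrink the whole group
        # else: the entire group is set to zero jointly
    return out
```

Compared with elementwise soft-thresholding, zeros appear group-by-group, which is exactly the interdependency between nonzero components that the structured model encodes.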
The application of compressive sampling to radio astronomy I: Deconvolution
Compressive sampling is a new paradigm for sampling based on the sparseness of
signals or signal representations. It is much less restrictive than
Nyquist-Shannon sampling theory and thus explains and systematises the
widespread experience that methods such as the H\"ogbom CLEAN can violate the
Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution
method for extended sources is introduced. This method can reconstruct both
point sources and extended sources (using the isotropic undecimated wavelet
transform as a basis function for the reconstruction step). We compare this
CS-based deconvolution method with two CLEAN-based deconvolution methods: the
H\"ogbom CLEAN and the multiscale CLEAN. This new method shows the best
performance in deconvolving extended sources for both uniform and natural
weighting of the sampled visibilities. Both visual and numerical results of the
comparison are provided.
Comment: Published by A&A. Matlab code can be found at:
http://code.google.com/p/csra/download
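The CS view of deconvolution can be illustrated with a minimal iterative soft-thresholding (ISTA) loop for point sources, i.e. sparsity in the identity basis (the paper uses the isotropic undecimated wavelet transform as the basis for extended sources). This is a sketch under stated assumptions; the function names and parameter values are hypothetical, not from the paper's Matlab code.

```python
import numpy as np

def ista_deconvolve(dirty, psf, lam=0.05, step=0.5, n_iter=100):
    """ISTA sketch for sparse deconvolution: minimizes
    0.5 * ||B x - dirty||^2 + lam * ||x||_1, where B is circular convolution
    with the point spread function (dirty beam)."""
    H = np.fft.fft2(psf, dirty.shape)      # transfer function of the beam
    conv = lambda v, K: np.real(np.fft.ifft2(np.fft.fft2(v) * K))
    x = np.zeros_like(dirty)
    for _ in range(n_iter):
        grad = conv(conv(x, H) - dirty, np.conj(H))  # gradient of the data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x
```

With a beam normalized to unit sum, `step=1.0` satisfies the standard ISTA step-size condition, and the soft-threshold plays the role that component subtraction plays in CLEAN.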
Graph- and finite element-based total variation models for the inverse problem in diffuse optical tomography
Total variation (TV) is a powerful regularization method that has been widely
applied in different imaging applications, but is difficult to apply to diffuse
optical tomography (DOT) image reconstruction (inverse problem) due to complex
and unstructured geometries, non-linearity of the data fitting and
regularization terms, and non-differentiability of the regularization term. We
develop several approaches to overcome these difficulties by: i) defining
discrete differential operators for unstructured geometries using both finite
element and graph representations; ii) developing an optimization algorithm
based on the alternating direction method of multipliers (ADMM) for the
non-differentiable and non-linear minimization problem; iii) investigating
isotropic and anisotropic variants of TV regularization, and comparing their
finite element- and graph-based implementations. These approaches are evaluated
in experiments on simulated data and on real data acquired from a tissue
phantom. Our results show that both FEM- and graph-based TV regularization are
able to
accurately reconstruct both sparse and non-sparse distributions without the
over-smoothing effect of Tikhonov regularization and the over-sparsifying
effect of L1 regularization. The graph representation was found to outperform
the FEM method for low-resolution meshes, and the FEM method was found to be
more accurate for high-resolution meshes.
Comment: 24 pages, 11 figures. Revised version includes revised figures and
improved clarity.
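The graph-based discrete differential operator and the two TV variants can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dense incidence matrix is for clarity only (a sparse matrix would be used in practice), and the per-node grouping used for the isotropic variant is one possible graph definition, assumed here rather than taken from the paper.

```python
import numpy as np

def graph_gradient(edges, n_nodes):
    """Discrete difference (incidence) operator D for an unstructured graph:
    (D x)_e = x_j - x_i for each edge e = (i, j). This is the graph analogue
    of a finite-element gradient on an irregular mesh."""
    D = np.zeros((len(edges), n_nodes))
    for e, (i, j) in enumerate(edges):
        D[e, i], D[e, j] = -1.0, 1.0
    return D

def anisotropic_tv(x, D):
    """Anisotropic TV: sum of |x_j - x_i| over all edges."""
    return np.abs(D @ x).sum()

def isotropic_tv(x, D, node_edges):
    """Isotropic variant: l2 norm of the edge differences incident to each
    node, summed over nodes (one of several possible graph definitions)."""
    d = D @ x
    return sum(np.linalg.norm(d[list(es)]) for es in node_edges)
```

Because the anisotropic penalty is a plain l1 norm of `D @ x` and the isotropic one is a group norm over per-node edge sets, both fit directly into an ADMM splitting of the non-differentiable term.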