A Total Fractional-Order Variation Model for Image Restoration with Non-homogeneous Boundary Conditions and its Numerical Solution
To overcome the weaknesses of total variation based models for image
restoration, various high order (typically second order) regularization models
have been proposed and studied recently. In this paper we analyze and test a
total fractional-order variation model, based on fractional-order derivatives,
which can outperform the currently popular high order regularization models.
Several previous works have used total fractional-order variations for image
restoration; however, no analysis has been carried out yet, and all tested
formulations, which differ from one another, use zero Dirichlet boundary
conditions that are not realistic (while non-zero boundary conditions violate
the definitions of fractional-order derivatives). This paper first reviews some
results on fractional-order derivatives and then rigorously analyzes the
theoretical properties of the proposed total fractional-order variational
model. It then develops four algorithms for solving the variational problem,
one based on the variational split-Bregman idea and three based on direct
solution of the discretized optimization problem. Numerical experiments show
that, in terms of restoration quality and solution efficiency, the proposed
model can produce highly competitive results, for smooth images, compared to
two established high order models: the mean curvature and the total
generalized variation.
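The split-Bregman idea mentioned above is not specific to the fractional-order setting. As a point of reference, here is a minimal sketch of a split-Bregman iteration for ordinary (integer-order, anisotropic) TV denoising with periodic boundary conditions; this is an illustration of the splitting technique, not the paper's algorithm, and the parameters `lam` and `mu` are illustrative choices.

```python
import numpy as np

def grad(u):
    # Forward differences with periodic boundaries.
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(px, py):
    # Backward differences: the negative adjoint of grad above.
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def shrink(x, t):
    # Soft-thresholding: closed-form solution of the d-subproblem.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv(f, lam=2.0, mu=1.0, n_iter=60):
    """Anisotropic TV denoising: min_u lam/2 ||u - f||^2 + |grad u|_1."""
    ny, nx = f.shape
    # Fourier symbol of the periodic 5-point Laplacian (all values <= 0).
    wx = 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx) - 2.0
    wy = 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny) - 2.0
    denom = lam - mu * (wy[:, None] + wx[None, :])
    u = f.copy()
    dx, dy = np.zeros_like(f), np.zeros_like(f)
    bx, by = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        # u-subproblem: (lam - mu*Laplacian) u = lam*f - mu*div(d - b),
        # solved exactly by FFT diagonalization.
        rhs = lam * f - mu * div(dx - bx, dy - by)
        u = np.fft.ifft2(np.fft.fft2(rhs) / denom).real
        ux, uy = grad(u)
        dx, dy = shrink(ux + bx, 1.0 / mu), shrink(uy + by, 1.0 / mu)
        bx, by = bx + ux - dx, by + uy - dy  # Bregman updates
    return u
```

The u-subproblem is a linear, shift-invariant equation, which is why FFT diagonalization (or, with other boundary conditions, a fast Poisson-type solver) applies.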
A survey of exemplar-based texture synthesis
Exemplar-based texture synthesis is the process of generating, from an input
sample, new texture images of arbitrary size that are perceptually
equivalent to the sample. The two main approaches are statistics-based methods
and patch re-arrangement methods. In the first class, a texture is
characterized by a statistical signature; then, a random sampling conditioned
to this signature produces genuinely different texture images. The second class
boils down to a clever "copy-paste" procedure, which stitches together large
regions of the sample. Hybrid methods try to combine ideas from both approaches
to avoid their hurdles. The recent approaches using convolutional neural
networks fit into this classification, some being statistical and others
performing patch re-arrangement in the feature space. They produce impressive
syntheses on various kinds of textures. Nevertheless, we found that most real
textures are organized at multiple scales, with global structures revealed at
coarse scales and highly varying details at finer ones. Thus, when confronted
with large natural images of textures, the results of state-of-the-art methods
degrade rapidly, and the problem of modeling them remains wide open.
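As a concrete example of the statistics-based class, one of the simplest methods characterizes a texture by its Fourier modulus alone and randomizes the phase (the random phase noise model). The sketch below is an illustrative implementation of that idea, not code from the survey; it preserves the sample's Fourier amplitude spectrum exactly while producing a genuinely different image.

```python
import numpy as np

def random_phase_texture(sample, seed=0):
    """Synthesize a texture with the same Fourier modulus as `sample`
    but a randomized phase (a statistics-based synthesis)."""
    rng = np.random.default_rng(seed)
    # The FFT phase of real white noise is random and Hermitian
    # symmetric, so the synthesized image stays real-valued.
    noise = rng.standard_normal(sample.shape)
    fn = np.fft.fft2(noise)
    phase = fn / np.maximum(np.abs(fn), 1e-12)
    phase[0, 0] = 1.0  # keep the DC component: mean intensity is preserved
    return np.fft.ifft2(np.fft.fft2(sample) * phase).real
```

This works well for micro-textures without global structure; it is precisely the multi-scale organization described above that such simple statistical signatures fail to capture.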
TV-Stokes And Its Variants For Image Processing
The total variation minimization with a Stokes constraint, also known as the TV-Stokes model, has been considered one of the most successful models in image processing, especially in image restoration and sparse-data-based 3D surface reconstruction. This thesis studies the TV-Stokes model and its existing variants, and proposes new, more effective variants of the model, together with algorithms for them, applied to some of the most interesting image processing problems.
We first review some of the variational models that already exist, in particular the TV-Stokes model and its variants. Common techniques, like the augmented Lagrangian and the dual formulation, are also introduced. We then present our models as new variants of the TV-Stokes.
The main focus of the work has been on the sparse reconstruction of 3D surfaces. A model (WTR) with a vector fidelity, namely a gradient vector fidelity, has been proposed and applied to both 3D cartoon design and height map reconstruction. The model employs second-order total variation minimization, in which the curl-free condition is satisfied automatically. Because the model couples the height and the gradient vector representing the surface in the same minimization, it reconstructs the surface correctly. A variant of this model is then introduced, which includes a vector matching term. This matching term gives the model the capability to accurately represent the shape of a geometry in the reconstruction. Experiments show a significant improvement over state-of-the-art models, such as the TV model, higher order TV models, and the anisotropic third-order regularization model, in some general applications.
In another work, the thesis generalizes the TV-Stokes model from two dimensions to an arbitrary number of dimensions, introducing a convenient form of the constraint so that it can be extended to higher dimensions.
The thesis also explores the idea of feature accumulation through iterative regularization, introducing a Richardson-like iteration for the TV-Stokes model. This is then followed by a more general, combined model based on the modified variant of the TV-Stokes. The resulting model is found to be equivalent to the well-known TGV model.
The thesis introduces some interesting numerical strategies for the solution of the TV-Stokes model and its variants. Higher order PDEs are turned into inhomogeneous modified Helmholtz equations through transformations. These equations are then solved using the preconditioned conjugate gradient method or the fast Fourier transform. The thesis proposes a simple but quite general approach to finding closed form solutions to a general L1 minimization problem, and applies it to design algorithms for our models.
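The transformation strategy above reduces the higher order PDEs to inhomogeneous modified Helmholtz equations of the form αu − Δu = f. As an illustration (assuming periodic boundary conditions for simplicity; a preconditioned conjugate gradient solver is the alternative mentioned above), such an equation is solved in one shot by FFT diagonalization of the discrete Laplacian:

```python
import numpy as np

def laplacian(u):
    # Periodic 5-point discrete Laplacian.
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def solve_modified_helmholtz(f, alpha):
    """Solve alpha*u - laplacian(u) = f with periodic boundaries via FFT."""
    ny, nx = f.shape
    # Eigenvalues of the periodic 5-point Laplacian (all <= 0), so the
    # denominator alpha - sym stays >= alpha > 0.
    wx = 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx) - 2.0
    wy = 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny) - 2.0
    sym = wy[:, None] + wx[None, :]
    return np.fft.ifft2(np.fft.fft2(f) / (alpha - sym)).real
```

Because the operator is diagonalized exactly, the FFT route is a direct solver here, not an iterative one.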
A discrete approximation of Blake & Zisserman energy in image denoising and optimal choice of regularization parameters
We consider a multi-scale approach for the discrete approximation of a functional proposed by Blake and Zisserman (BZ) for solving image denoising and segmentation problems. The proposed method is based on a simple and effective higher order variational model. It consists of building a family of linear discrete energies which Γ-converges to the non-linear BZ functional. The key point of the approach is the construction of the diffusion operators in the discrete energies, within a finite element adaptive procedure, which approximate the initial energy, including its singular parts, in the Γ-convergence sense. The resulting model preserves the singularities of the image and of its gradient while keeping a simple structure for the underlying PDEs, and hence an efficient numerical method for solving the problem under consideration. A further point needed to make this approach work is dealing with constrained optimization problems, which we circumvent through a Lagrangian formulation. We present some numerical experiments to show that the proposed approach allows us to detect first and second-order singularities. To enhance the algorithms and their convergence properties, we also consider and implement an augmented Lagrangian method using the alternating direction method of multipliers (ADMM).
Patch-based methods for variational image processing problems
Image processing problems are notoriously difficult. To name a few of these difficulties: they are usually ill-posed, involve a huge number of unknowns (from one to several per pixel!), and images cannot be considered as the linear superposition of a few physical sources, as they contain many different scales and non-linearities. However, if one considers, instead of images as a whole, small blocks (or patches) inside the pictures, many of these hurdles vanish and the problems become much easier to solve, at the cost of again increasing the dimensionality of the data to process. Following the seminal NL-means algorithm in 2005-2006, methods that consider only the visual correlation between patches and ignore their spatial relationship are called non-local methods. While powerful, it is an arduous task to define non-local methods without resorting to heuristic formulations or complex mathematical frameworks. On the other hand, another powerful property has brought global image processing algorithms one step further: the sparsity of images in well chosen representation bases. However, this property is difficult to embed naturally in non-local methods, yielding algorithms that are usually inefficient or convoluted. In this thesis, we explore alternative approaches to non-locality, with the goals of i) developing universal approaches that can handle local and non-local constraints and ii) leveraging the qualities of both non-locality and sparsity. For the first point, we will see that embedding the patches of an image into a graph-based framework can yield a simple algorithm that can switch from local to non-local diffusion, which we will apply to the problem of large-area image inpainting. For the second point, we will first study a fast patch preselection process that is able to group patches according to their visual content.
This preselection operator will then serve as input to a social sparsity enforcing operator that creates sparse groups of jointly sparse patches, thus exploiting all the redundancies present in the data within a simple mathematical framework. Finally, we will study the problem of reconstructing plausible patches from a few binarized measurements. We will show that this task can be achieved for popular binarized image keypoint descriptors, thus demonstrating a potential privacy issue in mobile visual recognition applications, but also opening a promising way toward the design and construction of a new generation of smart cameras.
Group-based Sparse Representation for Image Restoration
Traditional patch-based sparse representation modeling of natural images
usually suffers from two problems. First, it has to solve a large-scale
optimization problem with high computational complexity in dictionary learning.
Second, each patch is considered independently in dictionary learning and
sparse coding, which ignores the relationships among patches and results in
inaccurate sparse coding coefficients. In this paper, instead of using the
patch as the basic unit of sparse representation, we exploit the concept of a
group as the basic unit: a group is composed of nonlocal patches with similar
structures. On this basis we establish a novel sparse representation model of
natural images, called group-based sparse representation (GSR). The proposed
GSR is able to sparsely represent natural images in the group domain, which
enforces the intrinsic local sparsity and nonlocal self-similarity of images
simultaneously in a unified framework. Moreover, an effective self-adaptive
dictionary learning method with low complexity is designed for each group,
rather than learning a dictionary from natural images. To make GSR tractable and
robust, a split Bregman based technique is developed to solve the proposed
GSR-driven minimization problem for image restoration efficiently. Extensive
experiments on image inpainting, image deblurring and image compressive sensing
recovery demonstrate that the proposed GSR modeling outperforms many current
state-of-the-art schemes in both PSNR and visual perception. (To be published
in IEEE Transactions on Image Processing; project page, code, and a
high-resolution PDF are available at http://idm.pku.edu.cn/staff/zhangjian/.)
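To make the group construction concrete, the sketch below forms a group by stacking the k patches most similar to a reference patch and then sparsifies it with a soft threshold on the singular values of the group matrix, with the SVD playing the role of a per-group self-adaptive dictionary. This is an illustrative simplification of the GSR idea, not the authors' code; the patch size, `k`, and the threshold `tau` in the test are illustrative choices.

```python
import numpy as np

def extract_patches(img, p, step=1):
    """Flatten all p-by-p patches of img into rows of a matrix."""
    ny, nx = img.shape
    rows = [img[i:i + p, j:j + p].ravel()
            for i in range(0, ny - p + 1, step)
            for j in range(0, nx - p + 1, step)]
    return np.array(rows)

def build_group(patches, ref_idx, k):
    """Stack the k patches closest (in L2) to patches[ref_idx] as columns."""
    d = np.sum((patches - patches[ref_idx]) ** 2, axis=1)
    idx = np.argsort(d)[:k]          # the reference patch itself comes first
    return patches[idx].T, idx       # group matrix of shape (p*p, k)

def shrink_group(G, tau):
    """Soft-threshold the singular values: group-wise sparse approximation."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

Because the columns of a group are similar by construction, its singular value spectrum decays quickly, which is exactly what makes the group a far more compressible unit than an individual patch.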
Total Variation as a local filter
In the Rudin-Osher-Fatemi (ROF) image denoising model, Total Variation (TV) is used as a global regularization term. However, as we observe, the local interactions induced by Total Variation do not propagate far in practice, so that the ROF model is not far from being a local filter. In this paper, we propose to build a purely local filter by considering the ROF model in a given neighborhood of each pixel. We show that appropriate weights are required to avoid aliasing-like effects, and we provide an explicit convergence criterion for an associated dual minimization algorithm based on Chambolle's work. We study theoretical properties of the obtained local filter, and show that this localization of the ROF model brings an interesting optimization of the bias-variance trade-off, and a strong reduction of a ROF drawback called the "staircasing effect". We finally present a new denoising algorithm, TV-means, that efficiently combines the idea of local TV-filtering with the non-local means patch-based method.
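For reference, the dual minimization alluded to above can be sketched in its global, unweighted form as Chambolle's fixed-point projection algorithm for min_u ||u − f||²/(2λ) + TV(u). This is a minimal global version, not the weighted local filter of the paper, and it assumes periodic boundary handling for brevity.

```python
import numpy as np

def grad(u):
    # Forward differences, periodic boundaries.
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(px, py):
    # Backward differences, the negative adjoint of grad.
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def chambolle_rof(f, lam=0.5, tau=0.125, n_iter=200):
    """ROF denoising min_u ||u - f||^2 / (2*lam) + TV(u) via Chambolle's
    dual projection iteration; tau <= 1/8 guarantees convergence."""
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)  # u = f minus the projection onto lam*K
```

The localized filter of the paper runs this kind of iteration on a weighted neighborhood around each pixel rather than on the whole image, which is what makes the convergence criterion for the weighted dual algorithm a nontrivial contribution.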