Depth Image Inpainting: Improving Low Rank Matrix Completion with Low Gradient Regularization
We consider the case of inpainting single depth images. Without corresponding
color images, previous or next frames, depth image inpainting is quite
challenging. One natural solution is to regard the image as a matrix and adopt
low rank regularization, just as in color image inpainting. However, the low
rank assumption does not make full use of the properties of depth images.
A natural first thought is to penalize all non-zero gradients via sparse
gradient regularization. However, statistics show that although most pixels
have zero gradients, a non-negligible fraction of pixels have gradients equal
to 1. Based on this specific property of depth images, we
propose a low gradient regularization method in which we reduce the penalty for
gradient 1 while penalizing the non-zero gradients to allow for gradual depth
changes. The proposed low gradient regularization is integrated with the low
rank regularization into the low rank low gradient approach for depth image
inpainting. We compare our proposed low gradient regularization with sparse
gradient regularization. The experimental results show the effectiveness of our
proposed approach.
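The modified penalty is easy to sketch in NumPy. The function below is a hypothetical formulation (the name and the reduced cost `eps` are illustrative assumptions, not the paper's exact penalty): zero cost for zero gradients, a small cost for gradient magnitude 1, and the full absolute-value cost otherwise, contrasted against plain sparse (L1) gradient regularization:

```python
import numpy as np

def low_gradient_penalty(img, eps=0.1):
    """Illustrative low-gradient penalty on horizontal gradients:
    gradient 0 costs nothing, gradient magnitude 1 (a gradual depth
    change) costs only eps, and larger gradients pay |g| as usual."""
    g = np.abs(np.diff(img.astype(np.int64), axis=1))
    cost = np.where(g == 0, 0.0, np.where(g == 1, eps, g.astype(np.float64)))
    return float(cost.sum())

# toy depth patch: one gradual step of 1 per row, one sharp jump of 3
depth = np.array([[5, 5, 6, 6],
                  [5, 5, 6, 9]])
```

On this patch the low gradient penalty is 3.2 (two unit steps cost 0.1 each, the jump costs 3), versus 5 under sparse (L1) gradient regularization, so gradual depth changes are barely penalized.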
A Benchmark for Sparse Coding: When Group Sparsity Meets Rank Minimization
Sparse coding has achieved great success in various image processing tasks.
However, a benchmark to measure the sparsity of image patch/group is missing
since sparse coding is essentially an NP-hard problem. This work attempts to
fill the gap from the perspective of rank minimization. For more details,
please see the manuscript.
Comment: arXiv admin note: text overlap with arXiv:1611.0898
Deep Hyperspectral Prior: Denoising, Inpainting, Super-Resolution
Deep learning algorithms have demonstrated state-of-the-art performance in
various tasks of image restoration. This was made possible through the ability
of CNNs to learn from large exemplar sets. However, the latter becomes an issue
for hyperspectral image processing where datasets commonly consist of just a
few images. In this work, we propose a new approach to denoising, inpainting,
and super-resolution of hyperspectral image data using intrinsic properties of
a CNN without any training. The performance of the given algorithm is shown to
be comparable to the performance of trained networks, while its application is
not restricted by the availability of training data. This work is an extension
of the original "deep prior" algorithm to the HSI domain and 3D-convolutional
networks.
Comment: Published in ICCV 2019 Workshop
Robust Matrix Completion via Maximum Correntropy Criterion and Half Quadratic Optimization
Robust matrix completion aims to recover a low-rank matrix from a subset of
noisy entries perturbed by complex noise, where traditional methods for matrix
completion may perform poorly due to utilizing the L_2 error norm in
optimization. In this paper, we propose a novel and fast robust matrix
completion method based on the maximum correntropy criterion (MCC). The
correntropy-based error measure is utilized instead of the L_2 error norm to
improve robustness to noise. Using the half-quadratic optimization
technique, the correntropy based optimization can be transformed to a weighted
matrix factorization problem. Two efficient algorithms are then derived: an
alternating minimization based algorithm and an alternating gradient descent
based algorithm. The proposed algorithms do not need to calculate
a singular value decomposition (SVD) at each iteration. Further, an adaptive
kernel selection strategy is proposed to accelerate convergence as well as
improve performance. Comparisons with existing robust matrix completion
algorithms in simulations show that the new methods achieve better performance
than existing state-of-the-art algorithms.
From Group Sparse Coding to Rank Minimization: A Novel Denoising Model for Low-level Image Restoration
Recently, low-rank matrix recovery theory has emerged as a significant advance
for various image processing problems. Meanwhile, group sparse coding (GSC)
theory has led to great success in image restoration (IR), where each patch
group exhibits a low-rank property. In this paper, we propose a novel
low-rank-minimization-based denoising model for IR tasks from the perspective
of GSC, and we establish an important connection between our denoising model
and the rank minimization problem. To overcome the bias problem
caused by convex nuclear norm minimization (NNM) for rank approximation, a more
generalized and flexible rank relaxation function is employed, namely weighted
nonconvex relaxation. Accordingly, an efficient iteratively-reweighted
algorithm is proposed to handle the resulting minimization problem, combining
the popular L_(1/2) and L_(2/3) thresholding operators. Finally, our proposed
denoising model is applied to IR problems via an alternating direction method
of multipliers (ADMM) strategy. Typical IR experiments on image compressive
sensing (CS), inpainting, deblurring and impulsive noise removal demonstrate
that our proposed method can achieve significantly higher PSNR/FSIM values than
many relevant state-of-the-art methods.
Comment: Accepted by Signal Processing
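The bias-reduction idea behind nonconvex rank relaxation can be illustrated with a generic weighted singular-value shrinkage step (a reweighting sketch, not the paper's exact L_(1/2)/L_(2/3) operators): large, informative singular values receive small weights and are shrunk less than under the plain nuclear norm.

```python
import numpy as np

def weighted_svt(Y, lam=2.0, eps=1e-6):
    """Weighted singular-value soft-thresholding: penalty ~ sum_i w_i * s_i
    with w_i = lam / (s_i + eps), so dominant singular values are nearly
    untouched while small (noise) ones are removed. Illustrative relaxation."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = lam / (s + eps)
    s_new = np.maximum(s - w, 0.0)
    return U @ np.diag(s_new) @ Vt

rng = np.random.default_rng(0)
u = np.linalg.qr(rng.standard_normal((30, 2)))[0]
v = np.linalg.qr(rng.standard_normal((30, 2)))[0]
L = u @ np.diag([10.0, 8.0]) @ v.T          # rank-2 ground truth
X = L + 0.1 * rng.standard_normal((30, 30)) # noisy observation
Xhat = weighted_svt(X)
```

On this toy example the noise singular values are eliminated entirely while the two dominant ones lose only lam/s_i each, which is the bias reduction relative to convex nuclear norm minimization.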
-Regularized Dictionary Learning
Classical dictionary learning methods simply normalize dictionary columns at
each iteration, and the impact of this basic form of regularization on
generalization performance (e.g. compression ratio on new images) is unclear.
Here, we derive a tractable performance measure for dictionaries in compressed
sensing based on a lower bound and use it to regularize dictionary
learning problems. We detail numerical experiments on both compression and
inpainting problems and show that this more principled regularization approach
consistently improves reconstruction performance on new images.
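The column-normalization step the abstract refers to can be seen in a bare-bones dictionary learning loop. The sketch below is illustrative, not the paper's regularized method: it alternates one-sparse coding with a least-squares (MOD-style) dictionary update, renormalizing columns each iteration as classical methods do.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, atoms, sigs = 8, 12, 200
D_true = rng.standard_normal((dim, atoms))
D_true /= np.linalg.norm(D_true, axis=0)        # unit-norm ground-truth atoms
idx = rng.integers(0, atoms, sigs)
coef = rng.standard_normal(sigs) + 3.0
X = D_true[:, idx] * coef                       # each signal uses one atom

D = rng.standard_normal((dim, atoms))
D /= np.linalg.norm(D, axis=0)
errs = []
for _ in range(20):
    corr = D.T @ X                              # unit columns: best 1-sparse
    best = np.argmax(np.abs(corr), axis=0)      # atom index per signal
    C = np.zeros((atoms, sigs))
    C[best, np.arange(sigs)] = corr[best, np.arange(sigs)]
    errs.append(float(np.linalg.norm(X - D @ C)))
    D = X @ C.T @ np.linalg.inv(C @ C.T + 1e-8 * np.eye(atoms))  # MOD update
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)            # normalize
```

The normalization is exactly the "basic form of regularization" in question: it fixes the scale ambiguity between atoms and coefficients but says nothing about how well the dictionary will compress unseen images, which is the gap the paper's derived performance measure addresses.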
Constrained Deep Learning using Conditional Gradient and Applications in Computer Vision
A number of results have recently demonstrated the benefits of incorporating
various constraints when training deep architectures in vision and machine
learning. The advantages range from guarantees for statistical generalization
to better accuracy to compression. But support for general constraints within
widely used libraries remains scarce and their broader deployment within many
applications that can benefit from them remains under-explored. Part of the
reason is that stochastic gradient descent (SGD), the workhorse for training
deep neural networks, does not natively deal well with constraints that have
global scope. In this paper, we revisit a classical first-order scheme from
numerical optimization, Conditional Gradients (CG), which has thus far had
limited applicability in training deep models. We show via rigorous analysis
how various constraints can be naturally handled by modifications of this
algorithm. We provide convergence guarantees and show a suite of immediate
benefits that are possible -- from training ResNets with fewer layers but
better accuracy simply by substituting in our version of CG to faster training
of GANs with 50% fewer epochs in image inpainting applications to provably
better generalization guarantees using efficiently implementable forms of
recently proposed regularizers.
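The mechanics of conditional gradients on a constraint with global scope are easy to see in a toy setting. Below is a generic Frank-Wolfe sketch for least squares over an l1 ball (an illustrative constraint choice, not the specific constraint sets studied in the paper): the linear minimization oracle returns a signed, scaled coordinate vector, so no projection step is ever needed.

```python
import numpy as np

def frank_wolfe_l1(A, y, tau, iters=2000):
    """Minimize ||Ax - y||^2 s.t. ||x||_1 <= tau by conditional gradient.
    The linear subproblem over the l1 ball is solved in closed form by a
    single signed, scaled basis vector -- the step that replaces SGD's
    projection and keeps every iterate feasible by construction."""
    x = np.zeros(A.shape[1])
    for k in range(iters):
        grad = 2.0 * A.T @ (A @ x - y)
        i = int(np.argmax(np.abs(grad)))
        s = np.zeros_like(x)
        s[i] = -tau * np.sign(grad[i])      # minimizing vertex of the l1 ball
        gamma = 2.0 / (k + 2.0)             # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 13]] = [1.0, -1.0, 1.0]
y = A @ x_true
x_hat = frank_wolfe_l1(A, y, tau=3.0)
```

Every iterate is a convex combination of points inside the ball, so the constraint holds exactly at every step; this feasibility-by-construction property is what makes CG attractive for constrained deep training.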
The Power of Complementary Regularizers: Image Recovery via Transform Learning and Low-Rank Modeling
Recent works on adaptive sparse and low-rank signal modeling have
demonstrated their usefulness in various image / video processing applications.
Patch-based methods exploit local patch sparsity, whereas other works apply
low-rankness of grouped patches to exploit image non-local structures. However,
using either approach alone usually limits performance in image reconstruction
or recovery applications. In this work, we propose a simultaneous sparsity and
low-rank model, dubbed STROLLR, to better represent natural images. In order to
fully utilize both the local and non-local image properties, we develop an
image restoration framework using a transform learning scheme with joint
low-rank regularization. The approach owes some of its computational efficiency
and good performance to the use of transform learning for adaptive sparse
representation rather than the popular synthesis dictionary learning
algorithms, which involve approximation of NP-hard sparse coding and expensive
learning steps. We demonstrate the proposed framework in various applications
to image denoising, inpainting, and compressed sensing based magnetic resonance
imaging. Results show promising performance compared to state-of-the-art
competing methods.
Comment: 13 pages, 7 figures, submitted to TI
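The transform learning ingredient has a particularly clean alternating form, which is part of why it avoids NP-hard synthesis sparse coding. Below is a minimal sketch with a square orthonormal transform and hard thresholding; the sizes and threshold are illustrative assumptions, and this omits STROLLR's low-rank and non-local components entirely.

```python
import numpy as np

def learn_transform(X, thr=0.5, iters=20):
    """Alternate (1) exact, cheap sparse coding C = hard-threshold(W X) and
    (2) the closed-form orthonormal update W = V U^T, where X C^T = U S V^T.
    Tracks the objective ||W X - C||_F^2 + thr^2 * ||C||_0, which is
    nonincreasing under this alternation."""
    n = X.shape[0]
    W = np.eye(n)
    obj = []
    for _ in range(iters):
        C = W @ X
        C[np.abs(C) < thr] = 0.0                 # exact thresholded code
        U, _, Vt = np.linalg.svd(X @ C.T)
        W = Vt.T @ U.T                           # best orthonormal transform
        obj.append(float(np.linalg.norm(W @ X - C) ** 2
                         + thr ** 2 * np.count_nonzero(C)))
    return W, obj

rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.standard_normal((16, 16)))[0]    # hidden sparsifier
S = rng.standard_normal((16, 300)) * (rng.random((16, 300)) < 0.2) * 3.0
X = Q.T @ S                                           # data sparse under Q
W, obj = learn_transform(X)
```

Both sub-steps have closed-form solutions, in contrast to synthesis dictionary learning where the sparse coding step alone is NP-hard in general and must be approximated.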
Novel variational model for inpainting in the wavelet domain
Wavelet domain inpainting refers to the process of recovering the missing
coefficients during the image compression or transmission stage. Recently, an
efficient algorithm framework which is called Bregmanized operator splitting
(BOS) was proposed for solving the classical variational model of wavelet
inpainting. However, it is still time-consuming to some extent due to the inner
iteration. In this paper, a novel variational model is established to formulate
this reconstruction problem from the view of image decomposition. Then an
efficient iterative algorithm based on the split-Bregman method is adopted to
calculate an optimal solution, and it is also proved to be convergent. Compared
with the BOS algorithm, the proposed algorithm avoids the inner iteration and
is hence simpler. Numerical experiments demonstrate that the proposed
method is very efficient and outperforms the current state-of-the-art methods,
especially in computational time.
Comment: 20 pages
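For readers unfamiliar with the setting, the basic shape of transform-domain inpainting can be sketched with a plain iterative soft-thresholding loop in an orthonormal basis. This is an illustrative simplification: a DCT basis rather than a wavelet for brevity, and ISTA rather than the paper's split-Bregman scheme.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis; rows are basis vectors (B @ B.T = I)."""
    k = np.arange(n)
    B = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    B[0] /= np.sqrt(2.0)
    return B * np.sqrt(2.0 / n)

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

n = 64
B = dct_basis(n)
rng = np.random.default_rng(0)
c_true = np.zeros(n)
c_true[[2, 5, 11]] = [2.0, -1.5, 1.0]      # sparse transform coefficients
x_true = B.T @ c_true
mask = rng.random(n) < 0.7                 # known pixels; the rest are missing

c = np.zeros(n)
for _ in range(500):
    resid = mask * (B.T @ c - x_true)      # data term on known pixels only
    c = soft(c - B @ resid, 0.01)          # gradient step + shrinkage
x_rec = B.T @ c
```

Each iteration is one gradient step on the masked data term followed by coefficient shrinkage; schemes like BOS and split-Bregman restructure this computation to converge in far fewer (and cheaper) outer iterations, which is the efficiency axis the paper targets.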
Unsupervised Deep Context Prediction for Background Foreground Separation
In many advanced video-based applications, background modeling is a
pre-processing step to eliminate redundant data, for instance in tracking or
video surveillance. Over the past years, background subtraction has usually
been based on low-level or hand-crafted features such as raw color components,
gradients, or local binary patterns. The performance of background subtraction
algorithms suffers in the presence of challenges such as dynamic backgrounds,
photometric variations, camera jitter, and shadows. To handle these challenges
for the purpose of accurate background modeling, we propose a unified framework
based on image inpainting. It is
an unsupervised hybrid generative adversarial algorithm for visual feature
learning based on context prediction. We also present a solution for random
region inpainting by fusing center-region inpainting and random-region
inpainting with the help of the Poisson blending technique. Furthermore, we
evaluate foreground object detection by fusing our proposed method with
morphological operations. The comparison of our proposed method with 12
state-of-the-art methods shows its stability in the application of background
estimation and foreground detection.
Comment: 17 pages
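The Poisson blending step used in the fusion can be sketched directly: inside the mask, solve the discrete Poisson equation with the source's Laplacian as guidance and the surrounding target as the Dirichlet boundary. This is a generic seamless-cloning sketch via Jacobi iteration, not the authors' exact implementation.

```python
import numpy as np

def poisson_blend(target, source, mask, iters=500):
    """Inside `mask`, iterate toward the solution of the discrete Poisson
    equation whose right-hand side is the source's Laplacian and whose
    boundary values come from the target (simple Jacobi sweeps)."""
    f = target.astype(float).copy()
    src = source.astype(float)
    lap = np.zeros_like(src)
    lap[1:-1, 1:-1] = (4 * src[1:-1, 1:-1] - src[:-2, 1:-1] - src[2:, 1:-1]
                       - src[1:-1, :-2] - src[1:-1, 2:])
    inner = mask.copy()
    inner[0, :] = inner[-1, :] = inner[:, 0] = inner[:, -1] = False
    f[inner] = src[inner]                       # start from the pasted source
    for _ in range(iters):
        nb = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
              + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[inner] = (nb[inner] + lap[inner]) / 4.0
    return f

# toy check: constant source over constant target has zero guidance
# gradients, so the blend must relax to the target's value everywhere
target = np.ones((10, 10))
source = 5.0 * np.ones((10, 10))
mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True
f = poisson_blend(target, source, mask)
```

Because only gradients of the source are kept, the pasted region inherits the target's intensities at the seam, which is what removes the visible boundary between center-region and random-region inpainting results.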