Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization
As a powerful statistical image modeling technique, sparse representation has
been successfully used in various image restoration applications. The success
of sparse representation stems from the development of l1-norm optimization
techniques, and the fact that natural images are intrinsically sparse in some
domain. The image restoration quality largely depends on whether the employed
sparse domain can represent the underlying image well. Considering that the
contents can vary significantly across different images or different patches in
a single image, we propose to learn various sets of bases from a pre-collected
dataset of example image patches, and then for a given patch to be processed,
one set of bases is adaptively selected to characterize the local sparse
domain. We further introduce two adaptive regularization terms into the sparse
representation framework. First, a set of autoregressive (AR) models are
learned from the dataset of example image patches. The AR models that best fit
a given patch are adaptively selected to regularize the local image structures.
Second, the image non-local self-similarity is introduced as another
regularization term. In addition, the sparsity regularization parameter is
adaptively estimated for better image restoration performance. Extensive
experiments on image deblurring and super-resolution validate that by using
adaptive sparse domain selection and adaptive regularization, the proposed
method achieves much better results than many state-of-the-art algorithms in
terms of both PSNR and visual perception.
Comment: 35 pages. This paper is under review in IEEE TI
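The adaptive sparse-domain selection step described above can be sketched as follows; `select_basis`, `sparse_code`, and the cluster/basis setup are illustrative assumptions (per-cluster orthonormal bases, nearest-center selection, soft-thresholded coefficients), not the paper's exact procedure.

```python
import numpy as np

def select_basis(patch, centers, bases):
    """Pick the basis whose cluster center is closest to the patch
    (the adaptive sparse-domain selection step)."""
    d = np.linalg.norm(centers - patch.ravel(), axis=1)
    return bases[int(np.argmin(d))]

def sparse_code(patch, B, thresh=0.1):
    """Code the patch in the selected orthonormal basis and soft-threshold
    the coefficients to enforce sparsity."""
    c = B.T @ patch.ravel()
    c = np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)  # soft shrinkage
    return B @ c                                          # reconstruction

# toy setup: two clusters, each with its own learned orthonormal basis
rng = np.random.default_rng(0)
centers = rng.standard_normal((2, 16))
bases = np.stack([np.linalg.qr(rng.standard_normal((16, 16)))[0]
                  for _ in range(2)])
patch = centers[1].reshape(4, 4)      # a patch matching cluster 1
B = select_basis(patch, centers, bases)
rec = sparse_code(patch, B)
```

In the full method the bases would be learned (e.g. by clustering example patches and fitting a compact basis per cluster); the point here is only the per-patch selection of a local sparse domain.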
Semantic Self-adaptation: Enhancing Generalization with a Single Sample
The lack of out-of-domain generalization is a critical weakness of deep
networks for semantic segmentation. Previous studies relied on the assumption
of a static model, i.e., once the training process is complete, model
parameters remain fixed at test time. In this work, we challenge this premise
with a self-adaptive approach for semantic segmentation that adjusts the
inference process to each input sample. Self-adaptation operates on two levels.
First, it fine-tunes the parameters of convolutional layers to the input image
using consistency regularization. Second, in Batch Normalization layers,
self-adaptation interpolates between the training and the reference
distribution derived from a single test sample. Despite both techniques being
well known in the literature, their combination sets new state-of-the-art
accuracy on synthetic-to-real generalization benchmarks. Our empirical study
suggests that self-adaptation may complement the established practice of model
regularization at training time for improving deep network generalization to
out-of-domain data. Our code and pre-trained models are available at
https://github.com/visinf/self-adaptive.
Comment: Published in TMLR (July 2023); OpenReview:
https://openreview.net/forum?id=ILNqQhGbLx; Code:
https://github.com/visinf/self-adaptive; Video: https://youtu.be/s4DG65ic0E
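The Batch Normalization interpolation described above can be sketched as follows; `adapt_bn_stats` and the blending weight `alpha` are illustrative names and a simple linear blend, not the paper's exact formulation.

```python
import numpy as np

def adapt_bn_stats(train_mean, train_var, x, alpha=0.5):
    """Interpolate per-channel BN statistics between the stored training
    statistics and those of a single test sample x of shape (C, H, W)."""
    sample_mean = x.mean(axis=(1, 2))
    sample_var = x.var(axis=(1, 2))
    mean = alpha * train_mean + (1.0 - alpha) * sample_mean
    var = alpha * train_var + (1.0 - alpha) * sample_var
    return mean, var

# alpha = 1 recovers standard inference with training statistics only
x = np.ones((2, 3, 3))                      # one test sample, 2 channels
m, v = adapt_bn_stats(np.zeros(2), np.ones(2), x, alpha=0.5)
```

The normalized activations would then use `(x - mean) / sqrt(var + eps)` per channel, exactly as in standard BN but with the blended statistics.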
Convolutional Neural Networks with Dynamic Regularization
Regularization is commonly used for alleviating overfitting in machine
learning. For convolutional neural networks (CNNs), regularization methods,
such as DropBlock and Shake-Shake, have demonstrated improvements in
generalization performance. However, these methods lack a self-adaptive ability
throughout training. That is, the regularization strength is fixed to a
predefined schedule, and manual adjustments are required to adapt to various
network architectures. In this paper, we propose a dynamic regularization
method for CNNs. Specifically, we model the regularization strength as a
function of the training loss. According to the change of the training loss,
our method can dynamically adjust the regularization strength in the training
procedure, thereby balancing the underfitting and overfitting of CNNs. With
dynamic regularization, a large-scale model is automatically regularized by a
strong perturbation, and vice versa. Experimental results show that the
proposed method can improve the generalization capability on off-the-shelf
network architectures and outperform state-of-the-art regularization methods.
Comment: 7 pages. Accepted for publication at IEEE TNNL
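The idea of making the regularization strength a function of the training loss can be sketched as follows; the mapping below (strength grows as the loss falls, bounded by `p_max`) is an illustrative monotone choice, and `reg_strength`, `loss_ref`, and `p_max` are hypothetical names, not the paper's schedule.

```python
def reg_strength(train_loss, loss_ref, p_max=0.2):
    """Map the current training loss to a regularization strength in
    [0, p_max]: high loss -> weak regularization (avoid underfitting),
    low loss -> strong regularization (avoid overfitting)."""
    ratio = min(max(train_loss / loss_ref, 0.0), 1.0)
    return p_max * (1.0 - ratio)
```

During training, `loss_ref` could be the initial (or running-maximum) loss, and the returned strength would drive the perturbation magnitude of a method like DropBlock, so no manual schedule per architecture is needed.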
A moving mesh method with variable relaxation time
We propose a moving mesh adaptive approach for solving time-dependent partial
differential equations. The motion of spatial grid points is governed by a
moving mesh PDE (MMPDE) in which a mesh relaxation time \tau is employed as a
regularization parameter. Previously reported results on MMPDEs have invariably
employed a constant value of the parameter \tau. We extend this standard
approach by incorporating a variable relaxation time that is calculated
adaptively alongside the solution in order to regularize the mesh appropriately
throughout a computation. We focus on singular problems involving self-similar
blow-up to demonstrate the advantages of using a variable relaxation time over a
fixed one in terms of accuracy, stability and efficiency.
Comment: 21 page
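The abstract does not give the formula used for the variable relaxation time; as a hedged sketch, one illustrative choice shrinks tau as the solution amplitude grows, so the mesh responds faster near blow-up. Both `adaptive_tau` and the simple explicit equidistribution step below are assumptions for illustration, not the paper's MMPDE.

```python
import numpy as np

def adaptive_tau(u, tau0=1e-2):
    """Illustrative variable relaxation time: shrink tau as the solution
    amplitude grows, so the mesh can respond faster near blow-up."""
    return tau0 / (1.0 + np.max(np.abs(u)))

def relax_mesh(x, rho, tau, dt):
    """One explicit relaxation step toward equidistribution of the
    per-interval monitor function rho (interior points only)."""
    res = rho[1:] * (x[2:] - x[1:-1]) - rho[:-1] * (x[1:-1] - x[:-2])
    x_new = x.copy()
    x_new[1:-1] += (dt / tau) * res   # small tau -> faster mesh response
    return x_new

# uniform mesh, monitor concentrated on the right half of the domain
x = np.linspace(0.0, 1.0, 11)
rho = np.where(np.linspace(0, 1, 10) > 0.5, 4.0, 1.0)
tau = adaptive_tau(np.zeros(11))
x1 = relax_mesh(x, rho, tau, dt=1e-4)
```

The residual drives neighboring intervals toward `rho_i * h_i = const`, i.e. mesh points concentrate where the monitor is large; a smaller tau makes each step move the mesh further.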
Dynamically Regularized Fast RLS with Application to Echo Cancellation
This paper introduces a dynamically regularized fast recursive least squares (DR-FRLS) adaptive filtering algorithm. Numerically stabilized FRLS algorithms exhibit reliable and fast convergence with low complexity even when the excitation signal is highly self-correlated. FRLS still suffers from instability, however, when the condition number of the implicit excitation sample covariance matrix is very high. DR-FRLS overcomes this problem with a regularization process which increases the computational complexity by only 50%. The benefits of regularization include: (1) the ability to use small forgetting factors, resulting in improved tracking ability, and (2) better convergence over the standard regularization technique of noise injection. Also, DR-FRLS allows the degree of regularization to be modified quickly without restarting the algorithm. The application of DR-FRLS to stabilizing the fast affine projection (FAP) algorithm is also discussed.
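DR-FRLS itself is a low-complexity fast algorithm; as a rough illustration of the quantities involved (forgetting factor, regularized inverse correlation matrix, gain vector), here is a minimal O(N^2) exponentially weighted RLS identifier applied to a toy echo path. All names are illustrative, and the regularization shown is only the standard regularized initialization, not the paper's dynamic scheme.

```python
import numpy as np

def rls_identify(x, d, n_taps, lam=0.98, delta=1e-2):
    """Exponentially weighted RLS: identify an FIR filter from input x and
    desired signal d; delta sets the regularized initialization of the
    inverse correlation matrix."""
    w = np.zeros(n_taps)
    P = np.eye(n_taps) / delta              # regularized P0 = (delta*I)^-1
    for i in range(n_taps - 1, len(x)):
        u = x[i - n_taps + 1:i + 1][::-1]   # newest sample first
        Pu = P @ u
        k = Pu / (lam + u @ Pu)             # gain vector
        e = d[i] - w @ u                    # a priori error
        w = w + k * e
        P = (P - np.outer(k, Pu)) / lam
    return w

# toy echo path: d is x filtered by a short FIR "echo" response
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
h = np.array([0.5, -0.3, 0.1])
d = np.convolve(x, h)[:len(x)]
w = rls_identify(x, d, 3)
```

A smaller forgetting factor `lam` tracks echo-path changes faster but amplifies the conditioning problem the abstract describes, which is exactly where stronger regularization helps.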
Image Restoration Using Joint Statistical Modeling in Space-Transform Domain
This paper presents a novel strategy for high-fidelity image restoration by
characterizing both local smoothness and nonlocal self-similarity of natural
images in a unified statistical manner. The main contributions are threefold.
First, from the perspective of image statistics, a joint statistical modeling
(JSM) in an adaptive hybrid space-transform domain is established, which offers
a powerful mechanism of combining local smoothness and nonlocal self-similarity
simultaneously to ensure a more reliable and robust estimation. Second, a new
form of minimization functional for solving the image inverse problem is
formulated using JSM under a regularization-based framework. Finally, in order
to make JSM
tractable and robust, a new Split-Bregman-based algorithm is developed to
efficiently solve the above severely underdetermined inverse problem, together
with a theoretical proof of convergence. Extensive experiments on image
inpainting, image deblurring and mixed Gaussian plus salt-and-pepper noise
removal applications verify the effectiveness of the proposed algorithm.
Comment: 14 pages, 18 figures, 7 tables, to be published in IEEE Transactions
on Circuits System and Video Technology (TCSVT). High resolution pdf version
and Code can be found at: http://idm.pku.edu.cn/staff/zhangjian/IRJSM
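The abstract does not spell out the algorithm, so as a hedged sketch of the split-Bregman machinery it references, here is the classic split-Bregman iteration for 1D total-variation denoising, a stand-in for the paper's JSM functional. The splitting variable `d`, Bregman variable `b`, and parameters `mu`, `lam` follow the standard split-Bregman pattern, not the paper's exact formulation.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: the closed-form l1 proximal step."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv1d(f, mu=10.0, lam=1.0, n_iter=100):
    """Split-Bregman for min_u (mu/2)||u - f||^2 + ||Du||_1
    using the splitting d = Du."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)       # forward-difference operator
    A = mu * np.eye(n) + lam * D.T @ D   # system matrix for the u-step
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)    # closed-form shrinkage step
        b = b + Du - d                   # Bregman variable update
    return u

# piecewise-constant signal plus noise
rng = np.random.default_rng(2)
f = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
u = split_bregman_tv1d(f)
```

The split decouples the quadratic u-step (a linear solve) from the l1 step (element-wise shrinkage), which is what makes the otherwise underdetermined problem tractable; JSM would add the nonlocal self-similarity term to the same scaffold.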