Improving Image Restoration with Soft-Rounding
Several important classes of images such as text, barcode and pattern images
have the property that pixels can only take a distinct subset of values. This
knowledge can benefit the restoration of such images, but it has not been
widely considered in current restoration methods. In this work, we describe an
effective and efficient approach to incorporate the knowledge of distinct pixel
values of the pristine images into the general regularized least squares
restoration framework. We introduce a new regularizer that attains zero at the
designated pixel values and becomes a quadratic penalty function in the
intervals between them. When incorporated into the regularized least squares
restoration framework, this regularizer leads to a simple and efficient step
that resembles and extends the rounding operation, which we term
soft-rounding. We apply the soft-rounding enhanced solution to the restoration
of binary text/barcode images and pattern images with multiple distinct pixel
values. Experimental results show that soft-rounding enhanced restoration
methods achieve significant improvement in both visual quality and quantitative
measures (PSNR and SSIM). Furthermore, we show that this regularizer can also
benefit the restoration of general natural images.
Comment: 9 pages, 6 figures
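As a concrete illustration of the step described above, the following sketch implements a soft-rounding-style update for a regularizer that vanishes at a set of designated pixel values and grows quadratically between them. The function name soft_round, the strength parameter, and the exact quadratic penalty are assumptions for illustration, not the paper's operator.

```python
import numpy as np

def soft_round(x, values, strength):
    # Sketch of a soft-rounding step (names and penalty form are assumptions,
    # not the paper's exact operator): each pixel is pulled toward its nearest
    # designated value. strength = 0 leaves x unchanged; a large strength
    # approaches hard rounding to the nearest allowed value.
    x = np.asarray(x, dtype=float)
    values = np.asarray(values, dtype=float)
    # Nearest designated pixel value for every pixel.
    nearest = values[np.abs(x[..., None] - values).argmin(axis=-1)]
    # Closed-form minimizer of 0.5*(u - x)^2 + strength*(u - nearest)^2,
    # i.e. the proximal step of a quadratic penalty centered at the nearest
    # designated value (zero at that value, quadratic in between).
    return (x + 2.0 * strength * nearest) / (1.0 + 2.0 * strength)

# Example: noisy binary text-image pixels pulled toward {0, 1}.
noisy = np.array([0.08, 0.45, 0.62, 0.97])
print(soft_round(noisy, values=[0.0, 1.0], strength=2.0))
```

With strength tied to the regularization weight, a per-pixel step of this form could slot into a regularized least squares solver, which is the role the soft-rounding step plays in the abstract above.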
Self-Paced Learning: an Implicit Regularization Perspective
Self-paced learning (SPL) mimics the cognitive mechanism of humans and
animals, which gradually learn from easy to hard samples. One key issue in SPL
is to obtain a better weighting strategy, which is determined by the minimizer
function. Existing methods usually pursue this by artificially designing the
explicit form of the SPL regularizer. In this paper, we focus on the minimizer
function and study a group of new regularizers, named self-paced implicit
regularizers, that are deduced from robust loss functions. Based on convex
conjugacy theory, the minimizer function for a self-paced implicit regularizer
can be learned directly from the latent loss function, even when the analytic
form of the regularizer is unknown. A general framework for SPL, named SPL-IR,
is developed accordingly. We demonstrate that the learning procedure of SPL-IR
is associated with latent robust loss functions and can thus provide some
theoretical insight into its working mechanism. We further analyze the
relation between SPL-IR and half-quadratic optimization. Finally, we apply
SPL-IR to both supervised and unsupervised tasks, and the experimental results
corroborate our ideas and demonstrate the correctness and effectiveness of
implicit regularizers.
Comment: 12 pages, 3 figures
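To make the implicit-regularizer idea concrete, the sketch below derives sample weights from an assumed Welsch-type robust loss via the half-quadratic view mentioned above. The choice of loss, the sigma pace parameter, and the annealing schedule are illustrative assumptions rather than the SPL-IR recipe itself.

```python
import numpy as np

def welsch_weight(loss, sigma):
    # Weight induced by an (assumed) Welsch-type robust loss under the
    # half-quadratic view: easy samples (small loss) get weights near 1,
    # hard samples are smoothly down-weighted.
    return np.exp(-loss / (sigma ** 2))

def self_paced_step(raw_losses, sigma):
    # One self-paced weighting step: the weights (the minimizer function)
    # come from the latent robust loss, so no explicit SPL regularizer has
    # to be designed by hand; this is the "implicit" regularization view.
    w = welsch_weight(raw_losses, sigma)
    return w, float(np.sum(w * raw_losses))

# Hypothetical pace schedule: growing sigma admits harder samples over time.
losses = np.array([0.1, 0.5, 2.0, 8.0])
for sigma in (1.0, 2.0, 4.0):
    weights, weighted_loss = self_paced_step(losses, sigma)
    print(f"sigma={sigma}: weights={np.round(weights, 3)}")
```

The point of the sketch is that the weighting function falls out of the latent robust loss, so no explicit SPL regularizer needs to be hand-designed, matching the perspective advocated in the abstract.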