Spectral Representations of One-Homogeneous Functionals
This paper discusses a generalization of spectral representations related to
convex one-homogeneous regularization functionals, e.g., total variation or
$\ell^1$-norms. Those functionals serve as a substitute for a Hilbert space
structure (and the related norm) in classical linear spectral transforms, e.g.
Fourier and wavelet analysis. We discuss three meaningful definitions of
spectral representations by scale space and variational methods and prove that
(nonlinear) eigenfunctions of the regularization functionals are indeed atoms
in the spectral representation. Moreover, we verify further useful properties
related to orthogonality of the decomposition and the Parseval identity.
The spectral transform is motivated by total variation and further developed
to higher order variants. Moreover, we show that the approach can recover
Fourier analysis as a special case using an appropriate choice of
one-homogeneous functional, and discuss a coupled sparsity example.
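The variational definition of the spectral representation can be sketched numerically in 1-D. Here `tv_prox` is a hypothetical helper (a Chambolle-style dual projected gradient, our assumption, not the paper's algorithm) solving the total-variation proximal problem at scale `t`, and the spectral response is obtained from second differences of the scale-space solution in `t`:

```python
import numpy as np

def tv_prox(f, t, iters=2000):
    # Solve u = argmin_u 0.5*||u - f||^2 + t*TV(u) in 1-D by projected
    # gradient on the dual variable p (one entry per jump, |p| <= 1).
    p = np.zeros(len(f) - 1)
    for _ in range(iters):
        u = f + t * np.diff(np.concatenate(([0.0], p, [0.0])))  # u = f - t*D^T p
        p = np.clip(p + np.diff(u) / (4.0 * t), -1.0, 1.0)      # step 1/(4t) <= 1/L
    return f + t * np.diff(np.concatenate(([0.0], p, [0.0])))

def spectral_response(f, scales):
    # phi(t_k) ~ t_k * d^2 u / dt^2, sampled with central second differences;
    # integrating phi over t (plus the residual mean) reconstructs f.
    us = [tv_prox(f, t) for t in scales]
    dt = scales[1] - scales[0]
    return [scales[k] * (us[k + 1] - 2 * us[k] + us[k - 1]) / dt**2
            for k in range(1, len(scales) - 1)]
```

The proximal step preserves the mean of `f` exactly (the dual term telescopes to zero), which is the discrete analogue of the decomposition leaving the lowest "frequency" in the residual.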
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. Despite considerable
progress, image deblurring, especially in the blind case, remains limited by
complex application conditions that make the blur kernel spatially variant and
hard to estimate. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures
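As a minimal illustration of the Bayesian inference category above, here is a 1-D Richardson-Lucy sketch (a classical Poisson maximum-likelihood deconvolution, chosen by us as a representative, not any specific method from the review):

```python
import numpy as np

def richardson_lucy(blurred, kernel, iters=100, eps=1e-12):
    # Maximum-likelihood deblurring under a Poisson noise model,
    # via multiplicative (EM) updates; kernel[::-1] is the adjoint blur.
    est = np.full_like(blurred, blurred.mean())
    for _ in range(iters):
        reblurred = np.convolve(est, kernel, mode="same")
        ratio = blurred / (reblurred + eps)
        est = est * np.convolve(ratio, kernel[::-1], mode="same")
    return est
```

For noiseless data the iterates progressively concentrate the blurred energy back toward the latent sharp signal; in practice early stopping acts as implicit regularization against noise amplification.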
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can directly convolve with naturally
blurred images for restoration. The problem of optical blurring is a common
disadvantage to many imaging applications that suffer from optical
imperfections. Although numerous deconvolution methods blindly estimate the
blur in either inclusive or exclusive forms, they remain practically
challenging due to high computational cost and low image reconstruction
quality. Both high accuracy and high speed are prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition before being stored, previewed,
or processed for high-level interpretation. Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem for image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for two
Gaussian and Laplacian models that are common in many imaging pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods.
Comment: 15 pages, for publication in IEEE Transactions on Image Processing
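The even-derivative idea can be illustrated with a deliberately simplified 1-D sketch (our simplification, not the paper's synthesized kernel): a Gaussian blur has frequency response exp(-s^2 w^2 / 2), so its first-order inverse is 1 + (s^2/2) w^2, and the discrete Laplacian stencil [1, -2, 1] (an even-derivative FIR filter) realizes -w^2:

```python
import numpy as np

def deblur_kernel(sigma2):
    # First-order FIR approximation to the inverse Gaussian blur:
    # delta - (sigma^2 / 2) * laplacian, a 3-tap even-derivative kernel
    # that boosts the frequency fall-off of the PSF.
    delta = np.array([0.0, 1.0, 0.0])
    lap = np.array([1.0, -2.0, 1.0])
    return delta - 0.5 * sigma2 * lap
```

One-shot convolution with this kernel sharpens a blurred signal directly, with no iterative optimization; the kernel sums to one, so the overall brightness is preserved.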
The Radio Sky at Meter Wavelengths: m-Mode Analysis Imaging with the Owens Valley Long Wavelength Array
A host of new low-frequency radio telescopes seek to measure the 21-cm
transition of neutral hydrogen from the early universe. These telescopes have
the potential to directly probe star and galaxy formation at high redshifts, but are limited by the dynamic range they can achieve
against foreground sources of low-frequency radio emission. Consequently, there
is a growing demand for modern, high-fidelity maps of the sky at frequencies
below 200 MHz for use in foreground modeling and removal. We describe a new
widefield imaging technique for drift-scanning interferometers,
Tikhonov-regularized m-mode analysis imaging. This technique constructs
images of the entire sky in a single synthesis imaging step with exact
treatment of widefield effects. We describe how the CLEAN algorithm can be
adapted to deconvolve maps generated by m-mode analysis imaging. We
demonstrate Tikhonov-regularized m-mode analysis imaging using the Owens
Valley Long Wavelength Array (OVRO-LWA) by generating 8 new maps of the
northern sky with 15 arcmin angular resolution, at frequencies
evenly spaced between 36.528 MHz and 73.152 MHz, and 800 mJy/beam thermal
noise. These maps are a 10-fold improvement in angular resolution over
existing full-sky maps at comparable frequencies. Each map is constructed
exclusively from interferometric observations
and does not represent the globally averaged sky brightness. Future
improvements will incorporate total power radiometry, improved thermal noise,
and improved angular resolution -- due to the planned expansion of the OVRO-LWA
to 2.6 km baselines. These maps serve as a first step on the path to the use of
more sophisticated foreground filters in 21-cm cosmology incorporating the
measured angular and frequency structure of all foreground contaminants.
Comment: 27 pages, 18 figures
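The Tikhonov-regularized inversion at the heart of this imaging step can be sketched in miniature. Here `B` is a stand-in for the telescope's m-mode transfer matrix (its shape and contents are hypothetical), mapping sky coefficients `a` to measured visibilities `v`; regularization suppresses poorly measured modes:

```python
import numpy as np

def tikhonov_solve(B, v, eps):
    # a_hat = argmin_a ||B a - v||^2 + eps * ||a||^2, computed through
    # the SVD filter s / (s^2 + eps): modes with s^2 << eps are damped
    # instead of amplified, which controls the noise in the maps.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Vt.T @ ((s / (s**2 + eps)) * (U.T @ v))
```

As `eps -> 0` this reduces to ordinary least squares; increasing `eps` trades resolution for stability, which is the same trade the deconvolution (CLEAN) step then partially undoes.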
2-D Prony-Huang Transform: A New Tool for 2-D Spectral Analysis
This work proposes an extension of the 1-D Hilbert-Huang transform to the
analysis of images. The proposed method consists of (i) adaptively decomposing
an image into oscillating parts called intrinsic mode functions (IMFs) using a
mode decomposition procedure, and (ii) providing a local spectral analysis of
the obtained IMFs in order to get the local amplitudes, frequencies, and
orientations. For the decomposition step, we propose two robust 2-D mode
decompositions based on non-smooth convex optimization: a "Genuine 2-D"
approach, that constrains the local extrema of the IMFs, and a "Pseudo 2-D"
approach, which constrains separately the extrema of lines, columns, and
diagonals. The spectral analysis step is based on the Prony annihilation
property, applied to small square patches of the IMFs. The resulting 2-D
Prony-Huang transform is validated on simulated and real data.
Comment: 24 pages, 7 figures