Reweighted nuclear norm regularization: A SPARSEVA approach
The aim of this paper is to develop a method to estimate high-order FIR and
ARX models using least squares with reweighted nuclear norm regularization.
Typically, the choice of the tuning parameter in the reweighting scheme is
computationally expensive, hence we propose the use of the SPARSEVA (SPARSe
Estimation based on a VAlidation criterion) framework to overcome this problem.
Furthermore, we suggest the use of the prediction error criterion (PEC) to
select the tuning parameter in the SPARSEVA algorithm. Numerical examples
demonstrate the effectiveness of this method, which has close ties to the
traditional technique of cross-validation but requires far less computation.
Comment: This paper is accepted and will be published in the Proceedings of
the 17th IFAC Symposium on System Identification (SYSID 2015), Beijing,
China, 2015
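A minimal sketch of this type of estimator, written with NumPy and cvxpy: the FIR coefficients are found by minimizing a reweighted nuclear norm of a Hankel matrix built from them (low rank corresponds to a low-order underlying system), subject to a SPARSEVA-style constraint that keeps the squared residual within a factor (1 + chi) of the unregularized least-squares residual. The Hankel structure, the particular reweighting rule, and all names and parameters (sparseva_nuclear_fir, chi, n_reweight) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
import cvxpy as cp

def inv_sqrt(A, eps):
    """(A + eps*I)^(-1/2) for a symmetric PSD matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, 0.0) + eps)) @ V.T

def sparseva_nuclear_fir(y, u, n=20, chi=0.05, n_reweight=3, eps=1e-3):
    """Fit an order-n FIR model y[t] ~ sum_k g[k] u[t-k] by minimizing a
    reweighted nuclear norm of a Hankel matrix of the coefficients, subject
    to a SPARSEVA-style constraint on the squared residual."""
    N = len(y)
    # Regression matrix of delayed inputs (FIR predictor)
    Phi = np.column_stack(
        [np.concatenate([np.zeros(k), u[:N - k]]) for k in range(n)]
    )

    # Unregularized least squares sets the SPARSEVA residual budget
    g_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    budget = (1.0 + chi) * np.sum((y - Phi @ g_ls) ** 2)

    g = cp.Variable(n)
    r, c = n // 2, n - n // 2
    # Hankel matrix of the impulse-response coefficients
    H = cp.bmat([[g[i + j] for j in range(c)] for i in range(r)])

    W_l, W_r = np.eye(r), np.eye(c)
    for _ in range(n_reweight):
        prob = cp.Problem(
            cp.Minimize(cp.normNuc(W_l @ H @ W_r)),
            [cp.sum_squares(y - Phi @ g) <= budget],
        )
        prob.solve()
        # Reweight from the current estimate so the penalty better mimics rank
        Hv = np.asarray(H.value)
        W_l = inv_sqrt(Hv @ Hv.T, eps)
        W_r = inv_sqrt(Hv.T @ Hv, eps)
    return g.value
```

Because the fit is constrained by the least-squares residual rather than penalized with a swept regularization weight, no grid search or cross-validation over a tuning parameter is needed, which is the computational point made in the abstract.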
Image interpolation using Shearlet based iterative refinement
This paper proposes an image interpolation algorithm exploiting sparse
representation for natural images. It involves three main steps: (a) obtaining
an initial estimate of the high-resolution image using linear methods like FIR
filtering, (b) promoting sparsity in a selected dictionary through iterative
thresholding, and (c) extracting high frequency information from the
approximation to refine the initial estimate. For the sparse modeling, a
shearlet dictionary is chosen to yield a multiscale directional representation.
The proposed algorithm is compared to several state-of-the-art methods to
assess its objective as well as subjective performance. Compared to the cubic
spline interpolation method, an average PSNR gain of around 0.8 dB is observed
over a dataset of 200 images.
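The three steps lend themselves to a short sketch. The Python code below substitutes a separable wavelet frame (PyWavelets) for the shearlet dictionary and a Gaussian high-pass for the high-frequency extraction; the function name, the decreasing threshold schedule, and all parameters are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter, zoom

def sparse_refine_upscale(lr, factor=2, wavelet="db4", level=3,
                          n_iter=30, t0=10.0):
    """Upscale a grayscale image following the three steps above:
    (a) cubic-spline initial estimate, (b) iterative thresholding in a
    sparsifying dictionary (a wavelet frame stands in for the shearlet
    dictionary), (c) add the high-frequency part of the sparse
    approximation back onto the initial estimate."""
    # (a) initial high-resolution estimate via a linear (cubic spline) interpolator
    init = zoom(lr.astype(float), factor, order=3)
    hr = init.copy()

    for k in range(n_iter):
        # (b) promote sparsity: hard-threshold the detail coefficients with a
        # threshold that decreases over the iterations
        t = t0 * (1.0 - k / n_iter)
        coeffs = pywt.wavedec2(hr, wavelet, level=level)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, t, mode="hard") for d in detail)
            for detail in coeffs[1:]
        ]
        approx = pywt.waverec2(coeffs, wavelet)[: init.shape[0], : init.shape[1]]

        # (c) extract the high-frequency content of the sparse approximation
        # (here via a Gaussian high-pass) and refine the initial estimate with it
        hr = init + (approx - gaussian_filter(approx, sigma=factor))
    return hr
```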
Convolutional Deblurring for Natural Imaging
In this paper, we propose a novel design of image deblurring in the form of
one-shot convolution filtering that can be applied directly to naturally
blurred images for restoration. Optical blurring is a common drawback in many
imaging applications that suffer from optical imperfections. Although numerous
deconvolution methods blindly estimate the blur in either inclusive or
exclusive forms, they are of limited practical use due to their high
computational cost and low reconstruction quality. Both high accuracy and high
speed are prerequisites for
high-throughput imaging platforms in digital archiving. In such platforms,
deblurring is required after image acquisition before being stored, previewed,
or processed for high-level interpretation. Therefore, on-the-fly correction of
such images is important to avoid possible time delays, mitigate computational
expenses, and increase image perception quality. We bridge this gap by
synthesizing a deconvolution kernel as a linear combination of Finite Impulse
Response (FIR) even-derivative filters that can be directly convolved with
blurry input images to boost the frequency fall-off of the Point Spread
Function (PSF) associated with the optical blur. We employ a Gaussian low-pass
filter to decouple the image denoising problem for image edge deblurring.
Furthermore, we propose a blind approach to estimate the PSF statistics for the
Gaussian and Laplacian models that are common in many imaging pipelines.
Thorough experiments are designed to test and validate the efficiency of the
proposed method using 2054 naturally blurred images across six imaging
applications and seven state-of-the-art deconvolution methods.
Comment: 15 pages, for publication in IEEE Transactions on Image Processing
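A minimal sketch of the kernel-synthesis idea, assuming a Gaussian PSF: the inverse frequency response exp(sigma^2 w^2 / 2) is truncated to a few series terms, each even power of w is realized by a finite-difference even-derivative FIR stencil, and the resulting small separable kernel is convolved once with the blurry image. The function names, the truncation order, and the omission of the paper's denoising decoupling and blind PSF estimation are simplifications for illustration.

```python
import math
import numpy as np
from scipy.signal import fftconvolve

def even_derivative_kernel(sigma, order=4):
    """1-D FIR sharpening kernel built as a linear combination of even-derivative
    stencils, from a truncated series for the inverse Gaussian response:
    exp(sigma^2 w^2 / 2) ~= sum_k (sigma^2 w^2 / 2)^k / k!,  with w^(2k)
    realized by (-1)^k times the 2k-th derivative filter."""
    d2 = np.array([1.0, -2.0, 1.0])           # second-derivative stencil
    stencils = {0: np.array([1.0])}           # identity (delta) term
    s = np.array([1.0])
    for k in range(1, order // 2 + 1):
        s = np.convolve(s, d2)                # 2k-th derivative stencil
        stencils[k] = s

    length = len(stencils[order // 2])
    kernel = np.zeros(length)
    for k, st in stencils.items():
        w = (-sigma**2 / 2.0) ** k / math.factorial(k)
        pad = (length - len(st)) // 2
        kernel[pad:pad + len(st)] += w * st   # centre-align and accumulate
    return kernel

def deblur(image, sigma=1.0, order=4):
    """One-shot deblurring: convolve the blurry image directly with a separable
    2-D kernel synthesized for an (assumed) Gaussian PSF of width sigma."""
    k1d = even_derivative_kernel(sigma, order)
    return fftconvolve(image, np.outer(k1d, k1d), mode="same")
```

Since the correction is a single small convolution rather than an iterative deconvolution, it can run on-the-fly at acquisition time, which is the throughput argument made in the abstract.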
Stochastic Analysis of the LMS Algorithm for System Identification with Subspace Inputs
This paper studies the behavior of the low-rank LMS adaptive algorithm for the
general case in which the input transformation may not capture the exact input
subspace. It is shown that the Independence Theory and the independent additive
noise model are not applicable to this case. A new theoretical model for the
weight mean and fluctuation behaviors is developed that incorporates the
correlation between successive data vectors (as opposed to the Independence
Theory model). The new theory is applied to a network echo cancellation scheme
which uses partial-Haar input vector transformations. Comparison of the new
model predictions with Monte Carlo simulations shows good-to-excellent
agreement, much better than that predicted by the Independence Theory-based
model available in the literature.
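A sketch of the kind of low-rank adaptive filter analyzed here: the tapped-delay-line regressor is projected onto a lower-dimensional subspace by a wide transform, and LMS adapts the reduced weight vector. The block-averaging "partial Haar" matrix and all names below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def partial_haar(r, N):
    """One simple (assumed) partial-Haar transform: r orthonormal rows, each
    averaging a block of N // r consecutive taps (Haar scaling functions)."""
    b = N // r
    T = np.zeros((r, N))
    for i in range(r):
        T[i, i * b:(i + 1) * b] = 1.0 / np.sqrt(b)
    return T

def low_rank_lms(x, d, T, mu=0.01):
    """Low-rank LMS: the length-N regressor is projected by a wide transform T
    (r x N) and a short length-r weight vector is adapted. When T does not
    capture the exact input subspace, successive transformed regressors stay
    correlated, which is the situation the paper analyzes."""
    r, N = T.shape
    w = np.zeros(r)
    e = np.zeros(len(d))
    xbuf = np.zeros(N)
    for n in range(len(d)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]              # update the tapped delay line
        u = T @ xbuf                # reduced-dimension input vector
        y = w @ u                   # adaptive filter output
        e[n] = d[n] - y             # a-priori error (e.g. residual echo)
        w = w + mu * e[n] * u       # LMS update in the subspace
    return w, e
```

In an echo-cancellation setting, x would be the far-end signal, d the near-end signal containing the echo, and T a partial-Haar transform whose rows only approximately span the echo-path subspace.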