Robust Estimation and Wavelet Thresholding in Partial Linear Models
This paper is concerned with a semiparametric partially linear regression
model with unknown regression coefficients, an unknown nonparametric function
for the non-linear component, and unobservable Gaussian distributed random
errors. We present a wavelet thresholding based estimation procedure to
estimate the components of the partial linear model by establishing a
connection between an ℓ1-penalty based wavelet estimator of the
nonparametric component and Huber's M-estimation of a standard linear model
with outliers. Some general results on the large sample properties of the
estimates of both the parametric and the nonparametric part of the model are
established. Simulations and a real example are used to illustrate the general
results and to compare the proposed methodology with other methods available in
the recent literature.
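The link between ℓ1-penalised wavelet estimation and robust estimation rests on the soft-thresholding operator. A minimal numpy sketch of that operator (an illustration of the general idea, not the paper's procedure; the coefficient values and threshold are assumptions for demonstration):

```python
import numpy as np

def soft_threshold(y, lam):
    """Closed-form minimizer over x of 0.5*(y - x)**2 + lam*|x|:
    shrink y toward zero by lam, setting small values exactly to zero."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Noisy wavelet coefficients: small ones (noise-dominated) are zeroed,
# large ones (signal-dominated) survive, shrunk by lam.
coeffs = np.array([0.05, -0.2, 3.0, -4.5])
denoised = soft_threshold(coeffs, lam=0.5)
# denoised == [0.0, 0.0, 2.5, -4.0]
```

Applied coordinate-wise to wavelet coefficients, this is exactly the estimator an ℓ1 penalty produces, which is what enables the connection to M-estimation in a linear model with outliers.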
Filter Bank Fusion Frames
In this paper we characterize and construct novel oversampled filter banks
implementing fusion frames. A fusion frame is a sequence of orthogonal
projection operators whose sum can be inverted in a numerically stable way.
When properly designed, fusion frames can provide redundant encodings of
signals which are optimally robust against certain types of noise and erasures.
However, up to this point, few implementable constructions of such frames were
known; we show how to construct them using oversampled filter banks. In this
work, we first provide polyphase domain characterizations of filter bank fusion
frames. We then use these characterizations to construct filter bank fusion
frame versions of discrete wavelet and Gabor transforms, emphasizing those
specific finite impulse response filters whose frequency responses are
well-behaved.
Comment: keywords: filter banks, frames, tight, fusion, erasures, polyphase
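The defining property of a fusion frame — a sum of orthogonal projections that is stably invertible — can be checked numerically. A toy numpy sketch (the three subspaces of R^3 are illustrative assumptions, not a construction from the paper):

```python
import numpy as np

def projector(basis):
    """Orthogonal projector onto the span of the columns of `basis`."""
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

# Three coordinate planes of R^3 as the fusion-frame subspaces.
P1 = projector(np.array([[1., 0.], [0., 1.], [0., 0.]]))  # xy-plane
P2 = projector(np.array([[1., 0.], [0., 0.], [0., 1.]]))  # xz-plane
P3 = projector(np.array([[0., 0.], [1., 0.], [0., 1.]]))  # yz-plane

S = P1 + P2 + P3              # the fusion-frame operator
cond = np.linalg.cond(S)      # ratio of frame bounds; finite => invertible
# Here S == 2*I, so cond == 1: a tight fusion frame.
```

A condition number of 1 means the two frame bounds coincide, which is the "optimally robust" tight case the abstract alludes to.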
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, and computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics
and Vision
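Sparse coding — representing a signal as a linear combination of a few dictionary atoms — can be sketched with orthogonal matching pursuit, one standard greedy solver for the coding step. The dictionary below is hand-built for illustration (the monograph's focus is on dictionaries learned from data):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily select k atoms (columns of D),
    refitting x on the selected support by least squares each iteration."""
    support = []
    residual = x.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# Dictionary: the standard basis of R^4 plus one "mixed" unit-norm atom.
mixed = (np.eye(4)[:, 0] + np.eye(4)[:, 1]) / np.sqrt(2)
D = np.hstack([np.eye(4), mixed.reshape(-1, 1)])
x = 3.0 * D[:, 0] + 1.0 * D[:, 1]   # a 2-sparse signal
code = omp(D, x, k=2)               # recovers the sparse representation
```

The recovered `code` is zero everywhere except on the two atoms that generated `x`, which is the "few dictionary elements" property the abstract describes.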
Efficient Algorithms for CUR and Interpolative Matrix Decompositions
The manuscript describes efficient algorithms for the computation of the CUR
and ID decompositions. The methods used are based on simple modifications to
the classical truncated pivoted QR decomposition, which means that highly
optimized library codes can be utilized for implementation. For certain
applications, further acceleration can be attained by incorporating techniques
based on randomized projections. Numerical experiments demonstrate advantageous
performance compared to existing techniques for computing CUR factorizations.
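The basic mechanism — selecting skeleton columns with a column-pivoted QR — can be sketched as follows. This is a simplified rank-k column interpolative decomposition assuming the input has exact rank k, not the manuscript's optimized algorithms:

```python
import numpy as np
from scipy.linalg import qr

def column_id(A, k):
    """Rank-k column ID, A ~= A[:, cols] @ X, from a pivoted QR:
    the first k pivot columns form the skeleton; X expresses every
    column of A in terms of those k columns."""
    _, R, piv = qr(A, pivoting=True, mode='economic')
    T = np.linalg.solve(R[:k, :k], R[:k, k:])   # expansion coefficients
    X = np.zeros((k, A.shape[1]))
    X[:, piv[:k]] = np.eye(k)                   # skeleton columns reproduce themselves
    X[:, piv[k:]] = T
    return piv[:k], X

# A rank-2 test matrix: with k=2 the decomposition is exact.
A = np.outer([1., 2., 3., 4.], [1., 2., 3., 4., 5., 6.]) \
    + np.outer([1., 1., 1., 1.], [0., 1., 4., 9., 16., 25.])
cols, X = column_id(A, k=2)
```

Because the heavy lifting is a pivoted QR, a highly optimized LAPACK routine does the work — which is the implementation point the abstract makes.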
Projection-Based and Look Ahead Strategies for Atom Selection
In this paper, we improve iterative greedy search algorithms in which atoms
are selected serially, i.e., one at a time over the iterations. For
serial atom selection, we devise two new schemes to select an atom from a set
of potential atoms in each iteration. The two new schemes lead to two new
algorithms. For both the algorithms, in each iteration, the set of potential
atoms is found using a standard matched filter. In the first scheme, we
propose an orthogonal projection strategy that selects an atom from the set of
potential atoms. Then, for the second scheme, we propose a look ahead strategy
such that the selection of an atom in the current iteration has an effect on
the future iterations. The look-ahead strategy, however, requires more
computation. To achieve a trade-off between performance and
complexity, we use the two new schemes in cascade and develop a third new
algorithm. Through experimental evaluations, we compare the proposed algorithms
with existing greedy search and convex relaxation algorithms.
Comment: sparsity, compressive sensing; IEEE Trans on Signal Processing 201
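The flavor of these selection rules can be sketched as a matched filter proposing candidates and a projection-based criterion choosing among them. This is an illustration of the general idea only, not the authors' algorithms; the dictionary, signal, and candidate count L are assumptions:

```python
import numpy as np

def select_atom(D, x, support, L=3):
    """Pick the next atom: the matched filter D.T @ residual proposes the
    top-L candidate atoms; the one whose orthogonal projection (jointly
    with the current support) leaves the smallest residual is selected."""
    if support:
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    else:
        residual = x.copy()
    scores = np.abs(D.T @ residual)          # standard matched filter
    scores[support] = -np.inf                # never re-select an atom
    candidates = np.argsort(scores)[-L:]     # top-L potential atoms
    best_j, best_norm = int(candidates[-1]), np.inf
    for j in candidates:
        trial = support + [int(j)]
        coef, *_ = np.linalg.lstsq(D[:, trial], x, rcond=None)
        norm = np.linalg.norm(x - D[:, trial] @ coef)
        if norm < best_norm:
            best_j, best_norm = int(j), norm
    return best_j

# Standard basis of R^4 plus one "mixed" unit-norm atom.
mixed = np.array([[1.], [1.], [0.], [0.]]) / np.sqrt(2)
D = np.hstack([np.eye(4), mixed])
x = np.array([3., 1., 0., 0.])
first = select_atom(D, x, support=[])        # projection-based first pick
second = select_atom(D, x, support=[first])  # then extend the support
```

The projection step rescues cases where the raw matched-filter maximum is a misleading "mixed" atom; a look-ahead scheme goes further by scoring each candidate on how well future iterations can finish the job.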