2,990 research outputs found
Theoretical Interpretations and Applications of Radial Basis Function Networks
Medical applications have usually employed Radial Basis Function Networks simply as Artificial Neural Networks. However, RBFNs are Knowledge-Based Networks that can be interpreted in several ways: Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, and Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, as well as a brief survey of dynamic learning algorithms. RBFNs' interpretations can suggest applications that are particularly interesting in medical domains.
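As a concrete illustration of the neural-network reading of an RBFN, here is a minimal sketch (function names and hyperparameters are illustrative, not from the survey): Gaussian basis functions with fixed centers and widths, and output weights fit by linear least squares.

```python
import numpy as np

def rbf_design(X, centers, widths):
    # Gaussian activations: phi_j(x) = exp(-||x - c_j||^2 / (2 s_j^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def rbfn_fit(X, y, centers, widths):
    # With centers and widths fixed, the output weights solve a
    # linear least-squares problem
    phi = rbf_design(X, centers, widths)
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return w

def rbfn_predict(X, centers, widths, w):
    return rbf_design(X, centers, widths) @ w
```

Fixing the centers and widths is what makes the output layer linear in the weights; the other interpretations listed above (regularization networks, kernel estimators, and so on) correspond to different choices of how those basis functions are placed and penalized.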
A proximal iteration for deconvolving Poisson noisy images using sparse representations
We propose an image deconvolution algorithm when the data is contaminated by
Poisson noise. The image to restore is assumed to be sparsely represented in a
dictionary of waveforms such as the wavelet or curvelet transforms. Our key
contributions are: First, we handle the Poisson noise properly by using the
Anscombe variance stabilizing transform, leading to a {\it non-linear}
degradation equation with additive Gaussian noise. Second, the deconvolution
problem is formulated as the minimization of a convex functional with a
data-fidelity term reflecting the noise properties and a non-smooth
sparsity-promoting penalty over the image representation coefficients (e.g.
the $\ell_1$-norm). Third, a fast iterative backward-forward splitting algorithm is
proposed to solve the minimization problem. We derive existence and uniqueness
conditions of the solution, and establish convergence of the iterative
algorithm. Finally, a GCV-based model selection procedure is proposed to
objectively select the regularization parameter. Experimental results are
carried out to show the striking benefits gained from taking into account the
Poisson statistics of the noise. These results also suggest that using
sparse-domain regularization may be tractable in many deconvolution
applications with Poisson noise, such as astronomy and microscopy.
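The variance-stabilization step above is easy to sketch. Here is a minimal version of the Anscombe transform with a simple algebraic inverse (the paper's full pipeline, dictionary and splitting solver included, is not reproduced here):

```python
import numpy as np

def anscombe(y):
    # Maps Poisson counts to data with approximately unit-variance
    # Gaussian noise
    return 2.0 * np.sqrt(y + 3.0 / 8.0)

def inverse_anscombe(z):
    # Direct algebraic inverse; biased at low counts
    # (more careful unbiased inverses exist)
    return (z / 2.0) ** 2 - 3.0 / 8.0
```

For moderately large Poisson means the stabilized variance is close to 1, which is what makes an additive-Gaussian formulation of the deconvolution problem reasonable.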
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
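The sparse-coding step described above, with a fixed dictionary and an $\ell_1$ penalty, can be sketched with iterative soft thresholding (ISTA); the dictionary-learning step the monograph focuses on is not shown, and all names are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam, n_iter=200):
    # ISTA for min_a 0.5 ||x - D a||^2 + lam ||a||_1
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # Gradient step on the data term, then proximal (soft-threshold) step
        a = soft_threshold(a + D.T @ (x - D @ a) / L, lam / L)
    return a
```

Dictionary learning then alternates this coding step with updates of the columns of `D`, which is what yields the compact, data-adapted representations mentioned above.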
Tomographic inversion using $\ell_1$-norm regularization of wavelet coefficients
We propose the use of $\ell_1$ regularization in a wavelet basis for the
solution of linearized seismic tomography problems, allowing for the
possibility of sharp discontinuities superimposed on a smoothly varying
background. An iterative method is used to find a sparse solution that
contains no more fine-scale structure than is necessary to fit the data to
within its assigned errors.
Comment: 19 pages, 14 figures. Submitted to GJI July 2006. This preprint does not use GJI style files (which gives wrong received/accepted dates). Corrected typos.
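The iterative method alluded to above can be sketched as soft-thresholded Landweber iteration on wavelet coefficients. Below is a toy version using a one-level Haar transform and a synthetic linear operator (the operator, sizes, and threshold are illustrative, not the seismic setup of the paper):

```python
import numpy as np

def haar_matrix(n):
    # One-level orthonormal Haar analysis matrix (n even):
    # first n/2 rows are pairwise averages, last n/2 rows are differences
    W = np.zeros((n, n))
    for i in range(n // 2):
        W[i, 2 * i] = W[i, 2 * i + 1] = 1.0 / np.sqrt(2.0)
        W[n // 2 + i, 2 * i] = 1.0 / np.sqrt(2.0)
        W[n // 2 + i, 2 * i + 1] = -1.0 / np.sqrt(2.0)
    return W

def l1_wavelet_inversion(A, d, tau, n_iter=500):
    # min_w 0.5 ||A W^T w - d||^2 + tau ||w||_1, with model m = W^T w
    W = haar_matrix(A.shape[1])
    B = A @ W.T
    L = np.linalg.norm(B, 2) ** 2
    w = np.zeros(B.shape[1])
    for _ in range(n_iter):
        v = w + B.T @ (d - B @ w) / L   # Landweber (gradient) step
        w = np.sign(v) * np.maximum(np.abs(v) - tau / L, 0.0)  # threshold
    return W.T @ w
```

A piecewise-constant model has sparse Haar detail coefficients, so thresholding suppresses fine-scale structure unless the data demand it, which is the behavior the abstract describes.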
Elastic-Net Regularization in Learning Theory
Within the framework of statistical learning theory we analyze in detail the
so-called elastic-net regularization scheme proposed by Zou and Hastie for the
selection of groups of correlated variables. To investigate the statistical
properties of this scheme, and in particular its consistency properties, we
set up a suitable mathematical framework. Our setting is random-design
regression where we allow the response variable to be vector-valued and we
consider prediction functions which are linear combinations of elements ({\em
features}) in an infinite-dimensional dictionary. Under the assumption that the
regression function admits a sparse representation on the dictionary, we prove
that there exists a particular ``{\em elastic-net representation}'' of the
regression function such that, if the number of data increases, the elastic-net
estimator is consistent not only for prediction but also for variable/feature
selection. Our results include finite-sample bounds and an adaptive scheme to
select the regularization parameter. Moreover, using convex analysis tools, we
derive an iterative thresholding algorithm for computing the elastic-net
solution which is different from the optimization procedure originally proposed
by Zou and Hastie.
Comment: 32 pages, 3 figures
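An iterative thresholding scheme of the kind derived in the paper can be sketched as follows. This is a generic ISTA-style update for the elastic-net objective, not necessarily the authors' exact algorithm, and all names are illustrative:

```python
import numpy as np

def elastic_net_ista(X, y, lam1, lam2, n_iter=500):
    # min_b 0.5 ||y - X b||^2 + lam1 ||b||_1 + (lam2 / 2) ||b||^2
    L = np.linalg.norm(X, 2) ** 2
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        v = b + X.T @ (y - X @ b) / L   # gradient step on the data term
        # Proximal step: soft-threshold for the l1 term,
        # multiplicative shrinkage for the l2 term
        b = np.sign(v) * np.maximum(np.abs(v) - lam1 / L, 0.0) / (1.0 + lam2 / L)
    return b
```

The $\ell_2$ term makes the objective strictly convex, so the minimizer is unique even when features are highly correlated, which is what allows correlated variables to be selected as a group.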