Asymptotic analysis of the autocorrelation function of the wavelet packet coefficients of a band-limited, wide-sense stationary random process
In this paper, we analyse the statistical correlation of the wavelet packet coefficients of a wide-sense stationary random process whose power spectral density is supported in [-π, π]. Consider two quadrature mirror filters (QMFs) depending on a parameter r, such that these filters tend almost everywhere to the ideal Shannon QMFs as r increases; the parameter r is called the order of the QMFs under consideration. The order of the Daubechies filters is the number of vanishing moments of the wavelet function; the order of the Battle-Lemarié filters is the order of the spline associated with the scaling function. Given a path in the wavelet packet decomposition tree, we show that the wavelet packet coefficients tend to decorrelate for every packet associated with a sufficiently high resolution level, provided that the order of the QMFs is itself sufficiently large and exceeds a value that depends on the wavelet packet under consideration. Another consequence of this result is the following: when the coefficients of a given wavelet packet are approximately decorrelated, the value of the autocorrelation function at 0 is close to the value of the power spectral density of the process at a point that can be determined explicitly; this point depends on the path followed through the decomposition tree to reach the wavelet packet in question. Simulations illustrate the good quality of the whitening effect obtained in practice.
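The whitening effect can be probed numerically in a few lines. The following is a minimal sketch (not from the paper) using PyWavelets: the choice of db20 as a high-order QMF pair, the decomposition depth, and the all-approximation path are illustrative assumptions.

```python
# Sketch: empirical check of the whitening of deep wavelet packet coefficients
# for a band-limited WSS process. Wavelet, level, and path are demo choices.
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Band-limited wide-sense stationary input: white noise through a low-pass FIR.
x = np.convolve(rng.standard_normal(2**16), np.ones(8) / 8, mode="same")

# High-order Daubechies QMFs approximate the ideal Shannon filters.
wp = pywt.WaveletPacket(data=x, wavelet="db20", mode="periodization", maxlevel=6)
c = wp["aaaaaa"].data  # coefficients of one packet at resolution level 6

# Normalized autocorrelation of the packet coefficients: close to a unit
# impulse at lag 0 when the coefficients are approximately decorrelated.
c = c - c.mean()
acf = np.correlate(c, c, mode="full") / (c.var() * len(c))
mid = len(acf) // 2
print("lag 0:", acf[mid].round(3),
      "| max |lag > 0|:", np.abs(acf[mid + 1 : mid + 20]).max().round(3))
```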
An Examination of Some Significant Approaches to Statistical Deconvolution
We examine statistical approaches to two significant areas of deconvolution for stochastic stationary signals: Blind Deconvolution (BD) and Robust Deconvolution (RD). For BD, we review some major classical and new methods in a unified framework of non-Gaussian signals. The first class of algorithms we examine comprises the Minimum Entropy Deconvolution (MED) algorithms. We discuss the similarities between these algorithms despite their differing origins and motivations. We give new theoretical results concerning their behaviour and generality, and give evidence of scenarios where they may fail. In some cases, we present new modifications to the algorithms to overcome these shortfalls.
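For readers wanting a concrete reference point, here is a compact NumPy sketch of a Wiggins-style MED iteration (our illustrative reconstruction, not the thesis' code): the filter is chosen to maximise the kurtosis-like varimax norm of the deconvolved output by repeatedly solving a Toeplitz system of normal equations.

```python
# Wiggins-style Minimum Entropy Deconvolution: find an FIR filter f that
# maximizes sum(y**4)/sum(y**2)**2 for y = f * x. Illustrative sketch only.
import numpy as np
from scipy.linalg import solve_toeplitz

def med_filter(x, L=30, n_iter=30):
    N = len(x)
    r = np.array([np.dot(x[k:], x[: N - k]) for k in range(L)])  # autocorr lags 0..L-1
    f = np.zeros(L)
    f[L // 2] = 1.0                                              # spike initialization
    for _ in range(n_iter):
        y = np.convolve(x, f)[:N]
        u = y ** 3
        g = np.array([np.dot(u[k:], x[: N - k]) for k in range(L)])
        f = solve_toeplitz(r, (y @ y) / (y @ u) * g)             # normal equations
        f /= np.linalg.norm(f)
    return f

# Toy example: a sparse (non-Gaussian) source smeared by a mixing filter.
rng = np.random.default_rng(1)
s = rng.standard_normal(2000) * (rng.random(2000) < 0.05)
x = np.convolve(s, [1.0, -0.6, 0.3], mode="same")
y = np.convolve(x, med_filter(x))[: len(x)]
print("kurtosis in/out:", (np.mean(x**4) / np.mean(x**2) ** 2).round(2),
      (np.mean(y**4) / np.mean(y**2) ** 2).round(2))
```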
Following our discussion of the MED algorithms, we next look at a recently proposed BD algorithm based on the correntropy function, a function defined as a combination of the autocorrelation and the entropy functions. We examine its BD performance compared with the MED algorithms. We find that BD carried out via correntropy-matching cannot be straightforwardly interpreted as simultaneous moment-matching, due to the breakdown of the expansion of correntropy in terms of moments. Other issues, such as the maximum/minimum phase ambiguity and computational complexity, suggest that careful attention is required before establishing the correntropy algorithm as a superior alternative to existing BD techniques.
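For reference, the sample correntropy with a Gaussian kernel is straightforward to compute; the sketch below is our own minimal implementation, with the kernel width sigma left as a free parameter. Expanding the Gaussian kernel as a power series expresses correntropy through all even moments of the increments, and it is the breakdown of this expansion that is at issue above.

```python
# Sample correntropy V(tau) = mean_t exp(-(x[t] - x[t+tau])^2 / (2 sigma^2)).
import numpy as np

def correntropy(x, max_lag, sigma=1.0):
    return np.array([
        np.mean(np.exp(-((x[: len(x) - tau] - x[tau:]) ** 2) / (2.0 * sigma**2)))
        for tau in range(max_lag + 1)
    ])

# For white Gaussian noise, V(0) = 1 and V(tau > 0) is about 1/sqrt(3) = 0.577.
print(correntropy(np.random.default_rng(0).standard_normal(10_000), 5).round(3))
```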
For the problem of RD, we give a categorisation of the different kinds of uncertainty encountered in estimation and discuss the techniques required to solve each individual case. Primarily, we tackle the overlooked cases of robustifying deconvolution filters based on an estimated blurring response or an estimated signal spectrum. We do this by utilising existing methods derived from criteria such as minimax MSE with imposed uncertainty bands and penalised MSE. In particular, we revisit the Modified Wiener Filter (MWF), which offers simplicity and flexibility in giving improved RD relative to the standard plug-in Wiener Filter (WF).
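To make the WF/MWF comparison concrete, here is a hedged NumPy sketch of frequency-domain Wiener deconvolution with a simple robustifying modification: the factor beta inflating the noise term is an illustrative stand-in for the MWF's tuning, not necessarily the exact form used in the thesis.

```python
# Frequency-domain Wiener deconvolution of y = b * x + noise.
# beta = 1 gives the plug-in WF; beta > 1 down-weights uncertain bands
# (an illustrative robustification, not necessarily the thesis' exact MWF).
import numpy as np

def wiener_deconv(y, b, S_x, S_n, beta=1.0):
    N = len(y)
    B = np.fft.rfft(b, N)
    H = np.conj(B) * S_x / (np.abs(B) ** 2 * S_x + beta * S_n)
    return np.fft.irfft(np.fft.rfft(y) * H, N)

rng = np.random.default_rng(5)
x = rng.standard_normal(1024)
b = np.array([1.0, 0.5, 0.25])
y = np.convolve(x, b)[:1024] + 0.3 * rng.standard_normal(1024)
grid = np.ones(513)                          # flat spectral estimates for the demo
xhat = wiener_deconv(y, b, grid, 0.09 * grid, beta=2.0)
print("rmse:", np.sqrt(np.mean((xhat - x) ** 2)).round(3))
```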
Quaternion Matrices: Statistical Properties and Applications to Signal Processing and Wavelets
Just as complex numbers provide a framework for extending scalar signal processing techniques to 2-channel signals, the 4-dimensional hypercomplex algebra of quaternions can be used to represent signals with 3 or 4 components.
For a quaternion random vector to be suited for quaternion linear processing, it must be (second-order) proper.
We consider the likelihood ratio test (LRT) for propriety, and compute the exact distribution for statistics of Box type, which include this LRT. Various approximate distributions are compared. The Wishart distribution of a quaternion sample covariance matrix is derived from first principles.
Quaternions are isomorphic to an algebra of structured 4x4 real matrices.
This mapping is our main tool, and suggests considering more general real matrix problems as a way of investigating quaternion linear algorithms.
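For concreteness, a tiny NumPy check of this isomorphism (our own illustration): the structured 4x4 real matrix of a quaternion acts on 4-vectors as left quaternion multiplication, and the map is multiplicative, so real matrix algorithms can stand in for quaternion ones.

```python
# Quaternion q = a + bi + cj + dk as a structured 4x4 real matrix.
import numpy as np

def real_matrix(q):
    a, b, c, d = q
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

p, q = np.array([1.0, 2.0, 3.0, 4.0]), np.array([0.5, -1.0, 2.0, 0.0])
pq = real_matrix(p) @ q                      # quaternion product p*q as a 4-vector
# the map is a homomorphism: M(p*q) = M(p) @ M(q)
assert np.allclose(real_matrix(pq), real_matrix(p) @ real_matrix(q))
print("p*q =", pq)
```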
A quaternion vector autoregressive (VAR) time-series model is equivalent to a structured real VAR model. We show that generalised least squares (and Gaussian maximum likelihood) estimation of the parameters reduces to ordinary least squares, but only if the innovations are proper. An LRT is suggested to simultaneously test for quaternion structure in the regression coefficients and innovation covariance.
Matrix-valued wavelets (MVWs) are generalised (multi)wavelets for vector-valued signals. Quaternion wavelets are equivalent to structured MVWs.
Taking orthogonal similarity into account, all MVWs can be constructed from non-trivial MVWs. We show that there are no non-scalar non-trivial MVWs with short support [0,3]. Through symbolic computation we construct the families of shortest non-trivial 2x2 Daubechies MVWs and quaternion Daubechies wavelets.
Optimal Uniform Convergence Rates for Sieve Nonparametric Instrumental Variables Regression
We study the problem of nonparametric regression when the regressor is endogenous, an important nonparametric instrumental variables (NPIV) regression problem in econometrics and a difficult ill-posed inverse problem with an unknown operator in statistics. We first establish a general upper bound on the
sup-norm (uniform) convergence rate of a sieve estimator, allowing for
endogenous regressors and weakly dependent data. This result leads to the
optimal sup-norm convergence rates for spline and wavelet least squares
regression estimators under weakly dependent data and heavy-tailed error terms.
This upper bound also yields the sup-norm convergence rates for sieve NPIV
estimators under i.i.d. data: the rates coincide with the known optimal L2-norm rates for severely ill-posed problems, and are a power of log(n) slower than the optimal L2-norm rates for mildly ill-posed problems. We then
establish the minimax risk lower bound in sup-norm loss, which coincides with
our upper bounds on sup-norm rates for the spline and wavelet sieve NPIV
estimators. This sup-norm rate optimality provides another justification for
the wide application of sieve NPIV estimators. Useful results on
weakly-dependent random matrices are also provided.
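As a schematic of what a sieve NPIV estimator computes, the following NumPy simulation may help (an illustrative design of our own; the paper's rates concern spline and wavelet bases, while a short polynomial sieve is used here purely for brevity). The estimator is two-stage least squares on basis functions of the endogenous regressor and the instrument.

```python
# Sieve NPIV = 2SLS with basis functions: regress the regressor basis on the
# instrument basis, then least squares of y on the projected regressor basis.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
w = rng.uniform(-1, 1, n)                        # instrument
e = rng.standard_normal(n)
x = np.clip(0.8 * w + 0.3 * e, -1, 1)            # endogenous regressor
y = np.sin(np.pi * x) + 0.5 * e + 0.1 * rng.standard_normal(n)

def basis(v, J):                                 # simple polynomial sieve (demo only)
    return np.vander(v, J, increasing=True)

Psi, W = basis(x, 6), basis(w, 9)                # dim(W) >= dim(Psi)
PW = W @ np.linalg.lstsq(W, Psi, rcond=None)[0]  # projection of Psi onto span(W)
c = np.linalg.lstsq(PW, y, rcond=None)[0]        # 2SLS sieve coefficients

xg = np.linspace(-1, 1, 5)
print(np.c_[np.sin(np.pi * xg), basis(xg, 6) @ c].round(2))  # truth vs. estimate
```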
Gaussian Process Morphable Models
Statistical shape models (SSMs) represent a class of shapes as a normal
distribution of point variations, whose parameters are estimated from example
shapes. Principal component analysis (PCA) is applied to obtain a
low-dimensional representation of the shape variation in terms of the leading
principal components. In this paper, we propose a generalization of SSMs,
called Gaussian Process Morphable Models (GPMMs). We model the shape variations
with a Gaussian process, which we represent using the leading components of its
Karhunen-Loeve expansion. To compute the expansion, we make use of an
approximation scheme based on the Nystrom method. The resulting model can be
seen as a continuous analogue of an SSM. However, while for SSMs the shape
variation is restricted to the span of the example data, with GPMMs we can
define the shape variation using any Gaussian process. For example, we can
build shape models that correspond to classical spline models, and thus do not
require any example data. Furthermore, Gaussian processes make it possible to
combine different models. For example, an SSM can be extended with a spline
model, to obtain a model that incorporates learned shape characteristics, but
is flexible enough to explain shapes that cannot be represented by the SSM. We
introduce a simple algorithm for fitting a GPMM to a surface or image. This
results in a non-rigid registration approach, whose regularization properties
are defined by a GPMM. We show how we can obtain different registration
schemes, including methods for multi-scale, spatially-varying or hybrid
registration, by constructing an appropriate GPMM. As our approach strictly
separates modelling from the fitting process, this is all achieved without
changes to the fitting algorithm. We show the applicability and versatility of
GPMMs on a clinical use case, where the goal is the model-based segmentation of
3D forearm images.
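A low-rank GPMM can be prototyped in a few lines. In the toy sketch below (our own setup: points on a line with a squared-exponential kernel, both assumptions), a direct eigendecomposition of the kernel matrix stands in for the Nystrom scheme the paper uses on large meshes, and shapes are sampled from the truncated Karhunen-Loeve expansion.

```python
# Toy GPMM: Gaussian-process deformations of reference points, represented by
# the leading components of a discrete Karhunen-Loeve expansion.
import numpy as np

rng = np.random.default_rng(3)
X = np.linspace(0, 1, 200)[:, None]              # reference shape: points on a line

def gauss_kernel(A, B, s=2e-2, ell=0.1):         # GP covariance of deformations
    return s * np.exp(-((A - B.T) ** 2) / (2 * ell**2))

K = gauss_kernel(X, X)
lam, U = np.linalg.eigh(K)                       # discrete KL expansion
lam, U = lam[::-1], U[:, ::-1]                   # sort eigenpairs descending
r = 15                                           # leading components (low-rank model)

# A random shape from the model: u = sum_i sqrt(lam_i) * alpha_i * phi_i.
alpha = rng.standard_normal(r)
u = U[:, :r] @ (np.sqrt(np.clip(lam[:r], 0, None)) * alpha)
shape = X[:, 0] + u                              # deformed shape
print("deformation std:", u.std().round(4))
```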
Models of statistical self-similarity for signal and image synthesis
Statistical self-similarity of continuous-domain random processes is defined through the invariance of their statistics to temporal or spatial scaling. In discrete time, scaling of signals by an arbitrary factor can be accomplished through frequency warping, and statistical self-similarity is defined by this discrete-time continuous-dilation scaling operation. Unlike other self-similarity models, which mostly rely on characteristics of continuous self-similarity other than scaling, this model provides a way to express discrete-time statistical self-similarity using the scaling of discrete-time signals. This dissertation studies the discrete-time self-similarity model based on the new scaling operation and develops its properties, revealing relations with other models. It also presents a new self-similarity definition for discrete-time vector processes, and demonstrates synthesis examples for multi-channel network traffic. In two-dimensional spaces, self-similar random fields are of interest in various areas of image processing, since they fit certain types of natural patterns and textures very well. Current treatments of self-similarity in continuous two-dimensional space use a definition that is a direct extension of the 1-D definition. However, most current discrete-space two-dimensional approaches do not consider scaling but are instead based on ad hoc formulations, for example digitizing continuous random fields such as fractional Brownian motion. The dissertation demonstrates that the current statistical self-similarity definition in continuous space is restrictive, and provides an alternative, more general definition. It also provides a formalism for discrete-space statistical self-similarity that depends on a new scaling operator for discrete images. Within the new framework, it is possible to synthesize a wider class of discrete-space self-similar random fields.
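For contrast with the dissertation's discrete-time scaling framework, here is a hedged sketch of the classical route it critiques: generating discrete samples of a continuous-domain self-similar model (fractional Gaussian noise, whose cumulative sum is fractional Brownian motion) by circulant embedding. The final aggregated-variance check illustrates statistical self-similarity: doubling the time scale multiplies the increment variance by roughly 2^(2H).

```python
# Fractional Gaussian noise via circulant embedding of its Toeplitz covariance.
import numpy as np

def fgn(n, H, rng):
    k = np.arange(n)
    gamma = 0.5 * ((k + 1.0) ** (2 * H) - 2 * k ** (2.0 * H)
                   + np.abs(k - 1.0) ** (2 * H))       # fGn autocovariance
    row = np.concatenate([gamma, gamma[-2:0:-1]])      # circulant embedding, length 2n-2
    lam = np.fft.fft(row).real                         # circulant eigenvalues (>= 0 for fGn)
    m = len(row)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    w = np.fft.fft(np.sqrt(np.maximum(lam, 0) / m) * z)
    return w.real[:n]                                  # real part has covariance gamma

rng = np.random.default_rng(4)
H = 0.8
x = fgn(4096, H, rng)                                  # increments; cumsum(x) is fBm
x2 = x[0::2] + x[1::2]                                 # increments at twice the scale
print("var ratio:", (x2.var() / x.var()).round(2), "vs 2^(2H) =", round(2 ** (2 * H), 2))
```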