Mixed Far-Field and Near-Field Source Localization Algorithm via Sparse Subarrays
Based on a dual-size shift-invariance sparse linear array, this paper presents a novel algorithm for the localization of mixed far-field (FF) and near-field (NF) sources. First, by constructing a cumulant matrix that contains only direction-of-arrival (DOA) information, the proposed algorithm decouples the DOA estimation from the range estimation. The cumulant-domain quarter-wavelength invariance yields unambiguous estimates of the DOAs, which are then used as coarse references to resolve the phase ambiguities in the fine estimates induced by the larger spatial invariance. Then, based on the estimated DOAs, another cumulant matrix is derived and decoupled to generate unambiguous and cyclically ambiguous estimates of the range parameter. From the coarse range estimates, the source types can be identified, and the unambiguous fine range estimates of the NF sources are obtained after disambiguation. Compared with some existing algorithms, the proposed algorithm enjoys an extended array aperture and higher estimation accuracy. Simulation results are given to validate the performance of the proposed algorithm.
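The coarse-then-fine disambiguation step described above can be illustrated with a minimal Python sketch (an assumed toy setup, not the paper's cumulant-based algorithm): an unambiguous but coarse quarter-wavelength DOA estimate selects the correct cycle of the phase measured on a longer, cyclically ambiguous baseline.

```python
import numpy as np

def wrap(phi):
    # wrap a phase to (-pi, pi]
    return (phi + np.pi) % (2 * np.pi) - np.pi

def disambiguate(theta_coarse, phi_fine_wrapped, D_over_lambda):
    """Resolve the cyclic ambiguity of a fine phase measurement.

    theta_coarse     : unambiguous (but noisy) DOA estimate, radians
    phi_fine_wrapped : measured phase of the long baseline, in (-pi, pi]
    D_over_lambda    : baseline length in wavelengths (> 1/2 => ambiguous)
    """
    # candidate unwrapped phases: phi + 2*pi*k
    kmax = int(np.ceil(2 * D_over_lambda)) + 1
    ks = np.arange(-kmax, kmax + 1)
    cands = phi_fine_wrapped + 2 * np.pi * ks
    # each candidate implies sin(theta) = phi / (2*pi*D/lambda)
    sins = cands / (2 * np.pi * D_over_lambda)
    sins = sins[np.abs(sins) <= 1]
    # pick the candidate closest to the coarse reference
    best = sins[np.argmin(np.abs(sins - np.sin(theta_coarse)))]
    return np.arcsin(best)

# true DOA 25 degrees; long baseline D = 2 wavelengths
theta = np.deg2rad(25.0)
D = 2.0
phi_true = 2 * np.pi * D * np.sin(theta)    # unwrapped fine phase
phi_meas = wrap(phi_true + 0.01)            # wrapped, slightly noisy
theta_coarse = theta + np.deg2rad(3.0)      # coarse but unambiguous
theta_fine = disambiguate(theta_coarse, phi_meas, D)
print(np.rad2deg(theta_fine))               # close to 25 degrees
```

The fine estimate inherits the precision of the long baseline while the coarse reference only needs to be accurate enough to pick the right cycle.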
Independent component analysis for non-standard data structures
Independent component analysis is a classical multivariate tool used for estimating independent sources among collections of mixed signals. However, modern forms of data are typically too complex for the basic theory to adequately handle. In this thesis extensions of independent component analysis to three cases of non-standard data structures are developed: noisy multivariate data, tensor-valued data and multivariate functional data.
In each case we define the corresponding independent component model along with the related assumptions and implications. The proposed estimators are mostly based on the use of kurtosis and its analogues for the considered structures, resulting in functionals of rather unified form, regardless of the type of the data. We prove the Fisher consistency of the estimators, and particular weight is given to their limiting distributions, by means of which the methods are also compared.
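As a rough illustration of kurtosis-based independent component estimation in the basic multivariate case, the following sketch runs a deflationary fixed-point iteration with the kurtosis contrast on a synthetic two-source mixture (an assumed toy example, not the thesis's estimators for non-standard structures).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# two independent non-Gaussian sources: sub-Gaussian (uniform) and
# super-Gaussian (Laplace)
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[2.0, 1.0], [1.0, 1.5]])      # mixing matrix
X = A @ S                                   # observed mixtures

# whiten the mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X        # whitened data

# deflationary fixed-point iteration with the kurtosis contrast g(u) = u^3
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        u = w @ Z
        w_new = (Z * u ** 3).mean(axis=1) - 3 * w   # kurtosis update
        w_new -= W[:i].T @ (W[:i] @ w_new)          # deflate (Gram-Schmidt)
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-10
        w = w_new
        if converged:
            break
    W[i] = w

S_hat = W @ Z
# each recovered component should match one true source up to sign/scale
corr = np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:])
print(corr.max(axis=1))   # both correlations near 1
```

Up to the sign and scale indeterminacies inherent to the IC model, each estimated component reproduces one of the true sources.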
Asymptotically Optimal Blind Calibration of Uniform Linear Sensor Arrays for Narrowband Gaussian Signals
An asymptotically optimal blind calibration scheme of uniform linear arrays for narrowband Gaussian signals is proposed. Rather than taking the direct Maximum Likelihood (ML) approach for joint estimation of all the unknown model parameters, which leads to a multi-dimensional optimization problem with no closed-form solution, we revisit Paulraj and Kailath's (P-K's) classical approach of exploiting the special (Toeplitz) structure of the observations' covariance. However, we offer a substantial improvement over P-K's ordinary Least Squares (LS) estimates by using asymptotic approximations to obtain simple, non-iterative, (quasi-)linear Optimally-Weighted LS (OWLS) estimates of the sensors' gain and phase offsets with asymptotically optimal weighting, based only on the empirical covariance matrix of the measurements. Moreover, we prove that our resulting estimates are also asymptotically optimal w.r.t. the raw data, and can therefore be deemed equivalent to the ML estimates (MLE), which are otherwise obtained by joint ML estimation of all the unknown model parameters. After deriving computationally convenient expressions for the respective Cramér-Rao lower bounds, we also show that our estimates offer improved performance when applied to non-Gaussian signals (and/or noise) as quasi-MLE in a similar setting. The optimal performance of our estimates is demonstrated in simulation experiments, with a considerable improvement (reaching an order of magnitude and more) in the resulting mean squared errors w.r.t. P-K's ordinary LS estimates. We also demonstrate the improved accuracy in a multiple-source direction-of-arrival estimation task. Comment: in IEEE Transactions on Signal Processing.
Second-order parameter estimation
This work provides a general framework for the design of second-order blind estimators without adopting any approximation about the observation statistics or the a priori distribution of the parameters. The proposed solution is obtained by minimizing the estimator variance subject to some constraints on the estimator bias. The resulting optimal estimator is found to depend on the observation fourth-order moments, which can be calculated analytically from the known signal model. Unfortunately, in most cases, the performance of this estimator is severely limited by the residual bias inherent to nonlinear estimation problems. To overcome this limitation, the second-order minimum variance unbiased estimator is deduced from the general solution by assuming accurate prior information on the vector of parameters. This small-error approximation is adopted to design iterative estimators or trackers. It is shown that the associated variance constitutes the lower bound for the variance of any unbiased estimator based on the sample covariance matrix. The paper's formulation is then applied to track the angle-of-arrival (AoA) of multiple digitally-modulated sources by means of a uniform linear array. The optimal second-order tracker is compared with the classical maximum likelihood (ML) blind methods, which are shown to be quadratic in the observed data as well. Simulations have confirmed that the discrete nature of the transmitted symbols can be exploited to considerably improve the discrimination of near sources in medium-to-high SNR scenarios.
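A minimal example of an estimator that is quadratic in the observed data, i.e. a function of the sample covariance only, is the following single-source AoA estimator for a ULA, which reads the angle off the average phase of the covariance's first subdiagonal (an assumed illustration, not the paper's optimal second-order tracker).

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 8, 5000                  # sensors, snapshots
theta = np.deg2rad(20.0)        # true angle of arrival
d = 0.5                         # element spacing in wavelengths
a = np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))  # steering vector

# circular Gaussian source and noise snapshots
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) * 0.1
X = np.outer(a, s) + noise      # array observations

R = X @ X.conj().T / N          # sample covariance: the only statistic used
# each first-subdiagonal entry has phase 2*pi*d*sin(theta)
phase = np.angle(np.mean(np.diag(R, k=-1)))
theta_hat = np.arcsin(phase / (2 * np.pi * d))
print(np.rad2deg(theta_hat))    # close to 20 degrees
```

Because the estimate depends on the data only through R, its variance is bounded below by the second-order bound the paper derives for sample-covariance-based estimators.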
Computer-intensive statistical methods: saddlepoint approximations with applications in bootstrap and robust inference
The saddlepoint approximation was introduced into statistics in 1954 by Henry E. Daniels. This basic result on approximating the density function of the sample mean has been generalized to many situations. The accuracy of this approximation is very good, particularly in the tails of the distribution and for small sample sizes, compared with normal or Edgeworth approximation methods. Before applying saddlepoint approximations to the bootstrap, this thesis will focus on saddlepoint approximations for the distribution of quadratic forms in normal variables and for the distribution of the waiting time in the coupon collector's problem. Both developments illustrate the modern art of statistics relying on the computer and embodying both numerical and analytical approximations. Saddlepoint approximations are extremely accurate in both cases. This is underlined in the first development by means of an extensive study and several applications to nonparametric regression, and in the second by several examples, including the exhaustive bootstrap seen from a collector's point of view. The remaining part of this thesis is devoted to the use of saddlepoint approximations in order to replace the computer-intensive bootstrap. The recent massive increases in computer power have led to an upsurge in interest in computer-intensive statistical methods. The bootstrap is the first computer-intensive method to become widely known. It found an immediate place in statistical theory and, more slowly, in practice. The bootstrap seems to be gaining ground as the method of choice in a number of applied fields, where classical approaches are known to be unreliable, and there is sustained interest from theoreticians in its development. But it is known that, for accurate approximations in the tails, the nonparametric bootstrap requires a large number of replicates of the statistic. As this is time-intensive, other methods should be considered.
Saddlepoint methods can provide extremely accurate approximations to resampling distributions. As a first step I develop fast saddlepoint approximations to bootstrap distributions that work in the presence of an outlier, using a saddlepoint mixture approximation. Then I look at robust M-estimates of location, such as Huber's M-estimate of location and its initially MAD-scaled version. One peculiarity of the current literature is that saddlepoint methods are often used to approximate the density or distribution functions of bootstrap estimators, rather than related pivots, whereas it is the latter which are more relevant for inference. Hence the aim of the final part of this thesis is to apply saddlepoint approximations to the construction of studentized confidence intervals based on robust M-estimates. As examples I consider the studentized versions of Huber's M-estimate of location, of its initially MAD-scaled version, and of Huber's proposal 2. In order to make robust inference about a location parameter there are three types of robustness one would like to achieve: robustness of performance for the estimator of location, and robustness of validity and robustness of efficiency for the resulting confidence interval method. Hence, in the context of studentized bootstrap confidence intervals, I investigate these in more detail in order to give recommendations for practical use, underlined by an extensive simulation study.
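Daniels' basic result for the sample mean can be made concrete in a case where the exact density is known. The sketch below compares the saddlepoint approximation with the exact Gamma density of the mean of Exponential(1) variables (a standard textbook example, chosen here for verifiability; it is not taken from the thesis).

```python
import math

def saddlepoint_density_exp_mean(x, n):
    """Daniels' saddlepoint approximation to the density of the mean of
    n i.i.d. Exponential(1) variables, evaluated at x > 0.

    The cumulant generating function is K(t) = -log(1 - t); the saddle
    equation K'(t) = x gives t = 1 - 1/x, and K''(t) = x**2 at the saddle.
    """
    t = 1.0 - 1.0 / x
    K = -math.log(1.0 - t)                 # = log(x)
    Kpp = x * x
    return math.sqrt(n / (2 * math.pi * Kpp)) * math.exp(n * (K - t * x))

def exact_density_exp_mean(x, n):
    # the mean of n Exp(1) variables is Gamma(n, scale = 1/n)
    return n ** n * x ** (n - 1) * math.exp(-n * x) / math.factorial(n - 1)

n, x = 10, 1.2
approx = saddlepoint_density_exp_mean(x, n)
exact = exact_density_exp_mean(x, n)
print(approx, exact, abs(approx / exact - 1))  # relative error below 1%
```

Even at n = 10 the relative error is under one percent; in this particular case the approximation is exact up to a renormalization constant (the Stirling-series error), which illustrates the accuracy claims above.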
Advanced robust non-invasive foetal heart detection techniques during active labour using one pair of transabdominal electrodes
The thesis proposes and evaluates three state-of-the-art signal processing techniques to detect fetal heartbeats within each maternal cardiac cycle, during labour contractions, using only a pair of transabdominal electrodes. The first and second techniques are, respectively, structured third-order cumulant-slice template matching and bispectral-contours template matching for fetal QRS identification. The third technique is based on a modified and appropriately weighted spectral multiple signal classification (MUSIC) estimator with an incorporated covariance matrix for uterine-contraction noise-like interfering signals that are themselves contaminated with noise. Essentially, two modifications to standard MUSIC have been developed in order to enhance the performance of the spectral estimator in our applied work. The first modification involves the introduction of an optimised weighting function to the segmented ECG covariance matrix, and is chiefly aimed at enhancing the fetal QRS major spectral peak, which occurs at around 30 Hz, against the maternal QRS major spectral peak, usually occurring around 17 Hz, and all other noise contributions. An additional, optional pseudo-bispectral enhancement to sharpen the maternal and fetal spectral peaks, in particular when the maternal and fetal R-waves are temporally coincident, has also been achieved. The second modification to spectral MUSIC is the removal of the unjustified assumption that only white Gaussian noise is present, and the incorporation of the actual measured labour uterine contraction covariance matrix in a reconfigured subspace analysis. This inevitably leads to the generalised eigenvector-eigenvalue decomposition of modern signal processing. The result is coined the modified, interference-incorporated pseudo-spectral MUSIC. The first and second techniques mentioned above are higher-order-statistics-based (HOS) and hybrid, involving both signal processing and neural network (NN) classifiers.
The third technique is second-order statistics-based (SOS). In all techniques, the removal of signal non-linearity with the aid of non-linear Volterra synthesisers plays a crucial part in the fetal detection integrity.
Accurately assessed fetal heart classification rates as high as 95% have been achieved during labour, thus helping to provide non-invasive transparency to fetal intrapartum welfare. The performance analysis and evaluation involved more than 30 critical cases classified as “fetal under stress in labour”, recorded in a London hospital database using both transabdominal ECG electrodes and fetal scalp electrodes. The latter facilitates detection of the instantaneous fetal heart rate, which is then used as the Reference Fetal Heart Rate in assessing the classification rate of each of the above-mentioned techniques. It will be shown that the fetal heartbeats are completely masked by uterine activity and noise artefacts in all the recorded transabdominal maternal ECG signals. The fetal scalp electrode was, therefore, deemed necessary to provide the most accurate measure of fetal heart functionality (from the hospital's viewpoint) and to assess the three non-invasive techniques presented in this thesis. The techniques may also be used during gestation, as early as 10 weeks.
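For reference, the following sketch applies standard spectral MUSIC (not the thesis's modified, interference-incorporated variant) to a surrogate signal containing tones near the maternal (~17 Hz) and fetal (~30 Hz) QRS spectral peaks; the signal, sampling rate, and window length are all assumed for illustration.

```python
import numpy as np

fs = 200.0                                  # assumed sampling rate, Hz
t = np.arange(2000) / fs
rng = np.random.default_rng(3)
# surrogate signal: "maternal" 17 Hz and "fetal" 30 Hz components + noise
x = np.cos(2 * np.pi * 17 * t) + 0.6 * np.cos(2 * np.pi * 30 * t) \
    + 0.1 * rng.standard_normal(t.size)

# covariance matrix estimated from sliding windows of length M
M = 40
X = np.lib.stride_tricks.sliding_window_view(x, M).T   # shape (M, K)
R = X @ X.T / X.shape[1]

# noise subspace: two real tones span a 4-dimensional signal subspace
w, E = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = E[:, :-4]                              # eigenvectors of the noise subspace

freqs = np.arange(5.0, 60.0, 0.25)
pseudo = np.empty(freqs.size)
for i, f in enumerate(freqs):
    a = np.exp(2j * np.pi * f * np.arange(M) / fs)      # frequency "steering" vector
    pseudo[i] = 1.0 / np.linalg.norm(En.T @ a) ** 2     # MUSIC pseudospectrum

# the two largest, well-separated maxima should sit near 17 Hz and 30 Hz
order = np.argsort(pseudo)[::-1]
p1 = freqs[order[0]]
p2 = freqs[order[np.argmax(np.abs(freqs[order] - p1) > 3.0)]]
print(sorted([p1, p2]))
```

The thesis's modifications replace the white-noise assumption behind `np.linalg.eigh(R)` with a generalised eigendecomposition against a measured interference covariance, and weight R to favour the fetal peak.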