
    Performance Estimates of the Pseudo-Random Method for Radar Detection

    The performance of the pseudo-random method for radar detection is analyzed. The radar sends a pseudo-random sequence of length $N$ and receives echoes from $r$ targets. We make the natural assumptions of uniformity on the channel and of square-root cancellation on the noise. Then, for $r \leq N^{1-\delta}$ with $\delta > 0$, the following holds: (i) the probability of detection goes to one, and (ii) the expected number of false targets goes to zero, as $N$ goes to infinity.
    Comment: 5 pages, two figures, to appear in Proceedings of ISIT 2014 - IEEE International Symposium on Information Theory, Honolulu
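    A minimal sketch of the detection scheme described above, assuming a ±1 pseudo-random probing sequence, a uniform-delay channel, and an illustrative correlation threshold (the sequence design, noise level, and threshold rule are my assumptions, not the paper's exact setting): the echo is correlated against the transmitted sequence, and delays whose correlation peak exceeds the threshold are declared targets.

```python
# Sketch: pseudo-random probing + correlation-based detection (illustrative).
import numpy as np

rng = np.random.default_rng(0)

N, r = 4096, 8                                   # sequence length, number of targets
x = rng.choice([-1.0, 1.0], size=N)              # pseudo-random probing sequence

delays = rng.choice(N, size=r, replace=False)    # target delays (uniformity assumption)
gains = rng.normal(1.0, 0.1, size=r)             # target reflectivities (assumed)
echo = np.zeros(N)
for d, g in zip(delays, gains):
    echo += g * np.roll(x, d)                    # delayed copies of the probe
echo += rng.normal(0.0, 1.0, size=N)             # additive noise

# Circular cross-correlation via FFT; off-peak values are O(1/sqrt(N))
# ("square-root cancellation"), so true delays stand out as peaks near 1.
corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(x))).real / N

threshold = 0.5                                  # illustrative threshold
detected = np.flatnonzero(corr > threshold)
print("true delays:    ", np.sort(delays))
print("detected delays:", detected)
```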

    Pairwise likelihood estimation for multivariate mixed Poisson models generated by Gamma intensities

    Estimating the parameters of multivariate mixed Poisson models is an important problem in image processing applications, especially for active imaging or astronomy. The classical maximum likelihood approach cannot be used for these models since the corresponding probability masses cannot be expressed in a simple closed form. This paper studies a maximum pairwise likelihood approach to estimate the parameters of multivariate mixed Poisson models when the mixing distribution is a multivariate Gamma distribution. The consistency and asymptotic normality of this estimator are derived. Simulations conducted on synthetic data illustrate these results and show that the proposed estimator outperforms classical estimators based on the method of moments. An application to change detection in low-flux images is also investigated.
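    As a hedged illustration of the pairwise-likelihood idea, the sketch below uses a simplified stand-in model in which a single Gamma intensity (normalized to unit mean) is shared by all Poisson components; the parameter names, the normalization, and the optimizer settings are assumptions for illustration, not the paper's multivariate Gamma construction. The bivariate probability masses are available in closed form for this simplified model, so the composite likelihood over all pairs can be maximized numerically.

```python
# Sketch: pairwise (composite) likelihood for a mixed Poisson model with a
# shared Gamma(a, 1/a) intensity (mean 1).  Illustrative simplification.
import numpy as np
from itertools import combinations
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Simulate: Z ~ Gamma(a, scale=1/a), counts N_j | Z ~ Poisson(lam_j * Z).
a_true, lam_true, n_samples = 2.0, np.array([3.0, 5.0, 8.0]), 2000
Z = rng.gamma(a_true, 1.0 / a_true, size=n_samples)
counts = rng.poisson(lam_true * Z[:, None])            # shape (n_samples, 3)

def log_pmf_pair(n1, n2, lam1, lam2, a):
    """log P(N1=n1, N2=n2) after integrating out the shared Gamma intensity."""
    rate = lam1 + lam2 + a
    return (n1 * np.log(lam1) + n2 * np.log(lam2)
            - gammaln(n1 + 1) - gammaln(n2 + 1)
            + a * np.log(a) - gammaln(a)
            + gammaln(n1 + n2 + a) - (n1 + n2 + a) * np.log(rate))

def neg_pairwise_loglik(theta):
    a, lam = theta[0], theta[1:]
    if a <= 0 or np.any(lam <= 0):
        return np.inf
    ll = 0.0
    for j, l in combinations(range(counts.shape[1]), 2):   # sum over all pairs
        ll += log_pmf_pair(counts[:, j], counts[:, l], lam[j], lam[l], a).sum()
    return -ll

est = minimize(neg_pairwise_loglik, x0=np.array([1.0, 1.0, 1.0, 1.0]),
               method="Nelder-Mead")
print("estimated (a, lam1, lam2, lam3):", np.round(est.x, 2))  # should approach (2, 3, 5, 8)
```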

    Knowledge-Aided STAP Using Low Rank and Geometry Properties

    This paper presents knowledge-aided space-time adaptive processing (KA-STAP) algorithms that exploit the low-rank dominant clutter and the array geometry properties (LRGP) for airborne radar applications. The core idea is to exploit the fact that the clutter subspace is determined only by the space-time steering vectors, where the Gram-Schmidt orthogonalization approach is employed to compute the clutter subspace. Specifically, for a side-looking uniformly spaced linear array, the algorithm first selects a group of linearly independent space-time steering vectors using LRGP that can represent the clutter subspace. By performing the Gram-Schmidt orthogonalization procedure, the orthogonal bases of the clutter subspace are obtained, followed by two approaches to compute the STAP filter weights. To overcome the performance degradation caused by non-ideal effects, a KA-STAP algorithm that combines the covariance matrix taper (CMT) is proposed. For practical applications, a reduced-dimension version of the proposed KA-STAP algorithm is also developed. The simulation results illustrate the effectiveness of our proposed algorithms, and show that the proposed algorithms converge rapidly and provide an SINR improvement over existing methods when using a very small number of snapshots.
    Comment: 16 figures, 12 pages. IEEE Transactions on Aerospace and Electronic Systems, 201
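    A minimal numerical sketch of the subspace idea described above, under assumed parameters (array size, pulse count, look direction); it is not the paper's full KA-STAP algorithm with CMT or reduced-dimension processing. For a side-looking uniform linear array the clutter space-time steering vectors depend on a single frequency, so a small set of them spans the clutter subspace; a thin QR factorization plays the role of Gram-Schmidt orthogonalization, and a simple projection-type weight is formed.

```python
# Sketch: clutter subspace from space-time steering vectors (side-looking ULA).
import numpy as np

N, M = 8, 10             # array elements, pulses per CPI (assumed)
K = N + M - 1            # approximate clutter rank (Brennan's rule, beta = 1)

def st_steering(fs, fd, N, M):
    """Space-time steering vector: Kronecker product of Doppler and spatial vectors."""
    a = np.exp(2j * np.pi * fs * np.arange(N))   # spatial steering
    b = np.exp(2j * np.pi * fd * np.arange(M))   # temporal (Doppler) steering
    return np.kron(b, a)

# Side-looking geometry: normalized Doppler equals spatial frequency.
fs_grid = np.linspace(-0.5, 0.5, K, endpoint=False)
V = np.stack([st_steering(fs, fs, N, M) for fs in fs_grid], axis=1)   # NM x K

# Orthonormal basis of the clutter subspace (thin QR = Gram-Schmidt).
U, _ = np.linalg.qr(V)

# Assumed target space-time steering vector (spatial freq. 0.1, Doppler 0.3).
s = st_steering(0.1, 0.3, N, M)

# Project onto the orthogonal complement of the clutter subspace and use the
# result as a simple subspace-based STAP weight with unit target gain.
P_perp = np.eye(N * M) - U @ U.conj().T
w = P_perp @ s
w /= w.conj() @ s
print("weight vector length:", w.size)
```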

    Asymptotic properties of robust complex covariance matrix estimates

    In many statistical signal processing applications, the resulting performance is strongly linked to the estimation of nuisance parameters and parameters of interest. Generally, these applications deal with complex data. This paper focuses on covariance matrix estimation problems in non-Gaussian environments and, in particular, on M-estimators in the context of elliptical distributions. Firstly, this paper extends the results of Tyler in [1] to the complex case. More precisely, the asymptotic distribution of these estimators is derived, as well as the asymptotic distribution of any homogeneous function of degree 0 of the M-estimates. Secondly, we show the benefit of these results on two applications: DOA (direction of arrival) estimation using the MUSIC (MUltiple SIgnal Classification) algorithm and adaptive radar detection based on the ANMF (Adaptive Normalized Matched Filter) test.
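    One canonical member of the M-estimator family discussed above is Tyler's estimator; a hedged sketch of its complex fixed-point iteration follows (the trace normalization and stopping rule are common conventions, not necessarily those of the paper, and the heavy-tailed test data are purely illustrative).

```python
# Sketch: complex Tyler fixed-point M-estimator of the scatter matrix.
import numpy as np

def tyler_estimator(X, n_iter=100, tol=1e-8):
    """X: (n, p) complex zero-mean samples. Returns a (p, p) scatter estimate."""
    n, p = X.shape
    sigma = np.eye(p, dtype=complex)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        q = np.real(np.einsum('ij,jk,ik->i', X.conj(), inv, X))   # x^H inv(Sigma) x
        new = (p / n) * (X.T * (1.0 / q)) @ X.conj()               # weighted outer products
        new *= p / np.trace(new).real                              # fix the scale ambiguity
        if np.linalg.norm(new - sigma) < tol * np.linalg.norm(sigma):
            return new
        sigma = new
    return sigma

# Quick check on heavy-tailed (compound-Gaussian) data.
rng = np.random.default_rng(2)
p, n = 4, 500
A = rng.normal(size=(p, p)) + 1j * rng.normal(size=(p, p))
true_scatter = A @ A.conj().T
L = np.linalg.cholesky(true_scatter)
g = (rng.normal(size=(n, p)) + 1j * rng.normal(size=(n, p))) / np.sqrt(2)
tau = rng.chisquare(3, size=(n, 1)) / 3           # heavy-tailed textures
X = (g / np.sqrt(tau)) @ L.T                      # elliptically distributed samples
print("Tyler estimate (trace-normalized):\n", np.round(tyler_estimator(X), 2))
print("true scatter (trace-normalized):\n",
      np.round(p * true_scatter / np.trace(true_scatter).real, 2))
```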

    Performance Bounds for Parameter Estimation under Misspecified Models: Fundamental findings and applications

    Inferring information from a set of acquired data is the main objective of any signal processing (SP) method. In particular, the common problem of estimating the value of a vector of parameters from a set of noisy measurements is at the core of a plethora of scientific and technological advances of the last decades, for example in wireless communications, radar and sonar, biomedicine, image processing, and seismology, just to name a few. Developing an estimation algorithm often begins by assuming a statistical model for the measured data, i.e., a probability density function (pdf) which, if correct, fully characterizes the behaviour of the collected data/measurements. Experience with real data, however, often exposes the limitations of any assumed data model, since modelling errors at some level are always present. Consequently, the true data model and the model assumed to derive the estimation algorithm could differ. When this happens, the model is said to be mismatched or misspecified. Therefore, understanding the possible performance loss or regret that an estimation algorithm could experience under model misspecification is of crucial importance for any SP practitioner. Further, understanding the limits on the performance of any estimator subject to model misspecification is of practical interest. Motivated by the widespread and practical need to assess the performance of a mismatched estimator, the first goal of this paper is to help bring attention to the main theoretical findings on estimation theory, and in particular on lower bounds under model misspecification, that have been published in the statistical and econometric literature over the last fifty years. Secondly, some applications are discussed to illustrate the broad range of areas and problems to which this framework extends, and consequently the numerous opportunities available for SP researchers.
    Comment: To appear in the IEEE Signal Processing Magazine
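    A tiny worked example of what model misspecification means in practice may help fix ideas (it is not taken from the paper): if the data are Student-t distributed but the estimator is derived under a Gaussian assumption, the Gaussian ML variance estimate converges to the "pseudo-true" value that minimizes the KL divergence to the true density, i.e. the second moment of the t distribution, rather than to its scale parameter.

```python
# Sketch: mismatched Gaussian ML estimation on Student-t data (illustrative).
import numpy as np

rng = np.random.default_rng(3)
nu, n = 5.0, 200_000                   # true model: Student-t with nu degrees of freedom
x = rng.standard_t(nu, size=n)

# ML variance estimate derived under the (wrong) zero-mean Gaussian assumption.
sigma2_hat = np.mean(x ** 2)

print("Gaussian-ML variance estimate:", round(sigma2_hat, 3))
print("pseudo-true value nu/(nu-2)  :", round(nu / (nu - 2), 3))  # KL-minimizing variance
print("scale of the true t model    : 1.0")
```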

    Regularized Covariance Matrix Estimation in Complex Elliptically Symmetric Distributions Using the Expected Likelihood Approach - Part 2: The Under-Sampled Case

    In the first part of this series of two papers, we extended the expected likelihood approach originally developed in the Gaussian case to the broader class of complex elliptically symmetric (CES) distributions and complex angular central Gaussian (ACG) distributions. More precisely, we demonstrated that the probability density function (p.d.f.) of the likelihood ratio (LR) for the (unknown) actual scatter matrix $\Sigma_0$ does not depend on the latter: it only depends on the density generator for the CES distribution and is distribution-free in the case of ACG distributed data, i.e., it only depends on the matrix dimension $M$ and the number of independent training samples $T$, assuming that $T \geq M$. Additionally, regularized scatter matrix estimates based on the EL methodology were derived. In this second part, we consider the under-sampled scenario ($T \leq M$) which deserves a specific treatment since conventional maximum likelihood estimates do not exist. Indeed, inference about the scatter matrix can only be made in the $T$-dimensional subspace spanned by the columns of the data matrix. We extend the results derived under the Gaussian assumption to the CES and ACG class of distributions. Invariance properties of the under-sampled likelihood ratio evaluated at $\Sigma_0$ are presented. Remarkably enough, in the ACG case, the p.d.f. of this LR can be written in a rather simple form as a product of beta distributed random variables. The regularized schemes derived in the first part, based on the EL principle, are extended to the under-sampled scenario and assessed through numerical simulations.
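    To see why the under-sampled case needs special treatment, the short sketch below shows that with $T < M$ the sample covariance matrix is singular, so the Gaussian likelihood cannot even be evaluated at it, whereas a diagonally loaded (shrinkage) estimate can. The loading grid is purely illustrative; selecting the regularization via the expected likelihood principle, as done in the paper, is not reproduced here.

```python
# Sketch: singular SCM when T < M, and a diagonally loaded alternative.
import numpy as np

rng = np.random.default_rng(4)
M, T = 16, 8                                        # dimension larger than sample size
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
sigma_true = A @ A.conj().T / M
L = np.linalg.cholesky(sigma_true)
U = (rng.normal(size=(T, M)) + 1j * rng.normal(size=(T, M))) / np.sqrt(2)
X = U @ L.T                                         # T complex Gaussian samples

scm = X.T @ X.conj() / T                            # sample covariance, rank <= T
print("rank of SCM:", np.linalg.matrix_rank(scm), "of", M)

def gaussian_loglik(sigma, X):
    """Complex Gaussian log-likelihood (up to constants) of the rows of X."""
    _, logdet = np.linalg.slogdet(sigma)
    q = np.real(np.einsum('ij,jk,ik->i', X.conj(), np.linalg.inv(sigma), X))
    return -X.shape[0] * logdet - q.sum()

for beta in [0.01, 0.1, 0.5]:                       # illustrative loading factors
    loaded = (1 - beta) * scm + beta * (np.trace(scm).real / M) * np.eye(M)
    print(f"beta={beta:4.2f}  log-likelihood = {gaussian_loglik(loaded, X):.1f}")
```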

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number $n$ of acquired samples (statistical replicates) is far fewer than the number $p$ of observed variables (genes, neurons, voxels, or chemical constituents). Much of the recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting where the sample size $n$ is fixed and the dimension $p$ grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
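    A hedged illustration of the sample-starved regime described above (sizes and threshold are arbitrary choices): with the sample size $n$ held fixed and no true correlations present at all, the number of variable pairs whose sample correlation exceeds a fixed threshold grows rapidly with the dimension $p$, which is exactly why fixed-$n$, growing-$p$ asymptotics need their own treatment.

```python
# Sketch: spurious correlations under the null as the dimension p grows.
import numpy as np

rng = np.random.default_rng(5)
n, threshold = 20, 0.6                        # fixed sample size, correlation threshold

for p in [50, 200, 800]:
    X = rng.normal(size=(n, p))               # independent variables: no true correlation
    corr = np.corrcoef(X, rowvar=False)       # p x p sample correlation matrix
    iu = np.triu_indices(p, k=1)              # each pair counted once
    false_discoveries = np.count_nonzero(np.abs(corr[iu]) > threshold)
    print(f"p={p:4d}: {false_discoveries} pairs with |corr| > {threshold}")
```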