6,180 research outputs found

    A Unifying Framework for Adaptive Radar Detection in Homogeneous plus Structured Interference-Part II: Detectors Design

    This paper deals with the problem of adaptive multidimensional/multichannel signal detection in homogeneous Gaussian disturbance with unknown covariance matrix and structured (unknown) deterministic interference. The problem extends the well-known Generalized Multivariate Analysis of Variance (GMANOVA) tackled in the open literature. In a companion paper, we obtained the Maximal Invariant Statistic (MIS) for the problem under consideration, as an enabling tool for the design of suitable detectors that possess the Constant False-Alarm Rate (CFAR) property. Herein, we focus on the development of several theoretically founded detectors for this problem. First, all the considered detectors are shown to be functions of the MIS, thus proving that they possess the CFAR property. Second, coincidence or statistical equivalence among some of them is proved for this general signal model. Third, strong connections to well-known simpler scenarios from the adaptive detection literature are established. Finally, simulation results are provided for a comparison of the proposed receivers. Comment: Submitted for journal publication.
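    The CFAR property invoked above can be checked numerically: a statistic that depends on the data only through a maximal invariant has a false-alarm rate that does not change with the unknown disturbance covariance. The sketch below uses the classical Adaptive Matched Filter statistic as a stand-in for the receivers designed in the paper (the specific detectors and the MIS are not reproduced here); the dimensions, steering vector, threshold, and trial count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def amf_statistic(x, Z, s):
    """Adaptive Matched Filter statistic |s^H S^{-1} x|^2 / (s^H S^{-1} s),
    with S the sample covariance of the target-free secondary data Z."""
    S = (Z @ Z.conj().T) / Z.shape[1]
    Sinv = np.linalg.inv(S)
    return np.abs(s.conj().T @ Sinv @ x) ** 2 / np.real(s.conj().T @ Sinv @ s)

def estimate_pfa(cov, N=8, K=32, trials=20000, threshold=5.0):
    """Monte Carlo false-alarm probability under the noise-only hypothesis
    for a given disturbance covariance (unknown to the detector)."""
    L = np.linalg.cholesky(cov)
    s = np.ones(N, dtype=complex)            # nominal steering vector (assumed)
    hits = 0
    for _ in range(trials):
        x = L @ (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        Z = L @ (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
        hits += amf_statistic(x, Z, s) > threshold
    return hits / trials

# Two very different covariance matrices give, up to Monte Carlo error, the same Pfa.
identity = np.eye(8, dtype=complex)
rho = 0.9
toeplitz = np.array([[rho ** abs(i - j) for j in range(8)] for i in range(8)], dtype=complex)
print(estimate_pfa(identity), estimate_pfa(toeplitz))
```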

    A novel approach to robust radar detection of range-spread targets

    This paper proposes a novel approach to robust radar detection of range-spread targets embedded in Gaussian noise with unknown covariance matrix. The idea is to model the useful target echo in each range cell as the sum of a coherent signal plus a random component that makes the signal-plus-noise hypothesis more plausible in the presence of mismatches. Moreover, the power of the random components is unknown and estimated from the observables, so as to optimize the performance when the mismatch is absent. The generalized likelihood ratio test (GLRT) for the problem at hand is considered. In addition, a new parametric detector that encompasses the GLRT as a special case is introduced and assessed. The performance assessment shows the effectiveness of the idea, also in comparison to natural competitors. Comment: 28 pages, 8 figures.
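    One plausible way to formalize the per-cell model described above (the notation is ours, not necessarily the paper's): in the t-th of the H range cells occupied by the target,

```latex
\mathbf{x}_t = \alpha_t\,\mathbf{v} + \sigma\,\mathbf{c}_t + \mathbf{n}_t,
\qquad t = 1,\dots,H,
```

    where \mathbf{v} is the nominal steering vector, \alpha_t the unknown coherent amplitude, \mathbf{c}_t the random component that absorbs possible mismatches, \sigma^2 its unknown power (estimated from the observables), and \mathbf{n}_t Gaussian noise with unknown covariance matrix.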

    Asymptotic robustness of Kelly's GLRT and Adaptive Matched Filter detector under model misspecification

    A fundamental assumption underlying any Hypothesis Testing (HT) problem is that the available data follow the parametric model assumed to derive the test statistic. Nevertheless, a perfect match between the true and the assumed data models cannot be achieved in many practical applications. In all these cases, it is advisable to use a robust decision test, i.e. a test whose statistic preserves (at least asymptotically) the same probability density function (pdf) under the null hypothesis for a suitable set of possible input data models. Building upon the seminal work of Kent (1982), in this paper we investigate the impact of model mismatch in a recurring HT problem in radar signal processing: testing the mean of a set of Complex Elliptically Symmetric (CES) distributed random vectors under a possibly misspecified Gaussian data model. In particular, using this general misspecified framework, a new look at two popular detectors, Kelly's Generalized Likelihood Ratio Test (GLRT) and the Adaptive Matched Filter (AMF), is provided and their robustness properties are investigated. Comment: ISI World Statistics Congress 2017 (ISI2017), Marrakech, Morocco, 16-21 July 2017.
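    For reference, the two detectors named above have standard textbook forms, sketched here in NumPy; the variable names and the unnormalized sample-covariance convention are choices of this sketch rather than the paper's notation.

```python
import numpy as np

def kelly_and_amf(x, Z, s):
    """Kelly's GLRT and the AMF statistic for a primary snapshot x,
    a matrix Z of target-free secondary snapshots (one per column),
    and a nominal steering vector s. S is the unnormalized sample
    covariance sum_k z_k z_k^H of the secondary data."""
    S = Z @ Z.conj().T
    Sinv = np.linalg.inv(S)
    num = np.abs(s.conj().T @ Sinv @ x) ** 2
    s_quad = np.real(s.conj().T @ Sinv @ s)
    x_quad = np.real(x.conj().T @ Sinv @ x)
    kelly = num / (s_quad * (1.0 + x_quad))   # Kelly's GLRT
    amf = num / s_quad                        # Adaptive Matched Filter
    return kelly, amf
```

    Both statistics are compared with a threshold set for the desired false-alarm probability; the paper's point is how their null distributions behave when the Gaussian assumption behind these formulas is violated.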

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far smaller than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of the recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime, where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime, where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime, where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exa-scale data dimensions. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
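    A toy illustration of the sample-starved regime (the sizes and threshold below are arbitrary choices for this sketch, not values from the paper): even when all p variables are independent, screening a p x p sample correlation matrix built from only n replicates yields spurious large correlations, which is exactly the false-discovery behaviour a sample-complexity analysis must quantify.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 30, 2000                      # sample-starved: n << p (illustrative sizes)
X = rng.standard_normal((n, p))      # n replicates of p independent variables

# Sample correlation matrix (p x p) computed from only n samples.
Xc = X - X.mean(axis=0)
Xc /= np.linalg.norm(Xc, axis=0)
R = Xc.T @ Xc

# "Correlation mining": screen for variable pairs whose sample correlation
# exceeds a threshold. With n this small, large entries appear by chance alone.
threshold = 0.7
i, j = np.where(np.triu(np.abs(R), k=1) > threshold)
print(f"{len(i)} spurious pairs exceed |r| = {threshold} among independent variables")
```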

    Classification of airborne laser scanning point clouds based on binomial logistic regression analysis

    This article presents a newly developed procedure for the classification of airborne laser scanning (ALS) point clouds, based on binomial logistic regression analysis. By using a feature space containing a large number of adaptable geometrical parameters, this new procedure can be applied to point clouds covering different types of topography and variable point densities. Moreover, the procedure can be adapted to different user requirements. A binomial logistic model is estimated for each a priori defined class, using a training set of manually classified points. For each point, a value is calculated defining the probability that this point belongs to a certain class. The class with the highest probability is used for the final point classification. In addition, the use of statistical methods enables a thorough model evaluation through well-founded inference criteria. If necessary, the interpretation of these inference analyses also makes it possible to define additional sub-classes. The use of a large number of geometrical parameters is an important advantage of this procedure in comparison with current classification algorithms. It allows more user modifications for the large variety of ALS point clouds, while still achieving comparable classification results. It is indeed possible to evaluate parameters as degrees of freedom and to remove or add parameters as a function of the type of study area. The performance of this procedure is successfully demonstrated by classifying two different ALS point sets from an urban and a rural area. Moreover, the potential of the proposed classification procedure is explored for terrestrial data.
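    A minimal sketch of that decision rule, assuming one binomial logistic model per class and a handful of made-up geometrical features; the feature values, class labels, and scikit-learn implementation are illustrative assumptions, not the article's actual feature space.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_per_class_models(features, labels, classes):
    """One binomial logistic model per a priori defined class (one-vs-rest),
    fitted on a manually classified training set."""
    return {c: LogisticRegression(max_iter=1000).fit(features, (labels == c).astype(int))
            for c in classes}

def classify_points(models, features):
    """For each point, compute the per-class membership probability and
    keep the class with the highest probability."""
    classes = list(models)
    probs = np.column_stack([models[c].predict_proba(features)[:, 1] for c in classes])
    return np.array(classes)[probs.argmax(axis=1)]

# Illustrative use with made-up geometrical features (e.g. height above ground,
# local planarity, echo ratio); the real feature space is much larger.
rng = np.random.default_rng(2)
X_train = rng.standard_normal((500, 3))
y_train = rng.choice(["ground", "vegetation", "building"], size=500)
models = fit_per_class_models(X_train, y_train, ["ground", "vegetation", "building"])
print(classify_points(models, rng.standard_normal((5, 3))))
```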

    Nonparametric sequential detection

    This dissertation extends the theory of the Wilcoxon-Mann-Whitney U statistic so that it can be used to perform sequential hypothesis tests. The sequential test procedure makes use of a sequential ranking procedure similar to the one first introduced by Parent. The operating-characteristic function and the average-sample-number function for this new test are calculated as functions of the signal-to-noise ratio. The test is then shown to be efficient against several forms of alternatives, with an efficiency of 95% relative to the Wald Sequential Probability Ratio Test for a constant signal in normal noise. Finally, the test procedure is modified so that it is capable of making measurements on the channel in order to adapt itself to changes in the channel characteristics. Simulation results are presented to show that this adaptive detector can operate with a low probability of error.
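    A loose sketch of the underlying idea, assuming a noise-only reference block and fixed stopping boundaries (both are assumptions of this sketch; the dissertation's Parent-style sequential ranking procedure and boundary design are not reproduced here):

```python
import numpy as np

def sequential_mwu_detector(stream, reference, upper=3.0, lower=-2.5, max_n=200):
    """After each new observation, rank the observations received so far against
    a noise-only reference block via the Mann-Whitney U statistic, standardize it
    with its null mean and variance, and stop once a boundary is crossed."""
    m = len(reference)
    obs = []
    for x in stream:
        obs.append(x)
        n = len(obs)
        U = float(np.sum(np.asarray(obs)[:, None] > reference[None, :]))
        z = (U - n * m / 2.0) / np.sqrt(n * m * (n + m + 1) / 12.0)
        if z >= upper:
            return "signal present", n
        if z <= lower:
            return "noise only", n
        if n >= max_n:
            return "undecided", n
    return "undecided", len(obs)

rng = np.random.default_rng(3)
noise_ref = rng.standard_normal(100)                       # reference noise samples
print(sequential_mwu_detector(iter(0.8 + rng.standard_normal(500)), noise_ref))
```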