A Unifying Framework for Adaptive Radar Detection in Homogeneous plus Structured Interference-Part II: Detectors Design
This paper deals with the problem of adaptive multidimensional/multichannel
signal detection in homogeneous Gaussian disturbance with unknown covariance
matrix and structured (unknown) deterministic interference. The aforementioned
problem extends the well-known Generalized Multivariate Analysis of Variance
(GMANOVA) tackled in the open literature. In a companion paper, we have
obtained the Maximal Invariant Statistic (MIS) for the problem under
consideration, as an enabling tool for the design of suitable detectors which
possess the Constant False-Alarm Rate (CFAR) property. Herein, we focus on the
development of several theoretically-founded detectors for the problem under
consideration. First, all the considered detectors are shown to be functions of
the MIS, thus proving their CFAR property. Second, coincidence or statistical
equivalence among some of them is proved in this general signal model. Third,
strong connections to well-known simpler scenarios in the adaptive detection
literature are established. Finally, simulation results are provided for a
comparison of the proposed receivers. Comment: Submitted for journal publication.
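The CFAR property rests on the detection statistic being invariant to the unknown disturbance parameters. As a minimal illustrative sketch (not the paper's MIS-based detectors), the following computes an AMF-style adaptive statistic from secondary (target-free) data; its invariance to a common scaling of primary and secondary data is one ingredient of CFAR behavior. All names are illustrative.

```python
import numpy as np

def amf_statistic(z, Z_sec, v):
    """AMF-style adaptive statistic |v^H S^{-1} z|^2 / (v^H S^{-1} v),
    where S is the scatter matrix of the secondary (target-free) data."""
    S = Z_sec @ Z_sec.conj().T          # scatter matrix from secondary data
    Si = np.linalg.inv(S)
    num = np.abs(v.conj() @ Si @ z) ** 2
    den = np.real(v.conj() @ Si @ v)
    return float(num / den)
```

Scaling both the cell under test and the secondary data by the same constant leaves the statistic unchanged, which is exactly the kind of invariance with respect to the disturbance power that underlies CFAR behavior.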
A novel approach to robust radar detection of range-spread targets
This paper proposes a novel approach to robust radar detection of
range-spread targets embedded in Gaussian noise with unknown covariance matrix.
The idea is to model the useful target echo in each range cell as the sum of a
coherent signal and a random component that makes the signal-plus-noise
hypothesis more plausible in the presence of mismatches. Moreover, the unknown
power of the random components, to be estimated from the observables, is
introduced to optimize performance when the mismatch is absent. The generalized
likelihood ratio test (GLRT) for the problem at hand is considered. In
addition, a new parametric detector that encompasses the GLRT as a special case
is also introduced and assessed. The performance assessment shows the
effectiveness of the idea, also in comparison with natural competitors. Comment: 28 pages, 8 figures.
On Time-Reversal Imaging by Statistical Testing
This letter is focused on the design and analysis of computational wideband
time-reversal imaging algorithms, designed to be adaptive with respect to the
noise levels pertaining to the frequencies being employed for scene probing.
These algorithms are based on the concept of cell-by-cell processing and are
obtained as theoretically-founded decision statistics for testing the
hypothesis of single-scatterer presence (absence) at a specific location. These
statistics are also validated in comparison with the maximal invariant
statistic for the proposed problem. Comment: Reduced form accepted in IEEE Signal Processing Letters.
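The letter derives its statistics as formal decision tests; as a rough sketch of the cell-by-cell, noise-adaptive idea only, the following accumulates matched-filter energy per candidate cell over the probing frequencies, weighting each frequency by the inverse of its noise level. The steering matrices and all names are hypothetical.

```python
import numpy as np

def tr_image(freq_data, steering, noise_var):
    """Cell-by-cell wideband imaging statistic: for each candidate cell,
    accumulate matched-filter energy over frequencies, weighting each
    frequency by the inverse of its noise level (the noise-adaptive part)."""
    n_cells = steering[0].shape[1]
    T = np.zeros(n_cells)
    for y, A, s2 in zip(freq_data, steering, noise_var):
        An = A / np.linalg.norm(A, axis=0, keepdims=True)  # unit-norm steering
        T += np.abs(An.conj().T @ y) ** 2 / s2
    return T
```

Thresholding T cell by cell then amounts to testing single-scatterer presence (absence) at each location, with noisy frequencies automatically down-weighted.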
Model Order Selection Rules For Covariance Structure Classification
The adaptive classification of the interference covariance matrix structure
for radar signal processing applications is addressed in this paper. This
represents a key issue because many detection architectures are synthesized
assuming a specific covariance structure which may not necessarily coincide
with the actual one due to the joint action of the system and environment
uncertainties. The considered classification problem is cast in terms of a
multiple hypotheses test with some nested alternatives and the theory of Model
Order Selection (MOS) is exploited to devise suitable decision rules. Several
MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria
are adopted and the corresponding merits and drawbacks are discussed. At the
analysis stage, illustrative examples of the probability of correct model
selection are presented, showing the effectiveness of the proposed rules.
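As a minimal sketch of how an information criterion ranks nested covariance structures, the following scores three Gaussian models (scaled identity, diagonal, full) by AIC; the structures, names, and real-valued data are illustrative assumptions, not the paper's exact rules, which also cover the Takeuchi and Bayesian criteria.

```python
import numpy as np

def gaussian_loglik(X, Sigma):
    """Log-likelihood of the n i.i.d. zero-mean Gaussian rows of X under Sigma."""
    n, p = X.shape
    _, logdet = np.linalg.slogdet(Sigma)
    quad = np.einsum('ij,jk,ik->', X, np.linalg.inv(Sigma), X)
    return -0.5 * (n * p * np.log(2.0 * np.pi) + n * logdet + quad)

def select_structure(X):
    """Rank candidate covariance structures by AIC = -2 log L + 2k,
    where k counts the free parameters of each (nested) structure."""
    n, p = X.shape
    S = X.T @ X / n                      # ML covariance estimate, full model
    candidates = {
        'scaled identity': (np.trace(S) / p * np.eye(p), 1),
        'diagonal':        (np.diag(np.diag(S)),         p),
        'full':            (S,                 p * (p + 1) // 2),
    }
    scores = {name: -2.0 * gaussian_loglik(X, Sig) + 2.0 * k
              for name, (Sig, k) in candidates.items()}
    return min(scores, key=scores.get), scores
```

The penalty term is what arbitrates the nested alternatives: the full model always fits at least as well in likelihood, but pays for its extra parameters.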
Asymptotic robustness of Kelly's GLRT and Adaptive Matched Filter detector under model misspecification
A fundamental assumption underlying any Hypothesis Testing (HT) problem is
that the available data follow the parametric model assumed to derive the test
statistic. Nevertheless, a perfect match between the true and the assumed data
models cannot be achieved in many practical applications. In all these cases,
it is advisable to use a robust decision test, i.e. a test whose statistic
preserves (at least asymptotically) the same probability density function (pdf)
for a suitable set of possible input data models under the null hypothesis.
Building upon the seminal work of Kent (1982), in this paper we investigate the
impact of the model mismatch in a recurring HT problem in radar signal
processing applications: testing the mean of a set of Complex Elliptically
Symmetric (CES) distributed random vectors under a possibly misspecified
Gaussian data model. In particular, by using this general misspecified
framework, a new look at two popular detectors, Kelly's Generalized
Likelihood Ratio Test (GLRT) and the Adaptive Matched Filter (AMF), is
provided and their robustness properties are investigated. Comment: ISI World
Statistics Congress 2017 (ISI2017), Marrakech, Morocco, 16-21 July 2017.
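Under one common normalization (an assumption; conventions differ across papers), the two detectors share the same numerator and differ only by a data-dependent denominator factor, a relation the following sketch makes explicit:

```python
import numpy as np

def kelly_and_amf(z, Z_sec, v):
    """Kelly's GLRT and the AMF statistic built from the same scatter
    matrix S of the secondary data; they differ only by the factor
    (1 + z^H S^{-1} z) in the denominator (one common normalization)."""
    Si = np.linalg.inv(Z_sec @ Z_sec.conj().T)
    num = np.abs(v.conj() @ Si @ z) ** 2
    vSv = np.real(v.conj() @ Si @ v)
    zSz = np.real(z.conj() @ Si @ z)
    return num / (vSv * (1.0 + zSz)), num / vSv   # (Kelly, AMF)
```

By the Cauchy-Schwarz inequality the Kelly statistic stays below 1, while the AMF statistic is unbounded; the extra (1 + z^H S^{-1} z) term is one source of their different behavior when the assumed Gaussian model is mismatched.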
Foundational principles for large scale inference: Illustrations through correlation mining
When can reliable inference be drawn in the "Big Data" context? This paper
presents a framework for answering this fundamental question in the context of
correlation mining, with implications for general large scale inference. In
large scale data applications like genomics, connectomics, and eco-informatics
the dataset is often variable-rich but sample-starved: a regime where the
number of acquired samples (statistical replicates) is far fewer than the
number of observed variables (genes, neurons, voxels, or chemical
constituents). Much of recent work has focused on understanding the
computational complexity of proposed methods for "Big Data." Sample complexity,
however, has received relatively little attention, especially in the setting
where the sample size is fixed and the dimension grows without bound. To
address this gap, we develop a unified statistical framework that explicitly
quantifies the sample complexity of various inferential tasks. Sampling regimes
can be divided into several categories: 1) the classical asymptotic regime
where the variable dimension is fixed and the sample size goes to infinity; 2)
the mixed asymptotic regime where both variable dimension and sample size go to
infinity at comparable rates; 3) the purely high dimensional asymptotic regime
where the variable dimension goes to infinity and the sample size is fixed.
Each regime has its niche, but only the last regime applies to exa-scale data
dimensions. We illustrate this high dimensional framework for the problem of
correlation mining, where it is the matrix of pairwise and partial correlations
among the variables that is of interest. We demonstrate various regimes of
correlation mining based on the unifying perspective of high dimensional
learning rates and sample complexity for different structured covariance models
and different inference tasks.
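The sample-starved regime can be illustrated in a few lines: among purely independent variables, the largest sample correlation grows toward 1 as the number of variables outruns a fixed sample size (an illustrative sketch, not the paper's framework):

```python
import numpy as np

def max_spurious_corr(n, p, rng):
    """Largest off-diagonal |sample correlation| among p INDEPENDENT
    Gaussian variables observed through only n samples."""
    X = rng.standard_normal((n, p))         # columns are the p variables
    R = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(R, 0.0)                # ignore trivial self-correlations
    return float(np.max(np.abs(R)))
```

With n = 10 and p = 200 the largest spurious correlation is typically large (often above 0.8), so naive thresholding "discovers" structure in pure noise; the high dimensional learning rates quantify how the detection threshold must scale with p and n to avoid this.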