
    MIMO Radar Target Localization and Performance Evaluation under SIRP Clutter

    Multiple-input multiple-output (MIMO) radar has become a thriving subject of research during the past decades. In the MIMO radar context, it is sometimes more accurate to model the radar clutter as a non-Gaussian process, more specifically, by using the spherically invariant random process (SIRP) model. In this paper, we focus on the estimation and performance analysis of the angular spacing between two targets for MIMO radar under SIRP clutter. First, we propose an iterative maximum likelihood estimator as well as an iterative maximum a posteriori estimator for the targets' spacing parameter in the SIRP clutter context. Then we derive and compare various Cramér-Rao-like bounds (CRLBs) for performance assessment. Next, we address the problem of target resolvability by using the concept of angular resolution limit (ARL), and derive an analytical, closed-form expression of the ARL between two closely spaced targets in a MIMO radar context under SIRP clutter, based on Smith's criterion. To this end, we also obtain non-matrix, closed-form expressions for each of the CRLBs. Finally, we provide numerical simulations to assess the performance of the proposed algorithms, to validate the derived ARL expression, and to reveal the ARL's insightful properties. (Comment: 34 pages, 12 figures)
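    The SIRP model referenced here represents clutter as a compound-Gaussian process: each snapshot is a positive random "texture" scaling a zero-mean complex Gaussian "speckle" vector. Below is a minimal sketch of generating such clutter, assuming a gamma-distributed texture (which yields K-distributed clutter); the covariance structure, shape parameter, and array size are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sirp_clutter(n_snapshots, n_channels, shape=0.5, rho=0.9, rng=rng):
    """Generate SIRP (compound-Gaussian) clutter snapshots.

    Each snapshot is c = sqrt(tau) * g, where tau is a positive texture
    (here gamma-distributed, giving K-distributed clutter) and g is a
    zero-mean complex Gaussian speckle vector with covariance Sigma.
    """
    # Illustrative speckle covariance: exponentially correlated channels.
    idx = np.arange(n_channels)
    sigma = rho ** np.abs(idx[:, None] - idx[None, :])
    L = np.linalg.cholesky(sigma)

    # Complex Gaussian speckle with unit variance per channel.
    g = (rng.standard_normal((n_snapshots, n_channels))
         + 1j * rng.standard_normal((n_snapshots, n_channels))) / np.sqrt(2)
    g = g @ L.T

    # Gamma texture with unit mean; one texture value per snapshot.
    tau = rng.gamma(shape, scale=1.0 / shape, size=(n_snapshots, 1))
    return np.sqrt(tau) * g

clutter = sirp_clutter(n_snapshots=1000, n_channels=8)
print(clutter.shape)  # (1000, 8)
```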

    Mathematical optimization and game theoretic methods for radar networks

    Radar systems are undoubtedly among the most momentous discoveries of the previous century. Although radars were initially used for ship and aircraft detection, nowadays these systems are used in highly diverse fields, ranging from civil aviation, marine navigation and air defence to ocean surveillance, meteorology and medicine. Recent advances in signal processing and the constant growth of computational capabilities have led to radar systems with impressive surveillance and tracking characteristics, but on the other hand the continuous growth of distributed networks has made them susceptible to multisource interference. This thesis aims at addressing vulnerabilities of modern radar networks and further improving their characteristics through the design of signal processing algorithms and by utilizing convex optimization and game theoretic methods. In particular, the problems of beamforming, power allocation, jammer avoidance and uncertainty within the context of multiple-input multiple-output (MIMO) radar networks are addressed.
    In order to improve the beamforming performance of phased-array and MIMO radars employing two-dimensional antenna arrays, a hybrid two-dimensional Phased-MIMO radar with fully overlapped subarrays is proposed. The work considers both adaptive (convex optimization, Capon beamformer) and non-adaptive (conventional) beamforming techniques. The transmit, receive and overall beampatterns of the Phased-MIMO model are compared with the respective beampatterns of the phased-array and MIMO schemes, showing that the hybrid model provides superior beamforming capabilities.
    Incorporating game theoretic techniques into the radar field allows various vulnerabilities and problems to be investigated. Hence, a game theoretic power allocation scheme is proposed and a Nash equilibrium analysis for a multistatic MIMO network is performed. A network of radars organized into multiple clusters is considered, whose primary objective is to minimize their transmission power while satisfying a certain detection criterion. Since no communication between the clusters is assumed, non-cooperative game theoretic techniques and convex optimization methods are utilized to tackle the power adaptation problem. Along with a proof of the existence and uniqueness of the solution, important results on the SINR performance and the transmission power of the radars are derived.
    Game theory can also be applied to mitigate jammer interference in a radar network. Hence, a competitive power allocation problem for a MIMO radar system in the presence of multiple jammers is investigated. The main objective of the radar network is to minimize the total power emitted by the radars while achieving a specific detection criterion for each of the targets (jammers), whereas the intelligent jammers observe the radar transmission power and consequently decide their jamming power so as to maximize the interference to the radar system. In this context, convex optimization methods, non-cooperative game theoretic techniques and hypothesis testing are incorporated to identify the jammers and to determine the optimal power allocation. Furthermore, a proof of the existence and uniqueness of the solution is presented. Apart from resource allocation applications, game theory can also address distributed beamforming problems.
    More specifically, a distributed beamforming and power allocation technique for a radar system in the presence of multiple targets is considered. The primary goal of each radar is to minimize its transmission power while attaining an optimal beamforming strategy and satisfying a certain detection criterion for each of the targets. Initially, a strategic noncooperative game (SNG) is used, where there is no communication between the various radars of the system. Subsequently, a more coordinated game theoretic approach incorporating a pricing mechanism is adopted. Furthermore, a Stackelberg game is formulated by adding a surveillance radar to the system model, which plays the role of the leader, with the remaining radars acting as followers. For each of these games, a proof of the existence and uniqueness of the solution is presented.
    In the aforementioned game theoretic applications, the radars are assumed to know the exact radar cross section (RCS) parameters of the targets, and thus the exact channel gains of all players, which may not be feasible in a real system. Therefore, in the last part of this thesis, uncertainty regarding the channel gains between the radars and the targets is introduced, which originates from the RCS fluctuations of the targets. Bayesian game theory provides a framework for addressing such problems of incomplete information. Hence, a Bayesian game is proposed, in which each radar selfishly maximizes its SINR under a predefined power constraint.
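    As a point of reference for the power-allocation games described above, the sketch below shows a standard best-response power update under an SINR (detection) constraint, in the spirit of classical distributed power control of the Foschini–Miljanic type. The channel gains, SINR targets and noise levels are made-up values, and this is not the thesis's exact game formulation.

```python
import numpy as np

def best_response_power(G, gamma, noise, n_iter=100, tol=1e-9):
    """Iterate each radar's best response: transmit the minimum power that
    meets its SINR target given the interference caused by the others.

    G[i, j] is the gain from transmitter j into receiver i (G[i, i] is the
    useful gain); gamma[i] is radar i's SINR (detection) target.
    """
    n = G.shape[0]
    p = np.zeros(n)
    for _ in range(n_iter):
        p_new = np.empty(n)
        for i in range(n):
            interference = noise[i] + G[i].dot(p) - G[i, i] * p[i]
            p_new[i] = gamma[i] * interference / G[i, i]
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return p

# Illustrative 3-radar example (all values hypothetical).
G = np.array([[1.00, 0.10, 0.05],
              [0.20, 1.20, 0.10],
              [0.05, 0.15, 0.90]])
gamma = np.array([2.0, 2.0, 2.0])   # SINR / detection targets
noise = np.array([0.1, 0.1, 0.1])
p_star = best_response_power(G, gamma, noise)
sinr = np.array([G[i, i] * p_star[i] /
                 (noise[i] + G[i].dot(p_star) - G[i, i] * p_star[i])
                 for i in range(3)])
print(p_star, sinr)  # at the fixed point each SINR sits at its target
```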

    Applications of Compressive Sampling Technique to Radar and Localization

    During the last decade, the emerging technique of compressive sampling (CS) has become a popular subject in signal processing and sensor systems. In particular, CS breaks through the limits imposed by Nyquist sampling theory and is able to substantially reduce the huge amount of data generated by different sources. The technique of CS has been successfully applied in signal acquisition, image compression, and data reduction. Although the theory of CS has been investigated for some radar and localization problems, several important questions have not been answered yet. For example, the performance of CS radar in a cluttered environment has not been comprehensively studied. Applying CS to passive radars and electronic warfare receivers is another area that needs more attention. Also, it is well known that applying this strategy leads to extra computational costs, which might be prohibitive in large localization networks. In this chapter, we first discuss the practical issues in the process of implementing CS radars and localization systems. Then, we present some promising and efficient solutions to overcome these problems.
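    As a minimal illustration of the CS recovery step underlying these applications, the sketch below reconstructs a sparse signal from far fewer random measurements than its length using a simple orthogonal matching pursuit; the dimensions and sparsity level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A to explain y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = np.argmax(np.abs(A.T @ residual))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

n, m, k = 256, 64, 5                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # compressive measurements
x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x_true))           # near zero in the noiseless case
```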

    Impairments in ground moving target indicator (GMTI) radar

    Radars on multiple distributed airborne or ground-based moving platforms are of increasing interest, since they can be deployed in close proximity to the event under investigation and thus offer remarkable sensing opportunities. Ground moving target indicator (GMTI) radar detects and localizes moving targets in the presence of ground clutter and other interference sources. Space-time adaptive processing (STAP) implemented with antenna arrays has been a classical approach to clutter cancellation in airborne radar. One of the challenges with STAP is that the minimum detectable velocity (MDV) of targets is a function of the baseline of the antenna array: the larger the baseline (i.e., the narrower the beam), the lower the MDV. Unfortunately, increasing the baseline of a uniform linear array (ULA) entails a commensurate increase in the number of elements. An alternative approach to increasing the resolution of a radar is to use a large, but sparse, random array. The proliferation of relatively inexpensive autonomous sensing vehicles, such as unmanned airborne systems, raises the question of whether it is possible to carry out GMTI with distributed airborne platforms. A major obstacle to implementing distributed GMTI is the synchronization of autonomous moving sensors. For range processing, GMTI relies on synchronized sampling of the signals received at the array, while STAP requires time, frequency and phase synchronization for beamforming and interference cancellation. Distributed sensors have independent oscillators, which are naturally not synchronized and are each subject to different stochastic phase drift. Each sensor has its own local oscillator, unlike a traditional array in which all sensors are connected to the same local oscillator. Even when tuned to the same frequency, phase errors between the sensors develop over time due to phase instabilities. These phase errors affect a distributed STAP system.
    In this dissertation, a distributed STAP application in which the sensors move autonomously is envisioned, and the problems of tracking and detection for the proposed architecture are of central importance. The first part focuses on developing a direct tracking approach to multiple targets by distributed radar sensors. A challenging scenario of a distributed multiple-input multiple-output (MIMO) radar system is presented, in which relatively simple moving sensors send observations to a fusion center where most of the baseband processing is performed. The sensors are assumed to maintain time synchronization, but are not phase synchronized. The conventional approach to localization by distributed sensors is to estimate intermediate parameters from the received signals, for example the time delay or the angle of arrival; these parameters are subsequently used to deduce the location and velocity of the target(s). Such classical techniques are referred to as indirect localization. Recently, new techniques have been developed that estimate target locations directly from the signal measurements, without an intermediate estimation step. The objective is to develop a direct tracking algorithm for the state parameters of multiple moving targets using widely distributed moving sensors. A potential candidate for the tracker is the extended Kalman filter.
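    The abstract names the extended Kalman filter as a candidate tracker. For reference, a textbook EKF predict/update step for a 2-D constant-velocity target observed in range and bearing is sketched below; it is a generic formulation, not the dissertation's direct-tracking algorithm, and all parameter values are illustrative.

```python
import numpy as np

def ekf_step(x, P, z, dt, q, r):
    """One predict/update cycle of an extended Kalman filter for a
    2-D constant-velocity target observed in range and bearing.

    State x = [px, py, vx, vy]; measurement z = [range, bearing].
    """
    # Constant-velocity prediction.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)

    # Nonlinear range-bearing measurement and its Jacobian.
    px, py = x[0], x[1]
    rng_ = np.hypot(px, py)
    h = np.array([rng_, np.arctan2(py, px)])
    H = np.array([[px / rng_, py / rng_, 0, 0],
                  [-py / rng_**2, px / rng_**2, 0, 0]])

    # Standard EKF update with bearing-error wrapping.
    S = H @ P @ H.T + r * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)
    innov = z - h
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi
    x = x + K @ innov
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Hypothetical single step: prior state, covariance, and one measurement.
x, P = np.array([100.0, 50.0, -1.0, 0.5]), np.eye(4) * 10.0
z = np.array([112.0, 0.46])           # made-up range (m) and bearing (rad)
x, P = ekf_step(x, P, z, dt=1.0, q=0.01, r=0.1)
print(x)
```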
    In the second part of the dissertation, the effect of phase noise on space-time adaptive processing in general, and on spatial processing in particular, is studied. A power-law model is assumed for the phase noise. It is shown that a composite model with several terms is required to properly model the phase noise, and further that the phase noise exhibits almost linear trajectories. The effect of phase noise on spatial processing is analyzed, and simulation results illustrate how phase noise degrades performance in terms of the beam pattern and the receiver operating characteristics. A STAP application, in which spatial processing is performed (together with Doppler processing) over a coherent processing interval, is envisioned.
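    The sketch below gives a minimal simulation of the qualitative effect described here: independent oscillator phase drift across distributed sensors erodes the coherent array gain. A simple random-walk (Wiener) phase model is used purely for illustration, rather than the composite power-law model analyzed in the dissertation, and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

n_sensors = 16
n_pulses = 128
sigma_phi = 0.05      # per-pulse phase-drift std (rad), illustrative

# Unit-amplitude target returns, already steered to broadside, so perfect
# phase coherence would give an array gain equal to n_sensors.
signal = np.ones((n_pulses, n_sensors), dtype=complex)

# Independent random-walk (Wiener) phase drift for each sensor's oscillator.
phase = np.cumsum(sigma_phi * rng.standard_normal((n_pulses, n_sensors)), axis=0)
received = signal * np.exp(1j * phase)

coherent_gain = np.abs(received.sum(axis=1)) ** 2 / n_sensors
print("ideal array gain:", n_sensors)
print("mean gain with phase drift:", coherent_gain.mean())
print("gain at the last pulse:", coherent_gain[-1])  # degrades as drift accumulates
```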

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far smaller than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of the recent work has focused on understanding the computational complexity of proposed methods for "Big Data"; sample complexity, however, has received relatively less attention, especially in the setting where the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime, where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime, where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high-dimensional asymptotic regime, where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter applies to exa-scale data dimensions. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
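    The sketch below illustrates the sample-starved, purely high-dimensional regime discussed here: with the sample size n held fixed, the number of spuriously large pairwise sample correlations among independent variables grows as the dimension p increases. The dimensions and screening threshold are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 10                      # fixed number of samples (statistical replicates)
threshold = 0.8             # screening threshold on |sample correlation|

for p in (50, 200, 800):    # growing number of observed variables
    X = rng.standard_normal((n, p))      # independent variables: no true correlation
    C = np.corrcoef(X, rowvar=False)     # p x p sample correlation matrix
    iu = np.triu_indices(p, k=1)         # all distinct variable pairs
    n_discoveries = np.count_nonzero(np.abs(C[iu]) > threshold)
    print(f"p = {p:4d}: {n_discoveries} pairs exceed |rho| > {threshold}")
```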