
    Sparsity Promoting Off-grid Methods with Applications in Direction Finding

    University of Minnesota Ph.D. dissertation. May 2017. Major: Electrical/Computer Engineering. Advisor: Mostafa Kaveh. 1 computer file (PDF); x, 99 pages.

    In this dissertation, the problem of directions-of-arrival (DoA) estimation is studied through the application of sparsity-promoting regularization techniques from compressed sensing. Compressed sensing can recover high-dimensional signals with a sparse representation from very few linear measurements by nonlinear optimization. By exploiting the sparse representation of the multiple measurement vectors or of the spatial covariance matrix of correlated or uncorrelated sources, the DoA estimation problem can be formulated in the framework of sparse signal recovery with high resolution. Three main topics are covered in this dissertation: recovery methods for the sparse model with structured perturbations, continuous sparse recovery methods in the super-resolution framework, and off-grid DoA estimation with array self-calibration. These topics are summarized below.

    For the first topic, structured perturbation in the sparse model is considered. A major limitation of most methods that exploit sparse spectral models for estimating directions-of-arrival stems from the fixed model dictionary, which is formed by array response vectors over a discrete search grid of possible directions. In general, the array responses to the actual DoAs will most likely not be members of such a dictionary. Thus, the sparse spectral signal model with uncertainty in the form of linearized dictionary parameter mismatch is considered, and the dictionary matrix is reformulated as the product of a fixed base dictionary and a sparse matrix. Based on this sparse model, we propose several convex optimization algorithms. We are also concerned with the development of a computationally efficient optimization algorithm for off-grid direction finding using a sparse observation model. With an emphasis on designing efficient algorithms, various sparse problem formulations are considered, such as the unconstrained, primal-dual, and conic formulations. Because of their nondifferentiable objective functions, these problems are still challenging to solve efficiently. Thus, the Nesterov smoothing methodology is utilized to reformulate the nonsmooth functions into smooth ones, and the accelerated proximal gradient algorithm is adopted to solve the smoothed optimization problem. A convergence analysis is conducted as well. The accuracy and efficiency of the smoothed sparse recovery methods are demonstrated on a DoA estimation example.

    In the second topic, estimation of directions-of-arrival in the spatial covariance model is studied. Unlike compressed sensing methods, which discretize the search domain into possible directions on a grid, the theory of super resolution is applied to estimate DoAs in the continuous domain. We reformulate the spatial spectral covariance model into a multiple measurement vectors (MMV)-like model and propose a block total variation norm minimization approach, which is the analog of the Group Lasso in the super-resolution framework and promotes group sparsity. The DoAs can be estimated by solving its dual problem via semidefinite programming. This gridless recovery approach is verified by simulation results for both uncorrelated and correlated source signals.
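
    As an illustration of the first topic, here is a minimal sketch (not the dissertation's exact formulation) of an accelerated proximal gradient (FISTA) iteration applied to an ℓ1-regularized on-grid model with a linearized dictionary-mismatch block: the dictionary [A B] stacks the base steering vectors A with their angular derivatives B, so that small off-grid offsets are absorbed into additional sparse coefficients. The array geometry, grid, source placement, and regularization weight are hypothetical, and the Nesterov smoothing step is omitted because the least-squares data term used here is already smooth.

        import numpy as np

        # Illustrative FISTA sketch for an l1-regularized on-grid DoA model with a
        # linearized dictionary-mismatch block:  y ~ [A B] z,
        # A = steering vectors on a coarse grid, B = their derivatives w.r.t. angle.
        rng = np.random.default_rng(0)
        M, N = 10, 181                                   # sensors, grid points (hypothetical)
        grid = np.deg2rad(np.linspace(-90, 90, N))
        m = np.arange(M)[:, None]                        # ULA element indices
        A = np.exp(2j * np.pi * 0.5 * m * np.sin(grid))  # base dictionary (half-wavelength spacing)
        B = 2j * np.pi * 0.5 * m * np.cos(grid) * A      # angular derivatives of the columns
        D = np.hstack([A, B])

        z_true = np.zeros(2 * N, dtype=complex)
        z_true[40], z_true[120] = 1.0, 0.8               # two on-grid sources
        y = D @ z_true + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

        lam = 0.1 * np.max(np.abs(D.conj().T @ y))       # heuristic regularization level
        L = np.linalg.norm(D, 2) ** 2                    # Lipschitz constant of the gradient
        z = np.zeros(2 * N, dtype=complex)
        w, t = z.copy(), 1.0
        for _ in range(300):                             # FISTA iterations
            g = w - (D.conj().T @ (D @ w - y)) / L       # gradient step at the momentum point
            mag = np.maximum(np.abs(g) - lam / L, 0.0)   # complex soft-thresholding (prox of l1)
            z_new = mag * np.exp(1j * np.angle(g))
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            w = z_new + ((t - 1) / t_new) * (z_new - z)  # momentum update
            z, t = z_new, t_new

        print("top grid indices by magnitude:", np.argsort(-np.abs(z[:N]))[:3])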
    In the last topic, we consider the array calibration issue for DoA estimation and extend the previously considered single measurement vector model to multiple measurement vectors. By exploiting multiple measurement snapshots, a modified nuclear norm minimization problem is proposed to recover a low-rank matrix with high probability. The definition of the linear operator for the MMV model is given, and its corresponding matrix representation is derived so that a reformulated convex optimization problem can be solved numerically. To alleviate the computational complexity of the method, we use the singular value decomposition (SVD) to reduce the problem size. Furthermore, structured perturbation in the sparse array self-calibration estimation problem is considered as well. The performance and efficiency of the proposed methods are demonstrated by numerical results.
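
    The SVD-based reduction of the problem size mentioned above can be sketched as follows (in the spirit of the well-known ℓ1-SVD reduction): the M x T snapshot matrix is compressed onto its K dominant right singular vectors before the recovery problem is solved. This shows only the compression step, not the modified nuclear norm formulation itself; the sizes and the assumed number of sources K are hypothetical.

        import numpy as np

        # Compress an M x T snapshot matrix to M x K by projecting onto the K dominant
        # right singular vectors; the subsequent recovery is then solved on Y_sv.
        rng = np.random.default_rng(1)
        M, T, K = 8, 200, 2                          # sensors, snapshots, assumed number of sources
        Y = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))

        U, s, Vh = np.linalg.svd(Y, full_matrices=False)
        Y_sv = Y @ Vh.conj().T[:, :K]                # M x K reduced measurement matrix
        assert np.allclose(Y_sv, U[:, :K] * s[:K])   # equivalent closed form
        print(Y.shape, "->", Y_sv.shape)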

    Wideband DOA Estimation via Sparse Bayesian Learning over a Khatri-Rao Dictionary

    This paper deals with wideband direction-of-arrival (DOA) estimation by exploiting the multiple measurement vectors (MMV) based sparse Bayesian learning (SBL) framework. First, the array covariance matrices at different frequency bins are focused to the reference frequency by a conventional focusing technique and then transformed into vector form. Then a matrix called the Khatri-Rao dictionary is constructed using the Khatri-Rao product, and the multiple focused array covariance vectors are taken as the new observations. DOA estimation then amounts to finding the sparsest representations of the new observations over the Khatri-Rao dictionary via SBL. The performance of the proposed method is compared with other well-known focusing-based wideband algorithms and with the Cramer-Rao lower bound (CRLB). The results show that it achieves higher resolution and accuracy and can reach the CRLB under relatively demanding conditions. Moreover, the method imposes no restriction on the pattern of the signal power spectral density, and, owing to the increased number of rows of the dictionary, it can resolve more sources than sensors.
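
    A minimal sketch of the Khatri-Rao dictionary construction described above, assuming a hypothetical uniform linear array and angular grid: each (focused) array covariance matrix is vectorized, and the dictionary column for a grid angle is the Kronecker product of the conjugated and unconjugated steering vector for that angle, so that vec(R) is approximately a sparse, nonnegative combination of the columns plus a noise term.

        import numpy as np

        # Khatri-Rao dictionary: column n is kron(conj(a_n), a_n), so that
        # vec(R) ~ D @ p + sigma^2 * vec(I) with p the (sparse) source powers.
        M, N = 8, 181                                    # sensors, grid points (hypothetical)
        grid = np.deg2rad(np.linspace(-90, 90, N))
        m = np.arange(M)[:, None]
        A = np.exp(2j * np.pi * 0.5 * m * np.sin(grid))  # ULA steering vectors on the grid

        D = np.column_stack([np.kron(A[:, n].conj(), A[:, n]) for n in range(N)])

        # Example: covariance of two uncorrelated sources plus white noise, vectorized.
        p_true = np.zeros(N)
        p_true[45], p_true[130] = 1.0, 0.5
        R = (A * p_true) @ A.conj().T + 0.1 * np.eye(M)
        r = R.reshape(M * M, order='F')                  # column-major vec(R)
        print(D.shape, np.allclose(r, D @ p_true + 0.1 * np.eye(M).ravel()))  # model check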

    A Compact Formulation for the ℓ2,1 Mixed-Norm Minimization Problem

    Parameter estimation from multiple measurement vectors (MMVs) is a fundamental problem in many signal processing applications, e.g., spectral analysis and direction-of-arrival estimation. Recently, this problem has been addressed using prior information in the form of a jointly sparse signal structure. A prominent approach for exploiting joint sparsity considers mixed-norm minimization, in which, however, the problem size grows with the number of measurements and with the desired resolution. In this work we derive an equivalent, compact reformulation of the ℓ2,1 mixed-norm minimization problem which provides new insights on the relation between different existing approaches for jointly sparse signal reconstruction. The reformulation builds upon a compact parameterization, which models the row norms of the sparse signal representation as the parameters of interest, resulting in a significant reduction of the MMV problem size. Given the sparse vector of row norms, the jointly sparse signal can be computed from the MMVs in closed form. For the special case of uniform linear sampling, we present an extension of the compact formulation for gridless parameter estimation by means of semidefinite programming. Furthermore, we derive in this case, from our compact problem formulation, the exact equivalence between the ℓ2,1 mixed-norm minimization and atomic-norm minimization. Additionally, for the case of irregular sampling or a large number of samples, we present a low-complexity, grid-based implementation based on the coordinate descent method.
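
    For reference, here is a minimal sketch of the full-size ℓ2,1 mixed-norm MMV problem that the paper reformulates, min over X of 0.5*||Y - A X||_F^2 + lambda * (sum of row 2-norms of X), solved with a generic proximal-gradient iteration (row-wise shrinkage). It is not the paper's compact or coordinate-descent implementation; the sizes, the random dictionary, and the regularization weight are hypothetical.

        import numpy as np

        # Proximal gradient for the l2,1-regularized MMV problem
        #   min_X 0.5 * ||Y - A X||_F^2 + lam * sum_i ||row_i(X)||_2
        rng = np.random.default_rng(2)
        M, N, T = 12, 90, 30                               # sensors, grid size, snapshots (hypothetical)
        A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
        X_true = np.zeros((N, T), dtype=complex)
        X_true[[10, 55], :] = rng.standard_normal((2, T))  # two jointly sparse (active) rows
        Y = A @ X_true + 0.01 * rng.standard_normal((M, T))

        lam = 0.1 * np.max(np.linalg.norm(A.conj().T @ Y, axis=1))  # heuristic weight
        L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
        X = np.zeros((N, T), dtype=complex)
        for _ in range(500):
            G = X - (A.conj().T @ (A @ X - Y)) / L         # gradient step
            row_norms = np.linalg.norm(G, axis=1, keepdims=True)
            shrink = np.maximum(1.0 - (lam / L) / np.maximum(row_norms, 1e-12), 0.0)
            X = shrink * G                                 # row-wise (group) soft-thresholding

        print("rows with largest l2 norm:", np.argsort(-np.linalg.norm(X, axis=1))[:3])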