6 research outputs found

    Interference Removal for Radar/Communication Co-existence: the Random Scattering Case

    In this paper we consider an uncooperative spectrum-sharing scenario in which a radar system is overlaid on a pre-existing wireless communication system. Given the orders of magnitude of the transmitted powers in play, we focus on interference mitigation at the communication receiver. We explicitly account for the reverberation produced by the (typically high-power) radar transmitter, whose signal hits scattering centers (whether targets or clutter) and produces interference at the communication receiver, which is assumed to operate in an unsynchronized and uncoordinated scenario. We first show that receiver design amounts to solving a non-convex problem of joint interference removal and data demodulation; we then introduce two algorithms, both exploiting the sparsity of a suitable representation of the interference and of the vector containing the errors of the data block. The first algorithm is essentially a relaxed constrained Atomic Norm minimization, while the second relies on a two-stage processing structure based on alternating minimization. The merits of these algorithms are demonstrated through extensive simulations; interestingly, the two-stage alternating minimization algorithm achieves satisfactory performance with moderate computational complexity.
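    To make the two-stage idea concrete, here is a minimal sketch under assumptions of my own (a toy model, not the paper's radar/communication signal model): BPSK symbols corrupted by strong narrowband interference that is sparse in the Fourier domain, alternating between hard-thresholded interference estimation and symbol demodulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_tones = 512, 3
t = np.arange(n)

# Hypothetical toy model: BPSK data hit by a few strong tones that are
# sparse in the Fourier domain, standing in for radar reverberation
# from a small number of scattering centers.
data = rng.choice([-1.0, 1.0], size=n)
freqs = rng.choice(np.arange(5, n // 2 - 5), size=n_tones, replace=False)
phases = rng.uniform(0, 2 * np.pi, size=n_tones)
interference = sum(20.0 * np.cos(2 * np.pi * f * t / n + p)
                   for f, p in zip(freqs, phases))
y = data + interference + 0.05 * rng.normal(size=n)

# Two-stage alternating minimization: (1) with the data estimate fixed,
# re-estimate the interference by keeping the strongest Fourier bins of
# the residual (hard thresholding exploits its sparsity); (2) with the
# interference removed, demodulate by projecting onto the BPSK alphabet.
d_hat = np.zeros(n)
for _ in range(10):
    spectrum = np.fft.fft(y - d_hat)
    keep = np.abs(spectrum) >= np.sort(np.abs(spectrum))[-2 * n_tones]
    i_hat = np.fft.ifft(spectrum * keep).real
    d_hat = np.sign(y - i_hat)

print("symbol errors:", int(np.sum(d_hat != data)))
```

    The interference is tens of times stronger than the data, yet because it is sparse in a known domain the alternation separates the two components at low cost, which is the qualitative behavior the abstract attributes to the two-stage algorithm.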

    From Symmetry to Geometry: Tractable Nonconvex Problems

    As science and engineering have become increasingly data-driven, the role of optimization has expanded to touch almost every stage of the data analysis pipeline, from signal and data acquisition to modeling and prediction. The optimization problems encountered in practice are often nonconvex. While challenges vary from problem to problem, one common source of nonconvexity is nonlinearity in the data or measurement model. Nonlinear models often exhibit symmetries, creating complicated, nonconvex objective landscapes with multiple equivalent solutions. Nevertheless, simple methods (e.g., gradient descent) often perform surprisingly well in practice. The goal of this survey is to highlight a class of tractable nonconvex problems which can be understood through the lens of symmetries. These problems exhibit a characteristic geometric structure: local minimizers are symmetric copies of a single "ground truth" solution, while other critical points occur at balanced superpositions of symmetric copies of the ground truth and exhibit negative curvature in directions that break the symmetry. This structure enables efficient methods to obtain global minimizers. We discuss examples of this phenomenon arising from a wide range of problems in imaging, signal processing, and data analysis. We highlight the key role of symmetry in shaping the objective landscape and discuss the different roles of rotational and discrete symmetries. This area is rich with observed phenomena and open problems; we close by highlighting directions for future research.
    Comment: review paper submitted to SIAM Review, 34 pages, 10 figures
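    The landscape structure described above can be seen already in one dimension. The following sketch (my own toy example, not taken from the survey) runs plain gradient descent on f(x) = (x² − 1)²/4, which has a discrete sign symmetry f(x) = f(−x): its local minimizers x = ±1 are symmetric copies of one ground truth, and the balanced superposition x = 0 is a critical point with negative curvature (f''(0) = −1), so gradient descent from a generic start escapes it.

```python
import numpy as np

def grad(x):
    # f(x) = (x**2 - 1)**2 / 4, so f'(x) = x * (x**2 - 1).
    return x * (x**2 - 1)

rng = np.random.default_rng(1)
x = rng.normal()          # generic initialization (x = 0 has measure zero)
for _ in range(500):
    x -= 0.1 * grad(x)    # plain gradient descent, fixed step size

# x converges to +1 or -1 depending on the sign of the initialization;
# both are global minimizers related by the symmetry.
print(round(x, 6))
```

    The same picture, with the ground truth replicated under rotations or permutations instead of a sign flip, is what makes the higher-dimensional problems in the survey tractable.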

    Subspace Learning for Data Arising from a Union of Subspaces of High Relative Dimension

    As statistical machine learning has grown rapidly over the past decades, the ability to process big and corrupted data has become increasingly important. One of the major challenges is that structured data, such as images, videos, and 3D point clouds, involved in many application scenarios are high-dimensional. Conventional techniques usually approximate high-dimensional data with low-dimensional structures by fitting the data with one or more linear subspaces. However, their theory and algorithms are restricted to the setting in which the underlying subspaces have a low relative dimension compared to the ambient space. This thesis attempts to advance the understanding of subspace learning for data arising from subspaces of high relative dimension, and to develop efficient algorithms for handling big and corrupted data. The first major contribution of this thesis is a theoretical analysis that extends Dual Principal Component Pursuit (DPCP), a non-convex approach that learns a hyperplane from noiseless data, to learn a subspace of any dimension from noisy data. We provide geometric and probabilistic analyses characterizing how the principal angles between the global solution and the orthogonal complement of the subspace behave as a function of the noise level. Moreover, we improve the DPCP theory in the multi-hyperplane case with a more interpretable geometric analysis and a new statistical analysis. The second major contribution of this thesis is the development of a linearly convergent method for non-convex optimization on the Grassmannian. We show that if the objective function satisfies a certain Riemannian regularity condition (RRC) with respect to some point on the Grassmannian, then a Projected Riemannian Sub-Gradient Method (PRSGM) converges at a linear rate to that point. In particular, we prove that the DPCP problem for learning a single subspace satisfies the RRC, and that PRSGM converges linearly to a neighborhood of the orthogonal complement of the subspace with error proportional to the noise level. We also extend the RRC to DPCP for a union of hyperplanes and prove linear convergence of PRSGM to a specific hyperplane. Finally, both synthetic and real experiments demonstrate the superiority of the proposed method.
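    The hyperplane case of DPCP admits a compact sketch. The following toy instance (dimensions, step schedule, and noise-free setup are my own illustrative choices, not the thesis' experiments) solves min ||Xᵀb||₁ over unit vectors b with a projected subgradient method on the sphere, using the geometrically decaying step size associated with linear convergence.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_in, n_out = 10, 500, 50

# Toy DPCP instance: inliers lie on the hyperplane orthogonal to b_true,
# contaminated by random outliers drawn from the full ambient space.
b_true = np.zeros(d)
b_true[0] = 1.0
inliers = rng.normal(size=(d, n_in))
inliers[0] = 0.0                     # force inliers into the hyperplane
outliers = rng.normal(size=(d, n_out))
X = np.hstack([inliers, outliers])

# Projected subgradient method on the sphere: take a normalized
# subgradient step on b -> ||X^T b||_1, project back to the unit sphere,
# and decay the step size geometrically.
b = rng.normal(size=d)
b /= np.linalg.norm(b)
mu = 0.1
for _ in range(300):
    g = X @ np.sign(X.T @ b)         # subgradient of ||X^T b||_1
    b = b - mu * g / np.linalg.norm(g)
    b /= np.linalg.norm(b)           # projection back to the sphere
    mu *= 0.97

print("alignment with true normal:", abs(b @ b_true))
```

    Inliers contribute zero to the objective exactly when b is the hyperplane's normal, which is why the iterate aligns with b_true (up to sign) despite the nonconvexity and the outliers.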