19 research outputs found

    Approches tomographiques structurelles pour l'analyse du milieu urbain par tomographie SAR THR : TomoSAR

    No full text
    SAR tomography exploits multiple images of the same area, acquired from slightly different angles, to retrieve the 3-D distribution of the complex reflectivity on the ground. Because the transmitted waves are coherent, the missing spatial information (along the vertical axis) is encoded in the phase of the pixels. Many methods have been proposed in recent years to retrieve this information. However, the natural redundancies of the scene are generally not exploited to improve the tomographic estimation step. This Ph.D. thesis presents new approaches that regularize the reflectivity density estimated through SAR tomography by exploiting urban geometrical structures.
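
    The elevation information mentioned above is recoverable because each acquisition observes a scatterer at a given height with a phase proportional to its perpendicular baseline. Below is a minimal, illustrative sketch of the classical focusing baseline (conventional beamforming scanned over candidate elevations); the geometry values, variable names, and noise level are assumptions for illustration, and this is not the regularized method developed in the thesis.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)

# Illustrative airborne-like geometry (assumed values, not taken from the thesis)
wavelength = 0.031                        # metres (X-band)
slant_range = 3.0e3                       # metres
baselines = np.linspace(-30.0, 30.0, 61)  # perpendicular baselines of the image stack

def steering(s):
    """Phase history across the stack for a scatterer at elevation s (metres)."""
    return np.exp(1j * 4 * np.pi * baselines * s / (wavelength * slant_range))

# One layover pixel: two scatterers (e.g. ground at 2 m, facade at 15 m) plus noise
y = steering(2.0) + 0.8 * steering(15.0)
y += 0.1 * (rng.standard_normal(baselines.size) + 1j * rng.standard_normal(baselines.size))

# Conventional tomographic focusing: matched filter scanned over candidate elevations
elevations = np.linspace(-10.0, 30.0, 801)
profile = np.array([np.abs(steering(s).conj() @ y) ** 2 for s in elevations])
peaks, _ = find_peaks(profile, height=0.3 * profile.max())
print("estimated scatterer elevations (m):", elevations[peaks])
```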

    Sparse Array Beamformer Design via ADMM

    Full text link
    In this paper, we devise a sparse array design algorithm for adaptive beamforming. Our strategy is based on finding a sparse beamformer weight to maximize the output signal-to-interference-plus-noise ratio (SINR). The proposed method utilizes the alternating direction method of multipliers (ADMM), and admits closed-form solutions at each ADMM iteration. The algorithm convergence properties are analyzed by showing the monotonicity and boundedness of the augmented Lagrangian function. In addition, we prove that the proposed algorithm converges to the set of Karush-Kuhn-Tucker stationary points. Numerical results exhibit its excellent performance, which is comparable to that of the exhaustive search approach, slightly better than those of state-of-the-art solvers, including the semidefinite relaxation (SDR), its variant (SDR-V), and the successive convex approximation (SCA) approaches, and significantly better than those of several other sparse array design strategies, in terms of output SINR. Moreover, the proposed ADMM algorithm outperforms the SDR, SDR-V, and SCA methods in terms of computational complexity. Comment: Accepted by IEEE Transactions on Signal Processing.
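
    The closed-form per-iteration updates referred to above can be illustrated on a simplified relative of the problem: an l1-regularized MVDR design in which one ADMM block solves an equality-constrained quadratic step in closed form and the other applies complex soft-thresholding to promote sparsity. This is a hedged stand-in, not the paper's exact SINR-maximizing formulation; all parameter values and names are illustrative.

```python
import numpy as np

def soft_threshold(x, kappa):
    """Complex soft-thresholding: shrink magnitudes, keep phases."""
    mag = np.maximum(np.abs(x), 1e-12)
    return np.where(mag > kappa, (1.0 - kappa / mag) * x, 0.0)

def sparse_mvdr_admm(R, a, lam=0.05, rho=1.0, iters=300):
    """Illustrative ADMM for:  min_w  w^H R w + lam*||w||_1  s.t.  a^H w = 1.

    Each iteration is closed form: an equality-constrained quadratic step for w,
    soft-thresholding for the splitting variable z, and a dual ascent on u.
    """
    n = len(a)
    z = np.zeros(n, dtype=complex)
    u = np.zeros(n, dtype=complex)
    A_inv = np.linalg.inv(R + (rho / 2.0) * np.eye(n))
    for _ in range(iters):
        rhs = (rho / 2.0) * (z - u)
        mu = (a.conj() @ A_inv @ rhs - 1.0) / (a.conj() @ A_inv @ a)  # enforces a^H w = 1
        w = A_inv @ (rhs - mu * a)
        z = soft_threshold(w + u, lam / rho)
        u = u + w - z
    return w, z  # w satisfies the constraint; z is its sparse copy

# Toy usage: 10-element half-wavelength ULA, strong interferer at 30 degrees
n = 10
steer = lambda deg: np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(deg)))
R = 100.0 * np.outer(steer(30), steer(30).conj()) + np.eye(n)  # interference + noise
w, z = sparse_mvdr_admm(R, steer(0))
print("selected elements:", np.flatnonzero(np.abs(z) > 1e-3))
```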

    Overcoming DoF Limitation in Robust Beamforming: A Penalized Inequality-Constrained Approach

    Full text link
    A well-known challenge in beamforming is how to optimally utilize the degrees of freedom (DoF) of the array to design a robust beamformer, especially when the array DoF is smaller than the number of sources in the environment. In this paper, we leverage constrained convex optimization and propose a penalized inequality-constrained minimum variance (P-ICMV) beamformer to address this challenge. Specifically, we propose a beamformer with a well-targeted objective function and inequality constraints to achieve the design goals. The constraints on interferences penalize the maximum gain of the beamformer in any interfering direction. This can efficiently mitigate the total interference power regardless of whether the number of interfering sources is less than the array DoF. Multiple robust constraints on target protection and interference suppression can be introduced to increase the robustness of the beamformer against steering vector mismatch. By integrating noise reduction, interference suppression, and target protection, the proposed formulation can efficiently obtain a robust beamformer design while optimally trading off the various design goals. When the array DoF is smaller than the number of interfering sources, the proposed formulation can effectively allocate the limited DoF across all of the sources to obtain the best overall interference suppression. To numerically solve this problem, we formulate the P-ICMV beamformer design as a convex second-order cone program (SOCP) and propose a low-complexity iterative algorithm based on the alternating direction method of multipliers (ADMM). Three applications are simulated to demonstrate the effectiveness of the proposed beamformer. Comment: Submitted to IEEE Transactions on Signal Processing.
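
    To make the SOCP formulation concrete, the sketch below states a penalized, inequality-constrained minimum-variance design directly in cvxpy (rather than the low-complexity ADMM solver the paper proposes): per-interferer gain constraints are softened with penalized slacks, and a magnitude constraint protects the look direction. The scenario, thresholds, and penalty weight are assumptions for illustration, and the exact constraint set may differ from the paper's P-ICMV formulation.

```python
import numpy as np
import cvxpy as cp

# Hypothetical scenario: 8-element ULA with more interferers than spare DoF
n = 8
def steer(theta_deg):
    return np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(theta_deg)))

target = steer(0)
interferers = [steer(t) for t in (-50, -30, -10, 15, 25, 40, 55, 70)]  # 8 interferers

w = cp.Variable(n, complex=True)
s = cp.Variable(len(interferers), nonneg=True)   # slack on each interference constraint

eps, delta, gamma = 0.01, 0.05, 10.0             # illustrative design parameters
constraints = [cp.abs(a.conj() @ w) <= eps + s[i] for i, a in enumerate(interferers)]
constraints += [cp.abs(target.conj() @ w - 1) <= delta]   # target-protection constraint

# White-noise-gain term plus penalty on constraint violations
problem = cp.Problem(cp.Minimize(cp.sum_squares(w) + gamma * cp.sum(s)), constraints)
problem.solve()
print("beamformer norm:", np.linalg.norm(w.value))
```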

    Sparse Array Signal Processing

    Get PDF
    This dissertation details three approaches for direction-of-arrival (DOA) estimation or beamforming in array signal processing from the perspective of sparsity. In the first part, we consider sparse array beamformer design based on the alternating direction method of multipliers (ADMM); in the second part, the problem of joint DOA estimation and distorted sensor detection is investigated; and off-grid DOA estimation is studied in the last part.

    In the first part of this thesis, we devise a sparse array design algorithm for adaptive beamforming. Our strategy is based on finding a sparse beamformer weight to maximize the output signal-to-interference-plus-noise ratio (SINR). The proposed method utilizes ADMM and admits closed-form solutions at each ADMM iteration. The algorithm convergence properties are analyzed by showing the monotonicity and boundedness of the augmented Lagrangian function. In addition, we prove that the proposed algorithm converges to the set of Karush-Kuhn-Tucker stationary points. Numerical results exhibit its excellent performance, which is comparable to that of the exhaustive search approach, slightly better than those of state-of-the-art solvers, and significantly better than those of several other sparse array design strategies, in terms of output SINR. Moreover, the proposed ADMM algorithm outperforms its competitors in terms of computational cost.

    Distorted sensors can occur randomly and may lead to the breakdown of a sensor array system. In the second part of this thesis, we consider an array model in which a small number of sensors are distorted by unknown gain and phase errors. With such an array model, the problem of joint DOA estimation and distorted sensor detection is formulated under the framework of low-rank and row-sparse decomposition. We derive an iteratively reweighted least squares (IRLS) algorithm to solve the resulting problem. The convergence of the IRLS algorithm is analyzed by means of the monotonicity and boundedness of the objective function. Extensive simulations are conducted regarding parameter selection, convergence speed, computational complexity, and the performance of DOA estimation as well as distorted sensor detection. Even though the IRLS algorithm is slightly worse than the ADMM at detecting the distorted sensors, the results show that our approach outperforms several state-of-the-art techniques in terms of convergence speed, computational cost, and DOA estimation performance.

    In the last part of this thesis, the problem of off-grid DOA estimation is investigated. We develop a method to jointly estimate the closest spatial-frequency (the sine of the DOA) grid points and the gaps between the estimated grid points and the corresponding frequencies. Using a second-order Taylor approximation, the data model is formulated under the framework of joint-sparse representation. We point out an important property of the signals of interest in the model, namely the proportionality relationship, which is empirically demonstrated to be useful in the sense that it increases the probability of the mixing matrix satisfying the block restricted isometry property. Simulation examples demonstrate the effectiveness and superiority of the proposed method over several state-of-the-art grid-based approaches.
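
    As one concrete piece of the second part, the reweighting principle behind IRLS for a row-sparse least-squares recovery can be sketched as follows. This is a generic illustration under assumed names and parameters; the dissertation's algorithm additionally handles the low-rank term and distorted-sensor detection, which are omitted here.

```python
import numpy as np

def irls_row_sparse(A, Y, lam=0.1, iters=50, eps=1e-6):
    """Generic IRLS for  min_X ||Y - A X||_F^2 + lam * sum_i ||X[i, :]||_2.

    Each iteration solves a weighted least-squares problem whose row weights
    are computed from the previous iterate (the reweighting step).
    """
    n = A.shape[1]
    X = np.zeros((n, Y.shape[1]), dtype=complex)
    for _ in range(iters):
        row_norms = np.maximum(np.linalg.norm(X, axis=1), eps)
        W = np.diag(lam / (2.0 * row_norms))
        X = np.linalg.solve(A.conj().T @ A + W, A.conj().T @ Y)
    return X

# Toy usage: 3 active rows out of 30 candidate directions, 20 sensors, 30 snapshots
rng = np.random.default_rng(0)
A = np.exp(1j * np.pi * np.outer(np.arange(20), np.sin(np.linspace(-1, 1, 30))))
X_true = np.zeros((30, 30), dtype=complex)
X_true[[4, 15, 26], :] = rng.standard_normal((3, 30)) + 1j * rng.standard_normal((3, 30))
Y = A @ X_true + 0.05 * (rng.standard_normal((20, 30)) + 1j * rng.standard_normal((20, 30)))
X_hat = irls_row_sparse(A, Y, lam=1.0)
energies = np.linalg.norm(X_hat, axis=1)
print("recovered rows:", np.flatnonzero(energies > 0.3 * energies.max()))
```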

    Vital Sign Monitoring in Dynamic Environment via mmWave Radar and Camera Fusion

    Full text link
    Contact-free vital sign monitoring, which uses wireless signals to recognize human vital signs (i.e., breathing and heartbeat), is an attractive solution for health and security applications. However, the subject's body movement and changes in the actual environment can result in inaccurate frequency estimation of the heartbeat and respiration. In this paper, we propose a robust mmWave radar and camera fusion system for monitoring vital signs, which performs consistently well in dynamic scenarios, e.g., when other people move around the subject being tracked, or when the subject waves his/her arms and marches on the spot. Three major processing modules are developed in the system to enable robust sensing. First, we utilize a camera to assist the mmWave radar in accurately localizing the subjects of interest. Second, we exploit the calculated subject position to form transmitting and receiving beamformers, which improve the reflected power from the targets and weaken the impact of dynamic interference. Third, we propose a weighted multi-channel variational mode decomposition (WMC-VMD) algorithm to separate the weak vital sign signals from the dynamic ones due to the subject's body movement. Experimental results show that the 90th-percentile errors in respiration rate (RR) and heart rate (HR) are less than 0.5 RPM (respirations per minute) and 6 BPM (beats per minute), respectively.
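
    The reported RR/HR figures come from spectral estimates of the demodulated radar phase signal. The sketch below shows only a baseline version of that last step (band-pass filtering plus a spectral peak pick) on a simulated phase signal; the sampling rate, frequency bands, and signal are assumptions, and the paper's WMC-VMD separation, beamforming, and camera-assisted localization are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def rate_from_phase(phase, fs, band):
    """Estimate a periodic rate (events per minute) from a radar phase signal
    by band-pass filtering and picking the dominant spectral peak.

    A baseline sketch only; the paper's WMC-VMD separation is more robust to
    body motion and is not reproduced here.
    """
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    x = sosfiltfilt(sos, phase - phase.mean())
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(spectrum[mask])]

# Toy usage: simulated 0.3 Hz respiration plus 1.2 Hz heartbeat in the phase signal
fs = 20.0
t = np.arange(0, 60, 1.0 / fs)
phase = 1.0 * np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
print("RR:", rate_from_phase(phase, fs, (0.1, 0.6)), "RPM")
print("HR:", rate_from_phase(phase, fs, (0.8, 2.0)), "BPM")
```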

    Inverse Problem Formulation and Deep Learning Methods for Ultrasound Beamforming and Image Reconstruction

    Get PDF
    Ultrasound imaging is among the most common medical imaging modalities; it is real-time, non-invasive, cost-effective, and portable. Medical ultrasound images, however, suffer from low signal-to-noise ratio due to many factors, and there has been a long-standing line of research on improving their quality. Ultrasound transducers are made from piezoelectric elements, which are responsible both for insonifying the medium with non-invasive acoustic waves and for receiving the backscattered signals. Design optimizations span all steps of the image formation pipeline, including system architecture, hardware development, and software algorithms. Each step entails parameter optimizations and trade-offs to balance competing effects such as cost, performance, and efficiency. This thesis is devoted to image reconstruction techniques that push past the classical limitations. We deliberately do not restrict ourselves to a specific class of computational imaging or machine learning methods; instead, classical approaches and recent deep learning methods are adapted to the requirements and limitations of the image reconstruction problem. In other words, we aim to reconstruct a high-quality spatial map of the medium echogenicity from the raw channel data received by the piezoelectric elements. All other steps of the ultrasound image formation pipeline are considered fixed, and the goal is to extract the best possible image quality (in terms of resolution, contrast, speckle pattern, etc.) from the echo traces acquired by the transducer elements. Two novel approaches to super-resolution ultrasound imaging are proposed, training deep models that map observations recorded from a single transmission to high-quality images. These models are mainly developed to remove the need for several transmissions and can potentially be used in applications that require both a high frame rate and high image quality. The remaining four contributions concern beamforming, an essential step in medical ultrasound image reconstruction. Different approaches, including independent component analysis, deep learning, and inverse problem formulations, are utilized to tackle the ill-posed inverse problem of receive beamforming. The primary goal of the novel beamformers is to resolve the trade-off between image quality and frame rate. The final chapter offers concluding remarks on each contribution, outlining the strengths and weaknesses of the proposed techniques based on classical computational imaging and deep learning methods. There is still considerable room for improvement in all of the proposed techniques, and the thesis concludes by providing avenues for future research to attain those improvements.
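
    For readers unfamiliar with the receive-beamforming step that the later chapters replace, the following is a textbook delay-and-sum (DAS) sketch for plane-wave imaging, written as a naive reference point rather than any of the methods proposed in the thesis; the array geometry, sampling rate, and function names are assumed for illustration.

```python
import numpy as np

def das_beamform(rf, element_x, fs, c, grid_x, grid_z):
    """Classical delay-and-sum receive beamforming for a 0-degree plane-wave transmit.

    rf:        (n_elements, n_samples) raw channel data
    element_x: lateral positions of the transducer elements (metres)
    fs, c:     sampling rate (Hz) and speed of sound (m/s)
    grid_x, grid_z: lateral and axial pixel coordinates (metres)
    """
    image = np.zeros((len(grid_z), len(grid_x)))
    n_samples = rf.shape[1]
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # Transmit delay (plane wave) plus per-element receive delay
            tof = (z + np.sqrt(z ** 2 + (x - element_x) ** 2)) / c
            idx = np.round(tof * fs).astype(int)
            valid = idx < n_samples
            image[iz, ix] = np.sum(rf[np.flatnonzero(valid), idx[valid]])
    return image
```

    In practice this double loop is vectorized and followed by envelope detection and log compression; the learned and inverse-problem beamformers studied in the thesis replace the plain summation with data-adaptive or regularized estimates.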

    Non-convex Quadratically Constrained Quadratic Programming: Hidden Convexity, Scalable Approximation and Applications

    Get PDF
    University of Minnesota Ph.D. dissertation, September 2017. Major: Electrical Engineering. Advisor: Nicholas Sidiropoulos. 1 computer file (PDF); viii, 85 pages.

    Quadratically Constrained Quadratic Programming (QCQP) constitutes a class of computationally hard optimization problems with a broad spectrum of applications in wireless communications, networking, signal processing, power systems, and other areas. The QCQP problem is known to be NP-hard in its general form; only in certain special cases can it be solved to global optimality in polynomial time. Such cases are said to be convex in a hidden way, and the task of identifying them remains an active area of research. Meanwhile, relatively few methods are known to be effective for general QCQP problems. The prevailing approach of semidefinite relaxation (SDR) is computationally expensive and often fails to work for general non-convex QCQP problems. Other methods based on successive convex approximation (SCA) require initialization from a feasible point, which is NP-hard to compute in general. This dissertation focuses on both of the above-mentioned aspects of non-convex QCQP. In the first part of this work, we consider the special case of QCQP with Toeplitz-Hermitian quadratic forms and establish that it possesses hidden convexity, which makes it possible to obtain globally optimal solutions in polynomial time. The second part of this dissertation introduces a framework for efficiently computing feasible solutions of general quadratic feasibility problems. While an approximation framework known as Feasible Point Pursuit-Successive Convex Approximation (FPP-SCA) was recently proposed for this task, with considerable empirical success, it remains unsuitable for large-scale problems. This work is primarily focused on speeding up and scaling up these approximation schemes to enable dealing with large-scale problems. For this purpose, we reformulate the feasibility criteria employed by FPP-SCA for minimizing constraint violations in the form of non-smooth, non-convex penalty functions. We demonstrate that by employing judicious approximation of the penalty functions, we obtain problem formulations that are well suited to first-order methods (FOMs). The appeal of FOMs lies in the fact that they can efficiently exploit various forms of problem structure while being computationally lightweight, which endows our approximation algorithms with the ability to scale well with problem dimension. Specific applications in wireless communications and power grid system optimization are considered to illustrate the efficacy of our FOM-based approximation schemes. Our experimental results reveal the surprising effectiveness of FOMs for this class of hard optimization problems.
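
    The SDR baseline mentioned above can be written in a few lines for a toy non-convex QCQP. The sketch below (using cvxpy on an assumed random instance) lifts the variable to a positive semidefinite matrix, drops the rank-one constraint, and extracts a candidate solution from the principal eigenvector. It is the standard relaxation the dissertation contrasts against, not the FPP-SCA or first-order schemes it develops.

```python
import numpy as np
import cvxpy as cp

# Toy homogeneous QCQP:  minimize x^T C x  subject to  x^T A_k x >= 1,  k = 1..K.
# SDR: lift X = x x^T, drop rank(X) = 1, and solve the resulting SDP.
rng = np.random.default_rng(0)
n, K = 8, 4
C = np.eye(n)
A = []
for _ in range(K):
    M = rng.standard_normal((n, n))
    A.append(M @ M.T)            # random PSD constraint matrices

X = cp.Variable((n, n), PSD=True)
constraints = [cp.trace(Ak @ X) >= 1 for Ak in A]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()

# Rank-one approximation: principal eigenvector as a candidate solution.
# The candidate may violate some constraints and typically needs rescaling.
eigval, eigvec = np.linalg.eigh(X.value)
x_hat = np.sqrt(eigval[-1]) * eigvec[:, -1]
print("relaxation value:", prob.value, " candidate value:", x_hat @ C @ x_hat)
```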