34 research outputs found

    Source Separation for Hearing Aid Applications


    Doppler Shift Compensation Schemes in VANETs


    Applications of compressive sensing to direction of arrival estimation

    Direction of Arrival (DOA) estimation of plane waves impinging on an array of sensors is one of the most important tasks in array signal processing and has attracted tremendous research interest over the past several decades. The estimated DOAs are used in various applications such as localization of transmitting sources, massive MIMO and 5G networks, and tracking and surveillance in radar. The major objective in DOA estimation is to develop approaches that reduce the hardware complexity, in terms of receiver cost and power consumption, while providing a desired level of estimation accuracy and robustness in the presence of multiple sources and/or multiple paths. Compressive sensing (CS) is a novel sampling methodology that merges signal acquisition and compression. It allows a signal to be sampled at a rate below the conventional Nyquist bound: signals can be acquired at sub-Nyquist rates without loss of information provided they possess a sufficiently sparse representation in some domain and the measurement strategy is suitably chosen. CS has recently been applied to DOA estimation, leveraging the fact that a superposition of planar wavefronts corresponds to a sparse angular power spectrum. This dissertation investigates the application of compressive sensing to the DOA estimation problem with the goal of reducing the hardware complexity while achieving high resolution and a high level of robustness. Many CS-based DOA estimation algorithms have been proposed in recent years, showing tremendous advantages with respect to the complexity of the numerical solution while being insensitive to source correlation and allowing arbitrary array geometries. Moreover, CS has also been applied in the spatial domain with the main goal of reducing the complexity of the measurement process by using fewer RF chains and storing less measured data, without the loss of any significant information. In the first part of the work we investigate the model mismatch problem for CS-based DOA estimation with sources off the grid. A very common approach to applying the CS framework is to construct a finite dictionary by sampling the angular domain with a predefined grid. The true source directions are then almost surely not located exactly on these grid points, which leads to a model mismatch that deteriorates the performance of the estimators. We take an analytical approach to investigate the effect of such grid offsets on the recovered spectra, showing that each off-grid source can be well approximated by the two neighboring points on the grid. Based on this, we propose a simple and efficient scheme to estimate the grid offset for a single source or multiple well-separated sources, and we discuss a numerical procedure for the joint estimation of the grid offsets of closely spaced sources. In the second part of the thesis we study the design of compressive antenna arrays for DOA estimation, which aim to provide a larger aperture with reduced hardware complexity and added reconfigurability by linearly combining the antenna outputs into a smaller number of receiver channels. We present a basic receiver architecture for such a compressive array and introduce a generic system model that accommodates different options for the hardware implementation of the combining network. We then discuss the design of the analog combining network that performs the receiver channel reduction. Our numerical simulations demonstrate the superiority of the proposed optimized compressive arrays over sparse arrays of the same complexity and over compressive arrays with randomly chosen combining kernels. Finally, we consider two further applications of the proposed techniques: CS-based time delay estimation and compressive channel sounding. We show that the proposed off-grid sparse recovery and compressive array approaches yield significant improvements in both applications compared to conventional methods.
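
    A minimal sketch of the on-grid CS formulation described above, assuming a uniform linear array with half-wavelength spacing and using orthogonal matching pursuit as a stand-in for the dissertation's recovery algorithms; the array size, grid spacing, angles, and function names are illustrative, not taken from the thesis. The deliberately off-grid true angles show the grid mismatch that the first part of the work addresses.

```python
import numpy as np

def steering_matrix(angles_deg, n_antennas, d=0.5):
    """ULA steering vectors for candidate angles (antenna spacing d in wavelengths)."""
    angles = np.deg2rad(np.atleast_1d(angles_deg))
    n = np.arange(n_antennas)[:, None]                      # antenna index as a column
    return np.exp(2j * np.pi * d * n * np.sin(angles)[None, :])

def omp_doa(y, A, n_sources):
    """Orthogonal matching pursuit over the angular dictionary A (on-grid estimate)."""
    residual, support = y.astype(complex).copy(), []
    for _ in range(n_sources):
        k = int(np.argmax(np.abs(A.conj().T @ residual)))   # most correlated grid angle
        support.append(k)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s                  # remove the explained part
    return sorted(support)

# Toy example: 8-element ULA, two sources whose true angles lie off the 2-degree grid,
# so the recovered DOAs snap to the nearest grid points (the model mismatch discussed above).
rng = np.random.default_rng(0)
grid = np.arange(-90.0, 90.0, 2.0)
true_angles = np.array([-20.7, 14.3])
y = steering_matrix(true_angles, 8) @ np.array([1.0, 0.8]) \
    + 0.01 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
A_grid = steering_matrix(grid, 8)
print("estimated (grid-limited) DOAs:", [grid[k] for k in omp_doa(y, A_grid, 2)])
```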

    Downlink Achievable Rate Analysis for FDD Massive MIMO Systems

    Multiple-Input Multiple-Output (MIMO) systems with large-scale transmit antenna arrays, often called massive MIMO, are a very promising direction for 5G due to their ability to increase capacity and enhance both spectrum and energy efficiency. To realize the benefits of massive MIMO systems, accurate downlink channel state information at the transmitter (CSIT) is essential for downlink beamforming and resource allocation. Conventional approaches to obtaining CSIT for FDD massive MIMO systems require downlink training and CSI feedback. However, such training causes a large overhead in massive MIMO systems because of the large dimensionality of the channel matrix. In this dissertation, we improve the performance of FDD massive MIMO networks in terms of downlink training overhead reduction by designing an efficient downlink beamforming method and developing a new algorithm to estimate the channel state information based on compressive sensing techniques. First, we design an efficient downlink beamforming method based on partial CSI. By exploiting the relationship between uplink directions of arrival (DoAs) and downlink directions of departure (DoDs), we derive an expression for the estimated downlink DoDs, which are then used for downlink beamforming. Second, by exploiting the sparsity structure of the downlink channel matrix, we develop an algorithm that selects the best features from the measurement matrix to obtain efficient CSIT acquisition and reduce the downlink training overhead compared with conventional LS/MMSE estimators. In both cases, we compare the performance of our proposed beamforming method with traditional methods in terms of downlink achievable rate, and simulation results show that our proposed method outperforms the traditional beamforming methods.
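
    A rough illustration, not the dissertation's algorithm, of the angle-reciprocity idea mentioned above: the uplink DoA is estimated from the uplink array response and reused as the downlink DoD, with the steering vector rebuilt at the downlink carrier wavelength before beamforming. The carrier frequencies, array size, beam scan, and matched-filter beamformer are all assumptions made for this sketch.

```python
import numpy as np

def ula_steering(theta_deg, n_ant, spacing_m, wavelength_m):
    """ULA steering vector for a physical element spacing (meters) at a given wavelength."""
    n = np.arange(n_ant)
    return np.exp(2j * np.pi * (spacing_m / wavelength_m) * n * np.sin(np.deg2rad(theta_deg)))

c = 3e8
f_ul, f_dl = 1.92e9, 2.11e9                 # assumed FDD uplink/downlink carriers
spacing = 0.5 * c / f_ul                    # half-wavelength spacing at the uplink carrier
n_ant = 64

# 1) Estimate the uplink DoA from an idealized, noiseless uplink array response
#    by scanning a conventional beamformer over candidate angles.
true_doa = 23.0
x_ul = ula_steering(true_doa, n_ant, spacing, c / f_ul)
scan = np.arange(-90.0, 90.0, 0.5)
powers = [np.abs(ula_steering(a, n_ant, spacing, c / f_ul).conj() @ x_ul) for a in scan]
doa_hat = scan[int(np.argmax(powers))]

# 2) Angle reciprocity: the downlink DoD equals the uplink DoA, but the downlink
#    steering vector must be rebuilt at the downlink wavelength before beamforming.
w_dl = ula_steering(doa_hat, n_ant, spacing, c / f_dl)
w_dl /= np.linalg.norm(w_dl)                # unit-norm matched-filter beamformer
gain = np.abs(w_dl.conj() @ ula_steering(true_doa, n_ant, spacing, c / f_dl))
print(f"estimated DoA {doa_hat:.1f} deg, downlink array gain {gain:.2f} (max {np.sqrt(n_ant):.2f})")
```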

    Guided Matching Pursuit and its Application to Sound Source Separation

    In the last couple of decades there has been an increasing interest in the application of source separation technologies to musical signal processing. Given a signal that consists of a mixture of musical sources, source separation aims at extracting and/or isolating the signals that correspond to the original sources. A system capable of high-quality source separation could be an invaluable tool for the sound engineer as well as the end user. Applications of source separation include, but are not limited to, remixing, up-mixing, spatial re-configuration, individual source modification such as filtering, pitch detection/correction and time stretching, music transcription, voice recognition, and source-specific audio coding. Of particular interest is the problem of separating sources from a mixture comprising two channels (2.0 format), since this is still the most commonly used format in the music industry and in most domestic listening environments. When the number of sources is greater than the number of mixtures (which is usually the case with stereophonic recordings), the problem of source separation becomes under-determined and traditional source separation techniques, such as “Independent Component Analysis” (ICA), cannot be successfully applied. In such cases a family of techniques known as “Sparse Component Analysis” (SCA) is better suited. In short, the mixture signal is decomposed into a new domain where the individual sources are sparsely represented, which implies that their corresponding coefficients have disjoint (or almost disjoint) supports. Taking advantage of this property, along with the spatial information within the mixture and any other prior information that may be available, it is possible to identify the sources in the new domain and separate them by returning to the time domain. Sparser representations generally lead to higher-quality separation. Nevertheless, the most commonly used front-end for an SCA system is the ubiquitous short-time Fourier transform (STFT), which, although a sparsifying transform, is not the best choice for this task. A better alternative is the matching pursuit (MP) decomposition. MP is an iterative algorithm that decomposes a signal into a set of elementary waveforms, called atoms, chosen from an over-complete dictionary so that they represent the inherent signal structures. A crucial part of MP is the creation of the dictionary, which directly affects the results of the decomposition and subsequently the quality of the source separation. Selecting an appropriate dictionary can prove a difficult task, and an adaptive approach is therefore appropriate. This work proposes a new MP variant, termed guided matching pursuit (GMP), which adds a new pre-processing step to the main sequence of the MP algorithm. The purpose of this step is to analyze the signal and extract important features, termed guide maps, that are used to create dynamic mini-dictionaries comprising atoms expected to correlate well with the underlying signal structures, thus leading to focused and more efficient searches around particular supports of the signal. The algorithm is accompanied by a modular and highly flexible MATLAB implementation suited to the processing of long-duration audio signals. 
Finally, the new algorithm is applied to the source separation of two-channel linear instantaneous mixtures, and preliminary testing demonstrates that the performance of GMP is on par with that of state-of-the-art systems.
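
    For readers unfamiliar with matching pursuit, the sketch below shows the plain MP loop that GMP builds on: pick the atom most correlated with the residual, then subtract its contribution. The windowed-cosine dictionary and all sizes are illustrative and do not reflect the guide-map mechanism proposed in the thesis; the thesis's own implementation is in MATLAB, and Python is used here only for illustration.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Plain MP: repeatedly pick the atom most correlated with the residual and subtract it."""
    residual = signal.astype(float).copy()
    decomposition = []                                  # list of (atom_index, coefficient)
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual          # atoms are unit-norm columns
        k = int(np.argmax(np.abs(correlations)))
        decomposition.append((k, correlations[k]))
        residual = residual - correlations[k] * dictionary[:, k]
    return decomposition, residual

# Toy over-complete dictionary of windowed cosines (crude "Gabor-like" atoms).
n = 256
t = np.arange(n)
atoms = [np.hanning(n) * np.cos(2 * np.pi * f * t) for f in np.linspace(0.01, 0.45, 60)]
D = np.stack([a / np.linalg.norm(a) for a in atoms], axis=1)

x = 2.0 * D[:, 10] + 0.7 * D[:, 42]                     # sparse mixture of two atoms
decomp, res = matching_pursuit(x, D, n_atoms=2)
print([(k, round(float(c), 2)) for k, c in decomp],
      "residual norm:", round(float(np.linalg.norm(res)), 3))
```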

    Advances in parameter estimation, source enumeration, and signal identification for wireless communications

    Parameter estimation and signal identification play an important role in modern wireless communication systems. In this thesis, we address different parameter estimation and signal identification problems in conjunction with the Internet of Things (IoT), cognitive radio systems, and high-speed mobile communications. The focus of Chapter 2 is to develop a new uplink multiple access (MA) scheme for the IoT in order to support ubiquitous massive uplink connectivity for devices with sporadic traffic patterns and short packet sizes. The proposed uplink MA scheme removes the Media Access Control (MAC) address through signal identification algorithms employed at the gateway. The focus of Chapter 3 is to develop different maximum Doppler spread (MDS) estimators for multiple-input multiple-output (MIMO) frequency-selective fading channels. The main idea behind the proposed estimators is to reduce the computational complexity while increasing system capacity. The focus of Chapters 4 and 5 is to develop antenna enumeration algorithms and signal-to-noise ratio (SNR) estimators in MIMO time-varying fading channels, respectively. The main idea is to develop low-complexity algorithms and estimators that are robust to channel impairments. The focus of Chapter 6 is to develop low-complexity space-time block code (STBC) identification algorithms for cognitive radio systems. The goal is to design an algorithm that is robust to time-frequency transmission impairments.

    Sensor Signal and Information Processing II

    In the current age of information explosion, newly invented technological sensors and software are tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework in problem solving. These algorithms have the capacity to generalize, discover knowledge for themselves, and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on the information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications that extend far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies that target both long-standing and emergent signal processing applications. The topics range from phishing detection to the integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.

    Looking beyond Pixels: Theory, Algorithms and Applications of Continuous Sparse Recovery

    Sparse recovery is a powerful tool that plays a central role in many applications, including source estimation in radio astronomy, direction of arrival estimation in acoustics or radar, super-resolution microscopy, and X-ray crystallography. Conventional approaches usually resort to discretization, where the sparse signals are estimated on a pre-defined grid. However, sparse signals do not line up conveniently on any grid in reality. While the discrete setup usually leads to a simple optimization problem that can be solved with standard tools, there are two noticeable drawbacks: (i) because of the model mismatch, the effective noise level is increased; (ii) the minimum reachable resolution is limited by the grid step-size. Because of these limitations, it is essential to develop a technique that estimates sparse signals in the continuous domain, in essence seeing beyond pixels. The aims of this thesis are (i) to further develop a continuous-domain sparse recovery framework based on finite rate of innovation (FRI) sampling, on both the theoretical and algorithmic sides; (ii) to adapt the proposed technique to several applications, namely radio astronomy point source estimation, direction of arrival estimation in acoustics, and single-image up-sampling; and (iii) to show that the continuous-domain sparse recovery approach can surpass the instrument resolution limit and achieve super-resolution. We propose a continuous-domain sparse recovery technique by generalizing the FRI sampling framework to cases with non-uniform measurements. We achieve this by identifying a set of unknown uniform sinusoidal samples and the linear transformation that links the uniform samples of sinusoids to the measurements. The continuous-domain sparsity constraint can be equivalently enforced with a discrete convolution equation of these sinusoidal samples. The sparse signal is reconstructed by minimizing the fitting error between the given and the re-synthesized measurements subject to the sparsity constraint. Further, we develop a multi-dimensional sampling framework for Diracs in two or higher dimensions with linear sample complexity. This is a significant improvement over previous methods, whose complexity increases exponentially with dimension. An efficient algorithm is proposed to find a valid solution to the continuous-domain sparse recovery problem such that the reconstruction (i) satisfies the sparsity constraint and (ii) fits the measurements (up to the noise level). We validate the flexibility and robustness of the FRI-based continuous-domain sparse recovery in both simulations and experiments with real data. We show that the proposed method surpasses the diffraction limit of radio telescopes with both realistic simulations and real data from the LOFAR radio telescope. In addition, FRI-based sparse reconstruction requires fewer measurements and smaller baselines to reach a reconstruction quality similar to that of conventional methods. Next, we apply the proposed approach to direction of arrival estimation in acoustics and show that accurate off-grid source locations can be reliably estimated from microphone measurements with arbitrary array geometries. Finally, we demonstrate the effectiveness of the continuous-domain sparsity constraint in regularizing an otherwise ill-posed inverse problem, namely single-image super-resolution. By incorporating image edge models, the up-sampled image retains sharp edges and is free from ringing artifacts.
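
    The "discrete convolution equation of these sinusoidal samples" mentioned above is the classical annihilation constraint of FRI sampling. The sketch below shows only the noiseless core of that step for a stream of Diracs: build a Toeplitz system from uniform sinusoidal samples, take an annihilating filter from its null space, and read the Dirac locations off the filter roots. The non-uniform, noisy, and multi-dimensional generalizations developed in the thesis are not reproduced here; sample counts and variable names are illustrative.

```python
import numpy as np

def annihilating_filter_locations(s, n_diracs):
    """Recover Dirac locations t_k in [0, 1) from uniform samples
    s[m] = sum_k a_k * exp(-2j*pi*m*t_k)  (noiseless FRI model)."""
    K = n_diracs
    # Annihilation constraint: sum_l h[l] * s[m - l] = 0 for m = K .. len(s)-1.
    S = np.array([[s[m - l] for l in range(K + 1)] for m in range(K, len(s))])
    _, _, Vh = np.linalg.svd(S)
    h = Vh[-1].conj()                               # filter coefficients from the null space
    roots = np.roots(h)                             # roots are exp(-2j*pi*t_k)
    return np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))

# Toy example: two Diracs at t = 0.21 and 0.64 observed through 9 "sinusoidal" samples.
t_true = np.array([0.21, 0.64])
a_true = np.array([1.0, 0.5])
m = np.arange(9)
s = (a_true[None, :] * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)
print("recovered locations:", np.round(annihilating_filter_locations(s, n_diracs=2), 4))
```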

    Design of large polyphase filters in the Quadratic Residue Number System
