A Framework for Developing and Evaluating Algorithms for Estimating Multipath Propagation Parameters from Channel Sounder Measurements
A framework is proposed for developing and evaluating algorithms for
extracting multipath propagation components (MPCs) from measurements collected
by channel sounders at millimeter-wave frequencies. Sounders equipped with an
omnidirectional transmitter and a receiver with a uniform planar array (UPA)
are considered. An accurate mathematical model is developed for the spatial
frequency response of the sounder that incorporates the non-ideal cross-polar
beampatterns for the UPA elements. Due to the limited Field-of-View (FoV) of
each element, the model is extended to accommodate multi-FoV measurements in
distinct azimuth directions. A beamspace representation of the spatial
frequency response is leveraged to develop three progressively complex
algorithms aimed at solving the single-snapshot maximum likelihood estimation
problem: greedy matching pursuit (CLEAN), space-alternating generalized
expectation-maximization (SAGE), and RiMAX. The first two are based on purely
specular MPCs whereas RiMAX also accommodates diffuse MPCs. Two approaches for
performance evaluation are proposed, one with knowledge of ground truth
parameters, and one based on reconstruction mean-squared error. The three
algorithms are compared through a demanding channel model with hundreds of MPCs
and through real measurements. The results demonstrate that CLEAN gives quite
reasonable estimates which are improved by SAGE and RiMAX. Lessons learned and
directions for future research are discussed.
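The greedy matching-pursuit (CLEAN) step can be sketched in a few lines. The toy below uses a single-FoV uniform linear array and an on-grid angle dictionary rather than the paper's full UPA model with cross-polar beampatterns; the function name, array size, grid, and path parameters are all assumptions made for the example.

```python
import numpy as np

def clean_mpc(y, A, n_paths):
    """Greedy matching pursuit (CLEAN-style): repeatedly pick the dictionary
    atom most correlated with the residual, estimate its complex gain by a
    scalar least-squares fit, and subtract its contribution."""
    residual = y.astype(complex).copy()
    idx, gains = [], []
    for _ in range(n_paths):
        corr = A.conj().T @ residual              # match residual against all atoms
        k = int(np.argmax(np.abs(corr)))
        g = corr[k] / (np.linalg.norm(A[:, k]) ** 2)
        residual -= g * A[:, k]
        idx.append(k)
        gains.append(g)
    return idx, gains, residual

# Toy dictionary: 16-element half-wavelength ULA, 3-degree angle grid.
n_ant = 16
grid = np.linspace(-np.pi / 2, np.pi / 2, 61)
A = np.exp(1j * np.pi * np.outer(np.arange(n_ant), np.sin(grid)))
true_idx, true_gain = [10, 40], [1.0, 0.5]        # two specular MPCs on the grid
y = sum(g * A[:, k] for g, k in zip(true_gain, true_idx))
idx, gains, res = clean_mpc(y, A, 2)              # recovers both path directions
```

SAGE and RiMAX start from essentially this output: the CLEAN estimates serve as the initialization that the iterative refinement then improves.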
Carrier frequency offset recovery for zero-IF OFDM receivers
As trends in broadband wireless communications applications demand faster development cycles, smaller sizes, lower costs, and ever increasing data rates, engineers continually seek new ways to harness evolving technology. The zero intermediate frequency receiver architecture has now become popular as it has both economic and size advantages over the traditional superheterodyne architecture.
Orthogonal Frequency Division Multiplexing (OFDM) is a popular multi-carrier modulation technique with the ability to provide high data rates over echo-laden channels. It has excellent robustness to impairments caused by multipath, including frequency-selective fading. Unfortunately, OFDM is very sensitive to the carrier frequency offset (CFO) introduced by the downconversion process. The objective of this thesis is to develop and analyze an algorithm for blind CFO recovery suitable for use in a practical zero-Intermediate Frequency (zero-IF) OFDM telecommunications system.
A blind CFO recovery algorithm based upon characteristics of the received signal's power spectrum is proposed. The algorithm's error performance is mathematically analyzed, and the theoretical results are verified with simulations. Simulation shows that the performance of the proposed algorithm agrees with the mathematical analysis.
A number of other CFO recovery techniques are compared to the proposed algorithm. The proposed algorithm performs well in comparison and does not suffer from many of the disadvantages of existing blind CFO recovery techniques. Most notably, its performance is not significantly degraded by noisy, frequency-selective channels.
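The abstract does not reproduce the algorithm itself, but a classical power-spectrum-based blind CFO estimator in the same spirit exploits the null (guard) subcarriers of the OFDM spectrum: de-rotate the received samples by a trial offset and pick the offset that minimizes power on the bins known to be empty. The sketch below uses invented parameters (64 subcarriers, an assumed guard-band placement, QPSK data, no channel or cyclic prefix) and is not the thesis's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_sym = 64, 20
null_bins = list(range(27, 38))        # band-edge guard subcarriers left empty
data_bins = [k for k in range(N) if k not in null_bins]

# OFDM symbols: QPSK on data subcarriers, zeros on the guard band.
X = np.zeros((n_sym, N), complex)
qpsk = (rng.choice([1, -1], (n_sym, len(data_bins)))
        + 1j * rng.choice([1, -1], (n_sym, len(data_bins)))) / np.sqrt(2)
X[:, data_bins] = qpsk
tx = np.fft.ifft(X, axis=1).ravel()

eps_true = 0.13                         # CFO, in units of the subcarrier spacing
n = np.arange(tx.size)
rx = tx * np.exp(2j * np.pi * eps_true * n / N)

def cfo_estimate(rx, null_bins, N, n_sym, grid):
    """Blind CFO estimate: choose the trial offset whose de-rotated
    spectrum puts the least power on the known null subcarriers."""
    n = np.arange(rx.size)
    costs = []
    for eps in grid:
        z = (rx * np.exp(-2j * np.pi * eps * n / N)).reshape(n_sym, N)
        costs.append(np.sum(np.abs(np.fft.fft(z, axis=1)[:, null_bins]) ** 2))
    return grid[int(np.argmin(costs))]

eps_hat = cfo_estimate(rx, null_bins, N, n_sym, np.arange(-0.4, 0.4, 0.01))
```

Any residual offset smears data-subcarrier energy onto the null bins via inter-carrier interference, so the cost has a sharp minimum at the true fractional CFO.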
Modelling and detection of faults in axial-flux permanent magnet machines
The development of various topologies and configurations of the axial-flux permanent magnet machine has spurred its use for electromechanical energy conversion in several applications. As it becomes increasingly deployed, effective condition monitoring built on reliable and accurate fault detection techniques is needed to ensure its engineering integrity. Unlike the induction machine, which has been rigorously investigated for faults, the axial-flux permanent magnet machine has not. Thus, in this thesis, the axial-flux permanent magnet machine is investigated under faulty conditions. Common faults associated with it, namely static eccentricity and interturn short circuit, are modelled, and detection techniques are established. The modelling forms a basis for developing a platform for precise fault replication on a purpose-built experimental test-rig, and for predicting and analysing fault signatures using both finite element analysis and experiment. For detection, motor current signature analysis, vibration analysis and electrical impedance spectroscopy are applied, with attention paid to fault-feature extraction and fault discrimination. Using frequency and time-frequency techniques, features are tracked in the line current under steady-state and transient conditions respectively. The results provide rich information on the pattern of fault harmonics. Parametric spectral estimation is also explored as an alternative to the Fourier transform in the steady-state analysis of faulty conditions; it is found to be as effective as the Fourier transform and more amenable to short signal-measurement durations. Vibration analysis is applied to the detection of eccentricities; its efficacy hinges on proper determination of the vibratory frequencies and quantification of the corresponding tones, achieved here using analytical formulations and signal processing techniques.
Furthermore, the developed fault model is used to assess the influence of cogging torque minimization techniques and rotor topologies on the current signal in the presence of static eccentricity. The double-sided topology is found to be tolerant of static eccentricity, unlike the single-sided topology, owing to the opposing effect of the resulting asymmetrical airgap properties. The cogging torque minimization techniques do not impair the established fault detection technique in the single-sided topology. Interturn faults are diagnosed by applying broadband electrical impedance spectroscopy, and a high-frequency winding model is developed to analyse the measured impedance-frequency response.
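To give a flavor of the current-signature approach, the sketch below synthesizes a stator current carrying eccentricity sidebands at f0·(1 ± k/p) and reads the sideband amplitudes off the DFT. The supply frequency, pole-pair count, and 5% sideband level are invented for illustration, and the sideband formula is the standard one from the motor-current-signature literature, not necessarily the thesis's exact signature model.

```python
import numpy as np

fs, f0, p = 5000.0, 50.0, 4.0        # sampling rate, supply frequency, pole pairs (assumed)
t = np.arange(0, 2.0, 1 / fs)        # 2 s record -> 0.5 Hz spectral resolution

# Simulated stator current: fundamental plus static-eccentricity
# sidebands at f0 * (1 +/- k/p), here with k = 1 and 5% relative amplitude.
i_healthy = np.cos(2 * np.pi * f0 * t)
sidebands = [f0 * (1 + 1 / p), f0 * (1 - 1 / p)]   # 62.5 Hz and 37.5 Hz
i_faulty = i_healthy + sum(0.05 * np.cos(2 * np.pi * f * t) for f in sidebands)

def tone_amplitude(x, f, fs):
    """Amplitude of the spectral line nearest frequency f (rectangular-window DFT)."""
    X = np.fft.rfft(x) / (len(x) / 2)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return np.abs(X[np.argmin(np.abs(freqs - f))])

# Fault indicator: sideband amplitude relative to the fundamental.
fault_index = max(tone_amplitude(i_faulty, f, fs) for f in sidebands)
```

In practice the record length must be chosen so the fault sidebands fall near DFT bins (or a parametric estimator used instead, as the thesis explores), since leakage from the much stronger fundamental can otherwise mask them.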
Exact and approximate Strang-Fix conditions to reconstruct signals with finite rate of innovation from samples taken with arbitrary kernels
In the last few years, several new methods have been developed for the sampling and
exact reconstruction of specific classes of non-bandlimited signals known as signals with finite rate of innovation (FRI). This is achieved by using adequate sampling kernels and
reconstruction schemes. An example of valid kernels, which we use throughout the thesis,
is given by the family of exponential reproducing functions. These satisfy the generalised
Strang-Fix conditions, which ensure that proper linear combinations of the kernel with its
shifted versions reproduce polynomials or exponentials exactly.
The first contribution of the thesis is to analyse the behaviour of these kernels in the
case of noisy measurements in order to provide clear guidelines on how to choose the exponential
reproducing kernel that leads to the most stable reconstruction when estimating
FRI signals from noisy samples. We then depart from the situation in which we can choose
the sampling kernel and develop a new strategy that is universal in that it works with any
kernel. We do so by noting that meeting the exact exponential reproduction condition is
too stringent a constraint. We thus allow for a controlled error in the reproduction formula
in order to use the exponential reproduction idea with arbitrary kernels and develop
a universal reconstruction method which is stable and robust to noise.
Numerical results validate the various contributions of the thesis and in particular show
that the approximate exponential reproduction strategy leads to more stable and accurate
reconstruction results than those obtained when using the exact recovery methods.
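Once an exponential reproducing kernel (exact or approximate) maps the samples to exponential moments, FRI reconstruction reduces to an annihilating-filter (Prony) step. A minimal noiseless sketch of that step, with invented Dirac locations and amplitudes:

```python
import numpy as np

def annihilating_filter(s, K):
    """Recover the K parameters u_k from 2K moments
    s[m] = sum_k a_k * u_k**m via the annihilating-filter (Prony) method:
    solve for the filter h (with h[0] = 1) that zeroes the moment sequence,
    then read the u_k off as the roots of its z-transform."""
    A = np.array([[s[m - i] for i in range(1, K + 1)] for m in range(K, 2 * K)])
    b = -np.array([s[m] for m in range(K, 2 * K)])
    h = np.linalg.solve(A, b)
    return np.roots(np.concatenate(([1.0], h)))

# Stream of K = 2 Diracs on [0, tau): locations/amplitudes chosen arbitrarily.
K, tau = 2, 1.0
t_true = np.array([0.2, 0.55])
a_true = np.array([1.0, 0.7])
u_true = np.exp(-2j * np.pi * t_true / tau)
s = np.array([np.sum(a_true * u_true ** m) for m in range(2 * K)])

u_hat = annihilating_filter(s, K)
t_hat = np.sort(np.mod(-np.angle(u_hat) * tau / (2 * np.pi), tau))  # locations
```

The thesis's contributions concern how robustly the moments s[m] can be obtained from noisy samples with a given (possibly arbitrary) kernel; the annihilation step itself is the classical core that then recovers the innovation parameters.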
Phase estimation in a navigation receiver
This thesis proposes a new method for estimating the unknown phase of a sampled sinusoid of known frequency. The method is called phase corrected correlation (PCC) and it is targeted specifically at the case where there is a non-integer number of cycles in the measurement interval. The performance of the PCC phase estimate is studied by comparing its mean squared error (MSE) with the Cramér-Rao lower bound (CRLB). To simplify analysis and comparison with related methods, the selected signal model is a single sinusoid in additive white Gaussian noise.
Two additional algorithms, burst noise removal and partition outlier removal, are proposed for decreasing the MSE of phase estimates in the presence of disturbances such as lightning and interfering transmitters. The PCC frequency estimate is obtained by observing the change in signal phase over consecutive measurement intervals. The frequency estimation performance and computational burden of PCC are compared with the interpolated DFT (IDFT).
The application domain is a meteorological sounding system for upper-air wind finding using Very Low Frequency (VLF) navigation systems. The problem is to estimate a minute frequency offset caused by the Doppler effect. The frequencies transmitted by the Russian Alpha radionavigation system are especially challenging: the estimation algorithm must be able to handle a non-integer number of signal cycles in the 400 ms measurement interval. Most related frequency and phase estimation methods are not applicable to this estimation problem. The interpolated DFT may be feasible, and it is therefore used as a benchmark.
Computer simulations show that the MSE of the phase estimate is close to the CRLB. The same applies to frequency estimates obtained by observing the change in signal phase over consecutive measurement intervals. The simulations also show that the MSE of the PCC frequency estimate is closer to the CRLB than that of the IDFT frequency estimate. Moreover, PCC achieves this performance with a lower computational burden, making it the preferred choice in this application. It is further shown that the MSE of the phase estimate decreases as the sampling rate or measurement interval is increased, and when interference is removed using the burst noise removal and partition outlier removal algorithms. Finally, a number of implementation issues are covered to achieve a computationally efficient digital signal processor (DSP) implementation.
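The baseline that PCC refines can be sketched directly: correlate the samples with a complex exponential at the known frequency and take the angle. With an integer number of cycles in the window this is exact in the noiseless case; with a non-integer number of cycles (the Alpha-signal situation the thesis targets) the double-frequency term no longer cancels and the estimate acquires a small bias, which is the defect the phase-correction step addresses. All parameters below are illustrative, and this is the plain correlator, not PCC itself.

```python
import numpy as np

def corr_phase(x, f, fs):
    """Baseline phase estimator for a real sinusoid of known frequency f:
    correlate with exp(-j*2*pi*f*n/fs) and take the angle. Unbiased only
    when the window holds an integer number of cycles; otherwise the
    un-cancelled double-frequency term biases the result."""
    n = np.arange(len(x))
    return np.angle(np.sum(x * np.exp(-2j * np.pi * f * n / fs)))

fs, T = 8000.0, 0.1                  # sampling rate and measurement interval
N = int(T * fs)
phi = 0.7                            # true phase, radians

f_int = 320.0                        # 32 cycles in 0.1 s: integer -> exact estimate
x = np.cos(2 * np.pi * f_int * np.arange(N) / fs + phi)
phi_hat = corr_phase(x, f_int, fs)

f_frac = 313.0                       # 31.3 cycles: non-integer -> small bias
x2 = np.cos(2 * np.pi * f_frac * np.arange(N) / fs + phi)
phi_hat2 = corr_phase(x2, f_frac, fs)
```

Tracking this phase across consecutive 400 ms intervals and differencing gives the frequency estimate used for the Doppler wind measurement.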
Applications of compressive sensing to direction of arrival estimation
Direction of Arrival (DOA) estimation of plane waves impinging on an array of sensors is one of the most important tasks in array signal processing and has attracted tremendous research interest over the past several decades.
The estimated DOAs are used in various applications such as localization of transmitting sources, massive MIMO and 5G networks, and tracking and surveillance in radar. The major objective in DOA estimation is to develop approaches that reduce the hardware complexity, in terms of receiver cost and power consumption, while providing a desired level of estimation accuracy and robustness in the presence of multiple sources and/or multiple paths. Compressive sensing (CS) is a novel sampling methodology merging signal acquisition and compression, which allows a signal to be sampled at a rate below the conventional Nyquist bound. In essence, it has been shown that signals can be acquired at sub-Nyquist sampling rates without loss of information, provided they possess a sufficiently sparse representation in some domain and the measurement strategy is suitably chosen. CS has recently been applied to DOA estimation, leveraging the fact that a superposition of planar wavefronts corresponds to a sparse angular power spectrum. This dissertation investigates the application of compressive sensing to the DOA estimation problem with the goal of reducing hardware complexity while achieving high resolution and a high level of robustness. Many CS-based DOA estimation algorithms have been proposed in recent years, showing tremendous advantages with respect to the complexity of the numerical solution while being insensitive to source correlation and allowing arbitrary array geometries. Moreover, CS has been suggested for application in the spatial domain, with the main goal of reducing the complexity of the measurement process by using fewer RF chains and storing less measured data without losing any significant information. In the first part of the work we investigate the model mismatch problem that arises in CS-based DOA estimation when sources lie off the grid.
To apply the CS framework, a very common approach is to construct a finite dictionary by sampling the angular domain with a predefined grid. The true source directions are then almost surely not located exactly on these grid points, which leads to a model mismatch that deteriorates the performance of the estimators. We take an analytical approach to investigate the effect of such grid offsets on the recovered spectra, showing that each off-grid source can be well approximated by the two neighboring points on the grid. We propose a simple and efficient scheme to estimate the grid offset for a single source or multiple well-separated sources, and discuss a numerical procedure for the joint estimation of the grid offsets of closely spaced sources. In the second part of the thesis we study the design of compressive antenna arrays for DOA estimation, which aim to provide a larger aperture with reduced hardware complexity, as well as reconfigurability, by linearly combining the antenna outputs into a smaller number of receiver channels. We present a basic receiver architecture for such a compressive array and introduce a generic system model that accommodates different options for the hardware implementation. We then discuss the design of the analog combining network that performs the channel reduction. Our numerical simulations demonstrate the superiority of the proposed optimized compressive arrays over sparse arrays of the same complexity and over compressive arrays with randomly chosen combining kernels. Finally, we consider two further applications of sparse recovery and compressive arrays: CS-based time delay estimation and compressive channel sounding. We show that the proposed off-grid sparse recovery and compressive array approaches yield significant improvements in both applications compared to conventional methods.
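A minimal on-grid instance of CS-based DOA estimation can be sketched with orthogonal matching pursuit over a ULA steering dictionary. The array size, grid spacing, and source angles below are invented, the sources are placed exactly on the grid, and neither the dissertation's off-grid correction nor its compressive combining network is reproduced.

```python
import numpy as np

def omp_doa(y, A, K):
    """On-grid sparse DOA recovery via orthogonal matching pursuit:
    greedily add the steering atom most correlated with the residual,
    then re-fit all selected gains by least squares."""
    residual, support = y.copy(), []
    gains = np.zeros(0, complex)
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        gains, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ gains
    return support, gains

M = 12                                      # half-wavelength ULA sensors
grid = np.deg2rad(np.arange(-90, 91, 2))    # 2-degree dictionary grid
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))

theta_true = np.deg2rad([-24.0, 38.0])      # on-grid sources for this toy example
y = sum(np.exp(1j * np.pi * np.arange(M) * np.sin(t)) for t in theta_true)

support, gains = omp_doa(y, A, 2)
doa_deg = np.sort(np.rad2deg(grid[support]))  # recovered angles in degrees
```

For a true off-grid source, the energy spreads mainly onto the two neighboring grid atoms, which is exactly the observation the dissertation exploits to estimate the grid offset from their recovered coefficients.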
A Cognitive Radio Compressive Sensing Framework
With the proliferation of wireless devices and services, allied with further significant predicted growth, there is an ever-increasing demand for higher transmission rates. This is especially challenging given the limited availability of radio spectrum, and is further exacerbated by a rigid licensing regulatory regime. Spectrum, however, is largely underutilized, and this has prompted regulators to promote the concept of opportunistic spectrum access. This allows unlicensed secondary users to use bands which are licensed to primary users but are currently unoccupied, leading to more efficient spectrum utilization.
A potentially attractive solution to this spectrum underutilisation problem is cognitive radio (CR) technology, which enables the identification and usage of vacant bands by continuously sensing the radio environment, though CR enforces stringent timing requirements and high sampling rates. Compressive sensing (CS) has emerged as a novel sampling paradigm, which provides the theoretical basis to resolve some of these issues, especially for signals exhibiting sparsity in some domain. For CR-related signals however, existing CS architectures such as the random demodulator and compressive multiplexer have limitations in regard to the signal types used, spectrum estimation methods applied, spectral band classification and a dependence on Fourier domain based sparsity.
This thesis presents a new generic CS framework which addresses these issues by specifically embracing three original scientific contributions: i) seamless embedding of the concept of precolouring into existing CS architectures to enhance signal sparsity for CR-related digital modulation schemes; ii) integration of the multitaper spectral estimator to improve sparsity in CR narrowband modulation schemes; and iii) exploiting sparsity in an alternative, non-Fourier (Walsh-Hadamard) domain to expand the applicable CR-related modulation schemes.
Critical analysis reveals that the new CS framework provides a consistently superior and robust solution for the recovery of an extensive set of currently employed CR-type signals encountered in wireless communication standards. Significantly, the generic and portable nature of the framework affords the opportunity for further extensions into other CS architectures and sparsity domains.
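Contribution (iii), sparsity in the Walsh-Hadamard rather than the Fourier domain, can be illustrated with a small sketch: a signal whose Walsh-Hadamard spectrum is sparse is compressively measured and then recovered by matching pursuit in the Hadamard domain. The dimensions, amplitudes, and Gaussian measurement operator are assumptions for the example, not the thesis's architecture.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Walsh-Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def omp(y, D, K):
    """Orthogonal matching pursuit for a K-sparse coefficient vector over dictionary D."""
    residual, support = y.copy(), []
    coeffs = np.zeros(0)
    for _ in range(K):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    return support, coeffs

rng = np.random.default_rng(1)
N, M, K = 64, 32, 3                          # signal length, measurements, sparsity
H = hadamard(N) / np.sqrt(N)                 # orthonormal Walsh-Hadamard basis

s = np.zeros(N)
s[[5, 17, 40]] = [2.0, -1.5, 1.0]            # sparse Walsh-Hadamard spectrum
x = H @ s                                    # time-domain signal (not itself sparse)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # compressive measurement matrix
y = Phi @ x

support, coeffs = omp(y, Phi @ H, K)         # recover in the Hadamard domain
x_hat = H[:, support] @ coeffs               # reconstructed signal
```

The same recovery fails if the dictionary assumes Fourier-domain sparsity, which is the practical point of admitting alternative sparsity domains for CR-type waveforms.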