9 research outputs found

    Time-delay estimation under non-clustered and clustered scenarios for GNSS signals

    Doctoral thesis (tese de doutorado), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2021. Funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
Applications employing Global Navigation Satellite Systems (GNSS) to provide accurate positioning are subject to drastic degradation, not only from electromagnetic interference but also from multipath components caused by reflections and refractions in the environment. Safety-critical applications such as autonomous vehicles and civil aviation, and liability-critical applications such as fisheries management, automatic tolling, and precision agriculture, depend on accurate positioning under such demanding scenarios. Typically, the more clustering occurs between the line-of-sight (LOS) component and multipath or non-line-of-sight (NLOS) components, the less accurate the position estimate becomes. State-of-the-art tensor-based approaches for antenna-array GNSS receivers apply tensor signal processing to separate the LOS component from the NLOS components, thereby mitigating the effects of the latter. These approaches use, respectively: the multilinear singular value decomposition (MLSVD) to generate a higher-order eigenfilter (HOE) with forward-backward averaging (FBA) and expanded spatial smoothing (ESPS) preprocessing; direction of arrival (DoA) estimation and Khatri-Rao factorization (KRF); Procrustes estimation and Khatri-Rao factorization (ProKRaft); and the semi-algebraic framework for approximate canonical polyadic decomposition via simultaneous matrix diagonalization (SECSI). These approaches perform filtering; parameter estimation and filtering; iterative algebraic factor-matrix estimation and filtering; and algebraic factor-matrix estimation, respectively.
We propose two processing approaches to time-delay estimation (TDE). The first is batch processing, taking data from several signal periods. Using batch processing, we propose two algebraic approaches to TDE in which diagonalization is achieved via the generalized eigenvalue decomposition (GEVD) of the first two frontal slices of the measurement tensor's core tensor, estimated via the MLSVD. The former approach, like the cited methods (whose simulations assume one LOS component and one NLOS component, so that the measured data tensor has full rank in all its modes), makes no assumption about the rank of the measurement tensor. The latter approach assumes scenarios in which more than one NLOS component is present and these components are clustered, so that several vectors of one of the factor matrices forming the data tensor are highly correlated, resulting in a measurement tensor that is rank deficient in at least one mode. The proposed algebraic tensor-based schemes utilize the canonical polyadic decomposition via generalized eigenvalue decomposition (CPD-GEVD) and the decomposition in multilinear rank-(Lr, Lr, 1) terms via generalized eigenvalue decomposition ((Lr, Lr, 1)-GEVD) to improve the TDE of the LOS component in challenging scenarios. The second approach is adaptive processing of individual samples, utilizing subspace tracking to iteratively estimate the subspace at each epoch (code period). Using adaptive processing, we propose two approaches: one applying FBA and ESPS to the data and estimating a higher-order eigenfilter, and one using a parametric approach based on DoA estimation.
By extending the data model to a uniform rectangular array (URA), we obtain a data stream of third-order tensors. For this model we propose three approaches to TDE based on the HOE, CPD-GEVD, and standard tensor ESPRIT, respectively, employing a sequential truncation strategy to reduce the number of operations required for each tensor mode.
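The batch step described above, estimating the core tensor via the MLSVD and applying a GEVD to its first two frontal slices, can be sketched in a few lines of numpy/scipy. This is a toy illustration on a synthetic rank-2 third-order tensor, not the thesis's GNSS data model; the function names and dimensions are illustrative.

```python
import numpy as np
from scipy.linalg import eig

def hosvd(T):
    """Multilinear SVD (MLSVD/HOSVD) of a third-order tensor.
    Returns the factor matrices U[0..2] and the core tensor S."""
    U = []
    for mode in range(3):
        # mode-n unfolding: bring `mode` to the front, flatten the rest
        Tn = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        u, _, _ = np.linalg.svd(Tn, full_matrices=False)
        U.append(u)
    # core tensor: S = T x1 U1^H x2 U2^H x3 U3^H
    S = np.einsum('ijk,ia,jb,kc->abc', T, U[0].conj(), U[1].conj(), U[2].conj())
    return U, S

# toy rank-2 data tensor (a stand-in for the measurement tensor)
rng = np.random.default_rng(0)
R = 2
A = rng.standard_normal((4, R))
B = rng.standard_normal((5, R))
C = rng.standard_normal((6, R))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

U, S = hosvd(T)
# GEVD of the first two frontal slices of the truncated core tensor
S1, S2 = S[:R, :R, 0], S[:R, :R, 1]
eigvals, _ = eig(S2, S1)
print(eigvals)
```

In the actual CPD-GEVD scheme these generalized eigenvectors are then used to recover the factor matrices; the sketch stops at the diagonalizing GEVD itself.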

    Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019

    International audience.

    Efficient algorithms and data structures for compressive sensing

    Along with the ever-increasing number of sensors, which generate rapidly growing amounts of data, the traditional sampling paradigm adhering to the Nyquist criterion faces an equally increasing number of obstacles. The rather recent theory of Compressive Sensing (CS) promises to alleviate some of these drawbacks by generalizing the sampling and reconstruction schemes, so that the acquired samples can contain more complex information about the signal than Nyquist samples. The measurement process becomes more complex, the reconstruction algorithms necessarily become nonlinear, and the hardware design process needs to be revisited to account for this new acquisition scheme. Hence, one can identify a trade-off between the information contained in individual samples of a signal and the effort required to develop and operate the sensing system. This thesis addresses the steps necessary to shift this trade-off further in favor of CS. We do so by providing new results that make CS easier to deploy in practice while maintaining the performance indicated by theoretical results. The sparsity order of a signal plays a central role in any CS system, so we present a method to estimate this crucial quantity from a single snapshot, prior to recovery. As we show, the proposed sparsity order estimation method improves the reconstruction error compared to an unguided reconstruction.
During the development of the theory, we observe that a matrix-free view of the involved linear mappings offers many possibilities to make the modeling and reconstruction stages much more efficient. Hence, we present an open-source software architecture to construct these matrix-free representations, and we showcase its ease of use and performance when used for sparse recovery to detect defects from ultrasound data and to estimate scatterers in a radio channel from ultra-wideband impulse responses. For the former application, we present a complete reconstruction pipeline for ultrasound data compressed by sub-sampling in the frequency domain. We present the algorithms for the forward model and the reconstruction stage, and we give asymptotic bounds for the number of measurements and the expected reconstruction error. We show that the proposed system allows significant compression levels without substantially deteriorating the imaging quality. For the second application, we develop a sampling scheme based on a Random Demodulator to acquire the channel impulse response (IR), capturing enough information in the recorded samples to reliably estimate the IR by exploiting sparsity. Compared to the state of the art, this improves robustness to the effects of time-variant radar channels while also outperforming Nyquist-sampling-based methods in terms of reconstruction error.
To circumvent the inherent model mismatch of early grid-based compressive sensing theory, we make use of the Atomic Norm Minimization framework and show how it can be used to estimate the signal covariance with R-dimensional parameters from multiple compressive snapshots, without restriction to a finite, discrete grid. To this end, we derive an R-dimensional variant of the ADMM that can estimate this covariance in a very general setting, and we show how to use it for direction finding with realistic antenna geometries. In this context, we also present a method based on a stochastic gradient descent iteration scheme to find compression schemes that are well suited for parameter estimation, since the resulting sub-sampling has a uniform effect on the whole parameter space. Finally, we show numerically that the combination of these two approaches yields a well-performing grid-free CS pipeline.
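The grid-based sparse recovery discussed above can be illustrated with a generic textbook method: iterative soft-thresholding (ISTA) applied to compressive Gaussian measurements of a sparse vector. This is a minimal sketch under simplified assumptions, not the pipeline, operators, or software architecture developed in the thesis; all dimensions and names are illustrative.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# sparse ground truth: n = 128 coefficients, only k = 4 nonzero
rng = np.random.default_rng(0)
n, m, k = 128, 48, 4
support = rng.choice(n, size=k, replace=False)
x0 = np.zeros(n)
x0[support] = rng.standard_normal(k)
x0[support] += np.sign(x0[support])        # keep magnitudes away from zero

A = rng.standard_normal((m, n)) / np.sqrt(m)   # compressive measurement matrix
y = A @ x0                                      # m << n noiseless measurements

x_hat = ista(A, y)
print(sorted(np.flatnonzero(np.abs(x_hat) > 0.5)), sorted(support))
```

Knowing the sparsity order k in advance, as the sparsity order estimation method above provides, lets one choose the regularization or stopping rule instead of reconstructing "unguided".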

    Advanced array signal processing algorithms for multi-dimensional parameter estimation

    Multi-dimensional high-resolution parameter estimation is a fundamental problem in a variety of array signal processing applications, including radar, mobile communications, multiple-input multiple-output (MIMO) channel estimation, and biomedical imaging. The objective is to estimate the frequency parameters of noise-corrupted multi-dimensional harmonics that are sampled on a multi-dimensional grid. Among the proposed parameter estimation algorithms to solve this problem, multi-dimensional (R-D) ESPRIT-type algorithms have been widely used due to their computational efficiency and their simplicity. Their performance in various scenarios has been objectively evaluated by means of an analytical performance assessment framework. Recently, a relatively new class of parameter estimators based on sparse signal reconstruction has gained popularity due to their robustness under challenging conditions such as a small sample size or strong signal correlation. A common approach towards further improving the performance of parameter estimation algorithms is to exploit prior knowledge on the structure of the signals. In this thesis, we develop enhanced versions of R-D ESPRIT-type algorithms and the relatively new class of sparsity-based parameter estimation algorithms by exploiting the multi-dimensional structure of the signals and the statistical properties of strictly non-circular (NC) signals. First, we derive analytical expressions for the gain from forward-backward averaging and tensor-based processing in R-D ESPRIT-type and R-D Tensor-ESPRIT-type algorithms for the special case of two sources. This is accomplished by simplifying the generic analytical MSE expressions from the performance analysis of R-D ESPRIT-type algorithms. The derived expressions allow us to identify the parameter settings, e.g., the number of sensors, the signal correlation, and the source separation, for which both gains are most pronounced or no gain is achieved. 
Second, we propose the generalized least squares (GLS) algorithm to solve the overdetermined shift invariance equation in R-D ESPRIT-type algorithms. GLS directly incorporates the statistics of the subspace estimation error into the shift invariance solution through its covariance matrix, which is found via a first-order perturbation expansion. To objectively assess the estimation accuracy, we derive performance analysis expressions for the mean square error (MSE) of GLS-based ESPRIT-type algorithms, which are asymptotic in the effective SNR, i.e., the results become exact for a high SNR or a small sample size. Based on the performance analysis, we show that the simplified MSE expressions of GLS-based 1-D ESPRIT-type algorithms for a single source and two sources can be transformed into the corresponding Cramer-Rao bound (CRB) expressions, which provide a lower limit on the estimation error. Thereby, ESPRIT-type algorithms can become asymptotically efficient, i.e., they asymptotically achieve the CRB. Numerical simulations show that this can also be the case for more than two sources. In the third contribution, we derive matrix-based and tensor-based R-D NC ESPRIT-type algorithms for multi-dimensional strictly non-circular signals, where R-D NC Tensor-ESPRIT-type algorithms exploit both the multi-dimensional structure and the strictly non-circular structure of the signals. Exploiting the NC signal structure by means of a preprocessing step leads to a virtual doubling of the original sensor array, which provides an improved estimation accuracy and doubles the number of resolvable signals. We derive an analytical performance analysis and compute simplified MSE expressions for a single source and two sources. These expressions are used to analytically compute the NC gain for these cases, which has so far only been studied via Monte-Carlo simulations. 
We additionally consider spatial smoothing preprocessing for R-D ESPRIT-type algorithms, which has been widely used to improve the estimation performance for highly correlated signals or a small sample size. Once more, we derive performance analysis expressions for R-D ESPRIT-type algorithms and their corresponding NC versions with spatial smoothing, and we derive the optimal number of subarrays for spatial smoothing that minimizes the MSE for a single source. In the next part, we focus on the relatively new concept of parameter estimation via sparse signal reconstruction (SSR), in which the sparsity of the received signal power spectrum in the spatio-temporal domain is exploited. We develop three NC SSR-based parameter estimation algorithms for strictly non-circular sources and show that the benefits of exploiting the signals' NC structure can also be achieved via sparse reconstruction. We develop two grid-based NC SSR algorithms with a low-complexity off-grid estimation procedure, and a gridless NC SSR algorithm based on atomic norm minimization. As the final contribution of this thesis, we derive the deterministic R-D NC CRB for strictly non-circular sources, which serves as a benchmark for the presented R-D NC ESPRIT-type algorithms and the NC SSR-based parameter estimation algorithms. We show that for special cases such as full coherence, a single snapshot, or a single strictly non-circular source, the deterministic R-D NC CRB reduces to the existing deterministic R-D CRB for arbitrary signals. Therefore, no NC gain can be achieved in these cases. For the special case of two closely-spaced NC sources, we simplify the NC CRB expression and compute the NC gain for two closely-spaced NC signals.
Finally, its behavior in terms of the physical parameters is studied to determine the parameter settings that provide the largest NC gain.
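The shift-invariance idea at the heart of the ESPRIT-type algorithms discussed above can be illustrated with a standard 1-D least-squares ESPRIT for direction finding with a uniform linear array. This is a textbook baseline on a synthetic scenario, not the GLS-based or NC variants proposed in the thesis; all parameters are illustrative.

```python
import numpy as np

def esprit_doa(X, d):
    """Standard 1-D LS-ESPRIT for a ULA with half-wavelength spacing.
    X: M x N snapshot matrix, d: number of sources. Returns DoAs in degrees."""
    M, N = X.shape
    R = X @ X.conj().T / N                       # sample covariance
    _, U = np.linalg.eigh(R)                     # eigenvalues in ascending order
    Us = U[:, -d:]                               # signal subspace estimate
    # least-squares solution of the shift invariance equation Us[:-1] Psi = Us[1:]
    Psi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)
    mu = np.angle(np.linalg.eigvals(Psi))        # spatial frequencies
    return np.degrees(np.arcsin(mu / np.pi))

# toy scenario: two sources at -20 and 30 degrees, 8-element ULA, 200 snapshots
rng = np.random.default_rng(1)
M, N = 8, 200
theta = np.radians([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(theta)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

print(np.sort(esprit_doa(X, 2)))
```

The GLS approach analyzed in the thesis replaces the plain least-squares step with a weighted solution that incorporates the covariance of the subspace estimation error.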

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges in a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and they have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand. These five categories are ordered to address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Radio Communications

    In the last decades, the restless evolution of information and communication technologies (ICT) has brought about a deep transformation of our habits. The growth of the Internet and the advances in hardware and software implementations have modified the way we communicate and share information. In this book, an overview of the major issues faced today by researchers in the field of radio communications is given through 35 high-quality chapters written by specialists working in universities and research centers all over the world. Various aspects are discussed in depth: channel modeling, beamforming, multiple antennas, cooperative networks, opportunistic scheduling, advanced admission control, handover management, systems performance assessment, routing issues in mobility conditions, localization, and web security. Advanced techniques for radio resource management are discussed for both single and multiple radio technologies, in infrastructure, mesh, or ad hoc networks.

    Aeronautical engineering: A continuing bibliography with indexes (supplement 253)

    This bibliography lists 637 reports, articles, and other documents introduced into the NASA scientific and technical information system in May 1990. Subject coverage includes: design, construction, and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics.

    Resource Allocation in Multi-user MIMO Networks: Interference Management and Cooperative Communications

    Nowadays, wireless communications are becoming tightly integrated into our daily lives, especially with the global spread of laptops, tablets, and smartphones. This has paved the way to dramatically increasing wireless network dimensions in terms of subscribers and amount of flowing data. Therefore, two fundamental requirements for future 5G wireless networks are the ability to support high data traffic and exceedingly low latency. A likely candidate to fulfill these requirements is multicell multi-user multiple-input multiple-output (MU-MIMO), also termed coordinated multi-point (CoMP) transmission and reception. To achieve the highest possible performance in MU-MIMO networks, a properly designed resource allocation algorithm is needed. Moreover, with the rapidly growing data traffic, interference has become a major limitation in wireless networks. Interference alignment (IA) has been shown to significantly manage interference and improve network performance. However, how to practically use IA to mitigate interference in a downlink MU-MIMO network remains an open problem. In this dissertation, we improve the performance of MU-MIMO networks in terms of spectral efficiency by designing and developing new beamforming algorithms that can efficiently mitigate interference and allocate resources. We then mathematically analyze the performance improvement of MU-MIMO networks employing the proposed techniques. Fundamental relationships between network parameters and network performance are revealed, which provide guidance for wireless network design. Finally, system-level simulations are conducted to investigate the performance of the proposed strategies.
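To make the interference-mitigation idea concrete, here is a minimal zero-forcing precoding sketch for a toy multi-user downlink. Zero-forcing is a textbook baseline that nulls inter-user interference by inverting the channel; it is not one of the algorithms proposed in the dissertation, and all dimensions are illustrative.

```python
import numpy as np

# Toy MU-MISO downlink: one 4-antenna base station serving 3 single-antenna users.
rng = np.random.default_rng(2)
K, Nt = 3, 4
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of the channel,
# with columns normalized to unit transmit power per user.
W = np.linalg.pinv(H)
W = W / np.linalg.norm(W, axis=0, keepdims=True)

# The effective channel H @ W is diagonal: each user sees only its own
# stream, i.e., inter-user interference is nulled.
Heff = H @ W
print(np.round(np.abs(Heff), 3))
```

The price of zero-forcing is a power penalty when user channels are nearly aligned, which is one reason more sophisticated beamforming and resource allocation designs, such as those developed in the dissertation, are needed.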