105 research outputs found

    FDOA-based passive source localization: a geometric perspective

    2018 Fall. Includes bibliographical references. We consider the problem of passively locating the source of a radio-frequency signal using observations by several sensors. Received signals can be compared to obtain time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements. The geometric relationship satisfied by these measurements allows us to make inferences about the emitter's location. In this research, we focus on the FDOA-based source localization problem. This problem has been less widely studied and is more difficult than solving for an emitter's location using TDOA measurements. When the FDOA-based source localization problem is formulated as a system of polynomials, the source's position is contained in the corresponding algebraic variety. This motivates the use of methods from algebraic geometry, specifically numerical algebraic geometry (NAG), to solve for the emitter's location and to gain insight into this system's interesting structure.
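The FDOA measurement underlying the polynomial system can be made concrete with a small sketch: each moving sensor observes a Doppler shift proportional to its radial velocity toward the emitter, and the FDOA is the difference of two such shifts. All geometry, carrier frequency, and names below are illustrative, not taken from the thesis.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def fdoa(p, s1, v1, s2, v2, f0=1e9):
    """FDOA (Hz) between two moving sensors for a stationary emitter at p.

    Each sensor sees a Doppler shift (f0/c) * v . u, where u is the unit
    line-of-sight vector from the sensor to the emitter; FDOA is the
    difference of the two shifts.
    """
    p, s1, v1, s2, v2 = map(np.asarray, (p, s1, v1, s2, v2))
    u1 = (p - s1) / np.linalg.norm(p - s1)
    u2 = (p - s2) / np.linalg.norm(p - s2)
    return (f0 / C) * (v1 @ u1 - v2 @ u2)

# Two sensors flying parallel eastward tracks past an emitter at the origin:
shift = fdoa([0.0, 0.0], [-1e4, 5e3], [200.0, 0.0], [1e4, 5e3], [200.0, 0.0])
```

Fixing the measured `shift` and sweeping `p` traces out the iso-FDOA curve, which is exactly the algebraic variety the abstract refers to.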

    Synthetic aperture source localization

    2018 Summer. Includes bibliographical references. The detection and localization of sources of electromagnetic (EM) radiation has many applications in both civilian and defense communities. The goal of source localization is to identify the geographic position of an emitter of some radiation from measurements of the fields that the source produces. Although the problem has been studied intensively for many decades, much work remains to be done. Many state-of-the-art methods require large numbers of sensors and perform poorly, or require additional sensors, when target emitters transmit highly correlated waveforms. Some methods also require a preprocessing step that attempts to identify regions of the data which come from emitters in the scene before running the localization algorithm. Additionally, it has been proven that pure Angle of Arrival (AOA) techniques based on current methods are always suboptimal when multiple emitters are present. We present a new source localization technique which employs a cross-correlation measure of the Time Difference of Arrival (TDOA) for signals recorded at two separate platforms, at least one of which is in motion. This data is then backprojected through a Synthetic Aperture Radar (SAR)-like process to form an image of the locations of the emitters in a target scene. This method has the advantage of not requiring any a priori knowledge of the number of emitters in the scene. Nor does it rest on an ability to identify regions of the data which come from individual emitters, though if this capability is present it may improve image quality. Additionally, we demonstrate that this method is capable of localizing emitters which transmit highly correlated waveforms, though complications arise when several such emitters are present in the scene. We discuss these complications and strategies to mitigate them. Finally, we conclude with an overview of our method's performance for various levels of additive noise and lay out a path for advancing study of this new method through future work.
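The cross-correlation TDOA measure the abstract builds on can be sketched generically (the SAR-like backprojection step is not reproduced; the signal and sample rate below are made up):

```python
import numpy as np

def tdoa_xcorr(x1, x2, fs):
    """Estimate the TDOA (seconds) of a common source signal recorded at
    two platforms, as the lag that maximizes the cross-correlation."""
    corr = np.correlate(x1, x2, mode="full")
    lags = np.arange(-(len(x2) - 1), len(x1))  # lag axis for mode="full"
    return lags[np.argmax(corr)] / fs

# A Gaussian pulse arriving 25 samples (25 ms) later at the first platform:
fs = 1000.0
t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 80.0) / 5.0) ** 2)
delay = tdoa_xcorr(np.roll(pulse, 25), pulse, fs)
```

In the imaging scheme described above, each candidate pixel predicts a TDOA from the platform geometry, and the cross-correlation value at that lag is accumulated into the image.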

    Dynamic Algorithms and Asymptotic Theory for Lp-norm Data Analysis

    The focus of this dissertation is the development of outlier-resistant stochastic algorithms for Principal Component Analysis (PCA) and the derivation of novel asymptotic theory for Lp-norm Principal Component Analysis (Lp-PCA). Modern machine learning and signal processing applications employ sensors that collect large volumes of measurements, stored as data matrices that are often massive and must be processed efficiently for machine learning algorithms to perform effective pattern discovery. One such commonly used matrix analysis technique is PCA. Over the past century, PCA has been used extensively in areas such as machine learning, deep learning, pattern recognition, and computer vision, to name a few. PCA's popularity can be attributed to its intuitive formulation on the L2-norm, the availability of an elegant solution via the singular value decomposition (SVD), and asymptotic convergence guarantees. However, PCA has been shown to be highly sensitive to faulty measurements (outliers) because of its reliance on the outlier-sensitive L2-norm. Arguably, the most straightforward approach to impart robustness against outliers is to replace the outlier-sensitive L2-norm with the outlier-resistant L1-norm, thus formulating what is known as L1-PCA. Exact and approximate solvers have been proposed for L1-PCA in the literature. On the other hand, in this big-data era, the data matrix may be very large and/or the measurements may arrive in streaming fashion, and traditional L1-PCA algorithms are not suitable in this setting. In order to efficiently process streaming data while remaining resistant to outliers, we propose a stochastic L1-PCA algorithm that computes the dominant principal component (PC) with formal convergence guarantees. We further generalize our stochastic L1-PCA algorithm to find multiple components by proposing a new PCA framework that maximizes the recently proposed Barron loss. Leveraging the Barron loss yields a stochastic algorithm with a tunable robustness parameter that allows the user to control the amount of outlier-resistance required in a given application. We demonstrate the efficacy and robustness of our stochastic algorithms on synthetic and real-world datasets. Our experimental studies include online subspace estimation, classification, video surveillance, and image conditioning, among others. Last, we focus on the development of asymptotic theory for Lp-PCA. In general, Lp-PCA for p < 2 has been shown to outperform PCA in the presence of outliers owing to its outlier resistance. However, unlike PCA, Lp-PCA is perceived as a "robust heuristic" by the research community due to the lack of theoretical asymptotic convergence guarantees. In this work, we strive to shed light on the topic by developing asymptotic theory for Lp-PCA. Specifically, we show that, for a broad class of data distributions, the Lp-PCs span the same subspace as the standard PCs asymptotically; moreover, we prove that the Lp-PCs are specific rotated versions of the PCs. Finally, we demonstrate the asymptotic equivalence of PCA and Lp-PCA with a wide variety of experimental studies.
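The flavor of a streaming, outlier-resistant update for the dominant component can be sketched with a simple sign-based rule. This is an illustration of the L1 idea only, not the dissertation's algorithm; all data and parameters are made up.

```python
import numpy as np

def stochastic_l1_pc(stream, dim, lr=0.01, seed=0):
    """Illustrative stochastic rule for a dominant L1-flavored PC."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    for x in stream:
        # Sign-based update: an outlier moves w by lr*|x|, not lr*|x|^2,
        # which is where the outlier resistance comes from.
        w += lr * np.sign(w @ x) * x
        w /= np.linalg.norm(w)  # stay on the unit sphere
    return w

# Data concentrated along e1, plus a few gross outliers along e2:
rng = np.random.default_rng(1)
data = np.outer(rng.standard_normal(2000), [3.0, 0.0])
data += 0.1 * rng.standard_normal(data.shape)
data[::500] += [0.0, 50.0]  # occasional faulty measurements
w = stochastic_l1_pc(data, 2)
```

Despite the outliers, `w` stays aligned with the dominant direction e1; an L2-based streaming update would be pulled much harder toward the outlier direction.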

    Complex Neural Networks for Audio

    Audio is represented in two mathematically equivalent ways: the real-valued time domain (i.e., waveform) and the complex-valued frequency domain (i.e., spectrum). There are advantages to the frequency-domain representation: e.g., the human auditory system is known to process sound in the frequency domain. Furthermore, linear time-invariant systems are convolved with sources in the time domain, whereas they may be factorized in the frequency domain. Neural networks have become rather useful when applied to audio tasks such as machine listening and audio synthesis, which are related by their dependence on high-quality acoustic models. They ideally encapsulate fine-scale temporal structure, such as that encoded in the phase of frequency-domain audio, yet there are no authoritative deep learning methods for complex audio. This manuscript is dedicated to addressing that shortcoming. Chapter 2 motivates complex networks by their affinity with complex-domain audio, while Chapter 3 contributes methods for building and optimizing complex networks. We show that the naive implementation of Adam optimization is incorrect for complex random variables and that the selection of input and output representation has a significant impact on the performance of a complex network. Experimental results with novel complex neural architectures are provided in the second half of this manuscript. Chapter 4 introduces a complex model for binaural audio source localization. We show that, like humans, the complex model can generalize to different anatomical filters, which is important in the context of machine listening. The complex model's performance is better than that of real-valued models, as well as real- and complex-valued baselines. Chapter 5 proposes a two-stage method for speech enhancement. In the first stage, a complex-valued stochastic autoencoder projects complex vectors to a discrete space. In the second stage, long-term temporal dependencies are modeled in the discrete space. The autoencoder raises the performance ceiling for state-of-the-art speech enhancement, but the dynamic enhancement model does not outperform other baselines. We discuss areas for improvement and note that the complex Adam optimizer improves training convergence over the naive implementation.
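The complex-Adam issue can be illustrated with one step of the update. Naively squaring a complex gradient gives a complex, phase-dependent second moment; the fix sketched here, in the spirit of the manuscript's description (the exact optimizer is not reproduced), tracks the real quantity |g|^2 instead.

```python
import numpy as np

def complex_adam_step(w, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step for a complex parameter w with complex gradient g."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * np.abs(g) ** 2  # real second moment, not g**2
    m_hat = m / (1 - b1 ** t)               # bias corrections
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Fit a single complex weight to a target by gradient steps on |w - c|^2:
c = 1.0 + 2.0j
w, m, v = 0.0 + 0.0j, 0.0 + 0.0j, 0.0
for t in range(1, 2001):
    g = w - c  # Wirtinger gradient of |w - c|^2 w.r.t. conj(w)
    w, m, v = complex_adam_step(w, g, m, v, t)
```

With `g ** 2` in place of `np.abs(g) ** 2`, the denominator becomes complex and the step direction is corrupted by the gradient's phase.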

    Radar Technology

    In this book “Radar Technology”, the chapters are divided into four main topic areas. Topic area 1, “Radar Systems”, consists of chapters that treat whole radar systems and the environment-and-target functional chain. Topic area 2, “Radar Applications”, shows various applications of radar systems, including meteorological radars, ground penetrating radars, and glaciology. Topic area 3, “Radar Functional Chain and Signal Processing”, describes several aspects of radar signal processing, from parameter extraction and target detection to tracking and classification technologies. Topic area 4, “Radar Subsystems and Components”, covers the design of radar subsystem components, such as antenna design and waveform design.

    Compressed Sensing in the Presence of Side Information

    Reconstruction of continuous signals from a number of their discrete samples is central to digital signal processing. Digital devices can only process discrete data, so processing continuous signals requires discretization. After discretization, the possibility of uniquely reconstructing the source signals from their samples is crucial. Classical sampling theory provides bounds on the sampling rate for unique source reconstruction, known as the Nyquist sampling rate. Recently a new sampling scheme, Compressive Sensing (CS), has been formulated for sparse signals. CS is an active area of research in signal processing. It has revolutionized the classical sampling theorems and has provided a new scheme to sample and reconstruct sparse signals uniquely, below Nyquist sampling rates. A signal is called (approximately) sparse when a relatively large number of its elements are (approximately) equal to zero. For the class of sparse signals, sparsity can be viewed as prior information about the source signal. CS has found numerous applications and has improved some image acquisition devices. Interesting instances of CS arise when, apart from sparsity, side information is available about the source signals. The side information can concern the source structure, distribution, etc. Such cases can be viewed as extensions of classical CS, and in them we are interested in incorporating the side information to either improve the quality of the source reconstruction or decrease the number of samples required for accurate reconstruction. A general CS problem can be transformed into an equivalent optimization problem. In this thesis, a special case of CS with side information about the feasible region of the equivalent optimization problem is studied. It is shown that in such cases the uniqueness and stability of the equivalent optimization problem still hold. Then, an efficient reconstruction method is proposed. To demonstrate the practical value of the proposed scheme, the algorithm is applied to two real-world applications: image deblurring in optical imaging and surface reconstruction in the gradient field. Experimental results are provided to further investigate and confirm the effectiveness and usefulness of the proposed scheme.
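The "equivalent optimization problem" view can be made concrete with a generic sparse-recovery solver. The sketch below is plain iterative soft-thresholding (ISTA) for the L1-regularized least-squares formulation; the thesis's side-information-constrained method is not reproduced, and all sizes are toy values.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const. of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - y)                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # shrinkage
    return x

# Recover a 3-sparse length-100 vector from 40 random measurements:
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[[5, 20, 60]] = [1.0, -1.0, 0.8]
x_hat = ista(A, A @ x0)
```

Side information of the kind studied in the thesis would enter here as an additional constraint on the feasible set of `x`, shrinking the search space beyond what sparsity alone provides.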

    Optimizing Techniques and Cramér-Rao Bound for Passive Source Location Estimation

    This work is motivated by the problem of locating potential unstable areas in underground potash mines with better accuracy and more consistency while introducing minimal extra computational load. This is important for both efficient mine design and safe mining activities, since these unstable areas may experience local, low-intensity earthquakes in the vicinity of an underground mine. The objective of this thesis is to present localization algorithms that deliver the most consistent and accurate estimation results for the application of interest. As the first step towards this goal, three of the most representative source localization algorithms in the literature are studied and compared. A one-step energy-based grid search (EGS) algorithm is selected to address the needs of the application of interest. The next step is the development of closed-form Cramér-Rao bound (CRB) expressions. The mathematical derivation presented in this work deals with continuous signals using the Karhunen-Loève (K-L) expansion, which makes the derivation applicable to non-stationary Gaussian noise problems. Explicit closed-form CRB expressions are, however, presented only for stationary Gaussian noise cases, using the spectral representation of the signal and noise. Using the CRB comparisons, two approaches are proposed to further improve the EGS algorithm. The first approach utilizes the corresponding analytic expression of the error estimation variance (EEV) given in [1] to derive an amplitude weight expression, optimal in the sense of minimizing this EEV, for the case of additive Gaussian noise with a common spectrum across all the sensors. An alternative non-iterative amplitude weighting scheme is proposed based on the optimal amplitude weight expression; it achieves the same performance with less computation than the traditional iterative approach. The second approach optimizes the EGS algorithm in the frequency domain. An analytic frequency-weighted EEV expression is derived using the spectral representation and stochastic-process theory. Based on this EEV expression, an integral equation is established and solved using the calculus of variations. The solution corresponds to a filter transfer function that is optimal in the sense that it minimizes this analytic frequency-domain EEV. When various parts of the frequency-domain EEV expression are ignored during the minimization procedure using the Cauchy-Schwarz inequality, several different filter transfer functions result. All of them turn out to be well-known classical filters that have been developed in the literature and used for source localization problems, which demonstrates that, in terms of minimizing the analytic EEV, they are all suboptimal. Monte Carlo simulation shows that both amplitude and frequency weighting bring clear improvements over the unweighted EGS estimator.
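The unweighted energy-based grid search can be sketched from the summary above: each candidate grid point predicts a travel time to every sensor, the recordings are delay-aligned accordingly, and the point whose aligned sum carries the most energy wins. The geometry, sample rate, and sound speed below are invented for illustration.

```python
import numpy as np

def egs_locate(sensors, signals, fs, grid, c=343.0):
    """Return the grid point maximizing the energy of the aligned sum."""
    best_p, best_e = None, -np.inf
    for p in grid:
        # Predicted travel times from candidate p to each sensor, in samples:
        shifts = np.round(np.linalg.norm(sensors - p, axis=1) / c * fs).astype(int)
        aligned = sum(np.roll(s, -k) for s, k in zip(signals, shifts))
        e = float(np.sum(aligned ** 2))
        if e > best_e:
            best_p, best_e = p, e
    return best_p

# Three sensors record a pulse emitted at (4, 6):
fs = 10000.0
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
src = np.array([4.0, 6.0])
t = np.arange(2000)
pulse = np.exp(-0.5 * ((t - 500.0) / 10.0) ** 2)
signals = [np.roll(pulse, int(round(np.linalg.norm(s - src) / 343.0 * fs)))
           for s in sensors]
grid = np.array([[2.0, 2.0], [4.0, 6.0], [8.0, 3.0]])
est = egs_locate(sensors, signals, fs, grid)
```

The amplitude and frequency weightings proposed in the thesis would enter as per-sensor weights on `signals` and as a pre-filter on each recording, respectively.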

    Methods for Online UAV Path Planning for Tracking Multiple Objects

    Unmanned aerial vehicles (UAVs), or drones, have rapidly evolved to carry various sensors, such as thermal sensors for vision or antennas for radio waves. Drones can therefore be transformative for applications such as surveillance and monitoring, because they can greatly reduce the time and cost associated with traditional tasking methods. Realising this potential necessitates equipping UAVs with the ability to perform missions autonomously. This dissertation considers the problems of online path planning for UAVs for the fundamental task of surveillance, comprising tracking and discovering multiple mobile objects in a scene. Tracking and discovering an unknown and time-varying number of objects is a challenging problem in itself. Objects such as people or wildlife tend to switch between various modes of movement. Measurements received by the UAV's on-board sensors are often very noisy. In practice, the on-board sensors have a limited field of view (FoV); hence, the UAV needs to move within range of the mobile objects scattered throughout a scene. This is extremely challenging because neither the exact number nor the locations of the objects of interest are available to the UAV. Planning the path for UAVs to effectively detect and track multiple objects in such environments poses additional challenges. Path planning techniques for tracking a single object are not applicable: since multiple moving objects appear and disappear in the region, following only certain objects to localise them accurately implies that a UAV is likely to miss many other objects. Furthermore, online path planning for multiple UAVs remains challenging due to the exponential complexity of multi-agent coordination problems. In this dissertation, we consider the problem of online path planning for UAV-based localisation and tracking of multiple objects. First, we realised a low-cost on-board radio receiver system on a UAV and demonstrated the capability of the drone-based platform for autonomously tracking and locating multiple mobile radio-tagged objects in field trials. Second, we devised a track-before-detect filter coupled with an online path planning algorithm for joint detection and tracking of radio-tagged objects, achieving better performance in noisy environments. Third, we developed a multi-objective planning algorithm for multiple agents to track and search for multiple objects under the practical constraint of detection-range-limited on-board sensors (or FoV-limited sensors). Our formulation leads to a multi-objective value function that is a monotone submodular set function. Consequently, it allows us to employ a greedy algorithm, with a performance guarantee, for effectively controlling multiple agents that track discovered objects while searching for undiscovered mobile objects under the practical constraint of limited-FoV sensors. Fourth, we devised a fast distributed tracking algorithm that can effectively track multiple objects for a network of stationary agents with different FoVs; this is the first such solution to this problem. The proposed method can significantly improve the capability of a network of agents to track a large number of objects moving in and out of the limited FoV of the agents' sensors, compared to existing methods that do not consider the problem of unknown and limited sensor FoV. Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
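The greedy routine that the submodularity argument licenses is the classic one: repeatedly pick the action with the largest marginal gain, which achieves at least a (1 - 1/e) fraction of the optimal value for monotone submodular objectives (Nemhauser, Wolsey and Fisher). The coverage example below is illustrative; the viewpoint names and FoV sets are made up.

```python
def greedy_max(actions, value, k):
    """Greedy maximization of a monotone submodular set function."""
    chosen = []
    remaining = list(actions)
    for _ in range(min(k, len(remaining))):
        # Pick the action with the largest marginal gain over what we hold:
        best = max(remaining, key=lambda a: value(chosen + [a]) - value(chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Coverage (how many distinct objects a set of viewpoints sees) is a
# monotone submodular function, so greedy selection carries the guarantee:
fov = {"p1": {1, 2, 3}, "p2": {3, 4}, "p3": {5, 6}}
coverage = lambda sel: len(set().union(*(fov[a] for a in sel))) if sel else 0
picked = greedy_max(list(fov), coverage, 2)
```

Note that greedy skips "p2" even though it covers more than nothing: its marginal gain after "p1" is only one new object, while "p3" contributes two.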

    Efficient algorithms and data structures for compressive sensing

    Along with the ever-increasing number of sensors, which also generate rapidly growing amounts of data, the traditional paradigm of sampling according to the Nyquist criterion faces an equally increasing number of obstacles. The rather recent theory of Compressive Sensing (CS) promises to alleviate some of these drawbacks by generalizing the sampling and reconstruction schemes so that the acquired samples can contain more complex information about the signal than Nyquist samples. The measurement process becomes more complex and the reconstruction algorithms necessarily need to be nonlinear. Additionally, the hardware design process needs to be revisited to account for this new acquisition scheme. Hence, one can identify a trade-off between the information contained in individual samples of a signal and the effort during development and operation of the sensing system. This thesis addresses the steps necessary to shift this trade-off in favor of CS. We do so by providing new results that make CS easier to deploy in practice while maintaining the performance indicated by theoretical results. The sparsity order of a signal plays a central role in any CS system, so we present a method to estimate this crucial quantity prior to recovery from a single snapshot. As we show, the proposed Sparsity Order Estimation method reduces the reconstruction error compared to an unguided reconstruction. During the development of the theory, we noticed that a matrix-free view of the involved linear mappings offers many possibilities to make the reconstruction and modeling stages much more efficient. Hence, we present an open-source software architecture for constructing these matrix-free representations and showcase its ease of use and performance when used for sparse recovery, both to detect defects from ultrasound data and to estimate scatterers in a radio channel from ultra-wideband impulse responses. For the former application, we present a complete reconstruction pipeline for ultrasound data compressed by sub-sampling in the frequency domain. We present the algorithms for the forward model and the reconstruction stage, and we give asymptotic bounds for the number of measurements and the expected reconstruction error. We show that the proposed system allows significant compression levels without substantially deteriorating the imaging quality. For the second application, we develop a sampling scheme based on a Random Demodulator to acquire the channel Impulse Response (IR), capturing enough information in the recorded samples to reliably estimate the IR by exploiting sparsity. Compared to the state of the art, this improves robustness to the effects of time-variant radar channels while also outperforming Nyquist-sampling-based methods in terms of reconstruction error. To circumvent the inherent model mismatch of early grid-based compressive sensing theory, we make use of the Atomic Norm Minimization framework and show how it can be used to estimate the signal covariance with R-dimensional parameters from multiple compressive snapshots. To this end, we derive a variant of the ADMM that can estimate this covariance in a very general setting, and we show how to use it for direction finding with realistic antenna geometries. In this context we also present a method based on stochastic gradient descent to find compression schemes that are well suited for parameter estimation, since the resulting sub-sampling has a uniform effect on the whole parameter space. Finally, we show numerically that the combination of these two approaches yields a well-performing grid-free CS pipeline.
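The matrix-free view can be sketched without the thesis's own library: a subsampled, normalized FFT is a standard CS measurement operator for which only the actions of A and its adjoint are implemented, and the dense matrix never exists. Sizes and the index set below are toy values.

```python
import numpy as np

n = 256
idx = np.array([3, 17, 50, 90, 121, 200])  # retained frequency bins

def matvec(x):
    """A @ x: FFT, then keep a few coefficients (normalized)."""
    return np.fft.fft(x)[idx] / np.sqrt(n)

def rmatvec(y):
    """A^H @ y: zero-fill, then inverse FFT (NumPy's unnormalized
    DFT satisfies F^H = n * ifft, hence the factor n)."""
    z = np.zeros(n, dtype=complex)
    z[idx] = y
    return np.fft.ifft(z) * n / np.sqrt(n)

# The adjoint identity <y, A x> = <A^H y, x> holds to machine precision:
rng = np.random.default_rng(0)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(len(idx)) + 1j * rng.standard_normal(len(idx))
lhs = np.vdot(y, matvec(x))
rhs = np.vdot(rmatvec(y), x)
```

Verifying the adjoint identity numerically, as done here, is the standard sanity check before handing such an operator pair to an iterative solver, since a mismatched adjoint silently breaks gradient-based reconstruction.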