Indoor wireless communications and applications
Chapter 3 addresses challenges in radio link and system design in indoor scenarios. Given that most human activities take place in indoor environments, the need to support ubiquitous indoor data connectivity and location/tracking services has become even more important than in previous decades. The specific technical challenges addressed are (i) modelling complex indoor radio channels for effective antenna deployment, (ii) the potential of millimeter-wave (mm-wave) radios for supporting higher data rates, and (iii) feasible indoor localisation and tracking techniques, which are summarised in three dedicated sections of this chapter.
Three more Decades in Array Signal Processing Research: An Optimization and Structure Exploitation Perspective
The signal processing community currently witnesses the emergence of sensor
array processing and Direction-of-Arrival (DoA) estimation in various modern
applications, such as automotive radar, mobile user and millimeter wave indoor
localization, drone surveillance, as well as in new paradigms, such as joint
sensing and communication in future wireless systems. This trend is further
enhanced by technology leaps and availability of powerful and affordable
multi-antenna hardware platforms. The history of advances in super resolution
DoA estimation techniques is long, starting from the early parametric
multi-source methods such as the computationally expensive maximum likelihood
(ML) techniques to the early subspace-based techniques such as Pisarenko and
MUSIC. Inspired by the seminal review paper Two Decades of Array Signal
Processing Research: The Parametric Approach by Krim and Viberg published in
the IEEE Signal Processing Magazine, we are looking back at another three
decades in Array Signal Processing Research under the classical narrowband
array processing model based on second order statistics. We revisit major
trends in the field and retell the story of array signal processing from a
modern optimization and structure exploitation perspective. In our overview,
through prominent examples, we illustrate how different DoA estimation methods
can be cast as optimization problems with side constraints originating from
prior knowledge regarding the structure of the measurement system. Due to space
limitations, our review of the DoA estimation research in the past three
decades is by no means complete. For didactic reasons, we mainly focus on
developments in the field that relate easily to the traditional multi-source
estimation criteria and choose simple illustrative examples.
Comment: 16 pages, 8 figures. This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible.
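As an illustration of the subspace techniques surveyed above, the classic MUSIC estimator can be sketched in a few lines. This is a minimal, generic example (uniform linear array with half-wavelength spacing, synthetic snapshots); the array geometry, dimensions, and noise level are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def music_spectrum(X, n_sources, n_grid=361):
    """MUSIC pseudo-spectrum for a half-wavelength uniform linear array.
    X: (n_sensors, n_snapshots) complex snapshot matrix."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance (second-order statistics)
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = eigvecs[:, : n_sensors - n_sources]   # noise subspace (smallest eigenvalues)
    angles = np.linspace(-90.0, 90.0, n_grid)
    spectrum = np.empty(n_grid)
    for i, theta in enumerate(angles):
        # steering vector for the assumed half-wavelength ULA
        a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(np.deg2rad(theta)))
        spectrum[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return angles, spectrum

# Synthetic test: two uncorrelated sources at -20 deg and 30 deg
rng = np.random.default_rng(0)
n_sensors, n_snap = 8, 200
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(1j * np.pi * np.outer(np.arange(n_sensors), np.sin(doas)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_sensors, n_snap))
               + 1j * rng.standard_normal((n_sensors, n_snap)))
X = A @ S + noise
angles, P = music_spectrum(X, n_sources=2)
```

The two largest peaks of `P` then sit near the true directions; this is exactly the "noise-subspace orthogonality" structure that the review recasts as an optimization problem with side constraints.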
Bayesian Framework for Sparse Vector Recovery and Parameter Bounds with Application to Compressive Sensing
abstract: A signal compressed using classical compression methods can be acquired by brute force (i.e., searching for non-zero entries component-wise). However, sparse solutions require combinatorial searches of high computational cost. In this thesis, instead, two Bayesian approaches are considered to recover a sparse vector from underdetermined noisy measurements. The first is constructed using a Bernoulli-Gaussian (BG) prior distribution and is assumed to be the true generative model. The second is constructed using a Gamma-Normal (GN) prior distribution and is, therefore, a different (i.e., misspecified) model. To estimate the posterior distribution in the correctly specified scenario, an algorithm based on generalized approximate message passing (GAMP) is constructed, while an algorithm based on sparse Bayesian learning (SBL) is used for the misspecified scenario. Recovering a sparse signal within a Bayesian framework is one class of algorithms for solving the sparse problem; all classes of algorithms aim to avoid the high computational cost of combinatorial searches. Compressive sensing (CS) is the widely used term for this sparse optimization problem and its applications, such as magnetic resonance imaging (MRI), image acquisition in radar imaging, and facial recognition. In the CS literature, the target vector can be recovered either by optimizing an objective function using point estimation, or by recovering a distribution over the sparse vector using Bayesian estimation. Although the Bayesian framework provides an extra degree of freedom to assume a distribution directly applicable to the problem of interest, it is hard to find a theoretical guarantee of convergence. This limitation has shifted some research toward non-Bayesian frameworks. This thesis tries to close this gap by proposing a Bayesian framework with a suggested theoretical bound for the assumed, not necessarily correct, distribution.
In the simulation study, a general lower Bayesian Cramér-Rao bound (BCRB) is derived, along with the misspecified Bayesian Cramér-Rao bound (MBCRB) for the GN model. Both bounds are validated against the mean square error (MSE) performance of the aforementioned algorithms. Also, a quantification of the performance in terms of gains versus losses is introduced as one main finding of this report.
Dissertation/Thesis: Masters Thesis, Computer Engineering, 201
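The SBL approach used for the misspecified (Gamma-Normal) scenario can be sketched with the classic EM hyperparameter updates. This is a generic textbook-style sketch on synthetic data, not the thesis's implementation; all dimensions, the noise level, and the iteration count are illustrative assumptions.

```python
import numpy as np

def sbl_em(Phi, y, sigma2, n_iter=100):
    """Sparse Bayesian learning via EM: each coefficient x_i has prior
    N(0, gamma_i); the gamma_i are re-estimated from the posterior."""
    m, n = Phi.shape
    gamma = np.ones(n)                         # per-coefficient prior variances
    for _ in range(n_iter):
        # Gaussian posterior of x given the current hyperparameters
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.T @ y / sigma2
        # EM update of the hyperparameters (floored for numerical safety)
        gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-10)
    return mu, gamma

# Underdetermined synthetic problem: 40 measurements, 100 unknowns, 4 active
rng = np.random.default_rng(1)
m, n, k = 40, 100, 4
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = 3.0 * rng.standard_normal(k)
y = Phi @ x_true + 0.01 * rng.standard_normal(m)
x_hat, gamma = sbl_em(Phi, y, sigma2=1e-4)
```

Coefficients whose `gamma_i` collapse toward zero are pruned from the model, which is how SBL produces sparsity without a combinatorial search.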
Biological versus Subspace Methods in Sound Localization
Sound localization is determining the location of sound sources using the measurements of the signals received by an array of sensors. Humans and animals possess the natural ability to localize sound. Researchers have tried to model nature's way of solving this problem and have come up with different methods based on various neuro-physiological studies. Such methods are called biological methods. On the other hand, another community of researchers has looked at this problem from a pure signal processing point of view. Among the more popular methods for solving this problem using signal processing techniques are the subspace methods. In this thesis, a comparative study is done between biological methods and subspace methods. Further, an attempt has been made to incorporate the notion of the head-related transfer function in the modeling of subspace methods. The implementation of a biological localization algorithm on a DSP board is also presented.
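The interaural time difference that biological models exploit is, in signal processing terms, a time-delay estimation problem; a common way to estimate it is generalized cross-correlation with phase transform (GCC-PHAT). The two-sensor setup below is an illustrative sketch, not the thesis's DSP implementation.

```python
import numpy as np

def gcc_phat(x1, x2):
    """Estimate the delay (in samples) of x1 relative to x2 via GCC-PHAT."""
    n = len(x1) + len(x2)                      # zero-pad to avoid wraparound
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12             # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return int(np.argmax(cc)) - max_shift

# Synthetic two-microphone test: the second channel lags by 7 samples
rng = np.random.default_rng(2)
s = rng.standard_normal(4096)
delay = 7
x1 = s
x2 = np.concatenate((np.zeros(delay), s))[: len(s)]
est = gcc_phat(x2, x1)                         # recovers roughly +7 samples
```

Mapping the estimated delay to an angle then only requires the sensor spacing and the speed of sound, which is where head geometry (and the head-related transfer function) enters in the biological setting.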
Mining Explainable Predictive Features for Water Quality Management
With water quality management processes, identifying and interpreting
relationships between features, such as location and weather variable tuples,
and water quality variables, such as levels of bacteria, is key to gaining
insights and identifying areas where interventions should be made. There is a
need for a search process to identify the locations and types of phenomena that
are influencing water quality and a need to explain how the quality is being
affected and which factors are most relevant. This paper addresses both of
these issues. A process is developed for collecting data for features that
represent a variety of variables over a spatial region and which are used for
training models and inference. An analysis of the performance of the features
is undertaken using the models and Shapley values. Shapley values originated in
cooperative game theory and can be used to aid in the interpretation of machine
learning results. Evaluations are performed using several machine learning
algorithms and water quality data from the Dublin Grand Canal basin.
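To make the Shapley attribution concrete, the exact game-theoretic formula can be evaluated for a tiny model by enumerating all feature subsets. The feature names and the toy "bacteria-level" model below are invented for illustration (absent features fall back to a baseline of 0); real pipelines approximate these values for trained models rather than enumerating subsets.

```python
from itertools import combinations
from math import factorial

def model(features):
    """Hypothetical toy predictor of a water quality variable."""
    r = features.get("rain", 0.0)
    t = features.get("temp", 0.0)
    d = features.get("discharge", 0.0)
    return 2.0 * r + 0.5 * t + r * d           # rain-discharge interaction term

def shapley_values(model, instance):
    """Exact Shapley values: weighted marginal contributions over all subsets."""
    names = list(instance)
    n = len(names)
    phi = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = model({g: instance[g] for g in subset + (f,)})
                without_f = model({g: instance[g] for g in subset})
                total += weight * (with_f - without_f)
        phi[f] = total
    return phi

phi = shapley_values(model, {"rain": 1.0, "temp": 2.0, "discharge": 3.0})
```

Note how the interaction term `r * d` is split evenly between `rain` and `discharge`, and the attributions sum to the prediction minus the baseline, which is the efficiency property that makes Shapley values attractive for interpreting which factors drive water quality.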
Causal Discovery of Photonic Bell Experiments
A causal understanding of a physical theory is vital: it provides profound insights into the implications of the theory and contains the information required to manipulate, not only predict, our surroundings. Unfortunately, one of the most broadly used and successful theories, quantum theory, continues to evade a satisfactory causal description. Progress is hindered by the difficulty of faithfully testing causal explanations in an experimental setting. This thesis presents a novel causal discovery algorithm which allows a direct comparison of a wide variety of causal explanations for experimental data, including causal influences both classical and quantum mechanical in nature. First we provide relevant background information, predominantly on quantum mechanics, quantum optics and statistical inference. Next, we review the framework of classical causality and the connection between causal assumptions and statistical models. We then present a novel causal discovery algorithm for noisy experimental data. Finally, we perform two Bell experiments and apply the newly developed algorithm to the resulting data.
The causal discovery algorithm operates on observational data, with no interventions required. It utilizes the concept of predictive accuracy to assign a score to each causal explanation, which allows the simultaneous consideration of classical and quantum causal theories. In addition, this approach identifies overly complex explanations, as these perform poorly with respect to this criterion.
Both experiments are implemented using quantum optics. The first Bell experiment has a near-maximally entangled shared resource state, while the second has a separable resource state. The results indicate that a quantum local causal explanation best describes the first experiment, whereas a classical local causal explanation is preferred for the second. Super-luminal and super-deterministic theories are sub-optimal for both.
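The predictive-accuracy scoring idea can be illustrated on a purely classical toy example: fit two candidate causal structures on part of the data and score each by held-out log-likelihood. This is a sketch of the model-selection principle only, with synthetic Gaussian data and invented variable names, not the thesis's algorithm or its quantum models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data generated from the structure A -> B
n = 2000
A = rng.standard_normal(n)
B = 1.5 * A + 0.5 * rng.standard_normal(n)
train, test = slice(0, 1000), slice(1000, None)

def gauss_loglik(x, mean, var):
    """Total Gaussian log-likelihood of x under N(mean, var)."""
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var))

def score_independent(A, B, train, test):
    """Candidate 1: A and B are independent Gaussians."""
    ll = 0.0
    for v in (A, B):
        mu, var = v[train].mean(), v[train].var()
        ll += gauss_loglik(v[test], mu, var)
    return ll

def score_a_causes_b(A, B, train, test):
    """Candidate 2: A Gaussian, B linear-Gaussian given A (structure A -> B)."""
    mu, var = A[train].mean(), A[train].var()
    ll = gauss_loglik(A[test], mu, var)
    slope = np.cov(A[train], B[train])[0, 1] / A[train].var()
    resid = B[train] - slope * A[train]
    ll += gauss_loglik(B[test], slope * A[test], resid.var())
    return ll

s_ind = score_independent(A, B, train, test)
s_causal = score_a_causes_b(A, B, train, test)   # wins: matches the generator
```

The correct structure scores higher on held-out data, and an overly complex explanation would overfit the training split and lose by the same criterion; the thesis applies this style of comparison with classical and quantum causal models as the candidates.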