31 research outputs found

    Cramer-Rao bounds for rate-constrained distributed time-delay estimation

    This paper investigates time-delay estimation for acoustic source localization in a distributed microphone array. The microphones are assumed to be part of a wireless sensor network, with a constraint on the number of bits that may be exchanged between the sensors. Consequently, at the fusion center, time-delay estimation needs to be performed using quantized signals. In this paper, the relation between the communication bit-rate and the Cramer-Rao lower bound (CRLB) on the variance of the time-delay estimation error is explored. The minimum bit-rate required to ensure that the CRLB is attained is also derived.
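
    As a hedged illustration only (not taken from the paper), the sketch below evaluates the textbook CRLB for time-delay estimation with unquantized signals, sigma_tau^2 >= 1/(8*pi^2*beta^2*SNR), where beta is the RMS signal bandwidth; the signal, sample rate and SNR are assumed values, and the paper's rate-constrained bound would add a quantization penalty on top of this.

    # Minimal numeric sketch of the unquantized time-delay CRLB (illustrative values).
    import numpy as np

    fs = 16_000.0                                  # sampling rate [Hz] (assumed)
    t = np.arange(0, 0.1, 1 / fs)                  # 100 ms observation window
    s = np.sin(2 * np.pi * 1000 * t) * np.hanning(t.size)   # toy band-limited pulse

    S = np.fft.rfft(s)
    f = np.fft.rfftfreq(s.size, 1 / fs)
    psd = np.abs(S) ** 2
    beta2 = np.sum(f ** 2 * psd) / np.sum(psd)     # squared RMS bandwidth [Hz^2]

    snr = 10 ** (10 / 10)                          # 10 dB post-integration SNR (assumed)
    crlb_var = 1 / (8 * np.pi ** 2 * beta2 * snr)
    print(f"RMS bandwidth {np.sqrt(beta2):.0f} Hz, CRLB std {np.sqrt(crlb_var) * 1e6:.2f} us")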

    Soft information for localization-of-things

    Location awareness is vital for emerging Internet-of-Things applications and opens a new era for Localization-of-Things. This paper first reviews the classical localization techniques based on single-value metrics, such as range and angle estimates, and on fixed measurement models, such as Gaussian distributions with mean equal to the true value of the metric. Then, it presents a new localization approach based on soft information (SI) extracted from intra- and inter-node measurements, as well as from contextual data. In particular, efficient techniques for learning and fusing different kinds of SI are described. Case studies are presented for two scenarios in which sensing measurements are based on: 1) noisy features and non-line-of-sight detector outputs and 2) the IEEE 802.15.4a standard. The results show that SI-based localization is highly efficient, can significantly outperform classical techniques, and provides robustness to harsh propagation conditions.
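
    To make the contrast with single-value metrics concrete, here is a minimal sketch (illustrative assumptions, not the paper's SI learning and fusion machinery): three anchors produce noisy range measurements, and instead of collapsing each one to a point estimate, the full Gaussian range likelihoods are fused on a position grid and the grid maximizer is reported.

    # Grid-based fusion of soft range information (anchor geometry and noise assumed).
    import numpy as np

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
    true_pos = np.array([4.0, 3.0])
    sigma = 1.0                                    # range noise std [m]
    rng = np.random.default_rng(0)
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma, 3)

    xs, ys = np.meshgrid(np.linspace(0, 10, 201), np.linspace(0, 10, 201))
    grid = np.stack([xs, ys], axis=-1)
    log_post = np.zeros(xs.shape)
    for a, r in zip(anchors, ranges):
        d = np.linalg.norm(grid - a, axis=-1)
        log_post += -0.5 * ((r - d) / sigma) ** 2  # keep the whole likelihood, not a point
    est = grid.reshape(-1, 2)[np.argmax(log_post)]
    print("soft-information (MAP-on-grid) estimate:", est, "true:", true_pos)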

    Performance Analysis of Fingerprint-Based Indoor Localization

    Fingerprint-based indoor localization holds great potential for the Internet of Things. Despite numerous studies focusing on its algorithmic and practical aspects, a notable gap exists in theoretical performance analysis in this domain. This paper aims to bridge this gap by deriving several lower bounds and approximations of the mean square error (MSE) for fingerprint-based localization. These analyses offer different complexity and accuracy trade-offs. We derive the equivalent Fisher information matrix and its decomposed form based on a wireless propagation model, thus obtaining the Cramér-Rao bound (CRB). By approximating the Fisher information provided by constraint knowledge, we develop a constraint-aware CRB. To more accurately characterize the nonlinear transformation and constraint information, we introduce the Ziv-Zakai bound (ZZB) and modify it to accommodate deterministic parameters. The Gauss–Legendre quadrature method and the trust-region reflective algorithm are employed to make the calculation of the ZZB tractable. We introduce a tighter extrapolated ZZB by fitting the quadrature function outside the well-defined domain based on the Q-function. For the constrained maximum likelihood estimator, an approximate MSE expression, which can characterize map constraints, is also developed. The simulation and experimental results validate the effectiveness of the proposed bounds and approximate MSE.
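
    As a small, hedged companion to the bound derivations (the propagation model, anchor layout and noise level below are assumptions, not the paper's setup), the following sketch computes the unconstrained position CRB from the Fisher information matrix of RSS measurements under a log-distance path-loss model.

    # CRB for 2-D position from RSS under an assumed log-distance path-loss model.
    import numpy as np

    anchors = np.array([[0, 0], [20, 0], [0, 20], [20, 20]], dtype=float)
    p = np.array([7.0, 12.0])          # evaluation point [m]
    alpha, sigma = 3.0, 4.0            # path-loss exponent, shadowing std [dB]

    J = np.zeros((len(anchors), 2))    # Jacobian of the mean RSS w.r.t. position
    for i, a in enumerate(anchors):
        d = np.linalg.norm(p - a)
        J[i] = -(10 * alpha / np.log(10)) * (p - a) / d ** 2
    fim = (J.T @ J) / sigma ** 2       # Fisher information matrix
    crb = np.linalg.inv(fim)           # covariance of any unbiased estimator is lower-bounded by crb
    print("per-axis CRB [m^2]:", np.diag(crb), "RMSE bound [m]:", np.sqrt(np.trace(crb)))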

    Wireless Sensor Data Transport, Aggregation and Security

    Wireless sensor networks (WSN) and the communication and security therein have been gaining further prominence in the tech industry recently, with the emergence of the so-called Internet of Things (IoT). The steps from acquiring data to making a reactive decision based on the acquired sensor measurements are complex and require careful execution. In many of these steps there are still technological gaps to fill, stemming from the fact that several primitives that are desirable in a sensor network environment are bolted onto the networks as application-layer functionalities rather than built into them. For several important functionalities that are at the core of IoT architectures, we have developed a solution that is analyzed and discussed in the following chapters. The chain of steps, from the acquisition of sensor samples until these samples reach a control center or the cloud where the data analytics are performed, starts with the acquisition of the sensor measurements at the correct time and, importantly, synchronously among all deployed sensors. This synchronization has to be network wide, including both the wired core network and the wireless edge devices. This thesis studies a decentralized and lightweight solution to synchronize and schedule IoT devices over wireless and wired networks adaptively, with very simple local signaling. Furthermore, measurement results have to be transported and aggregated over the same interface, requiring clever coordination among all nodes, as network resources are shared, keeping scalability and fail-safe operation in mind. Ensuring the integrity of measurements is also a complicated task. On the one hand, cryptography can shield the network from outside attackers and is therefore the first step to take, but due to the volume of sensors it must rely on an automated key distribution mechanism. On the other hand, cryptography does not protect against exposed keys or inside attackers. One can, however, exploit statistical properties to detect and identify nodes that send false information and exclude these attacker nodes from the network to avoid data manipulation. Furthermore, if data is supplied by a third party, one can apply an automated trust metric to each individual data source to define which data to accept and consider for the mentioned statistical tests in the first place. Monitoring the cyber and physical activities of an IoT infrastructure in concert is another topic investigated in this thesis.
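
    As one hedged example of the kind of statistical test alluded to above (a generic robust-statistics check, not the scheme developed in the thesis), the sketch below flags sensors whose readings deviate strongly from the median of their peers using a median-absolute-deviation score.

    # Median/MAD-based flagging of suspicious sensor readings (illustrative data).
    import numpy as np

    readings = np.array([20.1, 19.8, 20.3, 20.0, 35.7, 19.9])   # one sensor reports false data
    med = np.median(readings)
    mad = np.median(np.abs(readings - med)) * 1.4826            # scaled to ~std for Gaussian data
    score = np.abs(readings - med) / mad
    suspects = np.flatnonzero(score > 3.5)                      # conventional cutoff
    print("suspected false-data sources (indices):", suspects)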

    Autonomous Swarm Navigation

    Robotic swarm systems attract increasing attention in a wide variety of applications, where a multitude of self-organized robotic entities collectively accomplish sensing or exploration tasks. Compared to a single robot, a swarm system offers advantages in terms of exploration speed, robustness against single points of failure, and collective observation of spatio-temporal processes. Autonomous swarm navigation, including swarm self-localization, the localization of external sources, and swarm control, is essential for the success of an autonomous swarm application. However, as a newly emerging technology, a thorough study of autonomous swarm navigation is still missing. In this thesis, we systematically study swarm navigation systems, with particular emphasis on their collective performance. The general theory of swarm navigation as well as an in-depth study of a specific swarm navigation system proposed for future Mars exploration missions are covered. Concerning swarm localization, a decentralized algorithm is proposed, which achieves near-optimal performance with low complexity for a dense swarm network. Regarding swarm control, a position-aware swarm control concept is proposed. The swarm is aware not only of the position estimates and the estimation uncertainties of itself and the sources, but also of the potential motions that enrich position information. As a result, the swarm actively adapts its formation to improve localization performance, without losing track of other objectives, such as goal approaching and collision avoidance. The autonomous swarm navigation concept described in this thesis is verified for a specific Mars swarm exploration system. More importantly, this concept is generally adaptable to an extensive range of swarm applications.
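
    To illustrate what "position-aware" control can mean in the simplest terms (this is only a sketch under assumed geometry and noise, not the controller developed in the thesis), the snippet below lets one robot pick, among candidate moves, the one that minimizes the trace of the predicted CRB (inverse Fisher information) for locating a source from range measurements.

    # Choose the candidate move that most improves source-localization information.
    import numpy as np

    source = np.array([5.0, 5.0])
    teammates = np.array([[0.0, 0.0], [10.0, 0.0]])   # fixed swarm members
    sigma = 0.5                                       # ranging noise std [m]

    def crb_trace(positions):
        J = np.array([(source - p) / np.linalg.norm(source - p) for p in positions])
        fim = (J.T @ J) / sigma ** 2
        return np.trace(np.linalg.inv(fim))

    robot = np.array([0.0, 10.0])
    candidates = [robot + np.array([dx, dy]) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    best = min(candidates, key=lambda c: crb_trace(np.vstack([teammates, c])))
    print("move to:", best, "predicted CRB trace:", crb_trace(np.vstack([teammates, best])))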

    A Survey on Fundamental Limits of Integrated Sensing and Communication

    Integrated sensing and communication (ISAC), in which sensing and communication share the same frequency band and hardware, has emerged as a key technology in future wireless systems for two main reasons. First, many important application scenarios in fifth generation (5G) and beyond, such as autonomous vehicles, Wi-Fi sensing and extended reality, require both high-performance sensing and wireless communications. Second, with millimeter wave and massive multiple-input multiple-output (MIMO) technologies widely employed in 5G and beyond, future communication signals tend to have high resolution in both the time and angular domains, opening up the possibility for ISAC. As such, ISAC has attracted tremendous research interest and attention in both academia and industry. Early work on ISAC has focused on the design, analysis and optimization of practical ISAC technologies for various ISAC systems. While this line of work is necessary, it is equally important to study the fundamental limits of ISAC in order to understand the gap between the current state-of-the-art technologies and the performance limits, and to provide useful insights and guidance for the development of better ISAC technologies that can approach those limits. In this paper, we aim to provide a comprehensive survey of the current research progress on the fundamental limits of ISAC. In particular, we first propose a systematic classification method for both traditional radio sensing (such as radar sensing and wireless localization) and ISAC so that they can be naturally incorporated into a unified framework. Then we summarize the major performance metrics and bounds used in sensing, communications and ISAC, respectively. After that, we present the current research progress on the fundamental limits of each class of traditional sensing and ISAC systems. Finally, open problems and future research directions are discussed.
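
    As a hedged, self-contained illustration of the kind of metric trade-off such a survey catalogues (the waveform, bandwidth and SNR split below are assumptions, and this is not a result from the paper), the sketch compares a communication metric (Shannon rate) with a sensing metric (delay-estimation CRB) as transmit power is divided between the two functions.

    # Toy rate-versus-ranging-CRB trade-off under an assumed power split.
    import numpy as np

    B = 100e6                          # bandwidth [Hz]
    snr_total = 10 ** (20 / 10)        # total SNR budget (20 dB), shared by both functions
    beta2 = B ** 2 / 12                # squared RMS bandwidth of a flat-spectrum waveform

    for rho in (0.1, 0.5, 0.9):        # fraction of power devoted to sensing
        rate = B * np.log2(1 + (1 - rho) * snr_total)             # bit/s
        crb = 1 / (8 * np.pi ** 2 * beta2 * rho * snr_total)      # delay variance [s^2]
        print(f"rho={rho}: rate {rate / 1e6:.0f} Mbit/s, delay std {np.sqrt(crb) * 1e9:.2f} ns")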

    Compressive sensor networks : fundamental limits and algorithms

    Thesis (S.M.), Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2009. Compressed sensing is a non-adaptive compression method that takes advantage of natural sparsity at the input and is fast gaining relevance to both researchers and engineers for its universality and applicability. First developed by Candès et al., the subject has seen a surge of high-quality results both in its theory and applications. This thesis extends compressed sensing ideas to sensor networks and other bandwidth-constrained communication systems. In particular, we explore the limits of performance of compressive sensor networks in relation to fundamental operations such as quantization and parameter estimation. Since compressed sensing is originally formulated as a real-valued problem, quantization of the measurements is a very natural extension. Although several researchers have proposed modified reconstruction methods that mitigate quantization noise for a fixed quantizer, the optimal design of such quantizers is still unknown. We propose to find the optimal quantizer, in terms of minimizing quantization error, by using recent results in functional scalar quantization. The best quantizer in this case is not the optimal design for the measurements themselves but rather is reweighted by a factor we call the sensitivity. Numerical results demonstrate a constant-factor improvement in the fixed-rate case. Parameter estimation is an important goal of many sensing systems, since users often care about some function of the data rather than the data itself. Thus, it is of interest to see how efficiently nodes using compressed sensing can estimate a parameter, and whether the measurement scalings can be less restrictive than the bounds in the literature. We explore this problem for time difference and angle of arrival, two common methods for source geolocation. We first derive Cramer-Rao lower bounds for both parameters and show that a practical block-OMP estimator can be relatively efficient for signal reconstruction. However, there is a large gap between theory and practice for time difference or angle of arrival estimation, which demonstrates the CRB to be an optimistic lower bound for nonlinear estimation. We also find scaling laws for time difference estimation in the discrete case. This is strongly related to partial support recovery, and we derive some new sufficient conditions showing that a very simple reconstruction algorithm can achieve substantially better scaling than full support recovery suggests is possible.
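
    Since most of the above builds on sparse recovery from few measurements, here is a minimal hedged sketch of plain orthogonal matching pursuit (not the thesis' block-OMP estimator or quantizer design; dimensions and sparsity are arbitrary choices) recovering a sparse vector from random projections.

    # Plain OMP recovery of a k-sparse vector from m < n random measurements.
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 256, 64, 5
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
    A = rng.normal(0, 1 / np.sqrt(m), (m, n))
    y = A @ x

    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))          # pick best-matching column
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coeffs                           # update residual
    x_hat = np.zeros(n)
    x_hat[support] = coeffs
    print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))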

    Source localization via time difference of arrival

    Accurate localization of a signal source, based on the signals collected by a number of receiving sensors deployed in the area surrounding the source, is a problem of interest in various fields. This dissertation aims at exploring different techniques to improve the localization accuracy of non-cooperative sources, i.e., sources for which the specific transmitted symbols and the time of the transmitted signal are unknown to the receiving sensors. For the localization of non-cooperative sources, the time difference of arrival (TDOA) of the signals received at pairs of sensors is typically employed. A two-stage localization method in multipath environments is proposed. During the first stage, the TDOA of the signals received at pairs of sensors is estimated. In the second stage, the actual location is computed from the TDOA estimates. This latter stage is referred to as hyperbolic localization and it generally involves a non-convex optimization. For the first stage, a TDOA estimation method that exploits the sparsity of multipath channels is proposed. This is formulated as an ℓ1-regularization problem, where the ℓ1-norm is used as a channel sparsity constraint. For the second stage, three methods are proposed to offer high accuracy at different computational costs. The first method takes a semi-definite relaxation (SDR) approach to relax the hyperbolic localization to a convex optimization. The second method follows a linearized formulation of the problem and seeks a biased estimate of improved accuracy. A third method is proposed to exploit source sparsity. With this, the hyperbolic localization is formulated as an ℓ1-regularization problem, where the ℓ1-norm is used as a source sparsity constraint. The proposed methods compare favorably to other existing methods, each of them having its own advantages. The SDR method has the advantage of simplicity and low computational cost. The second method may perform better than the SDR approach in some situations, but at the price of higher computational cost. The ℓ1-regularization may outperform the first two methods, but is sensitive to the choice of a regularization parameter. The proposed two-stage localization approach is shown to deliver higher accuracy and robustness to noise, compared to existing TDOA localization methods. A single-stage source localization method is also explored. The approach is coherent in the sense that, in addition to the TDOA information, it utilizes the relative carrier phases of the received signals among pairs of sensors. A location estimator is constructed based on a maximum likelihood metric. The potential accuracy improvement of the coherent approach is shown through the Cramer-Rao lower bound (CRB). However, the technique has to contend with high peak sidelobes in the localization metric, especially at low signal-to-noise ratio (SNR). Employing a small antenna array at each sensor is shown to lower the sidelobe level in the localization metric. Finally, the performance of time delay and amplitude estimation from samples of the received signal taken at rates lower than the conventional Nyquist rate is evaluated. To this end, a CRB is developed and its variation with system parameters is analyzed. It is shown that while with noiseless low-rate sampling there is no estimation accuracy loss compared to Nyquist sampling, in the presence of additive noise the performance degrades significantly. However, increasing the low sampling rate by a small factor leads to significant performance improvement, especially for time delay estimation.
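
    To make the second-stage idea tangible, here is a hedged sketch of a standard linearized least-squares solver for the hyperbolic (TDOA) localization step, with an assumed sensor geometry and noise level; it is written in the spirit of the linearized formulation mentioned above, not as the dissertation's SDR or ℓ1-regularized methods.

    # Linearized TDOA localization: solve for (x, y, d0) from range differences.
    import numpy as np

    sensors = np.array([[0, 0], [30, 0], [0, 30], [30, 30], [15, 40]], dtype=float)
    src = np.array([12.0, 17.0])
    c = 343.0                                          # speed of sound [m/s] (acoustic case)
    rng = np.random.default_rng(2)
    d = np.linalg.norm(sensors - src, axis=1)
    tdoa = (d[1:] - d[0]) / c + rng.normal(0, 1e-4, len(sensors) - 1)
    r = c * tdoa                                       # range differences w.r.t. sensor 0 [m]

    # Using d_i = d_0 + r_i gives the linear system
    #   2*(s_i - s_0).p + 2*r_i*d_0 = ||s_i||^2 - ||s_0||^2 - r_i^2
    A = np.column_stack([2 * (sensors[1:] - sensors[0]), 2 * r])
    b = np.sum(sensors[1:] ** 2, axis=1) - np.sum(sensors[0] ** 2) - r ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimated source:", sol[:2], "true:", src)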

    Signal Detection and Estimation for MIMO radar and Network Time Synchronization

    The theory of signal detection and estimation concerns the recovery of useful information from signals corrupted by random perturbations. This dissertation discusses the application of signal detection and estimation principles to two problems of significant practical interest: MIMO (multiple-input multiple-output) radar, and time synchronization over packet-switched networks. Under the first topic, we study the extension of several conventional radar analysis techniques to recently developed MIMO radars. Under the second topic, we develop new estimation techniques to improve the performance of widely used packet-based time synchronization algorithms. The ambiguity function is a popular mathematical tool for designing and optimizing the performance of radar detectors. Motivated by Neyman-Pearson testing principles, an alternative definition of the ambiguity function is proposed under the first topic. This definition directly associates with each pair of true and assumed target parameters the probability that the radar will declare a target present. We demonstrate that the new definition is better suited for the analysis of MIMO radars that perform non-coherent processing, while being equivalent to the original ambiguity function when applied to conventional radars. Based on the nature of antenna placements, transmit waveforms and the observed clutter and noise, several types of MIMO radar detectors have been individually studied in the literature. A second investigation into MIMO radar presents a general method to model and analyze the detection performance of such systems. We develop closed-form expressions for a Neyman-Pearson optimum detector that is valid for a wide class of radars. Further, general closed-form expressions for the detector SNR, another tool used to quantify radar performance, are derived. Theoretical and numerical results demonstrating the value of the proposed techniques to optimize and predict the performance of arbitrary radar configurations are presented. There has been renewed recent interest in the application of packet-based time synchronization algorithms, such as the IEEE 1588 Precision Time Protocol (PTP), to meet challenges posed by next-generation mobile telecommunication networks. In packet-based time synchronization protocols, clock phase offsets are determined via two-way message exchanges between a master and a slave. Since end-to-end delays in packet networks are inherently stochastic in nature, the recovery of phase offsets from message exchanges must be treated as a statistical estimation problem. While many simple, intuitively motivated estimators for this problem exist in the literature, in the second part of this dissertation we use estimation-theoretic principles to develop new estimators that offer significant performance benefits. To this end, we first describe new lower bounds on the error variance of phase offset estimation schemes. These bounds are obtained by re-deriving two Bayesian estimation bounds, namely the Ziv-Zakai and Weiss-Weinstein bounds, for use under a non-Bayesian formulation. Next, we describe new minimax estimators for the problem of phase offset estimation that are optimum in terms of minimizing the maximum mean squared error over all possible values of the unknown parameters. Minimax estimators that utilize information from past timestamps to improve accuracy are also introduced. These minimax estimators provide fundamental limits on the performance of phase offset estimation schemes. Finally, a restricted class of estimators, referred to as L-estimators, which are linear functions of order statistics, is considered. The problem of designing optimum L-estimators is studied under several hitherto unconsidered criteria of optimality. We address the case where the queuing delay distributions are fully known, as well as the case where network model uncertainty exists. Optimum L-estimators that utilize information from past observation windows to improve performance are also described. Simulation results indicate that significant performance gains over conventional estimators can be obtained via the proposed optimum processing techniques.
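
    For the time-synchronization half, the following hedged sketch reproduces only the standard two-way PTP exchange relation (delay distributions and parameters are assumed; the dissertation's minimax and optimum L-estimators are not implemented here), comparing the raw per-exchange offset estimate with a simple order-statistic (minimum-delay) variant under asymmetric queuing delays.

    # Two-way message exchange: offset = ((t2 - t1) - (t4 - t3)) / 2, plus a min-filter.
    import numpy as np

    rng = np.random.default_rng(3)
    true_offset = 5e-6                     # slave clock ahead of master by 5 us
    n = 200
    d_ms = rng.exponential(2e-4, n)        # master-to-slave queuing delay [s]
    d_sm = rng.exponential(5e-4, n)        # slave-to-master queuing delay [s] (asymmetric)

    t1 = np.arange(n) * 1.0                # master send times (master clock)
    t2 = t1 + d_ms + true_offset           # slave receive times (slave clock)
    t3 = t2 + 1e-3                         # slave reply times (slave clock)
    t4 = t3 - true_offset + d_sm           # master receive times (master clock)

    per_exchange = ((t2 - t1) - (t4 - t3)) / 2                 # classic estimate per exchange
    min_filtered = (np.min(t2 - t1) - np.min(t4 - t3)) / 2     # simple order-statistic variant
    print(f"mean per-exchange error: {np.mean(per_exchange) - true_offset:.2e} s")
    print(f"min-filtered error:      {min_filtered - true_offset:.2e} s")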