Cooperative Sequential Hypothesis Testing in Multi-Agent Systems
Because the sequential inference framework determines the total number of samples in real time based on past data, it yields quicker decisions than its fixed-sample-size counterpart, provided an appropriate early termination rule. This advantage is particularly appealing in systems where data is acquired in sequence and both decision accuracy and latency are of primary interest. Meanwhile, Internet of Things (IoT) technology has created all kinds of connected devices, which can potentially enhance inference performance by providing information diversity. For instance, a smart home network deploys multiple sensors to perform climate control, security surveillance, and personal assistance. It has therefore become highly desirable to pursue solutions that efficiently integrate classic sequential inference methodologies into networked multi-agent systems. In brief, this thesis investigates the sequential hypothesis testing problem in multi-agent networks, aiming to overcome the constraints of communication bandwidth, energy capacity, and network topology so that the networked system can perform the sequential test cooperatively to its full potential.
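The canonical instance of such a test is Wald's sequential probability ratio test (SPRT), sketched below for a Gaussian mean shift. The hypotheses, error targets, and truncation rule here are illustrative assumptions, not the thesis's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: H0: X ~ N(0, 1) vs. H1: X ~ N(0.5, 1)
mu0, mu1, alpha, beta = 0.0, 0.5, 0.01, 0.01
upper = np.log((1 - beta) / alpha)   # accept H1 when the LLR exceeds this
lower = np.log(beta / (1 - alpha))   # accept H0 when the LLR falls below this

def sprt(samples):
    """Run the SPRT on a sample stream; return (decision, samples used)."""
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for unit-variance Gaussian means
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0)
        if llr >= upper:
            return 1, n
        if llr <= lower:
            return 0, n
    return int(llr > 0), len(samples)  # truncated stream: fall back to the sign

decision, n_used = sprt(rng.normal(mu1, 1.0, size=1000))  # data generated under H1
print(decision, n_used)
```

The point of the sketch is the early-termination behaviour: the stopping time adapts to the data, and for these error targets the test typically stops after a few dozen samples rather than a fixed budget.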
Multi-agent networks generally fall into two main types. The first features a hierarchical structure, in which agents transmit messages based on their observations to a fusion center that performs data fusion and sequential inference on behalf of the network; one example is a network of wearable devices connected to a smartphone. The central challenges in the hierarchical network arise from the instantaneous transmission of distributed data to the fusion center, which in practice is constrained by battery capacity and communication bandwidth. The first part of this thesis is therefore dedicated to addressing these two constraints. Specifically, to preserve agent energy, Chapter 2 devises an optimal sequential test that selects the "most informative" agent online at each sampling step while leaving the others idle. To overcome the communication bottleneck, Chapter 3 proposes a scheme that allows distributed agents to send only one-bit messages asynchronously to the fusion center without compromising performance.
In contrast, the second type of network does not assume the presence of a fusion center; each agent performs the sequential test based on its own samples together with messages shared by its neighbours, with the communication links represented by an undirected graph. A variety of applications conform to this distributed structure, for instance social networks that connect individuals through online friendship and vehicular networks formed by connected cars. However, the distributed network is prone to sub-optimal performance, since each agent can only access information from its local neighbourhood. The second part of this thesis therefore focuses on optimizing distributed performance through local message exchanges. In Chapter 4, we put forward a distributed sequential test based on a consensus algorithm, where agents exchange and aggregate real-valued local statistics with their neighbours at every sampling step. To further lower the communication overhead, Chapter 5 develops a distributed sequential test that only requires the exchange of quantized messages (i.e., integers) between agents. The cluster-based network, a hybrid of the hierarchical and distributed types, is also investigated in Chapter 5.
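A consensus-based update of the kind described for Chapter 4 can be sketched as follows. The ring topology, Metropolis-style weights, threshold, and stopping rule are illustrative assumptions, not the scheme analyzed in the thesis:

```python
import numpy as np

def metropolis_ring(n):
    """Doubly stochastic averaging weights for a ring of n agents."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    return W

def consensus_sequential_test(increments, threshold=4.0):
    """Each step: average neighbours' statistics, add the local LLR increment,
    and stop when any agent's statistic reaches the threshold."""
    n_steps, n_agents = increments.shape
    W = metropolis_ring(n_agents)
    stats = np.zeros(n_agents)
    for t in range(n_steps):
        stats = W @ stats + increments[t]
        if stats.max() >= threshold:
            return t + 1               # stopping time of the first agent
    return n_steps

# Illustration: i.i.d. Gaussian data favouring H1 at every agent
rng = np.random.default_rng(1)
x = rng.normal(0.5, 1.0, size=(1000, 5))
inc = 0.5 * (x - 0.25)                 # LLR increments for N(0,1) vs. N(0.5,1)
print(consensus_sequential_test(inc))
```

The averaging step spreads each agent's evidence around the ring, so every agent's statistic tracks the network-wide accumulation of evidence rather than only its own.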
Heterogeneous Sensor Signal Processing for Inference with Nonlinear Dependence
Inferring events of interest by fusing data from multiple heterogeneous sources has been an interesting and important topic in recent years. Several issues related to inference using heterogeneous data with complex and nonlinear dependence are investigated in this dissertation. We apply copula theory to characterize the dependence among heterogeneous data.
In centralized detection, where sensor observations are available at the fusion center (FC), we study copula-based fusion. We design detection algorithms based on sample-wise copula selection and a mixture-of-copulas model under different scenarios of the true dependence. The proposed approaches are theoretically justified and perform well when applied to the fusion of acoustic and seismic sensor data for personnel detection. Beyond traditional sensors, access to massive amounts of social media data provides a unique opportunity for extracting information about unfolding events. We further study how sensor networks and social media complement each other in facilitating the data-to-decision process. We propose a copula-based joint characterization of multiple dependent time series from sensors and social media. As a proof of concept, this model is applied to the fusion of Google Trends (GT) data and stock/flu data for prediction, where the stock/flu data serve as a surrogate for sensor data.
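The copula idea, separating the dependence structure from arbitrary marginals, can be illustrated with a Gaussian copula. The correlation value and the marginal transforms below are illustrative, not the models fitted in the dissertation:

```python
import math
import numpy as np

def gaussian_copula_sample(rho, n, rng):
    """Draw n pairs (u, v) of uniforms coupled by a Gaussian copula with
    correlation rho; each marginal is exactly Uniform(0, 1)."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    # standard normal CDF maps the correlated Gaussians to uniforms
    phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
    return phi(z)

rng = np.random.default_rng(2)
u = gaussian_copula_sample(0.8, 5000, rng)

# Plug the coupled uniforms into arbitrary inverse CDFs: the dependence
# survives even though the two marginals are completely different.
acoustic = -np.log(1.0 - u[:, 0])      # Exponential(1) marginal (hypothetical)
seismic = u[:, 1] ** 2                 # some other marginal (hypothetical)
print(np.corrcoef(acoustic, seismic)[0, 1])
```

This is exactly the property exploited above: heterogeneous sensors can keep their own marginal models while a single copula captures the inter-sensor dependence.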
In energy-constrained networks, local observations are compressed before they are transmitted to the FC. In these cases, conditional dependence and heterogeneity particularly complicate the system design. We consider the classification of discrete random signals in Wireless Sensor Networks (WSNs), where, for communication efficiency, only local decisions are transmitted. We derive the necessary conditions for the optimal decision rules at the sensors and the FC by introducing a hidden random variable, and design an iterative algorithm to search for the optimal decision rules, proving its convergence and asymptotic optimality. The performance of the proposed scheme is illustrated on the distributed Automatic Modulation Classification (AMC) problem. Censoring is another communication-efficient strategy, in which sensors transmit only informative observations to the FC and censor those deemed uninformative. We design detectors that take into account the spatial dependence among observations. Fusion rules for censored data are proposed with continuous and discrete local messages, respectively, along with computationally efficient counterparts based on the key idea of injecting controlled noise at the FC before fusion.
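A minimal sketch of a censoring scheme, assuming a symmetric no-send interval around zero and a fusion rule that simply sums the transmitted LLRs (the dissertation's fusion rules also exploit the information carried by the censored region itself):

```python
import numpy as np

def censor(llrs, low=-0.5, high=0.5):
    """Censoring rule: a sensor transmits its local log-likelihood ratio only
    when it falls outside the 'uninformative' interval [low, high]; the FC
    sees None for censored sensors."""
    return [x if (x < low or x > high) else None for x in llrs]

def fuse(messages, prior_llr=0.0):
    """Simplified FC fusion for censored data: sum the transmitted LLRs.
    (A censored sensor still reveals that its |LLR| was small; that term is
    dropped here for brevity.)"""
    return prior_llr + sum(m for m in messages if m is not None)

rng = np.random.default_rng(3)
x = rng.normal(0.5, 1.0, 20)           # 20 sensors, data generated under H1
llrs = 0.5 * (x - 0.25)                # LLRs for N(0,1) vs. N(0.5,1)
sent = censor(llrs)
rate = sum(m is not None for m in sent) / len(sent)
print(f"transmission rate {rate:.2f}, fused statistic {fuse(sent):.2f}")
```

Widening the censoring interval trades detection performance for a lower transmission rate, which is the energy/accuracy trade-off the abstract refers to.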
In this thesis, with heterogeneous and dependent sensor observations, we consider not only inference in parallel frameworks but also collaborative inference, where collaboration exists among local sensors. Each sensor forms a coalition with other sensors and shares information within the coalition to maximize its inference performance. The collaboration strategy is investigated under a communication constraint. To characterize the influence of inter-sensor dependence on inference performance, and thus on the collaboration strategy, we quantify the gain and loss in forming a coalition by introducing copula-based definitions of diversity gain and redundancy loss for both estimation and detection problems. A coalition formation game is proposed for the distributed inference problem, through which the information contained in the inter-sensor dependence is fully explored and utilized for improved inference performance.
Sequential Statistical Signal Processing with Applications to Distributed Systems
Detection and estimation, two classical statistical signal processing problems with well-established theories, are traditionally studied under fixed-sample-size and centralized setups, e.g., Neyman-Pearson target detection and Bayesian parameter estimation. Recently, they have appeared in more challenging setups with stringent constraints on critical resources, e.g., time, energy, and bandwidth, in emerging technologies such as wireless sensor networks, cognitive radio, smart grid, cyber-physical systems (CPS), the Internet of Things (IoT), and networked control systems. These emerging systems have applications in a wide range of areas, such as communications, energy, the military, transportation, health care, and infrastructure.
Sequential (i.e., online) methods are much better suited than conventional fixed-sample-size (i.e., offline) methods to the ever-increasing demands on time efficiency and to latency constraints. Furthermore, as a result of decreasing device sizes and the tendency to connect more and more devices, there are stringent energy and bandwidth constraints on devices (i.e., nodes) in a distributed system (i.e., network), requiring decentralized operation with low transmission rates. Hence, for statistical inference (e.g., detection and/or estimation) problems in distributed systems, today's challenge is achieving high performance (e.g., time efficiency) while satisfying resource (e.g., energy and bandwidth) constraints.
In this thesis, we address this challenge by (i) first finding optimum (centralized) sequential schemes for detection, estimation, and joint detection and estimation where not available in the literature, and (ii) then developing their asymptotically optimal decentralized versions through an adaptive non-uniform sampling technique called level-triggered sampling. We propose and rigorously analyze decentralized detection, estimation, and joint detection and estimation schemes based on level-triggered sampling, resulting in a systematic theory of event-based statistical signal processing. We also show, both analytically and numerically, that the proposed schemes significantly outperform their counterparts based on conventional uniform sampling in terms of time efficiency. Moreover, they are compatible with existing hardware, as they work with discrete-time observations produced by conventional A/D converters.
We apply the developed schemes to several problems, namely spectrum sensing and dynamic spectrum access in cognitive radio, state estimation and outage detection in smart grid, and target detection in multi-input multi-output (MIMO) wireless sensor networks.
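The core of level-triggered sampling is that a node transmits a one-bit message each time its local log-likelihood ratio has changed by a fixed level delta since the last transmission, so the fusion center can track the statistic to within delta. A minimal sketch, with delta and the increments chosen purely for illustration (the thesis's schemes also handle the overshoot above the level, which is ignored here):

```python
def level_triggered_bits(llr_increments, delta=1.0):
    """Emit +1 / -1 each time the local cumulative LLR has risen / fallen
    by delta since the value implied by the last transmission."""
    bits, last, llr = [], 0.0, 0.0
    for inc in llr_increments:
        llr += inc
        while llr - last >= delta:     # crossed one (or more) levels upward
            bits.append(+1)
            last += delta
        while llr - last <= -delta:    # crossed one (or more) levels downward
            bits.append(-1)
            last -= delta
    return bits

def fc_estimate(bits, delta=1.0):
    """The FC reconstructs the node's LLR to within delta from the bits."""
    return delta * sum(bits)

inc = [0.4, 0.4, 0.4, -1.3, 0.9]
bits = level_triggered_bits(inc)
print(bits, fc_estimate(bits))
```

Because bits are sent only at level crossings, the transmission rate adapts to how informative the data are, which is what makes the decentralized schemes above nearly as time-efficient as their centralized counterparts.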
One-bit Compressed Sensing in the Presence of Noise
Many modern real-world systems generate large amounts of high-dimensional data, straining the available computing and signal processing systems. In resource-constrained settings, it is desirable to process, store, and transmit as little data as possible. It has been shown that acceptable performance for tasks such as inference and reconstruction can be obtained with fewer bits of data by exploiting low-dimensional structure in the data, such as sparsity. This dissertation investigates the signal acquisition paradigm known as one-bit compressed sensing (one-bit CS) for signal reconstruction and parameter estimation.
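As background for the reconstruction problems below, a widely used one-bit CS baseline is binary iterative hard thresholding (BIHT). This generic sketch is not one of the dissertation's algorithms, and the problem sizes and step size are illustrative:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def biht(A, y, s, iters=100, tau=1.0):
    """Binary iterative hard thresholding for y = sign(A x), x s-sparse.
    One-bit measurements lose the scale, so only the direction of x is
    recoverable; the estimate is normalised to unit norm."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        g = A.T @ (y - np.sign(A @ x))   # subgradient of the sign mismatch
        x = hard_threshold(x + (tau / m) * g, s)
        nrm = np.linalg.norm(x)
        if nrm > 0:
            x /= nrm
    return x

rng = np.random.default_rng(4)
n, m, s = 100, 400, 5
x_true = np.zeros(n)
x_true[:s] = rng.normal(size=s)
x_true /= np.linalg.norm(x_true)
A = rng.normal(size=(m, n))
y = np.sign(A @ x_true)                  # noiseless one-bit measurements
x_hat = biht(A, y, s)
print(np.linalg.norm(x_hat - x_true))
```

The normalisation step makes explicit the standard limitation that the later chapters address: from sign-only measurements alone, the magnitude of the signal is unidentifiable.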
We first consider the problem of joint sparse support estimation with one-bit measurements in a distributed setting, where each node observes sparse signals with the same but unknown support, and the goal is to minimize the probability of error in support estimation. First, we study the performance of maximum likelihood (ML) estimation of the support set from one-bit compressed measurements when all measurements are available at the fusion center, and provide a lower bound on the number of one-bit measurements required per node for a vanishing probability of error. Although the ML estimator is optimal, its computational complexity increases exponentially with the signal dimension, so we propose computationally tractable algorithms in a centralized setting. We then extend these algorithms to a decentralized setting in which each node can communicate only with its one-hop neighbors. The proposed method shows excellent estimation performance even in the presence of noise.
In the second part of the dissertation, we investigate the problem of sparse signal reconstruction from noisy one-bit compressed measurements, aided by a signal that is statistically dependent on the compressed signal; we refer to this signal as side-information (SI). We consider a generalized measurement model of one-bit CS in which noise is added at two stages of the measurement process: (a) before quantization and (b) after quantization. We model the noise before quantization as additive white Gaussian noise, and the noise after quantization as sign-flip noise generated from a Bernoulli distribution. We assume that the SI at the receiver is noisy; the noise in the SI can be in the support, in the amplitude, or in both, which suggests that it has a sparse structure, and we model it as additive independent and identically distributed Laplacian noise. In this setup, we develop tractable algorithms that approximate the minimum mean square error (MMSE) estimator of the signal. We consider the following three SI-based scenarios:
1. The side-information is a noisy version of the signal. The noise is independent of the signal and follows the Laplacian distribution. We do not assume any temporal dependence in the signal.
2. The signal exhibits temporal dependence between the current and the previous time instant, modeled using the birth-death-drift (BDD) model. The side-information is a noisy version of the previous time instant's signal, which is statistically dependent on the current signal as defined by the BDD model.
3. The SI available at the receiver is heterogeneous: the signal and side-information come from different modalities and may not share a joint sparse representation. We assume that the SI and the sparse signal are dependent and use a copula function to model the dependence.
In each of these scenarios, we develop generalized approximate message passing based algorithms to approximate the minimum mean square error estimate. Numerical results show the effectiveness of the proposed algorithms.
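The generalized measurement model, with additive white Gaussian noise before the quantizer and Bernoulli sign flips after it, can be simulated directly. Dimensions and noise levels below are illustrative:

```python
import numpy as np

def one_bit_measure(A, x, sigma, flip_prob, rng):
    """Generalized one-bit CS model: additive white Gaussian noise before
    the quantizer, Bernoulli sign flips after it."""
    pre = A @ x + sigma * rng.normal(size=A.shape[0])  # noise before quantization
    y = np.sign(pre)
    flips = rng.random(A.shape[0]) < flip_prob         # noise after quantization
    y[flips] *= -1
    return y

rng = np.random.default_rng(5)
A = rng.normal(size=(2000, 20))
x = rng.normal(size=20)
y_clean = np.sign(A @ x)
y = one_bit_measure(A, x, sigma=0.1, flip_prob=0.05, rng=rng)
print(np.mean(y != y_clean))   # fraction of corrupted bits
```

Note the two noise sources corrupt the bits differently: the pre-quantization noise mostly flips measurements whose noiseless value is near zero, while the post-quantization flips hit all measurements uniformly, which is why the two stages are modeled separately.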
In the final part of the dissertation, we propose two one-bit compressed sensing reconstruction algorithms that use a deep neural network as a prior on the signal. In the first algorithm, we use a trained generative model, such as a generative adversarial network or a variational autoencoder, as the prior; the trained network reconstructs the compressed signal from one-bit measurements by searching over its range. We provide theoretical guarantees on the reconstruction accuracy and sample complexity of this algorithm. In the second algorithm, we investigate an untrained neural network architecture that acts as a good prior on natural signals such as images and audio, and formulate an optimization problem to reconstruct the signal from one-bit measurements using this untrained network. We demonstrate the superior performance of the proposed algorithms through numerical results. Further, in contrast to competing model-based algorithms, the proposed algorithms estimate both the direction and the magnitude of the compressed signal from one-bit measurements.
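The range-search idea can be sketched with a toy stand-in: a random linear "generator" in place of a deep network, and a logistic surrogate for the non-differentiable sign, minimized over the latent vector by plain gradient descent. Everything below (the generator, loss, step size, and sizes) is an illustrative assumption, not the dissertation's algorithms, and this toy recovers only the direction:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def recover_latent(A, y, W, steps=500, lr=0.05):
    """Search the generator's range for a signal whose one-bit measurements
    match y, using a logistic surrogate for the sign mismatch."""
    m = A.shape[0]
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        margins = y * (A @ (W @ z))
        # gradient of mean(log(1 + exp(-margin))) with respect to z
        g = -W.T @ (A.T @ (y * sigmoid(-margins))) / m
        z -= lr * g
    x = W @ z
    return x / np.linalg.norm(x)       # sign-only data fixes only the direction

rng = np.random.default_rng(6)
n, m, k = 50, 500, 5
W = rng.normal(size=(n, k))            # untrained random "decoder" (toy)
x_true = W @ rng.normal(size=k)        # the truth lies in the generator's range
x_true /= np.linalg.norm(x_true)
A = rng.normal(size=(m, n))
y = np.sign(A @ x_true)
x_hat = recover_latent(A, y, W)
print(x_hat @ x_true)                  # cosine similarity with the truth
```

Because the search is confined to a k-dimensional range rather than all s-sparse vectors, far fewer one-bit measurements pin down the direction, which is the intuition behind using generative priors here.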