
    Multiple Hypothesis Testing Framework for Spatial Signals

    The problem of identifying regions of spatially interesting, different, or adversarial behavior is inherent to many practical applications involving distributed multisensor systems. In this work, we develop a general framework stemming from multiple hypothesis testing to identify such regions. A discrete spatial grid is assumed for the monitored environment. The spatial grid points associated with different hypotheses are identified while controlling the false discovery rate at a pre-specified level. Measurements are acquired using a large-scale sensor network. We propose a novel, data-driven method to estimate local false discovery rates based on the spectral method of moments. Our method is agnostic to specific spatial propagation models of the underlying physical phenomenon. It relies on a broadly applicable density model for local summary statistics. In between sensors, locations are assigned to regions associated with different hypotheses based on interpolated local false discovery rates. The benefits of our method are illustrated by applications to spatially propagating radio waves. (Comment: submitted to IEEE Transactions on Signal and Information Processing over Networks.)
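
    To make the pipeline concrete, here is a minimal sketch of FDR-controlled detection on a spatial grid with interpolation between sensors. It uses the standard Benjamini-Hochberg procedure on per-sensor p-values rather than the paper's spectral method-of-moments local-FDR estimator, and the sensor layout, p-value model, and all names are illustrative assumptions.

```python
# Sketch: FDR-controlled detection on a spatial grid from sparse sensor
# p-values, with interpolation between sensors. Uses plain Benjamini-
# Hochberg, NOT the paper's spectral method-of-moments lfdr estimator;
# the layout and data below are synthetic and illustrative.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# 200 sensors in the unit square; an "anomalous" disc around (0.7, 0.7)
# where the alternative hypothesis holds.
sensors = rng.uniform(0, 1, size=(200, 2))
anomalous = np.linalg.norm(sensors - [0.7, 0.7], axis=1) < 0.2

# Per-sensor p-values: uniform under H0, concentrated near 0 under H1.
pvals = np.where(anomalous, rng.beta(0.1, 1.0, 200), rng.uniform(0, 1, 200))

def benjamini_hochberg(p, alpha=0.05):
    """Boolean rejection mask controlling the FDR at level alpha."""
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True          # reject the k smallest p-values
    return reject

reject = benjamini_hochberg(pvals, alpha=0.05)

# Assign in-between grid points by interpolating the sensor decisions,
# a simple stand-in for the paper's interpolated local FDRs.
gx, gy = np.mgrid[0:1:100j, 0:1:100j]
grid_decision = griddata(sensors, reject.astype(float),
                         (gx, gy), method='nearest')
print(f"rejected {reject.sum()} of 200 sensors; "
      f"flagged region covers {grid_decision.mean():.1%} of the grid")
```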

    Distributed detection, localization, and estimation in time-critical wireless sensor networks

    In this thesis the problem of distributed detection, localization, and estimation (DDLE) of a stationary target in a fusion center (FC) based wireless sensor network (WSN) is considered. The communication process is subject to time-critical operation, with restricted power and bandwidth (BW) resources, operating over a shared communication channel suffering from Rayleigh fading and phase noise. A novel algorithm is proposed to solve the DDLE problem, consisting of two dependent stages: distributed detection and distributed estimation. The WSN performs distributed detection first, and based on the global detection decision the distributed estimation stage is performed. The communication between the sensor nodes (SNs) and the FC occurs over a shared channel via a slotted Aloha MAC protocol to conserve BW.

    In distributed detection, hard decision fusion is adopted, using the counting rule (CR) together with sensor censoring in order to save power and BW. The effect of Rayleigh fading on distributed detection is also considered and accounted for by using distributed diversity combining techniques, where the diversity combining is performed among the SNs in lieu of having the processing done at the FC. Two distributed techniques are proposed: distributed maximum ratio combining (dMRC) and distributed equal gain combining (dEGC). Both techniques show superior detection performance when compared to conventional diversity combining procedures that take place at the FC.

    In distributed estimation, the segmented distributed localization and estimation (SDLE) framework is proposed. The SDLE enables power- and BW-efficient processing. It hinges on the idea of introducing intermediate parameters that are estimated locally by the SNs and transmitted to the FC instead of the actual measurements. This concept decouples the main problem into a simpler set of local estimation problems solved at the SNs and a global estimation problem solved at the FC. Two algorithms are proposed for solving the local problem: a nonlinear least squares (NLS) algorithm using the variable projection (VP) method and a simpler grid search (GS) method. Four algorithms are proposed to solve the global problem: NLS, GS, the hyperspherical intersection (HSI) method, and the robust hyperspherical intersection (RHSI) method. Thus, the SDLE can be solved through combinations of local and global algorithms. Five combinations are tried: NLS-NLS, NLS-HSI, NLS-RHSI, GS-GS, and GS-NLS. It turns out that the last combination delivers the best localization and estimation performance; in fact, the target can be localized with less than one meter of error.

    The SNs send their local estimates to the FC over a shared channel using the slotted Aloha MAC protocol, which suits WSNs since it requires only one channel. However, Aloha is known for its relatively high medium access (contention) delay when the medium access probability is poorly chosen, which significantly hinders the time-critical operation of the system. Hence, multi-packet reception (MPR) is used with the slotted Aloha protocol, in which several channels are used for contention. The contention delay is analyzed for slotted Aloha with and without MPR: the mean and variance are computed analytically and the contention delay distribution is approximated. Having theoretical expressions for the contention delay statistics enables optimizing both the medium access probability and the number of MPR channels in order to strike a trade-off between delay performance and complexity.
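
    The contention-delay analysis lends itself to a quick numerical check. The sketch below is a Monte Carlo simulation of slotted Aloha with and without MPR under simplifying assumptions (a fixed number of contending nodes, independent transmissions each slot, no backlog dynamics); the node count, access probability, and channel counts are illustrative, not the thesis's settings.

```python
# Monte Carlo sketch of slotted-Aloha contention delay with and without
# multi-packet reception (MPR). Assumptions: n contending nodes, each
# transmitting independently with probability q per slot; with M MPR
# channels, the tagged node succeeds iff it transmits and at most M-1
# others do. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def contention_delay(n=20, q=0.05, mpr_channels=1, trials=20000):
    """Empirical mean/variance of slots until the tagged node succeeds."""
    delays = np.empty(trials)
    for t in range(trials):
        slot = 0
        while True:
            slot += 1
            if rng.random() < q:                 # tagged node transmits
                others = rng.binomial(n - 1, q)  # competing transmissions
                if others <= mpr_channels - 1:   # a free MPR channel exists
                    break
        delays[t] = slot
    return delays.mean(), delays.var()

for M in (1, 2, 4):
    mean, var = contention_delay(mpr_channels=M)
    print(f"M={M}: mean delay {mean:6.1f} slots, variance {var:9.1f}")
```

    In this simplified model the delay is geometric with per-slot success probability p = q * P(Binom(n-1, q) <= M-1), so the printed values can be checked against mean 1/p and variance (1-p)/p^2, mirroring the role of the closed-form statistics in the optimization described above.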

    Data Driven Nonparametric Detection

    The major goal of signal detection is to distinguish between hypotheses about the state of events based on observations. Typically, signal detection can be categorized into centralized detection, where all observed data are available for making the decision, and decentralized detection, where only quantized data from distributed sensors are forwarded to a fusion center for decision making. While these problems have been intensively studied under parametric and semi-parametric models with underlying distributions being fully or partially known, nonparametric scenarios are not yet well understood. This thesis mainly explores nonparametric models with unknown underlying distributions, as well as semi-parametric models as an intermediate step toward solving nonparametric problems.

    One major topic of this thesis is nonparametric decentralized detection, in which the joint distribution of the state of an event and the sensor observations is not known, but only some training data are available. A kernel-based nonparametric approach has been proposed by Nguyen, Wainwright and Jordan in which sensors' quality is treated equally. We study heterogeneous sensor networks and propose a weighted kernel, so that weight parameters are utilized to selectively incorporate sensors' information into the fusion center's decision rule based on the quality of the sensors' observations. Furthermore, the weight parameters also serve as sensor selection parameters, with nonzero parameters corresponding to the sensors being selected. Sensor selection is performed jointly with the design of the decision rules of the sensors and the fusion center, and the resulting optimal decision rule has only a sparse number of nonzero weight parameters. A gradient projection algorithm and a Gauss-Seidel algorithm are developed to solve the risk minimization problem, which is non-convex, and both algorithms are shown to converge to critical points.

    The other major topic of this thesis is composite outlier detection in centralized scenarios. The goal is to detect the existence of data streams drawn from outlying distributions among data streams drawn from a typical distribution. We study both the semi-parametric model, with known typical distribution and unknown outlying distributions, and the nonparametric model, with unknown typical and outlying distributions. For both models, we construct generalized likelihood ratio tests (GLRT), and show that with knowledge of the KL divergence between the outlying and typical distributions, the GLRT is exponentially consistent (i.e., the error risk function decays exponentially fast). We also show that with knowledge of the Chernoff distance between the outlying and typical distributions, the GLRT for the semi-parametric model achieves the same risk decay exponent as in the parametric model, and the GLRT for the nonparametric model achieves the same performance when the number of data streams gets asymptotically large. We further show that for both models, without any knowledge about the distance between the distributions, there does not exist an exponentially consistent test. However, the GLRT with a diminishing threshold can still be consistent.
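
    The weighted-kernel idea admits a compact illustration. The sketch below puts a per-sensor weight inside an RBF kernel, so a zero weight deselects that sensor; for brevity it fits a plain kernel ridge decision rule with hand-set weights, rather than the thesis's joint non-convex risk minimization via gradient projection or Gauss-Seidel. All data, weights, and names are illustrative.

```python
# Sketch of a weighted kernel for heterogeneous sensors: each sensor
# dimension carries a weight w_i inside an RBF kernel, so w_i = 0
# deselects sensor i. Decision rule: kernel ridge regression on labels
# in {-1, +1}. The thesis instead learns the weights jointly with the
# rule by minimizing a non-convex risk; everything here is illustrative.
import numpy as np

rng = np.random.default_rng(2)

def weighted_rbf(X, Z, w):
    """K[a, b] = exp(-sum_i w_i * (X[a, i] - Z[b, i])**2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2 * w).sum(axis=-1)
    return np.exp(-d2)

# Three sensors observe a binary event; sensor 3 is pure noise.
n = 300
y = rng.choice([-1.0, 1.0], size=n)
X = np.column_stack([
    y + rng.normal(0, 0.5, n),   # informative sensor
    y + rng.normal(0, 1.0, n),   # noisier sensor
    rng.normal(0, 1.0, n),       # uninformative sensor
])

# Hand-set weights standing in for the learned sparse weight vector:
# the uninformative sensor is deselected outright.
w = np.array([1.0, 0.3, 0.0])

K = weighted_rbf(X, X, w)
alpha = np.linalg.solve(K + 0.1 * np.eye(n), y)   # kernel ridge fit

ytest = np.repeat([-1.0, 1.0], 50)
Xtest = np.column_stack([
    ytest + rng.normal(0, 0.5, 100),
    ytest + rng.normal(0, 1.0, 100),
    rng.normal(0, 1.0, 100),
])
pred = np.sign(weighted_rbf(Xtest, X, w) @ alpha)
print(f"test accuracy with sensor 3 deselected: {(pred == ytest).mean():.2f}")
```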

    Resource management in sensing services with audio applications

    Middleware abstractions, or services, that can bridge the gap between increasingly pervasive sensors and sophisticated inference applications exist, but they lack the resource-awareness necessary to support high data-rate sensing modalities such as audio and video. This work therefore investigates the resource management problem in sensing services, with applications in audio sensing. First, a modular, data-centric architecture is proposed as the framework within which optimal resource management is studied. Next, the guided-processing principle is proposed to achieve an optimized trade-off between resource (energy) consumption and (inference) performance. On cascade-based systems, empirical results show that the proposed approach significantly improves detection performance (up to 1.7x and 4x reduction in false-alarm and miss rates, respectively) for the same energy consumption, when compared to the duty-cycling approach. Furthermore, the guided-processing approach generalizes to graph-based systems. Resource efficiency in the multiple-application setting is achieved through the feature-sharing principle; once applied, the method results in a system that can achieve 9x resource savings and a 1.43x improvement in detection performance in an example application. Based on these encouraging results, a prototype audio sensing service is built for demonstration. An interference-robust audio classification technique with limited training data would prove valuable within the service, so a novel algorithm with the desired properties is proposed. The technique combines the AI-gram time-frequency representation and multidimensional dynamic time warping, and it outperforms the state-of-the-art prominent-region-based approach across a wide range of (synthetic, both stationary and transient) interference types and signal-to-interference ratios, and also on field recordings (with areas under the receiver operating characteristic and precision-recall curves of 91% and 87%, respectively).
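
    The guided-processing principle on a two-stage cascade can be illustrated in a few lines: a cheap first stage decides when to wake an expensive second stage, versus a duty-cycled baseline that wakes it on a fixed schedule regardless of content. The costs, thresholds, and synthetic "audio" frames below are assumptions for illustration, not the dissertation's system.

```python
# Sketch of guided processing on a two-stage cascade: a cheap energy
# detector gates an expensive classifier, versus duty-cycling that runs
# the expensive stage on a random schedule with the same stage-2 budget.
# Costs, thresholds, and the synthetic frames are illustrative.
import numpy as np

rng = np.random.default_rng(3)

CHEAP_COST, EXPENSIVE_COST = 1.0, 20.0   # energy units per frame (assumed)

n = 10000
event = rng.random(n) < 0.05             # 5% of frames contain the target
frames = rng.normal(0, 1, (n, 64)) + np.where(event, 2.0, 0.0)[:, None]
energy = (frames ** 2).mean(axis=1)      # stage-1 summary statistic

def expensive_stage(mask):
    """Stand-in for the full classifier: fires on high per-frame energy."""
    return energy[mask] > 3.0

# Guided: stage 2 runs only where stage 1 is suspicious.
gate = energy > 1.8
guided_det = np.zeros(n, dtype=bool)
guided_det[gate] = expensive_stage(gate)
guided_cost = n * CHEAP_COST + gate.sum() * EXPENSIVE_COST

# Duty-cycled baseline: stage 2 runs on a fixed random fraction of frames.
budget = gate.sum()                      # same stage-2 budget as guided
sched = np.zeros(n, dtype=bool)
sched[rng.choice(n, size=budget, replace=False)] = True
duty_det = np.zeros(n, dtype=bool)
duty_det[sched] = expensive_stage(sched)
duty_cost = n * CHEAP_COST + budget * EXPENSIVE_COST

for name, det, cost in [("guided", guided_det, guided_cost),
                        ("duty-cycled", duty_det, duty_cost)]:
    miss = (~det & event).sum() / event.sum()
    fa = (det & ~event).sum() / (~event).sum()
    print(f"{name:12s} cost={cost:9.0f}  miss={miss:.2f}  FA={fa:.3f}")
```

    At equal stage-2 budget, the guided variant concentrates the expensive evaluations on suspicious frames, which is the source of the miss-rate gap the dissertation quantifies.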

    Techniques for Decentralized and Dynamic Resource Allocation

    This thesis investigates three different resource allocation problems, aiming to achieve two common goals: i) adaptivity to a fast-changing environment, and ii) distribution of the computation tasks to achieve a favorable solution. The motivation for this work lies in the modern-era proliferation of sensors and devices in the Data Acquisition Systems (DAS) layer of the Internet of Things (IoT) architecture. To avoid congestion and enable low-latency services, limits have to be imposed on the number of decisions that can be centralized (i.e., solved in the "cloud") and/or the amount of control information that devices can exchange. This has been the motivation to develop i) a lightweight PHY-layer protocol for time synchronization and scheduling in Wireless Sensor Networks (WSNs), ii) an adaptive receiver that enables sub-Nyquist sampling for efficient spectrum sensing at high frequencies, and iii) an SDN scheme for resource sharing across different technologies and operators, to harmoniously and holistically respond to fluctuations in demand at the eNodeB layer.

    The proposed solution for time synchronization and scheduling is a new protocol, called PulseSS, which is completely event-driven and inspired by biological networks. The results on convergence and accuracy for locally connected networks, presented in this thesis, constitute the theoretical foundation for the protocol in terms of performance guarantees; the derived limits provided guidelines for ad-hoc solutions in the actual implementation of the protocol.

    The proposed receiver for Compressive Spectrum Sensing (CSS) aims at tackling the noise-folding phenomenon, i.e., the accumulation of noise from the different sub-bands that are folded together, prior to sampling and baseband processing, when an analog front-end aliasing mixer is utilized. The sensing phase has been designed via a utility maximization approach, and the resulting scheme has therefore been called Cognitive Utility Maximization Multiple Access (CUMMA).

    The framework described in the last part of the thesis is inspired by stochastic network optimization tools and dynamics. While convergence of the proposed approach remains an open problem, the numerical results presented here suggest the capability of the algorithm to handle traffic fluctuations across operators while respecting different time and economic constraints. The scheme has been named Decomposition of Infrastructure-based Dynamic Resource Allocation (DIDRA).
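
    Event-driven, biologically inspired synchronization of the kind PulseSS builds on is classically modeled with pulse-coupled oscillators. The toy simulation below shows those generic dynamics (Mirollo-Strogatz style, all-to-all topology); it is not the PulseSS protocol itself, and the coupling strength, node count, and step size are illustrative assumptions.

```python
# Toy simulation of pulse-coupled oscillator synchronization, the
# classical dynamics that event-driven, bio-inspired protocols such as
# PulseSS build on. NOT the PulseSS protocol itself; coupling strength,
# all-to-all topology, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

N, EPS, DT = 10, 0.08, 1e-3       # nodes, coupling strength, time step
phase = rng.uniform(0, 1, N)      # phases in [0, 1); a node fires at 1

for step in range(200000):        # ~200 oscillation periods
    phase += DT                   # free-running phase advance
    fired = phase >= 1.0
    if fired.any():
        phase[fired] = 0.0
        # Each received pulse nudges every other node's phase upward,
        # clipped at the firing threshold so nearby nodes get absorbed
        # into the firing group.
        phase[~fired] = np.minimum(phase[~fired] + EPS * fired.sum(), 1.0)

spread = phase.max() - phase.min()
spread = min(spread, 1.0 - spread)    # phases live on a circle
print(f"phase spread after simulation: {spread:.4f} (0 = synchronized)")
```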

    Applications

    Volume 3 describes how resource-aware machine learning methods and techniques are used to successfully solve real-world problems. The book provides numerous specific application examples: in health and medicine, for risk modelling, diagnosis, and treatment selection for diseases; in electronics, steel production, and milling, for quality control during manufacturing processes; in traffic and logistics, for smart cities; and for mobile communications.