
    An objective based classification of aggregation techniques for wireless sensor networks

    Wireless Sensor Networks (WSNs) have gained immense popularity in recent years due to their ever-increasing capabilities and wide range of critical applications. A large body of research has been dedicated to finding ways to utilize the limited resources of sensor nodes efficiently. One common way to minimize energy consumption is aggregation of input data. We note that every aggregation technique has an improvement objective with respect to the output it produces: each technique is designed to achieve some target, e.g., reducing data size, minimizing transmission energy, or enhancing accuracy. This paper presents a comprehensive survey of aggregation techniques that can be applied in a distributed manner to improve the lifetime and energy conservation of wireless sensor networks. The main contribution of this work is a novel classification of such techniques based on the type of improvement they offer when applied to WSNs. Because a myriad of definitions of aggregation exist, we first review the meaning of the term as it applies to WSNs; the concept is then associated with the proposed classes. Each class of techniques is divided into a number of subclasses, and a brief literature review of related WSN work is presented for each.
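    As a concrete (and entirely hypothetical) illustration of the "reduce data size / transmission energy" objective this survey classifies, the Python sketch below shows sum-count partial aggregation over a routing tree: each link carries one small tuple instead of every raw reading. The topology and sensor values are invented for illustration.

```python
# Minimal sketch of in-network aggregation, assuming a tree routing topology
# where each node forwards a (sum, count) partial aggregate to its parent.
# Node names and readings are hypothetical illustration data.

def merge(partials):
    """Combine partial aggregates from children (and the node's own reading)."""
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total, count

# Leaf nodes send (reading, 1); interior nodes merge before forwarding,
# so each link carries one small tuple instead of all raw readings.
leaf_a = (21.5, 1)
leaf_b = (22.1, 1)
interior = merge([leaf_a, leaf_b, (21.8, 1)])  # interior node's own reading: 21.8
sink_total, sink_count = merge([interior])
print("network-wide mean:", sink_total / sink_count)
```

    The design choice is the usual one for distributive aggregates: any function expressible as a merge of partial states (sum, count, min, max) can be pushed into the routing tree this way.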

    Quantized Compressed Sensing with Score-Based Generative Models

    We consider the general problem of recovering a high-dimensional signal from noisy quantized measurements. Quantization, especially coarse quantization such as 1-bit sign measurements, leads to severe information loss, so good prior knowledge of the unknown signal is helpful for accurate recovery. Motivated by the power of score-based generative models (SGM, also known as diffusion models) in capturing the rich structure of natural signals beyond simple sparsity, we propose an unsupervised data-driven approach called quantized compressed sensing with SGM (QCS-SGM), where the prior distribution is modeled by a pre-trained SGM. To perform posterior sampling, an annealed pseudo-likelihood score, called the noise-perturbed pseudo-likelihood score, is introduced and combined with the prior score of the SGM. The proposed QCS-SGM applies to an arbitrary number of quantization bits. Experiments on a variety of baseline datasets demonstrate that QCS-SGM significantly outperforms existing state-of-the-art algorithms for both in-distribution and out-of-distribution samples. Moreover, as a posterior sampling method, QCS-SGM can easily be used to obtain confidence intervals or uncertainty estimates of the reconstructed results. The code is available at https://github.com/mengxiangming/QCS-SGM.
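    A rough sketch of the loop structure such an annealed posterior sampler might have is given below, assuming 1-bit sign measurements. Note the two key ingredients are replaced by simple stand-ins: a Gaussian prior score instead of a pretrained SGM, and a smoothed sign-consistency gradient instead of the paper's noise-perturbed pseudo-likelihood score. This is therefore not the QCS-SGM algorithm itself, only a runnable skeleton of the idea.

```python
# Hedged sketch of annealed Langevin posterior sampling for 1-bit compressed
# sensing, in the spirit of QCS-SGM. Both score terms are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = np.sign(A @ x_true)                      # 1-bit quantized measurements

def prior_score(x):
    # Stand-in for a pretrained SGM score network: score of N(0, I).
    return -x

def likelihood_score(x, beta=5.0):
    # Toy surrogate for the paper's noise-perturbed pseudo-likelihood score:
    # gradient of sum_i log sigmoid(beta * y_i * (A x)_i).
    z = np.clip(beta * y * (A @ x), -30, 30)
    sig = 1.0 / (1.0 + np.exp(-z))
    return beta * (A.T @ (y * (1.0 - sig)))

x = rng.standard_normal(n)
for sigma in np.geomspace(1.0, 0.01, 10):    # annealing schedule
    step = 0.05 * sigma**2
    for _ in range(50):
        grad = prior_score(x) + likelihood_score(x)
        x = x + step * grad + np.sqrt(2 * step) * rng.standard_normal(n)

# 1-bit measurements lose amplitude, so recovery is up to scale: compare directions.
cos = x @ x_true / (np.linalg.norm(x) * np.linalg.norm(x_true))
print(f"cosine similarity to ground truth: {cos:.3f}")
```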

    Distributed Inference and Learning with Byzantine Data

    We are living in an increasingly networked world, with sensing networks of varying shapes and sizes: a network often comprises several tiny devices (or nodes) communicating with each other via different topologies. To complicate the problem further, the nodes in the network can be unreliable for a variety of reasons (noise, faults, attacks) and thus provide corrupted data. Although statistical inference has long been an active area of research, distributed learning and inference in a networked setup with potentially unreliable components has only recently gained attention. The emergence of the big and dirty data era demands new distributed learning and inference solutions that tackle the problem of inference with corrupted data. Distributed inference networks (DINs) consist of a group of networked entities that acquire observations regarding a phenomenon of interest (POI) and collaborate with other entities in the network by sharing their inferences via different topologies to make a global inference. The central goal of this thesis is to analyze the effect of corrupted (or falsified) data on the inference performance of DINs and to design robust strategies that ensure reliable overall performance for several practical network architectures. Specifically, the inference (or learning) process can be detection, estimation, or classification, and the topology of the system can be parallel, hierarchical, or fully decentralized (peer to peer).

    The corrupted data model may seem similar to the scenario where local decisions are transmitted over a Binary Symmetric Channel (BSC) with a certain crossover probability; however, there are fundamental differences. Over the last three decades, the research community has extensively studied the impact of transmission channels or faults on distributed detection systems and related problems, owing to their importance in several applications. The corrupted (Byzantine) data models considered in this thesis are philosophically different from the BSC and faulty-sensor cases: Byzantines are intentional and intelligent, and can therefore optimize over the data-corruption parameters. Thus, in contrast to channel-aware detection, both the FC and the Byzantines can optimize their utility by choosing their actions based on knowledge of their opponent's behavior. The study of these practically motivated scenarios in the presence of Byzantines is of utmost importance and is missing from the channel-aware and fault-tolerant detection literature. This thesis advances the distributed inference literature by providing fundamental limits of distributed inference with Byzantine data and, using the insights provided by these limits, optimal counter-measures from a network designer's perspective. The analysis of problems involving strategic interaction between Byzantines and the network designer is very challenging (NP-hard in many cases). However, we show that efficient solutions can be obtained by exploiting the properties of the network architecture: several problems related to the design of optimal counter-measures in the inference context turn out to be special cases of these NP-hard problems that can be solved in polynomial time.

    First, we consider the problem of distributed Bayesian detection in the presence of data-falsification (Byzantine) attacks in the parallel topology. Byzantines here are nodes that have been compromised and reprogrammed by an adversary to transmit false information to a centralized fusion center (FC) in order to degrade detection performance. We show that above a certain fraction of Byzantine attackers in the network, the detection scheme becomes completely incapable (blind) of utilizing the sensor data for detection. When the fraction of Byzantines is not sufficient to blind the FC, we also provide closed-form expressions for the optimal attacking strategies that most degrade detection performance. Optimal attacking strategies in certain cases have the minimax property, so knowledge of these strategies has practical significance and can be used to implement a robust detector at the FC.

    In several practical situations, the parallel topology cannot be implemented because of limiting factors such as the FC being outside the communication range of the nodes or the limited energy budget of the nodes. In such scenarios, a multi-hop network is employed, where nodes are organized hierarchically into multiple levels (tree networks). We therefore next study the problem of distributed inference in tree topologies in the presence of Byzantines under several practical scenarios. We analytically characterize the effect of Byzantines on the inference performance of the system and examine possible counter-measures from the FC's perspective to protect the network. These counter-measures are of two kinds: Byzantine identification schemes and Byzantine-tolerant schemes. Learning-based Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. For scenarios where this is not possible, Byzantine-tolerant schemes based on game theory and error-correcting codes are developed that tolerate the effect of Byzantines while maintaining reasonably good inference performance.

    Going a step further, we also consider scenarios where a centralized FC is not available. Here a solution is to employ detection approaches based on fully distributed consensus algorithms, in which all nodes exchange information only with their neighbors. For such networks, we analytically characterize the negative effect of Byzantines on the steady-state and transient detection performance of conventional consensus-based detection schemes. To avoid performance deterioration, we propose a distributed weighted average consensus algorithm that is robust to Byzantine attacks. We then exploit the statistical distribution of the nodes' data to devise techniques for mitigating the influence of data-falsifying Byzantines on the distributed detection system. Since some parameters of this distribution might not be known a priori, we propose learning-based techniques that enable an adaptive design of the local fusion or update rules.

    The above considerations highlight the negative effect of corrupted data on inference performance. It is, however, possible for a system designer to utilize corrupted data for the network's benefit. Finally, we consider the problem of detecting a high-dimensional signal from compressed measurements with secrecy guarantees, in a scenario where the network operates in the presence of an eavesdropper who wants to discover the state of nature being monitored by the system. To keep the data secret from the eavesdropper, we propose to use cooperating trustworthy nodes that assist the FC by injecting corrupted data into the system to deceive the eavesdropper. We also design the system by determining the optimal parameter values that maximize detection performance at the FC while ensuring perfect secrecy against the eavesdropper.
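    As a rough illustration of the consensus-based setting, the Python sketch below uses a toy complete-graph topology, made-up local statistics, and a simple median-based down-weighting rule (which is not the thesis's actual weight design) to show how a weighted average consensus update can limit the influence of Byzantine nodes that keep injecting a large bias.

```python
# Hedged sketch: weighted average consensus where each node down-weights
# neighbors whose values deviate strongly from the local median. Topology,
# statistics, and the attack are toy assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
honest = rng.normal(1.0, 0.1, size=8)    # honest nodes' local test statistics
byz_value = 10.0                         # Byzantine nodes inject a large bias
x = np.concatenate([honest, [byz_value, byz_value]])
n = len(x)
neighbors = {i: [j for j in range(n) if j != i] for i in range(n)}  # complete graph

for _ in range(30):
    x_new = x.copy()
    for i in range(n):
        vals = x[neighbors[i]]
        # Robust weights: neighbors far from the local median get little say.
        dev = np.abs(vals - np.median(vals))
        w = 1.0 / (1.0 + dev**2)
        x_new[i] = (x[i] + w @ vals) / (1.0 + w.sum())
    x = x_new
    x[8:] = byz_value                    # Byzantines keep re-injecting their bias

print("honest nodes' consensus values:", np.round(x[:8], 3))
```

    With plain (unweighted) averaging the Byzantine bias would drag every node toward 10; the down-weighting keeps the honest nodes' values near their true mean of 1.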

    Routing Topology Recovery for Wireless Sensor Networks

    Liu, Rui. Ph.D., Purdue University, December 2014. Routing Topology Recovery for Wireless Sensor Networks. Major Professor: Yao Liang.

    Inference for Generalized Linear Models via Alternating Directions and Bethe Free Energy Minimization

    Generalized Linear Models (GLMs), where a random vector x is observed through a noisy, possibly nonlinear, function of a linear transform z = Ax, arise in a range of applications in nonlinear filtering and regression. Approximate Message Passing (AMP) methods, based on loopy belief propagation, are a promising class of approaches for approximate inference in these models. AMP methods are computationally simple, general, and admit precise analyses with testable conditions for optimality for large i.i.d. transforms A. However, the algorithms can easily diverge for general A. This paper presents a convergent approach to the generalized AMP (GAMP) algorithm based on direct minimization of a large-system-limit approximation of the Bethe Free Energy (LSL-BFE). The proposed method uses a double-loop procedure, where the outer loop successively linearizes the LSL-BFE and the inner loop minimizes the linearized LSL-BFE using the Alternating Direction Method of Multipliers (ADMM). The proposed method, called ADMM-GAMP, is similar in structure to the original GAMP method, but with an additional least-squares minimization. It is shown that for strictly convex, smooth penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the LSL-BFE, thus providing a convergent alternative to GAMP that is stable under arbitrary transforms. Simulations are also presented that demonstrate the robustness of the method for non-convex penalties as well.
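    The inner loop described above relies on the standard ADMM iteration. As a reference point, here is a minimal generic ADMM sketch on a toy LASSO problem, min 0.5||Ax - y||^2 + lam||z||_1 subject to x = z, not the linearized LSL-BFE objective from the paper; problem sizes and parameters are made up.

```python
# Generic scaled-form ADMM on a toy LASSO problem, illustrating the
# x-update / z-update / dual-update pattern the paper's inner loop builds on.
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(m)
lam, rho = 0.1, 1.0

x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # u: scaled dual variable
AtA_rhoI = A.T @ A + rho * np.eye(n)              # formed once, reused below
Aty = A.T @ y
for _ in range(200):
    x = np.linalg.solve(AtA_rhoI, Aty + rho * (z - u))               # quadratic step
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
    u = u + x - z                                                    # dual update
print("recovered support:", np.nonzero(np.abs(z) > 1e-3)[0])
```

    The x-update is exactly the extra least-squares minimization the abstract mentions; in ADMM-GAMP the quadratic and threshold steps are applied to the linearized LSL-BFE rather than a LASSO objective.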

    A Probabilistic-Based Approach to Monitoring Tool Wear State and Assessing Its Effect on Workpiece Quality in Nickel-Based Alloys

    The objective of this research is, first, to investigate the applicability and advantages of statistical state-estimation methods over deterministic methods for predicting tool wear in machining nickel-based superalloys, and second, to study the effects of cutting tool wear on the quality of the part. Nickel-based superalloys are among those classes of materials known as hard-to-machine alloys: they maintain their strength at high temperature and have high resistance to corrosion and creep. These characteristics make them ideal candidates for harsh environments such as the combustion chambers of gas turbines, but the same characteristics introduce difficulties when machining them. High strength and low thermal conductivity accelerate cutting tool wear and increase the possibility of in-process tool breakage. A blunt tool deteriorates surface integrity and damages the quality of the machined part by inducing high tensile residual stresses, generating micro-cracks, altering the microstructure, or leaving a poor roughness profile; in such cases the expensive superalloy part must be scrapped. The currently dominant industrial solution is to sacrifice productivity by replacing the tool early in its life, or to choose conservative cutting conditions that lower the wear rate and preserve workpiece quality. Monitoring the state of the cutting tool and estimating its effects on part quality is therefore a critical task for increasing productivity and profitability in machining superalloys.

    This work aims, first, to introduce a probabilistic framework for estimating tool wear in milling and turning of superalloys and, second, to study the detrimental effects of the functional state of the cutting tool, in terms of wear and wear rate, on part quality. In the milling operation, the mechanisms of tool failure were first identified and, owing to the rapid catastrophic failure of the tool, a Bayesian inference method (Markov Chain Monte Carlo, MCMC) was used to calibrate the parameters of a mechanistic power-based tool wear model. The calibrated model was then used in the state-space probabilistic framework of a Kalman filter to estimate tool flank wear. Furthermore, an on-machine laser measuring system was utilized and fused into the Kalman filter to improve estimation accuracy. The behavior of progressive wear was likewise investigated in the turning operation. Because of the nonlinear nature of wear in turning, an extended Kalman filter was designed for tracking progressive wear, and the results of the probabilistic method were compared with a deterministic technique, achieving a significant improvement (more than a 60% increase in estimation accuracy). To fulfill the second objective, understanding the underlying effects of wear on part quality in cutting nickel-based superalloys, a comprehensive study of surface roughness, dimensional integrity, and residual stress was conducted. The estimates derived from the probabilistic filter were used to find correlations between wear, surface roughness, and dimensional integrity, along with a finite element simulation for predicting the residual stress profile under sharp and worn cutting tool conditions.

    The output of this research provides essential information on condition monitoring of the tool and its effects on product quality. The low-cost Hall effect sensor used in this work to capture spindle power, in the context of the stochastic filter, can effectively estimate tool wear in both milling and turning operations, and the estimated wear can in turn be used to assess the state of workpiece surface integrity. The true functionality and efficiency of the tool in superalloy machining can therefore be evaluated without additional high-cost sensing.
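    To make the state-estimation idea concrete, here is a minimal hypothetical sketch of a scalar Kalman filter tracking flank wear from noisy wear observations. The wear-rate model, noise variances, and simulated measurements are illustrative assumptions, not the thesis's calibrated mechanistic power model.

```python
# Hedged sketch: scalar Kalman filter tracking tool flank wear, with an
# assumed constant nominal wear rate and synthetic noisy measurements.
import numpy as np

rng = np.random.default_rng(3)
dt, steps = 1.0, 60
wear_rate = 0.002                 # assumed mean wear growth per pass (mm)
q, r = 1e-8, 1e-4                 # process / measurement noise variances (assumed)

true_wear = np.cumsum(np.full(steps, wear_rate) + rng.normal(0, 1e-4, steps))
meas = true_wear + rng.normal(0, np.sqrt(r), steps)   # noisy wear observations

w_est, P = 0.0, 1e-2              # initial state estimate and covariance
for k in range(steps):
    # Predict: wear grows by the nominal rate each pass.
    w_pred = w_est + wear_rate * dt
    P_pred = P + q
    # Update: blend the prediction with the new measurement.
    K = P_pred / (P_pred + r)
    w_est = w_pred + K * (meas[k] - w_pred)
    P = (1 - K) * P_pred

print(f"final wear: true={true_wear[-1]:.4f} mm, estimated={w_est:.4f} mm")
```

    The turning case in the thesis is nonlinear, so the extended Kalman filter there replaces the constant-rate predict step with a linearization of the nonlinear wear model; the predict/update structure is otherwise the same.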