4 research outputs found

    Decentralized Sequential Composite Hypothesis Test Based on One-Bit Communication

    This paper considers the sequential composite hypothesis test with multiple sensors. The sensors observe random samples in parallel and communicate with a fusion center, which makes the global decision based on the sensor inputs. On one hand, in the centralized scenario, where local samples are precisely transmitted to the fusion center, the generalized sequential probability ratio test (GSPRT) is shown to be asymptotically optimal in terms of the expected sample size as the error rates tend to zero. On the other hand, for systems with limited power and bandwidth resources, decentralized solutions that send only a summary of the local samples (we focus in particular on a one-bit communication protocol) to the fusion center are of great importance. To this end, we first consider a decentralized scheme where sensors send their one-bit quantized statistics to the fusion center at fixed time intervals. We show that such a uniform sampling and quantization scheme is strictly suboptimal, and that its suboptimality can be quantified by the KL divergence of the distributions of the quantized statistics under the two hypotheses. We then propose a decentralized GSPRT based on level-triggered sampling: each sensor runs its own GSPRT repeatedly and reports its local decision to the fusion center asynchronously. We show that this scheme is asymptotically optimal as the local and global thresholds grow large at different rates. Lastly, two particular models and their associated applications are studied to compare the centralized and decentralized approaches. Numerical results demonstrate that the proposed level-triggered sampling based decentralized scheme performs close to the centralized scheme with substantially lower communication overhead, and significantly outperforms the uniform sampling and quantization based decentralized scheme. (Comment: 39 pages)
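
    As a rough illustration of the level-triggered reporting described above, the following Python sketch simulates several sensors, each running a local sequential test and sending a single bit to the fusion center whenever its statistic crosses a local threshold. The Gaussian mean-shift model, the number of sensors, and all thresholds are assumed for demonstration only; the paper's GSPRT addresses composite hypotheses, which this simple-hypothesis stand-in does not capture.

    # Minimal sketch of level-triggered one-bit reporting for a decentralized
    # sequential test. Signal model and thresholds are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    K = 4                 # number of sensors (assumed)
    local_thr = 2.0       # local threshold at each sensor (assumed)
    global_thr = 8.0      # global threshold at the fusion center (assumed)
    mu0, mu1 = 0.0, 1.0   # means under H0 and H1, unit variance

    def llr(x):
        # Log-likelihood ratio increment for one Gaussian sample.
        return (mu1 - mu0) * x - 0.5 * (mu1**2 - mu0**2)

    local_stat = np.zeros(K)   # running local statistics
    global_stat = 0.0          # fusion-center statistic
    t = 0
    true_mean = mu1            # simulate data under H1

    while abs(global_stat) < global_thr:
        t += 1
        samples = rng.normal(true_mean, 1.0, size=K)
        local_stat += llr(samples)
        for k in range(K):
            if abs(local_stat[k]) >= local_thr:
                bit = 1 if local_stat[k] > 0 else -1   # one-bit local decision
                global_stat += bit * local_thr         # fusion update
                local_stat[k] = 0.0                    # restart local test

    decision = "H1" if global_stat > 0 else "H0"
    print(f"decision={decision} after {t} time steps")
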

    Sequential Hypothesis Test with Online Usage-Constrained Sensor Selection

    This work investigates the sequential hypothesis testing problem with online sensor selection and sensor usage constraints. That is, in a sensor network, the fusion center sequentially acquires samples by selecting one "most informative" sensor at each time until a reliable decision can be made. In particular, the sensor selection is carried out in an online fashion, since at each time it depends on all the previous samples. Our goal is to develop the sequential test (i.e., stopping rule and decision function) and sensor selection strategy that minimize the expected sample size subject to constraints on the error probabilities and sensor usages. To this end, we first recast the usage-constrained formulation as a Bayesian optimal stopping problem with different sampling costs for the usage-constrained sensors. The Bayesian problem is then studied under both the finite- and infinite-horizon setups, based on which the optimal solution to the original usage-constrained problem can be readily established. Moreover, by capitalizing on the structure of the optimal solution, a lower bound is obtained for the optimal expected sample size. In addition, we propose algorithms to approximately evaluate the parameters of the optimal sequential test so that the sensor usage and error probability constraints are satisfied. Finally, numerical experiments are provided to illustrate the theoretical findings and to compare with existing methods. (Comment: 33 pages)
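
    The sketch below conveys the flavour of online sensor selection with usage costs: at each step a sensor is chosen by a greedy information-minus-cost score, the posterior is updated with its sample, and sampling stops once the posterior crosses a threshold. The Bernoulli sensor models, costs, and thresholds are hypothetical, and the greedy rule is only a stand-in for the Bayesian optimal stopping solution derived in the paper.

    # Illustrative sketch of sequential testing with online sensor selection.
    # All sensor models, costs, and thresholds below are assumed.
    import numpy as np

    rng = np.random.default_rng(1)
    p0 = np.array([0.2, 0.4, 0.5])       # P(x=1) under H0 for each sensor
    p1 = np.array([0.7, 0.6, 0.5])       # P(x=1) under H1 for each sensor
    cost = np.array([0.05, 0.01, 0.0])   # per-use sampling cost (assumed)

    def kl_bern(a, b):
        # KL divergence between Bernoulli(a) and Bernoulli(b), elementwise.
        return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

    pi = 0.5                 # posterior probability of H1
    A, B = 0.99, 0.01        # stopping thresholds on the posterior (assumed)
    usage = np.zeros(3, dtype=int)
    truth = 1                # simulate data under H1

    while B < pi < A:
        # Greedy selection: expected discrimination minus usage cost.
        score = pi * kl_bern(p1, p0) + (1 - pi) * kl_bern(p0, p1) - cost
        k = int(np.argmax(score))
        usage[k] += 1
        x = rng.random() < (p1[k] if truth else p0[k])
        # Bayes update of the posterior with the new sample.
        l1 = p1[k] if x else 1 - p1[k]
        l0 = p0[k] if x else 1 - p0[k]
        pi = pi * l1 / (pi * l1 + (1 - pi) * l0)

    print("decision:", "H1" if pi >= A else "H0", "| sensor usage:", usage)
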

    Order-2 Asymptotic Optimality of the Fully Distributed Sequential Hypothesis Test

    This work analyzes the asymptotic performance of fully distributed sequential hypothesis testing procedures as the type-I and type-II error rates approach zero, in the context of a sensor network without a fusion center. In particular, the sensor network is defined by an undirected graph, where each sensor can observe samples over time, access information from the adjacent sensors, and perform the sequential test based on its own decision statistic. Unlike most of the literature, the sampling process and the information exchange process in our framework take place simultaneously (or at least on comparable time scales) and thus cannot be decoupled from one another. Two message-passing schemes are considered, based on which the distributed sequential probability ratio test (DSPRT) is carried out. The first scheme features the dissemination of the raw samples. Although the sample propagation based DSPRT is shown to yield the asymptotically optimal performance at each sensor, it incurs excessive inter-sensor communication overhead due to the exchange of raw samples with index information. The second scheme adopts the consensus algorithm, where the local decision statistics are exchanged between sensors instead of the raw samples, thus significantly lowering the communication requirement compared to the first scheme. In particular, the decision statistic for the DSPRT at each sensor is updated by the weighted average of the decision statistics in its neighbourhood at every message-passing step. We show that, under certain regularity conditions, the consensus algorithm based DSPRT also yields the order-2 asymptotically optimal performance at all sensors. (Comment: 36 pages)
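
    A minimal sketch of the consensus-style update on a ring network follows: at every message-passing step each sensor mixes its neighbours' decision statistics through a doubly stochastic weight matrix and then adds its new local log-likelihood ratio increment. The graph, weights, signal model, and threshold are assumed for illustration and do not reproduce the paper's exact update or its regularity conditions.

    # Minimal sketch of a consensus-based DSPRT on a ring network.
    # Weight matrix, Gaussian model, and threshold are hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)
    K = 5
    # Doubly stochastic weights for a ring: self 1/2, each neighbour 1/4.
    W = np.zeros((K, K))
    for k in range(K):
        W[k, k] = 0.5
        W[k, (k - 1) % K] = 0.25
        W[k, (k + 1) % K] = 0.25

    mu0, mu1, thr = 0.0, 0.5, 20.0       # assumed model and threshold
    stats = np.zeros(K)                  # decision statistic at each sensor
    t, decided = 0, np.full(K, False)

    while not decided.all():
        t += 1
        x = rng.normal(mu1, 1.0, size=K)                 # samples under H1
        inc = (mu1 - mu0) * x - 0.5 * (mu1**2 - mu0**2)  # local LLR increments
        # One message-passing step: mix neighbours' statistics, add new data.
        stats = W @ stats + inc
        decided = np.abs(stats) >= thr

    print(f"all sensors decided by t={t}; statistics: {np.round(stats, 2)}")
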

    Asymptotically Optimal Stochastic Encryption for Quantized Sequential Detection in the Presence of Eavesdroppers

    We consider sequential detection based on quantized data in the presence of an eavesdropper. Stochastic encryption is employed as a countermeasure that flips the quantization bits at each sensor according to certain probabilities, and the flipping probabilities are known only to the legitimate fusion center (LFC), not to the eavesdropping fusion center (EFC). As a result, the LFC employs the optimal sequential probability ratio test (SPRT) for sequential detection, whereas the EFC employs a mismatched SPRT (MSPRT). We characterize the asymptotic performance of the MSPRT in terms of the expected sample size as a function of the vanishing error probabilities. We show that when the detection error probabilities are set to be the same at the LFC and EFC, every symmetric stochastic encryption is ineffective, in the sense that it leads to the same expected sample size at the LFC and EFC. Next, in the asymptotic regime of small detection error probabilities, we show that every stochastic encryption degrades the performance of the quantized sequential detection at the LFC by increasing the expected sample size, and that the expected sample size required at the EFC is no smaller than that required at the LFC. We then investigate the optimal stochastic encryption, in the sense of maximizing the difference between the expected sample sizes required at the EFC and the LFC. Although this optimization problem is nonconvex, we show that if the acceptable tolerance of the increase in the expected sample size at the LFC induced by the stochastic encryption is small enough, then the globally optimal stochastic encryption can be obtained analytically; moreover, the optimal scheme flips only one type of quantized bit (i.e., 1 or 0) and keeps the other type unchanged.
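
    The following sketch illustrates the mismatch created by stochastic encryption: encrypted bits are generated under assumed quantizer outputs and flipping probabilities, the LFC runs an SPRT with the correct post-encryption bit distributions, and the EFC runs a mismatched SPRT that ignores the flipping. All probabilities and thresholds are hypothetical; the paper's optimal flipping design is not implemented here.

    # Hypothetical sketch of stochastic encryption for quantized sequential
    # detection. Quantizer outputs and flipping probabilities are assumed.
    import numpy as np

    rng = np.random.default_rng(3)
    q0, q1 = 0.3, 0.7                # P(bit=1) under H0 / H1 before encryption
    p_flip1, p_flip0 = 0.2, 0.0      # flip a "1" w.p. 0.2, never flip a "0"

    def flipped(p):
        # Distribution of the transmitted bit after stochastic encryption.
        return p * (1 - p_flip1) + (1 - p) * p_flip0

    f0, f1 = flipped(q0), flipped(q1)
    A = 5.0                          # SPRT thresholds +/- A (assumed)

    def run_sprt(b1, b0):
        # SPRT using assumed bit distributions b1 (H1) and b0 (H0);
        # data are always generated from the true encrypted distribution f1.
        s, n = 0.0, 0
        while abs(s) < A:
            n += 1
            bit = rng.random() < f1
            s += np.log((b1 if bit else 1 - b1) / (b0 if bit else 1 - b0))
        return n

    # LFC knows the flipping probabilities; the EFC mismatches by ignoring them.
    n_lfc = np.mean([run_sprt(f1, f0) for _ in range(200)])
    n_efc = np.mean([run_sprt(q1, q0) for _ in range(200)])
    print(f"average sample size: LFC {n_lfc:.1f}, EFC {n_efc:.1f}")
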