
    A Bayesian approach to adaptive detection in nonhomogeneous environments

    We consider the adaptive detection of a signal of interest embedded in colored noise when the environment is nonhomogeneous, i.e., when the training samples used for adaptation do not share the same covariance matrix as the vector under test. A Bayesian framework is proposed in which the covariance matrices of the primary and the secondary data are assumed to be random, with an appropriate joint distribution. The prior distributions of these matrices require only rough knowledge about the environment. This provides a flexible, yet simple, knowledge-aided model in which the degree of nonhomogeneity can be tuned through a few scalar variables. Within this framework, an approximate generalized likelihood ratio test is formulated. Accordingly, two Bayesian versions of the adaptive matched filter are presented, where the conventional maximum likelihood estimate of the primary data covariance matrix is replaced either by its minimum mean-square error estimate or by its maximum a posteriori estimate. Both detectors require generating samples distributed according to the joint posterior distribution of the primary and secondary data covariance matrices, which is achieved through a Gibbs sampling strategy. Numerical simulations illustrate the performance of these detectors and compare it with that of the conventional adaptive matched filter.
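
    A minimal, self-contained sketch of the general idea (a Gibbs-style sampler producing an MMSE covariance estimate that is then plugged into the adaptive matched filter statistic) is given below. The priors, conditional distributions, and hyperparameters, such as the degrees-of-freedom knob `nu`, are illustrative assumptions, not the model derived in the paper.

```python
# Hedged sketch: Gibbs-style sampling of primary/secondary covariances, followed by
# an AMF test using the posterior-mean (MMSE-style) primary covariance estimate.
# All priors and conditionals below are placeholders, not the paper's exact model.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
N = 8          # snapshot dimension
K = 32         # number of secondary (training) snapshots
n_iter = 500   # Gibbs iterations

# Synthetic data with an AR(1)-like covariance (illustrative only)
true_cov = np.array([[0.9 ** abs(i - j) for j in range(N)] for i in range(N)])
Y = rng.multivariate_normal(np.zeros(N), true_cov, size=K).T   # N x K secondary data
x = rng.multivariate_normal(np.zeros(N), true_cov)             # vector under test
s = np.ones(N) / np.sqrt(N)                                    # assumed steering vector

S = Y @ Y.T        # secondary sample scatter matrix
nu = N + 4         # hypothetical prior degrees of freedom ("nonhomogeneity knob")
samples = []
Mp = np.eye(N)     # primary covariance, initial value
for it in range(n_iter):
    # Draw the secondary covariance given the training data and current primary covariance
    Ms = invwishart.rvs(df=K + nu, scale=S + nu * Mp)
    # Draw the primary covariance around the secondary one (placeholder conditional)
    Mp = invwishart.rvs(df=nu + N + 1, scale=nu * Ms)
    samples.append(Mp)

Mp_mmse = np.mean(samples[100:], axis=0)                   # MMSE-style estimate: posterior mean
R_inv = np.linalg.inv(Mp_mmse)
amf_stat = np.abs(s @ R_inv @ x) ** 2 / (s @ R_inv @ s)    # AMF-type test statistic
print("AMF statistic with MMSE covariance estimate:", amf_stat)
```

    A MAP-style variant would keep the sample (or a mode estimate) that maximizes the approximate posterior instead of averaging the draws.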

    Tuning Windowed Chi-Squared Detectors for Sensor Attacks

    A model-based windowed chi-squared procedure is proposed for identifying falsified sensor measurements. We employ the widely used static chi-squared and the dynamic cumulative sum (CUSUM) fault/attack detection procedures as benchmarks against which to compare the performance of the windowed chi-squared detector. In particular, we characterize the state degradation that a class of attacks can induce in the system while enforcing that the detectors do not raise alarms (zero-alarm attacks). We quantify the advantage of dynamic detectors (the windowed chi-squared and CUSUM detectors), which leverage the history of the state, over a static detector (chi-squared), which uses a single measurement at a time. Simulations using a chemical reactor are presented to illustrate the performance of our tools.
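
    A generic windowed chi-squared residual detector can be sketched as follows. The window length, residual covariance, and threshold rule are illustrative defaults rather than the tuning derived in the paper, and `windowed_chi2_alarms` is a hypothetical helper name.

```python
# Hedged sketch: sliding-window chi-squared test on measurement residuals.
import numpy as np
from scipy.stats import chi2

def windowed_chi2_alarms(residuals, sigma_inv, window=10, false_alarm_rate=0.01):
    """residuals: (T, m) array of residuals r_k = y_k - C x_hat_k; sigma_inv: (m, m)."""
    T, m = residuals.shape
    # Per-sample quadratic form r_k^T Sigma^{-1} r_k
    z = np.einsum('ti,ij,tj->t', residuals, sigma_inv, residuals)
    # Sum over a sliding window of the last `window` samples
    stat = np.convolve(z, np.ones(window), mode='valid')
    # Threshold from the chi-squared distribution with window * m degrees of freedom
    threshold = chi2.ppf(1.0 - false_alarm_rate, df=window * m)
    return stat > threshold, stat, threshold

# Toy usage with white Gaussian residuals (no attack injected)
rng = np.random.default_rng(1)
r = rng.standard_normal((500, 2))
alarms, stat, thr = windowed_chi2_alarms(r, np.eye(2))
print("empirical alarm rate:", alarms.mean())
```

    A CUSUM detector would instead accumulate the per-sample statistic recursively with a drift term, which is what gives dynamic detectors their memory of the state history.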

    Detection of multiplicative noise in stationary random processes using second- and higher-order statistics

    This paper addresses the problem of detecting the presence of colored multiplicative noise when the information process can be modeled as a parametric ARMA process. For the case of zero-mean multiplicative noise, a cumulant-based suboptimal detector is studied. This detector tests the nullity of a specific cumulant slice. A second detector is developed for the case of nonzero-mean multiplicative noise. It consists of filtering the data with an estimated AR filter; cumulants of the residual data are then shown to be well suited to the detection problem. Theoretical expressions for the asymptotic probability of detection are given, and simulation-derived finite-sample ROC curves are shown for different sets of model parameters.
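
    As a rough illustration of a cumulant-nullity test, the sketch below estimates the zero-lag fourth-order cumulant (a single slice point) and compares it against a CLT-based threshold. The paper's detectors operate on a specific cumulant slice of an ARMA information process and use AR prewhitening in the nonzero-mean case; neither refinement is reproduced here, and the variance approximation assumes Gaussian data under the null.

```python
# Hedged sketch: test whether a fourth-order cumulant estimate is significantly nonzero.
import numpy as np
from scipy.stats import norm

def cumulant_slice_detector(x, false_alarm_rate=0.01):
    x = x - x.mean()
    n = len(x)
    m2 = np.mean(x ** 2)
    c4 = np.mean(x ** 4) - 3.0 * m2 ** 2           # zero-lag 4th-order cumulant estimate
    # Approximate variance of the estimate under a Gaussian null (illustrative)
    var_c4 = 24.0 * m2 ** 4 / n
    threshold = norm.ppf(1.0 - false_alarm_rate / 2.0) * np.sqrt(var_c4)
    return abs(c4) > threshold, c4, threshold

# Toy usage: Gaussian data with and without a multiplicative perturbation
rng = np.random.default_rng(2)
clean = rng.standard_normal(4096)
noisy = clean * (1.0 + 0.5 * rng.standard_normal(4096))
for name, data in [("clean", clean), ("multiplicative", noisy)]:
    fired, c4, thr = cumulant_slice_detector(data)
    print(name, "alarm:", fired, "c4:", round(c4, 3), "threshold:", round(thr, 3))
```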

    SourcererCC: Scaling Code Clone Detection to Big Code

    Despite a decade of active research, there is a marked lack of clone detectors that scale to very large repositories of source code, in particular for detecting near-miss clones where significant editing activity may have taken place in the cloned code. We present SourcererCC, a token-based clone detector that targets three clone types and exploits an index to achieve scalability to large inter-project repositories using a standard workstation. SourcererCC uses an optimized inverted index to quickly query the potential clones of a given code block. Filtering heuristics based on token ordering are used to significantly reduce the size of the index, the number of code-block comparisons needed to detect clones, and the number of token comparisons needed to judge a potential clone. We evaluate the scalability, execution time, recall, and precision of SourcererCC and compare it to four publicly available, state-of-the-art tools. To measure recall, we use two recent benchmarks: (1) a large benchmark of real clones, BigCloneBench, and (2) a mutation/injection-based framework producing thousands of fine-grained artificial clones. We find that SourcererCC has both high recall and precision, and is able to scale to a large inter-project repository (250 MLOC) using a standard workstation.
    Comment: Accepted for publication at ICSE'16 (preprint, unrevised).
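
    The sketch below illustrates the general flavor of inverted-index candidate generation with prefix filtering for an overlap-similarity threshold. The tokenizer, the global token ordering, and the threshold `theta` are simplified placeholders, not SourcererCC's actual heuristics.

```python
# Hedged sketch: token-bag clone candidates via an inverted index with prefix filtering.
from collections import defaultdict

def token_bag(code):
    """Very rough tokenizer: multiset of whitespace-separated tokens (dict token -> count)."""
    counts = defaultdict(int)
    for tok in code.split():
        counts[tok] += 1
    return counts

def find_clone_pairs(blocks, theta=0.7):
    """blocks: dict id -> source string. Returns pairs with overlap similarity >= theta."""
    bags = {bid: token_bag(src) for bid, src in blocks.items()}
    index = defaultdict(set)                      # inverted index: token -> block ids
    pairs = []
    for bid, bag in bags.items():
        size = sum(bag.values())
        # Prefix filtering: only index/query the first tokens of a global ordering;
        # a candidate must share a prefix token to possibly reach threshold theta.
        ordered = sorted(bag)                     # placeholder global token order
        prefix_len = int(size - theta * size) + 1
        candidates = set()
        for tok in ordered[:prefix_len]:
            candidates |= index[tok]
            index[tok].add(bid)
        for cid in candidates:
            other = bags[cid]
            overlap = sum(min(bag[t], other[t]) for t in bag if t in other)
            if overlap >= theta * max(size, sum(other.values())):
                pairs.append((cid, bid))
    return pairs

blocks = {
    "a": "int sum = 0 ; for ( int i = 0 ; i < n ; i ++ ) sum += a [ i ] ;",
    "b": "int total = 0 ; for ( int i = 0 ; i < n ; i ++ ) total += a [ i ] ;",
}
print(find_clone_pairs(blocks))
```

    The prefix filter is what keeps the index small: a block only needs to be indexed and queried under a fraction of its tokens, since any pair meeting the overlap threshold must share at least one token from those prefixes.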