    Changepoint Detection over Graphs with the Spectral Scan Statistic

    We consider the change-point detection problem of deciding, based on noisy measurements, whether an unknown signal over a given graph is constant or is instead piecewise constant over two connected induced subgraphs of relatively low cut size. We analyze the corresponding generalized likelihood ratio (GLR) statistic and relate it to the problem of finding a sparsest cut in a graph. We develop a tractable relaxation of the GLR statistic based on the combinatorial Laplacian of the graph, which we call the spectral scan statistic, and analyze its properties. We show how its performance as a testing procedure depends directly on the spectrum of the graph, and use this result to explicitly derive its asymptotic properties on a few significant graph topologies. Finally, we demonstrate both theoretically and by simulations that the spectral scan statistic can outperform naive testing procedures based on edge thresholding and $\chi^2$ testing.
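    To make the flavor of the relaxation concrete, the sketch below computes a spectral-scan-style statistic on a toy two-cluster graph. The exact form of the feasible set and the one-parameter family used to search it are assumptions for illustration, not the paper's construction.

```python
# A minimal sketch, assuming the statistic takes the form
#   s_hat = max (x^T y)^2  over unit vectors x with x ⟂ 1 and x^T L x <= rho,
# where L is the combinatorial Laplacian. We sweep a heuristic one-parameter
# family x(t) ∝ (I + t L)^{-1} y_c, which trades fit against smoothness.
import numpy as np
import networkx as nx

def spectral_scan_statistic(y, L, rho):
    n = len(y)
    y_c = y - y.mean()                     # centering enforces x ⟂ 1
    best = 0.0
    for t in np.logspace(-3, 3, 200):      # sweep the smoothing parameter
        x = np.linalg.solve(np.eye(n) + t * L, y_c)
        x /= np.linalg.norm(x)
        if x @ L @ x <= rho:               # feasibility: low "cut energy"
            best = max(best, (x @ y_c) ** 2)
    return best

# Toy example: two dense clusters joined by one edge, piecewise-constant signal.
G = nx.barbell_graph(10, 0)
L = nx.laplacian_matrix(G).toarray().astype(float)
signal = np.array([1.0] * 10 + [-1.0] * 10)
y = signal + 0.5 * np.random.default_rng(0).standard_normal(20)
print(spectral_scan_statistic(y, L, rho=1.0))
```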

    Bayesian Test Design for Fault Detection and Isolation in Systems with Uncertainty

    Methods for Fault Detection and Isolation (FDI) in systems with uncertainty have been studied extensively due to the increasing value and complexity of the maintenance and operation of modern Cyber-Physical Systems (CPS). CPS are characterized by nonlinearity, environmental and system uncertainty, fault complexity, and highly nonlinear fault propagation, which require advanced fault detection and isolation algorithms. Therefore, modern efforts develop active FDI methods (which require system reconfiguration) based on information theory to design tests rich in information for fault assessment. Information-based criteria for test design are often deployed as a Frequentist Optimal Experimental Design (FOED) problem, which utilizes the information matrix of the system. D- and Ds-optimality criteria for the information matrix have been used extensively in the literature since they usually yield more robust test designs that are less susceptible to uncertainty. However, FOED methods provide only locally informative tests, as they find optimal solutions around a neighborhood of an anticipated set of values for system uncertainty and fault severity.

    Bayesian Optimal Experimental Design (BOED), on the other hand, overcomes the issue of local optimality by exploring the entire parameter space of a system, and can thus provide robust test designs for active FDI. The literature on BOED for FDI is limited and mostly examines the case of normally distributed parameter priors. In some cases, such as in newly installed systems where existing knowledge about the parameters is limited, a more general inference can be derived by using uniform distributions as parameter priors. In BOED, an optimal design is found by maximizing an expected utility based on observed data. There is a plethora of utility functions, but the choice of utility function impacts the robustness of the solution and the computational cost of BOED. For instance, BOED based on the Fisher information matrix leads to an alphabetical criterion such as D- or Ds-optimality for the objective function, but this increases the computational cost of optimization, since these criteria involve sensitivity analysis with the system model. On the other hand, when an observation-based method such as the Kullback-Leibler divergence from posterior to prior is used to make an inference on parameters, the expected utility calculations involve nested Monte Carlo calculations which, in turn, affect computation time. The challenge in these approaches is to find an adequate but relatively low Monte Carlo sampling rate without introducing a significant bias in the result.

    Theory shows that for normally distributed parameter priors, the Kullback-Leibler divergence expected utility reduces to Bayesian D-optimality; similarly, Bayesian Ds-optimality can be used when the parameter priors are normally distributed. In this thesis, we verify this theory on a three-tank system, using normally and uniformly distributed parameter priors to compare the Bayesian D-optimal design criterion and the Kullback-Leibler divergence expected utility. Nevertheless, there is no observation-based metric similar to Bayesian Ds-optimality when the parameter priors are not normally distributed. The main objective of this thesis is therefore to derive an observation-based utility function similar to Ds-optimality that can be used even when the requirement for normally distributed priors is not met.
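    As a concrete illustration of the nested Monte Carlo issue raised above, the sketch below estimates the Kullback-Leibler expected utility (expected information gain) for a candidate design under a uniform prior. The scalar measurement model g, the noise level, and the sample sizes are hypothetical stand-ins, not the thesis setup.

```python
# A minimal sketch of the standard nested Monte Carlo estimator of
# U(d) = E_y[ KL(posterior || prior) ]: the outer loop samples (theta, y)
# pairs, the inner loop re-samples the prior to estimate the evidence p(y|d).
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 0.1  # assumed measurement noise

def g(theta, d):
    """Hypothetical measurement model linking parameter theta and design d."""
    return d * np.sin(theta) + theta ** 2

def likelihood(y, theta, d):
    return np.exp(-0.5 * ((y - g(theta, d)) / SIGMA) ** 2) / (SIGMA * np.sqrt(2 * np.pi))

def expected_information_gain(d, N=500, M=500, lo=0.0, hi=1.0):
    """Too small an inner sample size M biases the estimate upward."""
    theta_out = rng.uniform(lo, hi, N)                 # outer prior draws
    y = g(theta_out, d) + SIGMA * rng.standard_normal(N)
    log_lik = np.log(likelihood(y, theta_out, d))
    theta_in = rng.uniform(lo, hi, M)                  # inner prior draws
    evidence = likelihood(y[:, None], theta_in[None, :], d).mean(axis=1)
    return np.mean(log_lik - np.log(evidence))

for d in (0.5, 2.0):                                   # compare two test designs
    print(d, expected_information_gain(d))
```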
    We begin our presentation with a formal comparison of FOED and BOED for different objective metrics, focusing on the impact different utility functions have on the optimal design and on their computation time. The value of BOED is illustrated using a variation of the benchmark three-tank system as a case study, for which we also present the deterministic variance of the optimal design under different utility functions. The performance of the various BOED utility functions and the corresponding FOED optimal designs are compared in terms of the Hellinger distance, a distribution metric bounded between 0 and 1, where 0 indicates a complete overlap of the distributions and 1 indicates the absence of common points between them. Analysis of the Hellinger distances calculated for the benchmark system shows that BOED designs can better separate the distributions of system measurements and, consequently, can classify the fault scenarios and the no-fault case with less uncertainty. When a uniform distribution is used as a parameter prior, the observation-based utility functions give better designs than FOED and Bayesian D-optimality, which use the Fisher information matrix. The observation-based method similar to Ds-optimality finds a better design than the observation-based method similar to D-optimality, but it is computationally more expensive. The computational cost can be lowered by reducing the Monte Carlo sampling, but if the sampling rate is reduced too far, an uneven solution surface is created, affecting the FDI test design and assessment. Based on the results of this analysis, future research should focus on decreasing the computational cost without affecting the robustness of the test design.
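    For reference, a minimal sketch of the Hellinger distance between two sampled measurement distributions, computed from common-bin histograms; the fault and no-fault distributions below are synthetic stand-ins, not the case-study data.

```python
# Hellinger distance H(P, Q) = sqrt(1 - Bhattacharyya coefficient), bounded in
# [0, 1]: 0 means full overlap, 1 means disjoint support.
import numpy as np

def hellinger(samples_p, samples_q, bins=100):
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    p, _ = np.histogram(samples_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_q, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

rng = np.random.default_rng(2)
no_fault = rng.normal(0.0, 1.0, 10_000)   # synthetic no-fault measurements
fault = rng.normal(2.5, 1.2, 10_000)      # synthetic fault-scenario measurements
print(hellinger(no_fault, fault))         # near 1 => scenarios easy to separate
```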

    Information and display requirements for independent landing monitors

    The ways an Independent Landing Monitor (ILM) may be used to complement the automatic landing function were studied. In particular, a systematic procedure was devised to establish the information and display requirements of an ILM during the landing phase of flight. Functionally, the ILM system is designed to aid the crew in assessing whether the performance of the total system (e.g., avionics, aircraft, ground navigation aids, external disturbances) is acceptable and, in case of an anomaly, to provide adequate information for the crew to select the least unsafe of the available alternatives. Economically, this concept raises the possibility of reducing the primary autoland system's redundancy and the associated equipment and maintenance costs; the required level of safety for the overall system would in these cases be maintained by upgrading the backup manual system's capability via the ILM. A safety budget analysis was used to establish the reliability requirements for the ILM, and these requirements were used as constraints in devising the fault detection scheme. Covariance propagation methods were used with a linearized system model to establish the time required to manually correct states perturbed by the fault. Time-to-detect and time-to-correct requirements were combined to devise appropriate altitudes and strategies for fault recovery.
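    The covariance-propagation step admits a compact sketch: push the state covariance through a linearized discrete-time model and record when the uncertainty of the monitored state re-enters an acceptable bound. The dynamics, noise levels, and bound below are hypothetical, not the study's model.

```python
# A minimal sketch, assuming a 2-state linearized closed-loop model under
# manual control: propagate P_{k+1} = A P_k A^T + Q and report the step at
# which the 1-sigma deviation of state 0 falls back inside its safe bound.
import numpy as np

A = np.array([[1.0, 0.1],       # hypothetical linearized dynamics (stable)
              [-0.05, 0.95]])
Q = 1e-4 * np.eye(2)            # process noise from external disturbances

P = np.diag([1.0, 0.25])        # covariance right after the fault perturbs the state
bound = 0.3                     # acceptable 1-sigma deviation of state 0

for k in range(1, 200):
    P = A @ P @ A.T + Q         # discrete-time covariance propagation
    if np.sqrt(P[0, 0]) < bound:
        print(f"time-to-correct ≈ {k} steps")
        break
```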

    Parameter-Invariant Monitor Design for Cyber Physical Systems

    The tight interaction between information technology and the physical world inherent in Cyber-Physical Systems (CPS) can challenge traditional approaches for monitoring safety and security. Data collected for robust CPS monitoring is often sparse and may lack rich training data describing critical events/attacks. Moreover, CPS often operate in diverse environments that can have significant inter/intra-system variability. CPS monitors that are not robust to data sparsity and inter/intra-system variability may deliver inconsistent performance and may not be trusted for monitoring safety and security. Towards overcoming these challenges, this paper presents recent work on the design of parameter-invariant (PAIN) monitors for CPS. PAIN monitors are designed such that unknown events and system variability minimally affect monitor performance. This work describes how PAIN designs can achieve a constant false alarm rate (CFAR) in the presence of data sparsity and intra/inter-system variance in real-world CPS. To demonstrate the design of PAIN monitors for safety monitoring in CPS with different types of dynamics, we consider systems with networked dynamics, linear time-invariant dynamics, and hybrid dynamics, discussed through case studies on building actuator fault detection, meal detection in Type 1 diabetes, and detecting hypoxia caused by pulmonary shunts in infants. In all applications, the PAIN monitor is shown to have (significantly) less variance in monitoring performance and (often) outperforms other competing approaches in the literature. Finally, an initial application of PAIN monitoring for CPS security is presented, along with challenges and research directions for future security monitoring deployments.
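    The core invariance idea can be illustrated in a few lines: a statistic that is invariant to a nuisance parameter has the same null distribution whatever that parameter is, which is exactly the CFAR property. The textbook t-statistic below, invariant to unknown noise scale, illustrates the principle only; the actual PAIN designs handle far richer dynamics and nuisance structure.

```python
# A minimal sketch: the one-sample t statistic is unchanged when the data are
# rescaled, so its false alarm rate is constant regardless of the (unknown)
# noise variance — a simple instance of a parameter-invariant CFAR monitor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def invariant_statistic(window):
    """Scale-invariant: replacing window by c*window leaves the value unchanged."""
    return np.sqrt(len(window)) * window.mean() / window.std(ddof=1)

n, trials, alpha = 25, 20_000, 0.05
threshold = stats.t.ppf(1 - alpha, df=n - 1)

for sigma in (0.1, 1.0, 10.0):        # unknown nuisance: noise scale
    s = np.array([invariant_statistic(sigma * rng.standard_normal(n))
                  for _ in range(trials)])
    print(sigma, np.mean(s > threshold))   # ≈ alpha for every sigma
```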

    Quickest Change Detection in the Presence of a Nuisance Change

    In the quickest change detection problem in which both nuisance and critical changes may occur, the objective is to detect the critical change as quickly as possible without raising an alarm when either there is no change or only a nuisance change has occurred. A window-limited sequential change detection procedure based on the generalized likelihood ratio (GLR) test statistic is proposed. A recursive update scheme for the proposed test statistic is developed, and the procedure is shown to be asymptotically optimal under mild technical conditions. In the scenario where the post-change distribution belongs to a parametrized family, a generalized stopping time and a lower bound on its average run length are derived. The proposed stopping rule is compared with the FMA stopping time and a naive two-stage procedure that detects the nuisance and critical changes using separate CuSum stopping procedures. Simulations demonstrate that the proposed rule outperforms both alternatives, and experiments on a real dataset on bearing failure verify the performance of the proposed stopping time.
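    For orientation, the classical CuSum rule used inside the baseline two-stage procedure looks as follows for a known Gaussian mean shift; the window-limited GLR statistic proposed in the paper generalizes this to unknown post-change parameters and is not reproduced here.

```python
# A minimal sketch of the CuSum recursion W_k = max(0, W_{k-1} + llr_k) with
# stopping when W_k crosses a threshold b, for N(mu0, 1) -> N(mu1, 1).
import numpy as np

def cusum_stopping_time(x, mu0, mu1, b):
    """Return the first index k (1-based) at which CuSum crosses b, else None."""
    w = 0.0
    for k, xk in enumerate(x, start=1):
        # log-likelihood ratio of one observation, N(mu1,1) vs N(mu0,1)
        llr = (mu1 - mu0) * (xk - (mu0 + mu1) / 2)
        w = max(0.0, w + llr)
        if w >= b:
            return k
    return None

rng = np.random.default_rng(4)
pre = rng.normal(0.0, 1.0, 200)        # no change
post = rng.normal(1.0, 1.0, 200)       # critical change occurs at time 201
print(cusum_stopping_time(np.concatenate([pre, post]), mu0=0.0, mu1=1.0, b=5.0))
```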

    Non-Stationary Process Monitoring for Change-Point Detection With Known Accuracy: Application to Wheels Coating Inspection

    This paper addresses the problem of monitoring a non-stationary process online to detect abrupt changes in the process mean value. Two main challenges are addressed. First, the monitored process is non-stationary; i.e., it naturally changes over time, and it is necessary to distinguish those “regular” process changes from abrupt changes resulting from potential failures. Second, the method is intended for industrial processes where the performance of the detection procedure must be accurately controlled. A novel sequential method, based on two fixed-length windows, is proposed to detect abrupt changes with guaranteed accuracy while dealing with a non-stationary process. The first window is used for estimating the non-stationary process parameters, whereas the second window is used to execute the detection. A study of the performance of the proposed method provides analytical expressions for the statistical properties of the test. This makes it possible to bound the false alarm probability for a given number of observations while maximizing the detection power for a given detection delay. The proposed method is then applied to wheel-coating monitoring using an imaging system. Numerical results on a large set of wheel images show the efficiency of the proposed approach and the sharpness of the theoretical study.
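    A toy version of the two-window idea is sketched below: the first window fits the slowly varying mean (here, a hypothetical local linear trend), and the second window tests whether new observations deviate from the extrapolated trend, with a Gaussian-quantile threshold bounding the per-test false alarm probability. The trend model, noise assumptions, and window sizes are illustrative, not the paper's construction.

```python
# A minimal sketch: window 1 (m1 samples) estimates the non-stationary mean by
# a local linear fit; window 2 (m2 samples) tests the summed residuals against
# a threshold chosen so the false alarm probability is at most alpha,
# assuming Gaussian noise with known sigma (extrapolation error ignored).
import numpy as np
from scipy import stats

def two_window_test(x, m1, m2, sigma, alpha=1e-3):
    """Return True if an abrupt mean change is declared in the last m2 samples."""
    est, test = x[-(m1 + m2):-m2], x[-m2:]
    coef = np.polyfit(np.arange(m1), est, deg=1)       # estimate local trend
    pred = np.polyval(coef, np.arange(m1, m1 + m2))    # extrapolate into window 2
    z = (test - pred).sum() / (sigma * np.sqrt(m2))
    return abs(z) > stats.norm.ppf(1 - alpha / 2)

rng = np.random.default_rng(5)
t = np.arange(300)
x = 0.01 * t + 0.5 * rng.standard_normal(300)   # drifting "regular" process
print(two_window_test(x, m1=60, m2=20, sigma=0.5))   # False: drift only
x[-20:] += 1.5                                   # inject an abrupt mean shift
print(two_window_test(x, m1=60, m2=20, sigma=0.5))   # True: change declared
```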