
    VERIFICATION AND APPLICATION OF DETECTABILITY BASED ON PETRI NETS

    In many real-world systems, due to limitations of sensors or constraints of the environment, the system dynamics are usually not perfectly known. However, state information is often crucial for decision making, and the state of the system needs to be determined in many applications. Due to its importance, the state estimation problem has received considerable attention in the discrete event system (DES) community. Recently, the state estimation problem has been studied systematically in the framework of detectability. The detectability properties characterize the possibility of determining the current and subsequent states of a system after observing a finite number of events generated by the system. First, to model and analyze practical systems, powerful DES models are needed to describe the different observation behaviors of the system. Second, due to the state explosion problem, analysis methods that rely on exhaustively enumerating all possible states are not applicable to practical systems, so more efficient and practical verification methods for detectability are necessary. In this thesis, efficient detectability verification methods using Petri nets are investigated; detectability is then extended to a more general definition (C-detectability) that only requires that a given set of crucial states be distinguished from other states. Formal definitions and efficient verification methods for the C-detectability properties are proposed. Finally, C-detectability is applied to a railway signal system to verify the feasibility of this property. The main contributions are as follows:
    1. Four types of detectability are extended from finite automata to labeled Petri nets. In particular, strong detectability, weak detectability, periodically strong detectability, and periodically weak detectability are formally defined in labeled Petri nets.
    2. Based on the notion of the basis reachability graph (BRG), a practically efficient approach (the BRG-observer method) is proposed to verify the four detectability properties in bounded labeled Petri nets. Using basis markings, there is no need to enumerate all the markings that are consistent with an observation. It has been shown by other researchers that the size of the BRG is usually much smaller than the size of the reachability graph (RG). Thus, the method improves analysis efficiency and avoids the state space explosion problem.
    3. Three novel approaches for the verification of strong detectability and periodically strong detectability are proposed, using three different structures whose construction has polynomial complexity. Moreover, rather than computing all cycles of the structure at hand, which is NP-hard, it is shown that strong detectability can be verified by looking at the strongly connected components, whose computation also has polynomial complexity (a small sketch of this SCC-based check follows this abstract). As a result, these approaches have lower computational complexity than other methods in the literature.
    4. Detectability may be too restrictive in real applications. Thus, detectability is extended to C-detectability, which only requires that a given set of crucial states can be distinguished from other states. Four types of C-detectability are defined in the framework of labeled Petri nets. Moreover, efficient BRG-based approaches are proposed to verify these properties for bounded labeled Petri net systems.
    5. Finally, a general modeling framework for state estimation in railway systems using labeled Petri nets is presented. C-detectability is then applied to railway signal systems to verify its feasibility in a real-world system. Taking the RBC handover procedure in the Chinese Train Control System Level 3 (CTCS-3) as an example, the procedure is modeled using labeled Petri nets, and it is shown, based on the proposed approaches, that the RBC handover procedure satisfies strong C-detectability.
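    The following minimal sketch illustrates only the SCC-based idea referred to in contribution 3; it is not the thesis's BRG-observer construction. It assumes a hypothetical, already-built observer-like graph whose nodes are sets of possible current states, and checks that no strongly connected component that can be revisited forever contains a non-singleton estimate, which is roughly what strong detectability requires.

```python
# Minimal sketch (not the thesis's BRG-observer method): nodes of the
# observer graph are sets of possible current states; strong detectability
# roughly requires that any part of the graph that can loop forever only
# contains singleton estimates.

from itertools import count

def tarjan_scc(graph):
    """Return the strongly connected components of a dict-based graph."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], count()

    def strongconnect(v):
        index[v] = lowlink[v] = next(counter)
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

def strongly_detectable(observer):
    """observer: dict mapping a frozenset state estimate to successor estimates."""
    for comp in tarjan_scc(observer):
        # the component can be revisited forever if it has >1 node or a self-loop
        looping = len(comp) > 1 or comp[0] in observer.get(comp[0], ())
        if looping and any(len(estimate) > 1 for estimate in comp):
            return False
    return True

# Toy example: after the first observation the estimate stays a singleton.
obs = {
    frozenset({0, 1}): [frozenset({2})],
    frozenset({2}): [frozenset({2})],
}
print(strongly_detectable(obs))  # True
```

    Tarjan's algorithm runs in time linear in the size of the graph, which is why checking strongly connected components avoids the NP-hard enumeration of all cycles.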

    Verification and Enforcement of Strong State-Based Opacity for Discrete-Event Systems

    In this paper, we investigate the verification and enforcement of strong state-based opacity (SBO) in discrete-event systems modeled as partially-observed (nondeterministic) finite-state automata, including strong K-step opacity (K-SSO), strong current-state opacity (SCSO), strong initial-state opacity (SISO), and strong infinite-step opacity (Inf-SSO). These are stronger versions of four widely-studied standard opacity notions, respectively. We first propose a new notion of K-SSO, and then construct a concurrent-composition structure, a variant of our previously-proposed one, to verify it. Based on this structure, a verification algorithm for the proposed notion of K-SSO is designed, and an upper bound on K in the proposed K-SSO is derived. Second, we propose a distinctive opacity-enforcement mechanism that has better scalability than existing ones (such as supervisory control). The basic philosophy of this new mechanism is to choose a subset of controllable transitions to disable before the original system starts to run, in order to cut off all of its runs that violate the notion of strong SBO of interest. Accordingly, algorithms for enforcing the above-mentioned four notions of strong SBO are designed using the two proposed concurrent-composition structures. In particular, the designed algorithm for enforcing Inf-SSO has lower time complexity than the existing one in the literature and does not depend on any assumption. Finally, we illustrate the applications of the designed algorithms using examples. Comment: 30 pages, 20 figures; partial results in Section 3 were presented at the IEEE Conference on Decision and Control, 2022. arXiv admin note: text overlap with arXiv:2204.0469
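    The disable-before-run philosophy described above can be illustrated with a small, hedged sketch; it is not the paper's concurrent-composition algorithm. A hypothetical predicate is_opaque stands in for the actual strong-SBO check, and the crude property used in the toy instance (the secret state is simply unreachable) is only a placeholder.

```python
# Sketch of the disable-before-run enforcement idea (not the paper's
# algorithm): search for a set of controllable transitions whose removal
# makes the remaining system satisfy a given opacity predicate.
# The transition format (src, event, dst) and the predicate are assumptions.

from itertools import combinations

def restrict(transitions, disabled):
    return [t for t in transitions if t not in disabled]

def enforce_by_disabling(transitions, controllable, is_opaque):
    """Return a smallest set of controllable transitions to disable, or None."""
    for k in range(len(controllable) + 1):
        for disabled in combinations(controllable, k):
            if is_opaque(restrict(transitions, set(disabled))):
                return set(disabled)
    return None

def reachable(trans, start=0):
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (p, _, q) in trans:
            if p == s and q not in seen:
                seen.add(q)
                frontier.append(q)
    return seen

# Toy instance: states 0..2, state 2 is "secret"; here opacity is crudely
# approximated by requiring the secret state to be unreachable.
transitions = [(0, "a", 1), (1, "b", 2), (0, "c", 2)]
controllable = [(1, "b", 2), (0, "c", 2)]
print(enforce_by_disabling(transitions, controllable,
                           lambda trans: 2 not in reachable(trans)))
# e.g. {(0, 'c', 2), (1, 'b', 2)}
```

    In this toy instance both transitions into the secret state must be disabled before the system runs.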

    Sensor selection for fine-grained behavior verification that respects privacy

    A useful capability is that of classifying some agent's behavior using data from a sequence, or trace, of sensor measurements. The sensor selection problem involves choosing a subset of available sensors to ensure that, when generated, observation traces will contain enough information to determine whether the agent's activities match some pattern. Generalizing prior work, this paper studies a formulation in which multiple behavioral itineraries may be supplied, with sensors selected to distinguish between behaviors. This allows one to pose fine-grained questions, e.g., to position the agent's activity on a spectrum. In addition, with multiple itineraries, one can also ask about choices of sensors under which some behavior is always plausibly concealed by (or mistaken for, or conflated with) another. Using sensor ambiguity to limit the acquisition of knowledge is a strong privacy guarantee, and one which some earlier work has examined. By concretely formulating privacy requirements for sensor selection, this paper connects both lines of work: privacy, where knowledge is bounded from above, and behavior verification, where it is bounded from below. We examine the worst-case computational complexity that results from both types of bounds, proving that upper bounds are more challenging under standard computational complexity assumptions. The problem is intractable in general, but we give a novel approach to solving it that can exploit interrelationships between constraints, and we identify opportunities for a few optimizations. Case studies are presented to demonstrate the usefulness and scalability of our proposed solution, and to assess the impact of the optimizations.
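    A brute-force sketch of the two-sided selection problem is given below; it is not the paper's algorithm, and the behaviors, sensors and constraint pairs are illustrative assumptions. Each sensor reveals a set of event types, a selection induces a projection of traces, and a selection is acceptable when designated behavior pairs remain distinguishable (the bound from below) while designated pairs stay conflated (the bound from above).

```python
# Brute-force sketch of two-sided sensor selection (not the paper's method):
# a selection of sensors determines which events are visible, and we require
# some behavior pairs to stay distinguishable while others remain conflated.

from itertools import chain, combinations

def project(trace, visible):
    return tuple(e for e in trace if e in visible)

def selection_ok(visible, behaviors, must_distinguish, must_conflate):
    obs = {name: project(tr, visible) for name, tr in behaviors.items()}
    if any(obs[a] == obs[b] for a, b in must_distinguish):
        return False          # verification constraint violated
    if any(obs[a] != obs[b] for a, b in must_conflate):
        return False          # privacy constraint violated
    return True

def select_sensors(sensors, behaviors, must_distinguish, must_conflate):
    names = list(sensors)
    subsets = chain.from_iterable(combinations(names, k) for k in range(len(names) + 1))
    for subset in subsets:    # smallest selections first
        visible = set().union(*(sensors[s] for s in subset)) if subset else set()
        if selection_ok(visible, behaviors, must_distinguish, must_conflate):
            return subset
    return None

# Toy instance: distinguish "patrol" from "intrude", but keep the two
# benign behaviors mutually conflated.
behaviors = {
    "patrol":  ("door", "hall", "door"),
    "clean":   ("hall", "door", "hall"),
    "intrude": ("door", "vault", "door"),
}
sensors = {"door_sensor": {"door"}, "hall_sensor": {"hall"}, "vault_sensor": {"vault"}}
print(select_sensors(sensors, behaviors,
                     must_distinguish=[("patrol", "intrude")],
                     must_conflate=[("patrol", "clean")]))
# ('vault_sensor',)
```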

    Multiobjective performance-based designs in fault estimation and isolation for discrete-time systems and its application to wind turbines

    In this work, we develop a performance-based design of model-based observers and statistical decision mechanisms for achieving fault estimation and fault isolation in systems affected by unknown inputs and stochastic noises. First, through semidefinite programming, we design the observers considering different estimation performance indices, such as the covariance of the estimation errors, the fault tracking delays, and the degree of decoupling from unknown inputs and from faults in other channels. Second, we co-design the observers and decision mechanisms to satisfy a trade-off between different isolation performance indices: the false isolation rates, the isolation times and the minimum size of the isolable faults. Finally, we extend these results to a scheme based on a bank of observers for the case where multiple faults affect the system and isolability conditions are not satisfied. To show the effectiveness of the results, we apply these design strategies to a well-known wind turbine benchmark that considers multiple faults and has explicit requirements on isolation times and false isolation rates.
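    The observer-plus-decision-mechanism structure discussed above can be sketched numerically as follows. This is not the paper's SDP-based co-design; it is a plain discrete-time Luenberger observer whose residual is compared against a fixed threshold, with all matrices, the gain and the threshold chosen purely for illustration.

```python
# Minimal numerical sketch of an observer + threshold decision mechanism
# (not the paper's semidefinite-programming co-design).  System matrices,
# observer gain L and the threshold are illustrative assumptions.

import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])      # state dynamics
C = np.array([[1.0, 0.0]])                   # measurement map
F = np.array([[0.0], [1.0]])                 # how the fault enters the state
L = np.array([[0.5], [0.2]])                 # observer gain (assumed given)
threshold = 0.05                             # decision threshold (tuning knob)

rng = np.random.default_rng(0)
x = np.zeros((2, 1))
x_hat = np.zeros((2, 1))
alarms = []
for k in range(60):
    fault = 0.3 if k >= 30 else 0.0          # abrupt fault halfway through
    y = C @ x + 0.01 * rng.standard_normal((1, 1))
    residual = y - C @ x_hat                 # innovation used for the decision
    alarms.append(abs(residual[0, 0]) > threshold)
    x_hat = A @ x_hat + L @ residual         # observer update
    x = A @ x + F * fault + 0.005 * rng.standard_normal((2, 1))

print("first alarm at step:", alarms.index(True) if True in alarms else None)
```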

    Incorporating accurate statistical modeling in PET: reconstruction for whole-body imaging

    Doctoral thesis in Biophysics, presented at the Universidade de Lisboa through the Faculdade de Ciências, 2007. The thesis is devoted to image reconstruction in 3D whole-body PET imaging. OSEM (Ordered Subsets Expectation Maximization) is a statistical algorithm that assumes Poisson data. However, corrections for physical effects (attenuation, scattered and random coincidences) and detector efficiency remove the Poisson characteristics of these data. Fourier Rebinning (FORE), which combines 3D imaging with fast 2D reconstructions, requires corrected data. Thus, if FORE is used, or whenever data are corrected prior to OSEM, the Poisson-like characteristics need to be restored. Restoring Poisson-like data, i.e., making the variance equal to the mean, was achieved through the use of weighted OSEM algorithms. One of them is NECOSEM, relying on the NEC weighting transformation. The distinctive feature of this algorithm is the NEC multiplicative factor, defined as the ratio between the mean and the variance. With real clinical data this is critical, since there is only one value collected for each bin: the data value itself. For simulated data, if we keep track of the values of these two statistical moments, the exact values for the NEC weights can be calculated. We have compared the performance of five different weighted algorithms (FORE+AWOSEM, FORE+NECOSEM, ANWOSEM3D, SPOSEM3D and NECOSEM3D) on the basis of tumor detectability. The comparison was done for simulated and clinical data. In the former case an analytical simulator was used. This is the ideal situation, since all the weighting factors can be exactly determined. For comparing the performance of the algorithms, we used the Non-Prewhitening Matched Filter (NPWMF) numerical observer. With some knowledge obtained from the simulation study we proceeded to the reconstruction of clinical data. In that case, it was necessary to devise a strategy for estimating the NEC weighting factors. The comparison between reconstructed images was done by a physician largely familiar with whole-body PET imaging.
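    A schematic sketch of the NEC weighting idea follows. It is not the reconstruction code used in the thesis: it simply shows, on a toy system, how the NEC factor (mean divided by variance, known exactly in the simulated case) rescales corrected data before an ML-EM / OSEM style update.

```python
# Schematic sketch of NEC weighting before an ML-EM style update (not the
# thesis's reconstruction code).  The tiny system matrix and data are
# illustrative assumptions.

import numpy as np

def nec_weights(mean, variance):
    # NEC multiplicative factor: ratio of mean to variance per sinogram bin
    return mean / np.maximum(variance, 1e-12)

def mlem_update(x, A, y):
    # One ML-EM iteration; OSEM would apply this over ordered subsets of rows
    forward = A @ x
    ratio = y / np.maximum(forward, 1e-12)
    return x * (A.T @ ratio) / np.maximum(A.T @ np.ones_like(y), 1e-12)

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])             # toy projection matrix
true_x = np.array([2.0, 1.0, 3.0])
mean = A @ true_x                            # known mean (simulated case)
variance = 2.0 * mean                        # corrections inflated the variance
y_corrected = mean.copy()                    # corrected, non-Poisson data

w = nec_weights(mean, variance)              # mean/variance per bin
x = np.ones(3)
for _ in range(200):
    # weighting both the data and the model keeps the fixed point unchanged
    x = mlem_update(x, np.diag(w) @ A, w * y_corrected)
print(np.round(x, 2))                        # approaches true_x as iterations grow
```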

    Fast and Unbiased Estimation of Volume Under Ordered Three-Class ROC Surface (VUS) With Continuous or Discrete Measurements

    Receiver Operating Characteristic (ROC) surfaces have been studied in the literature essentially during the last decade and are considered a natural generalization of ROC curves to three-class problems. The volume under the surface (VUS) is useful for evaluating the performance of a trichotomous diagnostic system, or a three-class classifier's overall accuracy, when the possible disease condition or sample belongs to one of three ordered categories. In the areas of medical studies and machine learning, the VUS of a new statistical model is typically estimated from a sample of ordinal and continuous measurements obtained from suitable specimens. However, discrete prediction scales are also frequently encountered in practice. To deal with such scenarios, in this paper we propose a unified and efficient algorithm of linearithmic order, based on dynamic programming, for unbiased estimation of the mean and variance of the VUS with unidimensional samples drawn from continuous or non-continuous distributions. Monte Carlo simulations verify our theoretical findings and the developed algorithms.
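    As a concrete reference point, the naive unbiased estimator of the VUS averages the indicator of correct ordering over all cross-class triples. The sketch below implements this cubic-time baseline, not the paper's linearithmic dynamic-programming algorithm, and uses the strict-inequality (continuous-data) convention, since tie-handling rules for discrete measurements vary.

```python
# Naive unbiased estimator of the volume under the ordered three-class ROC
# surface: VUS = P(X1 < X2 < X3), one measurement drawn from each ordered
# class.  Averaging over all cross-class triples is unbiased but cubic; the
# paper's O(n log n) algorithm is not reproduced here.

from itertools import product

def vus_naive(class1, class2, class3):
    total = sum(1.0 for x, y, z in product(class1, class2, class3) if x < y < z)
    return total / (len(class1) * len(class2) * len(class3))

# Toy example with well-separated classes
print(vus_naive([0.1, 0.4, 0.2], [0.5, 0.9, 0.6], [1.2, 1.0, 1.5]))  # 1.0
```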

    Integrated application of compositional and behavioural safety analysis

    To address challenges arising in the safety assessment of critical engineering systems, research has recently focused on automating the synthesis of predictive models of system failure from design representations. In one approach, known as compositional safety analysis, system failure models such as fault trees and Failure Modes and Effects Analyses (FMEAs) are constructed from component failure models using a process of composition. Another approach has looked into automating system safety analysis via the application of formal verification techniques, such as model checking, on behavioural models of the system represented as state automata. So far, compositional safety analysis and formal verification have been developed separately and seen as two competing paradigms for model-based safety analysis. This thesis shows that it is possible to move forward the terms of this debate and use the two paradigms synergistically in the context of an advanced safety assessment process. The thesis develops a systematic approach in which compositional safety analysis provides the basis for the systematic construction and refinement of state automata that record the transition of a system from normal to degraded and failed states. These state automata can be further enhanced and then model-checked to verify the satisfaction of safety properties. Note that the development of such models in current practice is ad hoc and relies only on expert knowledge; rationalising and systematising it in the proposed approach is a key contribution of this thesis. Overall, the approach combines the advantages of compositional safety analysis, such as simplicity, efficiency and scalability, with the benefits of formal verification, such as the ability to automatically verify safety requirements on dynamic models of the system, and leads to an improved model-based safety analysis process. In the context of this process, a novel generic mechanism is also proposed for modelling the detectability of errors which typically arise as a result of component faults and then propagate through the architecture. This mechanism is used to derive analyses that can aid decisions on appropriate detection and recovery mechanisms in the system model. The thesis starts with an investigation of the potential for useful integration of compositional and formal safety analysis techniques. The approach is then developed in detail, and guidelines for the analysis and refinement of system models are given. Finally, the process is evaluated in three case studies that were iteratively performed on increasingly refined and improved models of aircraft and automotive braking and cruise control systems. In light of the results of these studies, the thesis concludes that the integration of compositional and formal safety analysis techniques is feasible and potentially useful in the design of safety-critical systems.
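    The kind of behavioural check described above can be illustrated with a toy explicit-state model; this is not the thesis's toolchain, and the small braking-subsystem model is an illustrative assumption. States move from normal to degraded to failed, an error may or may not be detected along the way, and the safety property checked is that a failed state is reachable only after the error has been detected.

```python
# Toy explicit-state check of a detectability-related safety property (an
# illustrative model, not the thesis's approach or tools).  Each state is a
# pair (mode, error_detected).

transitions = {
    ("normal", False):   [("degraded", False)],
    ("degraded", False): [("degraded", True),   # monitor detects the error
                          ("failed", False)],   # undetected failure (bad!)
    ("degraded", True):  [("failed", True)],    # detected, recovery possible
    ("failed", False):   [],
    ("failed", True):    [],
}

def reachable(initial):
    seen, frontier = {initial}, [initial]
    while frontier:
        s = frontier.pop()
        for t in transitions.get(s, []):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

violations = [s for s in reachable(("normal", False))
              if s[0] == "failed" and not s[1]]  # failed yet undetected
print("safety property holds" if not violations else f"violations: {violations}")
```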