    Time Decomposition for Diagnosis of Discrete Event Systems

    Artificial intelligence diagnosis is a research topic in knowledge representation and reasoning. This work addresses on-line model-based diagnosis of Discrete Event Systems (DES). A DES model represents state dynamics in a discrete manner; this work concentrates on models of finite scale and therefore uses finite state machines as the DES representation. Given a flow of observable events generated by a DES model, diagnosis aims at deciding whether the system is running normally or is experiencing faulty behaviours. The main challenge is the complexity of the diagnosis problem: the observation flow must be monitored on the fly while generating a succession of sets of states the system may be in, called belief states. Previous work in the literature has proposed exact diagnosis, in which a diagnostic algorithm computes, at any time, a belief state consistent with the observation flow from the moment the system started operating up to the current time. The main drawback of this conservative strategy is the inability to keep up with the observation flow for a large system, because the size of a belief state has been proved to be exponential in the number of system states; the time complexity of handling exact belief states is a further problem. Because diagnosis of DES is a hard problem, the use of faster diagnostic algorithms that do not perform exact diagnosis is often inevitable. However, such algorithms may not be as precise as an exact model-based diagnostic algorithm when diagnosing a diagnosable system.

    This thesis makes four contributions. First, Chapter 3 proposes the concept of simulation to verify the precision of an imprecise diagnostic algorithm w.r.t. a diagnosable DES model; a simulation is a finite state machine that represents how a diagnostic algorithm behaves on a particular DES model. Second, Chapter 4 proposes diagnosis using time decomposition and studies window-based diagnostic algorithms, called Independent-Window Algorithms (IWAs), which diagnose using only the most recent events of the observation flow and forget the past; the precision of this approach is assessed by constructing a simulation. Third, Chapter 5 proposes a compromise between the two extreme strategies of exact diagnosis and IWAs, looking for the minimum piece of information to remember from the past so that a window-based algorithm ensures the same precision as exact diagnosis. Chapter 5 proposes Time-Window Algorithms (TWAs), extensions of IWAs that carry some information about the current state of the system from one time window to the next; their precision is again verified by constructing a simulation. Fourth, Chapter 6 evaluates IWAs and TWAs through experiments and compares their performance with exact diagnosis encoded by Binary Decision Diagrams (BDDs), also examining the impact of the time-window selection on the performance of IWAs and TWAs.
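    The Python fragment below is a minimal sketch of the exact belief-state update that these window-based algorithms approximate; the three-state machine, event names, and fault labels are invented for illustration and are not the thesis's models. Exact diagnosis maintains such a belief over the whole observation flow, whereas an IWA would restart it on each window.

        # Minimal belief-state update for a DES given as a finite state machine.
        # Transitions: state -> list of (event, next_state, is_faulty_transition);
        # the fault flag is carried along inside each belief element.
        TRANSITIONS = {
            "s0": [("a", "s1", False), ("a", "s2", True)],   # "a" is ambiguous
            "s1": [("b", "s0", False)],
            "s2": [("b", "s2", False)],
        }

        def update_belief(belief, observed_event):
            """Keep every (state, fault_flag) pair consistent with the event."""
            new_belief = set()
            for state, faulty in belief:
                for event, nxt, fault_edge in TRANSITIONS.get(state, []):
                    if event == observed_event:
                        new_belief.add((nxt, faulty or fault_edge))
            return new_belief

        belief = {("s0", False)}            # known initial state, no fault yet
        for obs in ["a", "b", "b"]:         # an example observation flow
            belief = update_belief(belief, obs)
            print(obs, sorted(belief))
        # If every pair is faulty the fault is certain; if none is, the system
        # is surely normal; otherwise the diagnosis is (still) ambiguous.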

    An improved mixture of probabilistic PCA for nonlinear data-driven process monitoring

    This paper introduces an improved mixture of probabilistic principal component analysis (PPCA) for nonlinear data-driven process monitoring. To this end, a mixture of probabilistic principal component analysers is used to model the underlying nonlinear process with local PPCA models, and a novel composite monitoring statistic is proposed that integrates the two monitoring statistics of the modified PPCA-based fault detection approach. The weighted mean of these monitoring statistics is then used as a metric to detect potential abnormalities. The virtues of the proposed algorithm are discussed in comparison with several unsupervised algorithms. Finally, the Tennessee Eastman process and an autosuspension model are employed to further demonstrate the effectiveness of the proposed scheme.
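    The posterior-weighted combination idea can be sketched as follows, with a plain Gaussian mixture standing in for the mixture of local PPCA models and a Mahalanobis-type local statistic standing in for the paper's integrated statistics; the synthetic data, component count, and 99% quantile threshold are illustrative assumptions, not the paper's design.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(500, 4))        # stand-in "normal" operating data

        # A Gaussian mixture approximates the mixture of local PPCA models.
        gmm = GaussianMixture(n_components=3, covariance_type="full",
                              random_state=0).fit(X_train)

        def composite_statistic(x):
            """Weighted mean of local statistics, weighted by the posterior
            responsibilities p(component | x)."""
            resp = gmm.predict_proba(x[None, :])[0]
            local = np.array([
                (x - gmm.means_[k]) @ np.linalg.inv(gmm.covariances_[k])
                @ (x - gmm.means_[k])              # local T^2-like statistic
                for k in range(gmm.n_components)
            ])
            return float(resp @ local)

        # Control limit from the empirical distribution on normal data.
        stats = np.array([composite_statistic(x) for x in X_train])
        threshold = np.quantile(stats, 0.99)
        x_new = rng.normal(size=4) + 5.0           # a clearly abnormal sample
        print("alarm" if composite_statistic(x_new) > threshold else "normal")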

    Quick Subset Construction

    A finite automaton can be either deterministic (a DFA) or nondeterministic (an NFA). An automaton-based task is in general more efficient when performed with a DFA rather than an NFA, and for any NFA there is an equivalent DFA, which can be generated by the classical Subset Construction algorithm. When, however, a large NFA can be transformed into an equivalent DFA by a series of actions operating directly on the NFA, Subset Construction may be unnecessarily expensive, as a (possibly large) deterministic portion of the NFA is regenerated unchanged, which wastes processing. This is why a conservative algorithm for NFA determinization, called Quick Subset Construction, is proposed: it progressively transforms the NFA into an equivalent DFA instead of generating the DFA from scratch, thereby avoiding unnecessary processing. Quick Subset Construction is proven, both formally and empirically, to be equivalent to Subset Construction, inasmuch as it generates exactly the same DFA. Experimental results indicate that the smaller the number of repair actions performed on the NFA compared with the size of the equivalent DFA, the faster Quick Subset Construction is relative to Subset Construction.
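    For reference, the classical Subset Construction baseline, in which each DFA state is a set of NFA states, can be sketched in a few lines of Python. This is the from-scratch algorithm that Quick Subset Construction improves upon, not Quick Subset Construction itself; the toy NFA is invented for illustration and ε-transitions are omitted.

        from collections import deque

        def subset_construction(nfa, start, alphabet):
            """nfa maps (state, symbol) -> set of successor states; the empty
            set acts as an implicit dead state and is not expanded."""
            start_set = frozenset([start])
            dfa, queue, seen = {}, deque([start_set]), {start_set}
            while queue:
                current = queue.popleft()
                for symbol in alphabet:
                    nxt = frozenset(s for q in current
                                    for s in nfa.get((q, symbol), set()))
                    dfa[(current, symbol)] = nxt
                    if nxt and nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return dfa

        # Toy NFA, nondeterministic on "a" from state 0.
        nfa = {(0, "a"): {0, 1}, (1, "b"): {2}}
        for (src, sym), dst in subset_construction(nfa, 0, "ab").items():
            print(set(src), sym, "->", set(dst))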

    Data fusion for system modeling, performance assessment and improvement

    Due to rapid advancements in sensing and computation technology, multiple types of sensors have been embedded in various applications, automatically collecting massive amounts of production information on-line. Although this data-rich environment provides great opportunities for more effective process control, it also raises new research challenges in data analysis and decision making due to complex data structures, such as heterogeneous data dependency and large-volume, high-dimensional characteristics. This thesis contributes to the area of System Informatics and Control (SIAC) by developing systematic data fusion methodologies for effective quality control and performance improvement in complex systems. These methodologies enable (1) better handling of the rich data environment communicated by complex engineering systems, (2) closer monitoring of the system status, and (3) more accurate forecasting of future trends and behaviors. The research bridges methodological gaps among advanced statistics, engineering domain knowledge, and operations research, and it links closely to application areas such as manufacturing, health care, energy, and service systems.

    The thesis begins by investigating optimal sensor system design and multiple-sensor data fusion for process monitoring and diagnosis in different applications. Chapter 2 studies the couplings, or interactions, between the optimal design of a sensor system in a Bayesian network and the quality management of a manufacturing system, which can improve cost-effectiveness and production yield by considering sensor cost, process change detection speed, and fault diagnosis accuracy in an integrated manner. An algorithm named "Best Allocation Subsets by Intelligent Search" (BASIS), with an optimality proof, is developed to obtain the optimal sensor allocation design at minimum cost under different user-specified detection requirements. Chapter 3 extends this line of research by proposing a novel adaptive sensor allocation framework, which greatly improves the monitoring and diagnosis capabilities of the previous method: a max-min criterion is developed to manage sensor reallocation and process change detection in an integrated manner, and the methodology is tested and validated on a hot forming process and a cap alignment process. Chapter 4 then proposes a Scalable-Robust-Efficient Adaptive (SERA) sensor allocation strategy for online high-dimensional process monitoring in a general network. A monitoring scheme based on the sum of the top-r local detection statistics is developed, which is scalable, effective, and robust in detecting a wide range of possible shifts in all directions. This research provides generic guidelines for practitioners on determining (1) the appropriate sensor layout; (2) the "ON" and "OFF" states of different sensors; and (3) which part of the acquired data should be transmitted to and analyzed at the fusion center when only limited resources are available.

    To improve the accuracy of remaining-lifetime prediction, Chapter 5 proposes a data-level fusion methodology for degradation modeling and prognostics. When multiple sensors are available to measure the degradation mechanism of the same system, determining which sensors to use and how to combine them for better data analysis becomes a high-dimensional and challenging problem. To address this issue, two essential properties are first defined that, if present in a degradation signal, can enhance its effectiveness for prognostics; a generic data-level fusion algorithm is then proposed to construct a composite health index achieving those two properties. The methodology is tested on degradation signals of an aircraft gas turbine engine and demonstrates much better prognostic results than relying solely on data from an individual sensor. In summary, this thesis draws attention to the area of data fusion for the effective employment of underlying data-gathering capabilities for system modeling, performance assessment, and improvement. The fundamental data fusion methodologies developed here are applied to various applications, facilitating resource planning, real-time monitoring, diagnosis, and prognostics.
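    A rough sketch of the "sum of top-r local detection statistics" scheme mentioned above follows, assuming one CUSUM-like statistic per sensor; the statistic, drift, value of r, and alarm threshold are illustrative choices, not the thesis's calibrated design.

        import numpy as np

        rng = np.random.default_rng(1)
        p, r, threshold = 50, 5, 40.0        # sensors, top-r, illustrative limit
        local = np.zeros(p)                  # one local statistic per sensor

        for t in range(200):
            x = rng.normal(size=p)           # standardized sensor readings
            if t >= 100:
                x[:3] += 2.0                 # a sparse mean shift hits 3 sensors
            # Reflected CUSUM update; drift 1.0 keeps it near zero in control.
            local = np.maximum(0.0, local + np.abs(x) - 1.0)
            # Global statistic: sum of the r largest local statistics.
            top_r_sum = np.sort(local)[-r:].sum()
            if top_r_sum > threshold:
                print(f"alarm at t={t}, top-{r} sum = {top_r_sum:.1f}")
                break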

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and one competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Towards a new methodology for design, modelling, and verification of reconfigurable distributed control systems based on a new extension to the IEC 61499 standard

    In order to meet user requirements and system environment changes, reconfigurable control systems must dynamically adapt their structure and behaviour without disrupting system operation. The IEC 61499 standard provides only limited support for the design and verification of such systems; in fact, handling different reconfiguration scenarios at run time is difficult, since function blocks in IEC 61499 cannot be changed at run time. Hence, this thesis promotes an IEC 61499 extension called the reconfigurable function block (RFB), which increases design readability and smoothly switches to the most appropriate behaviour when a reconfiguration event occurs. To ensure system feasibility after reconfiguration, quantitative verification based on probabilistic model checking is addressed in a new RFBA approach, in addition to qualitative verification. This approach transforms the designed RFB model automatically into a generalised reconfigurable timed net condition/event system (GR-TNCES) model using a newly developed environment called RFBTool; GR-TNCES fits well with RFB and preserves its semantics. Using the probabilistic model checker PRISM, the generated GR-TNCES model is checked against properties specified in computation tree logic. As a result, an evaluation of system performance and an estimation of reconfiguration risks are obtained. The RFBA methodology is applied to a distributed power system case study.

    Dynamic requirements and environments call for reconfigurable plants and control systems. Reconfiguration enables a system to adapt its structure and behaviour to internal or external changes. The IEC 61499 standard was created for developing (distributed) control systems based on function blocks, yet it offers little support for design and verification, and the fact that a reconfiguration changes the system's execution model further complicates development in IEC 61499. This thesis therefore proposes reconfigurable function blocks (RFBs) as an extension to the standard. An RFB processes reconfiguration events through a master-slave automaton and triggers the corresponding behaviour; this hierarchy separates the reconfiguration model from the control model and thus simplifies the design. The functionality of the design must be verified to guarantee that the system remains executable after a reconfiguration. To this end, the designed RFB model is automatically translated into a generalised reconfigurable timed net condition/event system, which is checked for qualitative and quantitative properties with the model checker PRISM. This yields an evaluation of system performance and an estimation of reconfiguration risks. The RFB methodology has been implemented in a software tool and applied to a distributed power grid in a case study.
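    As an illustration of the kind of quantitative property involved (the thesis's actual properties are not reproduced here), a probabilistic reachability query asking for the probability that the system completes a reconfiguration within a deadline $T$ can be written in PCTL-style notation as

        $P_{=?}\,[\,\mathrm{F}^{\leq T}\; \mathit{reconfigured}\,]$

    where $\mathit{reconfigured}$ is a hypothetical atomic proposition labelling the states reached after a successful reconfiguration.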