DESIGN OF OPTIMAL PROCEDURAL CONTROLLERS FOR CHEMICAL PROCESSES MODELLED AS STOCHASTIC DISCRETE EVENT SYSTEMS
This thesis presents a formal method for the design of optimal and provably correct
procedural controllers for chemical processes modelled as Stochastic Discrete Event Systems
(SDESs). The thesis extends previous work on Procedural Control Theory (PCT) [1],
which used formal techniques for the design of automation for Discrete Event Systems (DESs).
Many dynamic processes, for example batch operations and the start-up and shut-down of
continuous plants, can be modelled as DESs. Controllers for these systems are typically
of the sequential type.
Most prior work on characterizing the behaviour of DESs has been restricted to deterministic
systems. However, DESs consisting of concurrent interacting processes present
a broad spectrum of uncertainty, such as uncertainty in the occurrence of events. The
formalism of the weighted probabilistic Finite State Machine (wp-FSM) is introduced for
modelling SDESs, and pre-defined failure models are embedded in wp-FSMs to describe
and control the abnormal behaviour of systems. The thesis presents efficient algorithms
and procedures for synthesising optimal procedural controllers for such SDESs.
The synthesised optimal controllers for such stochastic systems take into consideration
the probabilities of event occurrences, the operation costs, and the failure costs of events
when making optimal choices in the design of control sequences. The controllers force the
system from an initial state to one or more goal states with an optimal expected cost and,
when feasible, drive the system from any state reached after a failure to the goal states.
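The expected-cost criterion can be illustrated with a standard stochastic shortest-path value iteration. This is a hedged sketch rather than the thesis's actual synthesis algorithm, and the toy plant with its states, actions, probabilities, and costs is invented for illustration:

```python
# Hedged sketch, not the thesis's synthesis algorithm: a standard
# stochastic shortest-path value iteration over a toy wp-FSM-like model.
# All states, actions, probabilities, and costs below are invented.

GOAL = "done"

# transitions[state][action] = list of (next_state, probability, cost)
transitions = {
    "idle":    {"start":  [("heating", 1.0, 2.0)]},
    "heating": {"hold":   [("ready", 0.9, 1.0), ("fault", 0.1, 1.0)]},
    "fault":   {"reset":  [("idle", 1.0, 5.0)]},   # recovery after a failure
    "ready":   {"finish": [("done", 1.0, 0.5)]},
}

def optimal_expected_costs(transitions, goal, sweeps=100):
    """Value iteration: V(s) = min over actions of sum of p * (cost + V(s'))."""
    V = {s: 0.0 for s in set(transitions) | {goal}}
    for _ in range(sweeps):
        for s, actions in transitions.items():
            V[s] = min(sum(p * (c + V[s2]) for s2, p, c in outcomes)
                       for outcomes in actions.values())
    return V

V = optimal_expected_costs(transitions, GOAL)
# V["idle"] is the optimal expected cost of driving the plant from the
# initial state to the goal, accounting for the possible fault loop.
```

The iteration converges here because the goal is reachable with positive probability under every choice; the controller's optimal choice at each state is the action achieving the minimum.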
On the practical side, recognising the importance of the needs of the target end
user, the design of a suitable software implementation is completed. The potential of both
the approach and the supporting software is demonstrated by two industrial case studies.
Furthermore, the simulation environment gPROMS was used to test whether the operating
specifications thus designed were met in a combined discrete/continuous environment.
IMPROVE - Innovative Modelling Approaches for Production Systems to Raise Validatable Efficiency
This open access work presents selected results from the European research and innovation project IMPROVE, which yielded novel data-based solutions to enhance machine reliability and efficiency in the fields of simulation and optimization, condition monitoring, alarm management, and quality prediction.
Modelling, safety verification and design of discrete/continuous processing systems.
Batch Control and Diagnosis
Batch processes are becoming more and more important in the chemical process industry, where they are used in the manufacture of specialty materials, which are often highly profitable. Examples where batch processes are important include the manufacture of pharmaceuticals, polymers, and semiconductors. The focus of this thesis is exception handling and fault detection in batch control.
In the first part, an internal-model approach for exception handling is proposed in which each equipment object in the control system is extended with a state-machine-based model that is used on-line to structure and implement the safety interlock logic. The thesis treats exception handling both at the unit supervision level and at the recipe level. The goal is to provide a structure that makes the implementation of exception handling in batch processes easier. The exception handling approach has been implemented in JGrafchart and tested on the batch pilot plant Procel at Universitat Politècnica de Catalunya in Barcelona, Spain.
The second part of the thesis is focused on fault detection in batch processes. A process fault can be any kind of malfunction in a dynamic system or plant that leads to unacceptable performance, such as personnel injuries or bad product quality. Fault detection in dynamic processes is a large area of research in which several different categories of methods exist, e.g., model-based and process-history-based methods. The finite duration and non-linear behavior of batch processes, where the variables change significantly over time and the quality variables are measured only at the end of the batch, mean that the monitoring of batch processes is quite different from the monitoring of continuous processes. A benchmark batch process simulation model is used for comparison of several fault detection methods. A survey of multivariate statistical methods for batch process monitoring is performed and new algorithms for two of the methods are developed.
It is also shown that, by combining model-based estimation and multivariate methods, fault detection can be improved even though the process is not fully observable.
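The internal-model idea from the first part can be sketched as a small equipment object whose state machine is consulted on-line to block illegal or unsafe commands. The valve states, commands, and interlock rule below are hypothetical illustrations, not taken from the thesis or from JGrafchart:

```python
# Hedged sketch of the internal-model idea: each equipment object carries
# a small state machine used on-line to enforce safety interlock logic.
# States, commands, and the interlock rule are invented for illustration.

class EquipmentObject:
    # legal transitions: (state, command) -> next state
    TRANSITIONS = {
        ("closed", "open"):  "open",
        ("open",   "close"): "closed",
    }

    def __init__(self, name, state="closed"):
        self.name, self.state = name, state

    def command(self, cmd, interlock_ok=lambda: True):
        key = (self.state, cmd)
        if key not in self.TRANSITIONS:
            raise ValueError(f"{self.name}: '{cmd}' illegal in state '{self.state}'")
        if not interlock_ok():          # safety interlock checked on-line
            raise RuntimeError(f"{self.name}: interlock blocks '{cmd}'")
        self.state = self.TRANSITIONS[key]
        return self.state

# Hypothetical interlock: the drain may only open while the feed is closed.
feed, drain = EquipmentObject("feed"), EquipmentObject("drain")
drain.command("open", interlock_ok=lambda: feed.state == "closed")  # allowed
```

Structuring the interlock as data on the object, rather than scattered conditional code, is what makes the exception-handling logic easy to inspect and reuse.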
Computer-aided HAZOP of batch processes
Modern batch chemical processing plants show a tendency toward increasing
technological complexity and flexibility, which makes it difficult to control the
occurrence of accidents. Social and legal pressures have increased the demands
for verifying the safety of chemical plants during their design and operation.
Complete identification and accurate assessment of the hazard potential in the
early design stages is therefore very important, so that preventative or protective
measures can be integrated into the future design without adversely affecting
processing and control complexity or capital and operational costs. A Hazard and
Operability Study (HAZOP) is a method of systematically identifying every
conceivable process deviation, its abnormal causes, and its adverse hazardous
consequences in a chemical plant. [Continues.]
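The combinatorial core of a HAZOP, crossing guide words with process parameters to enumerate candidate deviations for each study node, can be sketched as follows. The word and parameter lists are abbreviated examples, not a complete HAZOP vocabulary:

```python
# Toy illustration of the systematic HAZOP search space: guide words
# crossed with process parameters yield candidate deviations.  The lists
# below are abbreviated examples, not a complete HAZOP vocabulary.

GUIDE_WORDS = ["NO", "MORE", "LESS", "REVERSE", "OTHER THAN"]
PARAMETERS  = ["flow", "temperature", "pressure", "level"]

def candidate_deviations(guide_words, parameters):
    """Enumerate 'guide word + parameter' deviations for one study node."""
    return [f"{g} {p}" for g in guide_words for p in parameters]

deviations = candidate_deviations(GUIDE_WORDS, PARAMETERS)
# e.g. "NO flow", "MORE temperature", ... (5 x 4 = 20 candidates here)
```

The size of this cross product for every node of a real plant is what makes manual HAZOP laborious and motivates computer-aided support.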
Formal Modelling of Intrusion Detection Systems
The cybersecurity ecosystem continuously evolves with the number, the diversity,
and the complexity of cyber attacks. Generally, there are three types of Intrusion
Detection System (IDS): anomaly-based detection, signature-based detection, and
hybrid detection. Anomaly-based detection builds a description of the usual behavior of
the system, typically in a statistical manner. It can detect known or unknown attacks,
but it also generates a large number of false positives. Signature-based detection
detects known attacks by defining rules that describe known attacker behavior,
which requires good knowledge of that behavior. Hybrid detection relies on
several detection methods, including the previous ones, and has the advantage of being
more precise during detection. Tools like Snort and Zeek offer low-level languages for
expressing attack-detection rules. Because the number of potential attacks is very large,
these rule bases quickly become hard to manage and maintain. Moreover, representing
stateful rules that recognize a sequence of events is particularly arduous.
In this thesis, we propose a stateful approach based on algebraic state-transition
diagrams (ASTDs) to identify complex attacks. ASTDs allow a graphical and modular
representation of a specification, which facilitates the maintenance and understanding of
rules. We extend the ASTD notation with new features to represent complex attacks.
Next, we specify several attacks with the extended notation and run the resulting
specifications on event streams using an interpreter to identify attacks. We also evaluate
the performance of the interpreter against industrial tools such as Snort and Zeek. Then,
we build a compiler to generate, from an ASTD specification, executable code
able to efficiently identify sequences of events.
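A stateful rule of the kind discussed can be approximated by a small hand-written automaton. The sketch below is a simplified stand-in, not actual ASTD semantics; the brute-force pattern, event encoding, and threshold are invented:

```python
# Simplified stand-in for a stateful detection rule (not real ASTD
# semantics): an automaton that flags N failed logins from one source
# followed by a success, a classic brute-force pattern.  The event
# encoding and the threshold are invented for illustration.

from collections import defaultdict

THRESHOLD = 3  # hypothetical tuning parameter

def detect_bruteforce(events, threshold=THRESHOLD):
    """events: iterable of (source_ip, outcome) with outcome 'fail' or 'ok'."""
    fails = defaultdict(int)          # per-source state carried across events
    alerts = []
    for ip, outcome in events:
        if outcome == "fail":
            fails[ip] += 1
        else:
            if fails[ip] >= threshold:
                alerts.append(ip)     # sequence completed: raise an alert
            fails[ip] = 0             # success resets this source's state
    return alerts

stream = [("10.0.0.5", "fail")] * 3 + [("10.0.0.5", "ok"), ("10.0.0.9", "ok")]
assert detect_bruteforce(stream) == ["10.0.0.5"]
```

Even this tiny rule must carry per-source state across events, which is exactly the kind of bookkeeping that becomes arduous in flat, low-level rule languages.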
State Management for Efficient Event Pattern Detection
Event stream processing systems continuously evaluate queries over event streams to detect user-specified patterns with low latency. However, the challenge is that query processing is stateful and maintains partial matches that grow exponentially in the number of processed events.
State management is complicated by the dynamicity of streams and the need to integrate remote data. First, heterogeneous event sources yield dynamic streams with unpredictable input rates, data distributions, and query selectivities. During peak times, exhaustive processing is infeasible, and systems must resort to best-effort processing. Second, queries may require remote data to select a specific event for a pattern. Such dependencies are problematic: fetching the remote data interrupts the stream processing, yet without event selection based on remote data, the growth of partial matches is amplified.
In this dissertation, I present strategies for optimised state management in event pattern detection. First, I enable best-effort processing with load shedding that discards both input events and partial matches. I carefully select the elements to shed so as to satisfy a latency bound while striving for minimal loss in result quality. Second, to efficiently integrate remote data, I decouple the fetching of remote data from its use in query evaluation via a caching mechanism. To hide the transmission latency, remote data are prefetched based on anticipated use, and lazy evaluation postpones event selection based on remote data to avoid interruptions. A cost model determines when to fetch which remote data items and how long to keep them in the cache.
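The load-shedding idea can be sketched with an invented utility measure; the dissertation's actual cost model and shedding strategy are not reproduced here. When the number of partial matches exceeds a budget, those least likely to complete are dropped first:

```python
# Illustrative load-shedding policy (utilities and budget are invented,
# not the dissertation's cost model): when partial matches exceed a
# budget, discard those with the lowest completion probability first.

def shed(partial_matches, budget):
    """partial_matches: list of (match_id, completion_probability)."""
    if len(partial_matches) <= budget:
        return partial_matches, []
    ranked = sorted(partial_matches, key=lambda m: m[1], reverse=True)
    return ranked[:budget], ranked[budget:]   # (kept, shed)

matches = [("m1", 0.9), ("m2", 0.1), ("m3", 0.6), ("m4", 0.3)]
kept, dropped = shed(matches, budget=2)
assert [m[0] for m in kept] == ["m1", "m3"]
```

Bounding the live state this way trades a small, controlled loss in result quality for a guaranteed latency bound.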
I evaluated the above techniques with queries over synthetic and real-world data. I show that the load shedding technique significantly improves the recall of pattern detection over baseline approaches, while the technique for remote data integration significantly reduces the pattern detection latency.
Modeling and Analyzing Systemic Risk in Complex Sociotechnical Systems: The Role of Teleology, Feedback, and Emergence
Recent systemic failures such as the BP Deepwater Horizon Oil Spill, Global Financial Crisis, and Northeast Blackout have reminded us, once again, of the fragility of complex sociotechnical systems. Although the failures occurred in very different domains and were triggered by different events, there are, however, certain common underlying mechanisms of abnormalities driving these systemic failures. Understanding these mechanisms is essential to avoid such disasters in the future. Moreover, these disasters happened in sociotechnical systems, where both social and technical elements can interact with each other and with the environment. The nonlinear interactions among these components can lead to an “emergent” behavior – i.e., the behavior of the whole is more than the sum of its parts – that can be difficult to anticipate and control. Abnormalities can propagate through the systems to cause systemic failures. To ensure the safe operation and production of such complex systems, we need to understand and model the associated systemic risk.
The traditional emphasis of chemical engineering risk modeling is on the technical components of a chemical plant, such as equipment and processes. However, a chemical plant is more than a set of equipment and processes; the human elements play a critical role in decision-making. Industrial statistics show that about 70% of accidents are caused by human error. New modeling techniques that go beyond the classical equipment/process-oriented approaches to include the human elements (i.e., the "socio" part of the sociotechnical systems) are therefore needed for analyzing the systemic risk of complex sociotechnical systems. This thesis presents such an approach.
This thesis presents a new knowledge modeling paradigm for systemic risk analysis that goes beyond chemical plants by unifying different perspectives. First, we develop a unifying teleological, control theoretic framework to model decision-making knowledge in a complex system. The framework allows us to identify systematically the common failure mechanisms behind systemic failures in different domains. We show how cause-and-effect knowledge can be incorporated into this framework by using signed directed graphs. We also develop an ontology-driven knowledge modeling component and show how this can support decision-making by using a case study in public health emergency. This is the first such attempt to develop an ontology for public health documents. Lastly, from a control-theoretic perspective, we address the question, “how do simple individual components of a system interact to produce a system behavior that cannot be explained by the behavior of just the individual components alone?” Through this effort, we attempt to bridge the knowledge gap between control theory and complexity science
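The signed-directed-graph idea mentioned above can be sketched as sign propagation over a toy graph: nodes are process variables and signed edges state whether a disturbance propagates with the same or the opposite sign. The variables and edges below are invented, not taken from the thesis:

```python
# Toy signed-directed-graph (SDG) propagation: nodes are process
# variables; an edge sign of +1 means a deviation propagates with the
# same sign, -1 with the opposite sign.  The graph is invented.

def propagate(edges, start, sign=+1):
    """edges: dict node -> list of (neighbour, edge_sign). Returns the
    predicted deviation sign of every variable reachable from start."""
    effects, frontier = {start: sign}, [start]
    while frontier:
        node = frontier.pop()
        for nxt, s in edges.get(node, []):
            if nxt not in effects:            # keep the first explanation
                effects[nxt] = effects[node] * s
                frontier.append(nxt)
    return effects

sdg = {"feed_flow": [("level", +1)], "level": [("pressure", +1)],
       "cooling":   [("temperature", -1)]}
# A drop (-1) in feed flow predicts low level and then low pressure:
assert propagate(sdg, "feed_flow", -1) == \
       {"feed_flow": -1, "level": -1, "pressure": -1}
```

Tracing such qualitative cause-and-effect paths is what allows abnormality propagation to be followed across system boundaries without a full quantitative model.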