
    Integration of a failure monitoring within a hybrid dynamic simulation environment

    The complexity and size of industrial chemical processes require the monitoring of a growing number of process variables. Knowledge of these variables is generally based on measurements of system variables and on physico-chemical models of the process. Nevertheless, this information is imprecise because of process and measurement noise, so research efforts aim at developing new and more powerful techniques for process fault detection. In this work, we present a fault detection method based on the comparison between the real system and the evolution of a reference model generated by an extended Kalman filter. The reference model is simulated by the hybrid dynamic simulator PrODHyS, a general object-oriented environment which provides common and reusable components designed for the development and management of dynamic simulations of industrial systems. The use of this method is illustrated through a didactic example from the field of Chemical Process System Engineering.
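    As a hedged illustration of the residual test described above, the sketch below uses a scalar linear model and a standard (non-extended) Kalman filter as the reference model: a fault is flagged when the innovation leaves a conservative sigma bound. The system, noise levels, fault size, and the 5-sigma bound are illustrative assumptions, not the PrODHyS case study.

```python
import random

def detect_fault(measurements, a=0.95, q=1e-4, r=1e-2):
    """Flag sample indices where the innovation exceeds a 5-sigma bound."""
    x, p = measurements[0], 1.0           # initial state estimate and covariance
    alarms = []
    for k, z in enumerate(measurements[1:], start=1):
        x, p = a * x, a * a * p + q       # predict
        nu, s = z - x, p + r              # innovation and its variance
        if abs(nu) > 5.0 * s ** 0.5:      # residual vs. threshold from the model
            alarms.append(k)
        g = p / s                         # Kalman gain
        x, p = x + g * nu, (1.0 - g) * p  # update
    return alarms

random.seed(0)
true, zs = 1.0, []
for k in range(100):
    true *= 0.95
    bias = 2.0 if k >= 60 else 0.0        # abrupt sensor fault injected at k = 60
    zs.append(true + bias + random.gauss(0.0, 0.1))

alarms = detect_fault(zs)
print(alarms[0])
```

    The same comparison generalizes to the nonlinear case by replacing the prediction step with the extended Kalman filter linearized along the simulated reference trajectory.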

    Supervisory Control of Fuzzy Discrete Event Systems

    In order to cope with situations in which a plant's dynamics are not precisely known, we consider the problem of supervisory control for a class of discrete event systems modelled by fuzzy automata. The behavior of such discrete event systems is described by fuzzy languages; the supervisors are event-feedback and can disable controllable events to any degree. The concept of discrete event system controllability is thus extended by incorporating fuzziness. In this new sense, we present a necessary and sufficient condition for a fuzzy language to be controllable. We also study the supremal controllable fuzzy sublanguage and the infimal controllable fuzzy superlanguage when a given pre-specified desired fuzzy language is uncontrollable. Our framework generalizes that of Ramadge-Wonham and reduces to the Ramadge-Wonham framework when membership grades in all fuzzy languages must be either 0 or 1. The theoretical development is accompanied by illustrative numerical examples.
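    The controllability condition can be sketched concretely. The code below uses the form common in the fuzzy-DES literature: a fuzzy language is a mapping from strings to membership grades in [0, 1], and K is controllable with respect to the plant language M iff min(K(s), M(su)) <= K(su) for every string s and uncontrollable event u. The toy languages are illustrative assumptions, not examples from the paper.

```python
def grade(lang, s):
    """Membership grade of string s; unlisted strings have grade 0."""
    return lang.get(s, 0.0)

def is_controllable(K, M, uncontrollable):
    """Check the fuzzy controllability condition over the support of K."""
    return all(
        min(grade(K, s), grade(M, s + u)) <= grade(K, s + u)
        for s in K
        for u in uncontrollable
    )

M  = {"": 1.0, "a": 0.8, "ab": 0.6, "au": 0.7}   # plant's fuzzy language
K1 = {"": 1.0, "a": 0.8, "au": 0.7}              # keeps the uncontrollable extension
K2 = {"": 1.0, "a": 0.8}                         # drops "au" entirely

print(is_controllable(K1, M, {"u"}))
print(is_controllable(K2, M, {"u"}))
```

    K2 fails because the supervisor would have to suppress the uncontrollable event "u" below the degree min(K2("a"), M("au")) = 0.7, which no admissible supervisor can do; with crisp grades (0 or 1) the test reduces to the classical Ramadge-Wonham condition.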

    Challenging the Sustainable Social Progress

    This presentation, held at the ceremony granting the title of Doctor Honoris Causa of the West University of Timisoara, is based on a paper presenting a synthetic view of some economic and financial issues of our times, with particular attention to the recent crisis, and noting a few remarkable ideas about the mentality of humankind. Taking a broad view of the economy, the author considers aspects of physics and of other fields of modern science, including philosophy and the irreversibility of time, which suit our complex reality full of uncertainty, with the aim of addressing economic issues and contributing to social progress.

    Scheduling of offshore wind farm installation using simulated annealing

    This paper focuses on the scheduling problem in the offshore wind farm installation process, which is strongly influenced by offshore weather conditions. Because offshore weather is only partially predictable and is uncontrollable, it is essential to find a way to schedule the installation process effectively and economically. For this purpose, this work presents a model of the installation process based on a Timed Petri Net (TPN) approach and applies a simulated annealing algorithm to find an optimal schedule.
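    As a rough illustration of the optimization step, the sketch below anneals over task orderings for a toy installation plan in which each task needs a contiguous window of workable-weather days. The durations, the weather pattern, and the no-interruption assumption are illustrative; the paper's Timed Petri Net model is not reproduced here.

```python
import math
import random

def makespan(order, durations, weather_ok):
    """Finish day when each task needs a contiguous run of workable days."""
    day = 0
    for t in order:
        d = durations[t]
        # wait until the next window of d consecutive workable days
        while not all(weather_ok[day + k] for k in range(d)):
            day += 1
        day += d
    return day

def anneal(durations, weather_ok, n_iter=2000, temp=5.0, cooling=0.995):
    """Simulated annealing over task orderings via pairwise swaps."""
    order = list(range(len(durations)))
    cur = best = makespan(order, durations, weather_ok)
    best_order = order[:]
    for _ in range(n_iter):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # propose a swap
        cand = makespan(order, durations, weather_ok)
        if cand <= cur or random.random() < math.exp((cur - cand) / temp):
            cur = cand                               # accept (maybe uphill)
            if cand < best:
                best, best_order = cand, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
        temp *= cooling
    return best, best_order

random.seed(1)
durations = [3, 2, 4, 1, 2]                 # days of work per task
weather_ok = [d % 7 < 5 for d in range(60)] # toy weather: 5 workable days out of 7
best, order = anneal(durations, weather_ok)
print(best)
```

    In the paper's setting the cost function would be evaluated by simulating the Timed Petri Net under a weather scenario instead of the `makespan` helper above; the annealing loop itself is unchanged.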

    Modeling and verification of Functional and Non-Functional Requirements of ambient Self-Adaptive Systems

    Self-Adaptive Systems modify their behavior at run-time in response to changing environmental conditions. For these systems, Non-Functional Requirements play an important role, and the requirements that are adaptable must be identified as early as possible. We propose an integrated approach for modeling and verifying the requirements of Self-Adaptive Systems using Model-Driven Engineering techniques. We use Relax, a Requirements Engineering language that introduces flexibility into Non-Functional Requirements, and then apply the concepts of Goal-Oriented Requirements Engineering to elicit and model the requirements of Self-Adaptive Systems. For property verification, we use the OMEGA2/IFx profile and toolset. We illustrate the proposed approach by applying it to an academic case study.

    Development of a fuzzy-rule-based expert system for flood forecasting in the meso-scale catchment of the Upper Main

    People worldwide are faced with flood events of different magnitudes. A timely and reliable flood forecast is essential to save property and, more importantly, lives. The main objective of this work is the development of a fuzzy rule based flood forecast system for extreme flood events with return periods of 100 years and more within meso-scale catchments, taking uncertainties into account. Within a single river catchment, extreme flood events are usually rare, yet such data are essential for a reliable setup of warning systems. The database is therefore extended by simulations of possible flood events with the hydrological model WaSiM-ETH (Water balance Simulation model ETH), driven by precipitation fields generated by Bliefernicht et al. (2008). The required calibration of the hydrological model is performed with the genetic optimization algorithm SCE (Shuffled Complex Evolution); different SCE configurations are investigated and an optimization strategy for the meso-scale Upper Main basin is developed in order to obtain reproducible and satisfying calibration results while keeping the calibration time low.
    The developed forecast system comprises different time horizons (3 days; 6, 12, and 48 hours) in order to ensure a reliable and continuous flood forecast at the three main gauges of the Upper Main river, with the individual fuzzy inference systems focusing on different discharge conditions. The 3-day forecast concentrates on reliably reproducing low and medium discharge conditions and on the timely prediction of exceedances of a predefined warning level; a predicted exceedance triggers a switch from the 3-day system to the 6-, 12-, and 48-hour systems, whose focus is the flood hydrograph. The performance of the two classical fuzzy inference systems, Mamdani and Takagi-Sugeno, is investigated for all four forecast horizons, taking a wide variety of input features into consideration, among them the Tukey data depth. The fuzzy inference systems are trained with the SA (Simulated Annealing) optimization algorithm, and a further performance comparison considers the 48-hour forecast behaviour of the two fuzzy inference systems and the hydrological model WaSiM-ETH.
    The expert system ExpHo-HORIX (Expert system Hochwasser - HORIX) is developed in order to combine the single trained fuzzy inference systems into one overall flood warning system. Besides fast forecasts, it ensures a quantification of uncertainties within a manageable, user-friendly, and transparent framework which can easily be implemented into an existing environment: for every forecast, the precipitation uncertainty derived from ensemble forecasts is analyzed and reported by default, and in a flood situation the model uncertainties of the hydrological model used to generate the database of extreme events can additionally be reported for the hourly fuzzy inference systems, provided results of the SCEM analysis (Grundmann, 2009) are available.
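    One Mamdani inference step of the kind such a rule system performs can be sketched as follows: two rules mapping precipitation to discharge, min-implication, max-aggregation, and centroid defuzzification. All membership functions, rules, and ranges are illustrative assumptions, not the calibrated ExpHo-HORIX rule base; the Takagi-Sugeno variant would replace the fuzzy consequents with (linear) functions of the inputs.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rain_low(x):    # antecedent over precipitation [mm/day] (illustrative)
    return tri(x, -1, 0, 20)

def rain_high(x):
    return tri(x, 10, 40, 200)

def forecast(rain_mm):
    """Centroid-defuzzified discharge [m^3/s] on a 0..200 grid."""
    w_low, w_high = rain_low(rain_mm), rain_high(rain_mm)
    num = den = 0.0
    for q in range(201):
        # Rule 1: low rain -> low discharge; Rule 2: high rain -> high discharge
        mu = max(min(w_low,  tri(q, -1, 20, 60)),
                 min(w_high, tri(q, 40, 120, 200)))
        num += q * mu
        den += mu
    return num / den if den else 0.0

print(forecast(5) < forecast(35))
```

    A trained system of this shape has many such rules over several input features (antecedent parameters tuned by simulated annealing); the inference mechanics per time step are the same.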

    Contribution to the evaluation and optimization of passengers' screening at airports

    Security threats have emerged in the past decades as an increasingly critical issue for air transportation, which has been one of the main resources for the globalization of the economy. Reinforced control measures based on multidisciplinary research and new technologies have been implemented at airports in reaction to different terrorist attacks. From the scientific perspective, the efficient screening of passengers at airports remains a challenge, and the main objective of this thesis is to open new lines of research in this field by developing advanced approaches using the resources of computer science. First, this thesis introduces the main concepts and definitions of airport security and gives an overview of passenger terminal control systems; more specifically, the screening inspection positions are identified and described. A logical model of the departure control system for passengers at an airport is proposed. This model is transcribed into a graphical view (Controlled Satisfiability Graph, CSG) which allows the screening system to be tested against different attack scenarios. Then a probabilistic approach for evaluating the control system of departing passenger flows is developed, leading to the introduction of Bayesian Colored Petri Nets (BCPN). Finally, an optimization approach is adopted to organize the flow of departing passengers as well as possible given the probabilistic performance of the elements composing the control system. After the establishment of a global evaluation model based on undifferentiated serial processing of passengers, a two-stage control structure is analyzed, which highlights the interest of pre-filtering and organizing passengers into separate groups. The conclusion points out directions for continuing this line of research.
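    The serial versus two-stage comparison can be made concrete with elementary probability. The sketch below is a hedged illustration with made-up detection probabilities; it is not the thesis's Bayesian Colored Petri Net model.

```python
def serial_detection(ps):
    """P(threat detected) when a passenger passes independent checks in series."""
    miss = 1.0
    for p in ps:
        miss *= 1.0 - p            # a threat must slip past every checkpoint
    return 1.0 - miss

def two_stage_detection(p_flag, p_thorough, p_light):
    """Pre-filter flags a threat with probability p_flag and routes it to the thorough line."""
    return p_flag * p_thorough + (1.0 - p_flag) * p_light

uniform = serial_detection([0.7, 0.7])    # the same two checks for every passenger
staged = two_stage_detection(0.9, serial_detection([0.7, 0.9]), 0.7)
print(round(uniform, 3), round(staged, 3))
```

    Under these made-up numbers the pre-filtered design detects more threats while concentrating the thorough (and slower) screening on a small flagged group, which is the kind of trade-off the two-stage analysis quantifies.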

    Integration Techniques of Fault Detection and Isolation Using Interval Observers

    An interval observer has been shown to be a suitable approach to detect and isolate faults affecting complex dynamical industrial systems. Concerning fault detection, interval observation is an appropriate passive robust strategy to generate an adaptive threshold for residual evaluation when model uncertainty is located in the parameters (interval model). In such an approach, the observer gain is a key parameter, since it determines the time evolution of the residual sensitivity to a fault and the minimum detectable fault. This thesis shows that the whole fault detection process is ruled by the dynamics of the fault residual sensitivity functions and by the time evolution of the adaptive threshold associated with the interval observer. Both of these fault detection properties depend on the observer gain. As a consequence, the observer gain becomes a tuning parameter which allows the observer's fault detection performance to be enhanced while avoiding some drawbacks of analytical models, such as the wrapping effect. In this thesis, the effect of the observer gain on fault detection, and how this parameter can avoid some observer drawbacks (e.g., the wrapping effect), are analyzed in depth. One result of this analysis is the determination of the minimum detectable fault function for a given fault type. This function allows faults to be classified according to the time evolution of their detectability: permanently (strongly) detected, non-permanently (weakly) detected, or simply non-detected. In the fault detection part of this thesis, two examples are used to illustrate the derived results: a mineral grinding-classification process and an industrial servo actuator.
    Concerning the interface between fault detection and fault isolation, this thesis shows that the two modules cannot be considered separately, since the fault detection process has an important influence on the fault isolation result. This influence is due not only to the time evolution of the fault signals generated by the fault detection module but also to the fact that the fault residual sensitivity functions determine which faults affect a given fault signal and the dynamics of that signal for each fault. This thesis therefore suggests that the interface between fault detection and fault isolation consider a set of fault signal properties: the binary property, the sign property, the fault residual sensitivity property, the occurrence order property, and the occurrence time instant property. Moreover, as a result of the influence of the observer gain on the fault detection stage and on the fault residual sensitivity functions, this thesis demonstrates that the observer gain also plays a key role in the fault isolation module, whose performance can be enhanced when this parameter is tuned properly (i.e., fault distinguishability may be increased). As a last point, this thesis analyzes the timed discrete-event nature of the fault signals generated by the fault detection module and consequently suggests using timed discrete-event models for the fault isolation module; this kind of model is shown to enhance the fault isolation result. Moreover, since the monitored system is modelled by an interval observer, this thesis shows how this qualitative fault isolation model can be built on the grounds of the system's analytical model. Finally, the proposed fault isolation method is applied to detect and isolate faults in the limnimeters of Barcelona's urban sewer system.
    Keywords: Fault Detection, Fault Diagnosis, Robustness, Observers, Intervals, Discrete-Event Systems.
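    The adaptive-threshold idea and the role of the gain can be sketched in a minimal scalar example, assuming a first-order plant with an uncertain parameter in [0.85, 0.95] and an additive output fault; the gain L (with 0 <= L <= 0.85 so the envelope ordering is preserved) is the tuning knob discussed above. None of the numbers come from the thesis's case studies.

```python
def run(L=0.0, fault_at=30, fault=0.5, n=60):
    """Interval observer with gain L; returns sample indices that raise an alarm."""
    a = 0.9                         # true plant parameter (unknown to the observer)
    a_lo, a_hi = 0.85, 0.95         # parameter interval; requires 0 <= L <= a_lo
    x = x_lo = x_hi = 1.0           # true state and envelope start identical
    alarms = []
    for k in range(n):
        y = x + (fault if k >= fault_at else 0.0)  # measurement with additive fault
        if not (x_lo <= y <= x_hi):                # adaptive-threshold residual test
            alarms.append(k)
        x = a * x                                  # plant step
        x_lo = (a_lo - L) * x_lo + L * y           # interval observer step (lower)
        x_hi = (a_hi - L) * x_hi + L * y           # interval observer step (upper)
    return alarms

print(run(L=0.0)[0], run(L=0.5)[0])
```

    With L = 0 the envelope is a pure interval predictor; a nonzero gain contracts the envelope around the measurements, which is how the gain shapes the minimum detectable fault and, in higher-order cases, helps contain the wrapping effect.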