
    Using a high fidelity CCGT simulator for building prognostic systems

    Pressure to reduce maintenance costs in power utilities has resulted in growing interest in prognostic monitoring systems. Accurate prediction of the occurrence of faults and failures would result not only in improved system maintenance schedules but also in improved availability and system efficiency. The desire for such a system has driven research into the emerging field of prognostics for complex systems. At the same time, there is a general move towards implementing high fidelity simulators of complex systems, especially within the power generation field, with the nuclear power industry taking the lead. Whilst the simulators mainly function in a training capacity, the high fidelity of the simulations also allows representative data to be gathered. Using simulators in this way enables systems and components to be damaged, run to failure and reset, all without cost or danger to personnel, and allows fault scenarios to be run faster than real time. Consequently, failure data can be gathered that is otherwise unavailable or limited, enabling analysis and research of fault progression in critical and high value systems. This paper presents a case study of utilising a high fidelity industrial Combined Cycle Gas Turbine (CCGT) simulator to generate fault data, and shows how this can be employed to build a prognostic system. Advantages and disadvantages of this approach are discussed.
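
    As a rough illustration of how run-to-failure data exported from such a simulator might feed a prognostic model, the Python sketch below fits a remaining-useful-life regressor on synthetic stand-in data; the sensor layout and the choice of gradient boosting are illustrative assumptions, not the paper's method.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        # Stand-in for simulator exports: each row is a snapshot of sensor
        # readings from a run-to-failure scenario, labelled with the
        # remaining useful life (RUL) in hours at that moment.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 8))  # 8 assumed sensor channels
        y = 200.0 - 40.0 * X[:, 0] + rng.normal(scale=5.0, size=500)  # synthetic RUL

        model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
        model.fit(X, y)

        # At deployment, a live sensor snapshot yields an RUL estimate
        # that can feed maintenance scheduling.
        print(f"Estimated RUL: {model.predict(X[:1])[0]:.1f} h")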

    Leveraging Ada 2012 and SPARK 2014 for assessing generated code from AADL models

    Modeling of Distributed Real-time Embedded systems using an Architecture Description Language provides the foundations for various levels of analysis: scheduling, reliability, consistency, etc.; it also allows for automatic code generation. A challenge is to demonstrate that the generated code matches the quality required for safety-critical systems. In the scope of AADL, the Ocarina toolchain proposes code generation towards the Ada Ravenscar profile with restrictions for High-Integrity systems. It has been used extensively in the space domain as part of the TASTE project within the European Space Agency. In this paper, we illustrate how the combined use of Ada 2012 and SPARK 2014 significantly increases code quality and demonstrates the absence of run-time errors at both the run-time and the generated-code levels.
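
    SPARK 2014 expresses subprogram contracts (preconditions, postconditions) that are proved statically; as a loose analogy only, and in Python rather than Ada, the sketch below checks contract-style assertions at run time on a bounded buffer reminiscent of Ravenscar-style static allocation. The class and its bounds are invented for illustration.

        # Rough Python analogy of SPARK-style contracts; SPARK proves these
        # statically, whereas plain assertions only check them at run time.
        class BoundedQueue:
            def __init__(self, capacity: int):
                assert capacity > 0                        # precondition
                self._items, self._capacity = [], capacity

            def push(self, item) -> None:
                assert len(self._items) < self._capacity   # Pre: not full
                self._items.append(item)
                assert len(self._items) <= self._capacity  # Post: bound kept

            def pop(self):
                assert self._items                         # Pre: not empty
                return self._items.pop(0)

        q = BoundedQueue(capacity=2)
        q.push("msg-1")
        print(q.pop())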

    Causality and Temporal Dependencies in the Design of Fault Management Systems

    Reasoning about causes and effects naturally arises in the engineering of safety-critical systems. A classical example is Fault Tree Analysis, a deductive technique used for system safety assessment, whereby an undesired state is reduced to the set of its immediate causes. The design of fault management systems also requires reasoning on causality relationships. In particular, a fail-operational system needs to ensure timely detection and identification of faults, i.e. recognize the occurrence of run-time faults through their observable effects on the system. Even more complex scenarios arise when multiple faults are involved and may interact in subtle ways. In this work, we propose a formal approach to fault management for complex systems. We first introduce the notions of fault tree and minimal cut sets. We then present a formal framework for the specification and analysis of diagnosability, and for the design of fault detection and identification (FDI) components. Finally, we review recent advances in fault propagation analysis, based on the Timed Failure Propagation Graphs (TFPG) formalism. (In Proceedings CREST 2017, arXiv:1710.0277)
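
    To make the notion of minimal cut sets concrete, here is a small hypothetical Python sketch that extracts them from a toy fault tree by brute-force enumeration; industrial tools use BDD- or SAT-based algorithms instead, and the tree itself is invented.

        from itertools import combinations

        # Toy fault tree: TOP = (A AND B) OR C, over basic events A, B, C.
        BASIC = ["A", "B", "C"]

        def top(failed):  # failed: set of failed basic events
            return ("A" in failed and "B" in failed) or ("C" in failed)

        # A cut set is any set of basic events whose failure triggers TOP.
        cut_sets = [set(c) for r in range(1, len(BASIC) + 1)
                    for c in combinations(BASIC, r) if top(set(c))]

        # A cut set is minimal if no proper subset is also a cut set.
        minimal = [c for c in cut_sets
                   if not any(o < c for o in cut_sets)]
        print(minimal)  # [{'C'}, {'A', 'B'}] (element order may vary)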

    Modelling probabilistic cache representativeness in the presence of arbitrary access patterns

    Measurement-Based Probabilistic Timing Analysis (MBPTA) is a promising, industry-friendly method to derive worst-case execution time (WCET) estimates as needed for critical real-time embedded systems. MBPTA performs several (R) runs of the program on the target platform, collecting the execution times of each run. MBPTA builds a probabilistic representativeness argument on whether those events with a high impact on execution time, such as cache misses, arise in the runs made at analysis time, so that their impact on execution time is captured. So far, only events occurring in cache memories have been shown to challenge such a representativeness argument. In this context, this paper introduces a representativeness validation method (RVS) to assess the probabilistic representativeness of MBPTA's execution time observations in terms of cache behaviour. RVS resorts to cache simulation to predict worst-case miss scenarios that can appear during the deployment phase. RVS also constructs a probabilistic worst-case miss count curve based on the miss counts captured in the R runs. If that curve upper-bounds the impact of the predicted cache worst-case scenarios, R is deemed a sufficient number of runs from which probabilistic WCET (pWCET) estimates can be reliably derived. Otherwise, the user is requested to perform more runs until all cache scenarios of interest are captured.
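
    The RVS check can be pictured with the hypothetical Python sketch below: an empirical tail bound on the miss counts observed in the R analysis-time runs is compared against a worst-case miss count predicted by cache simulation. The data, the exceedance probability, and the simple exponential tail fit are illustrative assumptions; MBPTA relies on far more careful extreme value modelling.

        import numpy as np

        observed_misses = np.array([112, 118, 120, 125, 131, 140, 150, 163])  # R runs
        predicted_worst_case = 170   # from the cache simulator (assumed value)
        target_exceedance = 1e-3     # pWCET-style exceedance probability

        # Crude exponential fit to the upper tail of the observations.
        tail = np.sort(observed_misses)[-4:]
        rate = 1.0 / (tail.mean() - tail.min())
        bound = tail.min() + np.log(1.0 / target_exceedance) / rate

        if bound >= predicted_worst_case:
            print(f"R sufficient: miss-count bound {bound:.0f} covers "
                  f"{predicted_worst_case}")
        else:
            print("Predicted worst case not covered: collect more runs")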

    Multiplexing Adaptive with Classic AUTOSAR? Adaptive Software Control to Increase Resource Utilization in Mixed-Critical Systems

    Automotive embedded systems need to cope with antagonistic requirements: on the one hand, users and market pressure push car manufacturers to integrate more and more services that go far beyond the control of the car itself. On the other hand, recent standardization efforts in the safety domain have led to the ISO 26262 norm, which defines means and requirements to ensure the safe operation of automotive embedded systems. In particular, it led to the definition of ASILs (Automotive Safety Integrity Levels), i.e., it formally defines several criticality levels. Handling the increased complexity of new services makes new architectures, such as multi- or many-cores, appealing choices for the car industry. Yet, these architectures provide a very low level of timing predictability due to shared resources, which contradicts the timing guarantees required by ISO 26262. For tasks at the highest criticality level, Worst-Case Execution Time (WCET) analysis is required to guarantee that timing constraints are respected. WCET analyzers consider the worst-case scenario: whenever a critical task accesses a shared resource in a multi/many-core platform, a WCET analyzer assumes that all cores use the same resource concurrently. To improve system performance, we proposed in an earlier work an approach where a critical task can run in parallel with less critical tasks, as long as the real-time constraints are met. When no further interference can be tolerated, the proposed run-time control suspends the low-criticality tasks until the termination of the critical task. In an automotive context, the approach can be translated as a highly critical partition, namely a classic AUTOSAR one, that runs on one dedicated core, with several cores running less critical Adaptive AUTOSAR application(s). We briefly describe the design of our proven-correct approach. Our strategy is based on a graph grammar to formally model the critical task as a set of control flow graphs, on which a safe partial WCET analysis is applied and used at run-time to control the safe execution of the critical task.
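
    The run-time control can be sketched, under invented names and budget values, as a monitor that decrements an interference budget derived from the partial WCET analysis and suspends the low-criticality tasks once it is exhausted; the Python below is a schematic of that idea, not the actual mechanism.

        # Schematic interference-budget monitor (all names and values are
        # illustrative): low-criticality tasks may share resources with the
        # critical task only while the budget holds.
        class InterferenceMonitor:
            def __init__(self, budget_accesses: int):
                # Tolerable shared-resource accesses, from offline analysis.
                self.remaining = budget_accesses

            def on_low_crit_access(self, suspend_low_crit):
                """Called on each shared-resource access by a low-criticality core."""
                self.remaining -= 1
                if self.remaining <= 0:
                    suspend_low_crit()  # critical task now runs in isolation

        monitor = InterferenceMonitor(budget_accesses=1)
        monitor.on_low_crit_access(lambda: print("suspending low-criticality tasks"))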

    Non-Parametric Statistical Approach to Correct Satellite Rainfall Data in Near-Real-Time for Rain-Based Flood Nowcasting

    Floods resulting from intense rainfall are one of the most disastrous hazards in many regions of the world, since they contribute greatly to personal injury and property damage, mainly as a result of their ability to strike with little warning. The possibility of giving an alert about a flooding situation at least a few hours in advance helps greatly to reduce the damage. Therefore, scores of flood forecasting systems have been developed during the past few years, mainly at country and regional level. Flood forecasting systems based only on traditional methods, such as return periods of flooding situations or extreme rainfall events, have failed on most occasions to forecast flooding situations accurately because of territorial changes in recent years driven by extensive infrastructure development, the increased frequency of extreme rainfall events over recent decades, etc. Nowadays, flood nowcasting systems or early warning systems which run on real-time precipitation data are becoming more popular, as they give reliable forecasts compared to traditional flood forecasting systems. However, these kinds of systems are often limited to developed countries, as they need well distributed gauging station networks or sophisticated surface-based radar systems to collect real-time precipitation data. In most developing countries, and in some developed countries as well, precipitation data from the available sparse gauging stations are inadequate for developing the representative areal samples needed by such systems. As satellites are able to provide global coverage with continuous temporal availability, the possibility of using satellite-based rainfall estimates in flood nowcasting systems is currently being intensively investigated. To contribute to the world's requirement for flood early warning systems, ITHACA developed a global scale flood nowcasting system that runs on near-real-time satellite rainfall estimates. The system was developed in cooperation with the United Nations World Food Programme (WFP), to support the preparedness phase of humanitarian assistance agencies such as the WFP, mainly in less developed countries. The concept behind this early warning system is to identify critical rainfall events for each hydrological basin on Earth from past rainfall data and to use them to flag potentially flood-producing rainfall events in real-time rainfall data. The identification of critical rainfall events was done with a hydrological analysis using the 3B42 rainfall data, the most accurate product of the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) dataset. These critical events have been stored in a database, and when a real-time rainfall event is found that matches or exceeds the event in the database, an alert is issued for the basin area. The most accurate product of TMPA (3B42) is derived by applying bias adjustments to real-time rainfall estimates using rain gauge data; thus it is available to end-users 10-15 days after each calendar month. The real-time product of TMPA (3B42RT) is released approximately 9 hours after real time but lacks such bias adjustments, since rain gauge data are not available in real time. Therefore, to obtain reliable alerts it is very important to reduce the uncertainty of the 3B42RT product before using it in the early warning system. For this purpose, a statistical approach was proposed to make near-real-time bias adjustments to the near-real-time product of TMPA (3B42RT).

    In this approach, the relationship between the bias-adjusted rainfall data product (3B42) and the real-time rainfall data product (3B42RT) was analyzed on the basis of drainage basins for the period from January 2003 to December 2007, and correction factors were derived for each basin worldwide to estimate a near-real-time bias-adjusted product from the real-time rainfall data product (3B42RT). The accuracy of the product was analyzed by comparison with gauge rainfall data from Bangladesh, and it was found that the uncertainty of the corrected product is even lower than that of the most accurate product of the TMPA dataset (3B42).
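
    A minimal Python sketch of the per-basin correction idea, with invented column names and toy calibration data: a multiplicative factor relating gauge-adjusted 3B42 totals to real-time 3B42RT totals is derived per basin and applied to incoming 3B42RT estimates.

        import pandas as pd

        # Stand-in for the 2003-2007 calibration table: per-basin rainfall
        # from the real-time product (rt_mm) and the adjusted one (adj_mm).
        hist = pd.DataFrame({
            "basin_id": ["basin_001", "basin_001", "basin_002"],
            "rt_mm":    [120.0, 95.0, 60.0],
            "adj_mm":   [100.0, 80.0, 66.0],
        })

        # Multiplicative correction factor per basin: adjusted / real-time.
        totals = hist.groupby("basin_id")[["rt_mm", "adj_mm"]].sum()
        factors = totals["adj_mm"] / totals["rt_mm"]

        def correct_realtime(basin_id: str, rt_rainfall_mm: float) -> float:
            """Bias-adjust a near-real-time 3B42RT estimate for one basin."""
            return rt_rainfall_mm * factors.get(basin_id, 1.0)

        print(correct_realtime("basin_001", 35.0))  # ~29.3 mm after correction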

    Data-driven Adaptive Safety Monitoring using Virtual Subjects in Medical Cyber-Physical Systems: A Glucose Control Case Study

    Medical cyber-physical systems (MCPS) integrate sensors, actuators, and software to improve patient safety and the quality of healthcare. These systems introduce major challenges to safety analysis because the patient's physiology is complex, nonlinear, unobservable, and uncertain. To cope with the challenge that unidentified physiological parameters may exhibit short-term variances in certain clinical scenarios, we propose a novel run-time predictive safety monitoring technique that leverages a maximal model coupled with online training of a computational virtual subject (CVS) set. The proposed monitor predicts safety-critical events at run-time using only clinically available measurements. We apply the technique to a surgical glucose control case study. Evaluation on retrospective real clinical data shows that the algorithm achieves 96% sensitivity with a low average false alarm rate of 0.5 false alarms per surgery.
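
    A simplified sketch of such a predictive monitor, in Python with invented parameters: each candidate virtual subject forecasts glucose over a short horizon, and an alarm fires if any forecast crosses a hypoglycemia threshold. The linear dynamics are a stand-in, not the paper's CVS formulation.

        import numpy as np

        THRESHOLD_MG_DL = 70.0   # assumed hypoglycemia threshold
        HORIZON_STEPS = 6        # e.g. 30 min at 5-min samples (assumed)

        def forecast(glucose_history, slope):
            """One virtual subject: linear extrapolation with an assumed slope."""
            return glucose_history[-1] + slope * np.arange(1, HORIZON_STEPS + 1)

        def monitor(glucose_history, subject_slopes):
            """Alarm if any candidate subject predicts a threshold crossing."""
            trajectories = [forecast(glucose_history, s) for s in subject_slopes]
            return any(t.min() < THRESHOLD_MG_DL for t in trajectories)

        history = [110.0, 104.0, 97.0, 91.0]    # mg/dL samples (illustrative)
        slopes = [-1.0, -3.5, -6.0]             # candidate CVS dynamics (assumed)
        if monitor(history, slopes):
            print("ALARM: predicted hypoglycemia within horizon")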