52 research outputs found

    Bayesian Test Design for Fault Detection and Isolation in Systems with Uncertainty

    Methods for Fault Detection and Isolation (FDI) in systems with uncertainty have been studied extensively, owing to the increasing value and complexity of maintaining and operating modern Cyber-Physical Systems (CPS). CPS are characterized by environmental and system uncertainty, fault complexity, and highly non-linear dynamics and fault propagation, which require advanced fault detection and isolation algorithms. Modern efforts therefore develop active FDI (methods that require system reconfiguration) based on information theory, designing tests rich in information for fault assessment. Information-based criteria for test design are often deployed as a Frequentist Optimal Experimental Design (FOED) problem, which utilizes the information matrix of the system. D- and Ds-optimality criteria for the information matrix have been used extensively in the literature, since they usually yield more robust test designs that are less susceptible to uncertainty. However, FOED methods provide only locally informative tests, as they find optimal solutions in a neighborhood of an anticipated set of values for system uncertainty and fault severity. Bayesian Optimal Experimental Design (BOED), on the other hand, overcomes the issue of local optimality by exploring the entire parameter space of a system, and can thus provide robust test designs for active FDI. The literature on BOED for FDI is limited and mostly examines the case of normally distributed parameter priors. In some cases, such as newly installed systems where existing knowledge about the parameters is limited, a more generalized inference can be derived by using uniform distributions as parameter priors. In BOED, an optimal design is found by maximizing an expected utility based on observed data. There is a plethora of utility functions, but the choice of utility function impacts both the robustness of the solution and the computational cost of BOED.
For instance, BOED based on the Fisher information matrix leads to an alphabetic criterion such as D- or Ds-optimality as the objective function of the BOED, but this also increases the computational cost of optimization, since these criteria involve sensitivity analysis of the system model. On the other hand, when an observation-based method such as the Kullback-Leibler divergence from posterior to prior is used to make an inference on the parameters, the expected-utility calculations involve nested Monte Carlo estimation, which in turn affects computation time. The challenge in these approaches is to find an adequate but relatively low number of Monte Carlo samples without introducing a significant bias in the result. Theory shows that for normally distributed parameter priors, the Kullback-Leibler divergence expected utility reduces to Bayesian D-optimality; similarly, Bayesian Ds-optimality can be used when the parameter priors are normally distributed. In this thesis, we verify this theory on a three-tank system, using normally and uniformly distributed parameter priors to compare the Bayesian D-optimal design criterion and the Kullback-Leibler divergence expected utility. Nevertheless, there is no observation-based metric similar to Bayesian Ds-optimality when the parameter priors are not normally distributed. The main objective of this thesis is to derive an observation-based utility function, similar to Ds-optimality, that can be used even when the requirement for normally distributed priors is not met. We begin with a formal comparison of FOED and BOED for different objective metrics, focusing on the impact different utility functions have on the optimal design and on their computation time. The value of BOED is illustrated using a variation of the benchmark three-tank system as a case study.
At the same time, we present the deterministic variance of the optimal design for different utility functions for this case study. The performance of the various BOED utility functions and the corresponding FOED optimal designs is compared in terms of the Hellinger distance, a distribution metric bounded between 0 and 1, where 0 indicates complete overlap of the distributions and 1 indicates the absence of common points between them. Analysis of the Hellinger distances calculated for the benchmark system shows that BOED designs better separate the distributions of system measurements and, consequently, can classify the fault scenarios and the no-fault case with less uncertainty. When a uniform distribution is used as a parameter prior, the observation-based utility functions give better designs than FOED and Bayesian D-optimality, which use the Fisher information matrix. The observation-based method similar to Ds-optimality finds a better design than the observation-based method similar to D-optimality, but it is computationally more expensive. The computational cost can be lowered by reducing the Monte Carlo sampling, but if the sampling is reduced too far, an uneven solution surface is created, affecting the FDI test design and assessment. Based on the results of this analysis, future research should focus on decreasing the computational cost without compromising test-design robustness.
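As a brief illustrative aside (not code from the thesis), the bounded Hellinger distance used for this comparison can be computed for discrete distributions as:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions.

    Bounded in [0, 1]: 0 means the distributions coincide completely,
    1 means they share no common support.
    """
    s = sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q))
    return math.sqrt(0.5 * s)

# Identical distributions -> 0.0; disjoint supports -> 1.0.
hellinger([0.5, 0.5], [0.5, 0.5])  # 0.0
hellinger([1.0, 0.0], [0.0, 1.0])  # 1.0
```

Because the metric is bounded, a design whose fault and no-fault measurement distributions score near 1 separates the scenarios almost completely.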

    Alternative evacuation procedures and smart devices' impact assessment for large passenger vessels under severe weather conditions

    Within the expansive domain of maritime safety, optimizing evacuation procedures is a critical endeavour; after all, evacuation is literally the last and most fundamental safety level afforded to mariners and passengers. Recent incidents have rekindled interest in assessing the performance of this ultimate safety barrier. However, addressing evacuability requires a holistic approach. The authors present herein the setup, simulation, and evaluation of a novel approach and its ability to rigorously assess multiple innovative risk-control options in a challenging, realistic setting. Moreover, the approach is benchmarked against conventional, regulation-dictated evacuation processes, capturing the relative effectiveness of each proposed measure. Such measures include smart technologies and procedural changes that can yield substantial improvements over current procedures. These findings will inform the ongoing discourse on maritime safety by providing insights for policymakers, vessel operators, emergency planners, and others, and they emphasize the need for further research and development to fortify the industry against evolving safety challenges.

    Machine learning and case-based reasoning for real-time onboard prediction of the survivability of ships

    The subject of damage stability has greatly profited from the development of new tools and techniques in recent history. Specifically, increased computational power and the probabilistic approach have transformed the subject, increasing accuracy and fidelity and hence allowing for universal application and the inclusion of the most probable scenarios. Currently, all ships are evaluated for their stability and are expected to survive the dangers they will most likely face. However, further advancements in simulations have made it possible to further increase the fidelity and accuracy of simulated casualties. Multiple time-domain and, to a lesser extent, Computational Fluid Dynamics (CFD) solutions have been suggested as the next “evolutionary” step for damage stability. However, while those techniques are demonstrably more accurate, the computational power to utilize them for probabilistic evaluation is not yet available. In this paper, the authors present a novel approach that serves as a stopgap measure for introducing time-domain simulations into the existing framework. Specifically, the methodology serves as a fast decision-support tool able to provide information on an ongoing casualty by utilizing prior knowledge gained from simulations. This work was developed for the purposes of the EU-funded project SafePASS.

    SafePASS: a new chapter for passenger ship evacuation and marine emergency response

    Despite the current high level of safety and the efforts to make passenger ships resilient to most fire and flooding scenarios, there are still gaps and challenges in the marine emergency response and ship evacuation processes. Those challenges arise from the fact that both processes are complex, multi-variable problems that depend not only on people and technology but also on procedural and managerial issues. The SafePASS project, funded under the EU's Horizon 2020 Research and Innovation Programme, is set to radically redefine evacuation processes by introducing new equipment, expanding the capabilities of legacy systems on board, proposing new Life-Saving Appliances and ship layouts, and challenging the current international regulations, hence reducing uncertainty and increasing efficiency in all stages of the ship evacuation and abandonment process.

    SafePASS - Transforming marine accident response

    The evacuation of a ship is the last line of defence against human losses in extreme fire and flooding casualties. Since the establishment of the International Maritime Organisation (IMO), maritime safety has been its cornerstone, with the Safety of Life at Sea Convention (SOLAS) spearheading its relentless efforts to reduce risks to human life at sea. However, the times are changing. On one hand, we have the new opportunities created by the vast technological advances of today; on the other, we face new challenges, with the ever-increasing size of passenger ships and the societal pressure for continuous improvement of maritime safety. In this respect, the EU-funded Horizon 2020 Research and Innovation Programme project SafePASS, presented herein, aims to radically redefine the evacuation processes, the involved systems and equipment, and to challenge the international regulations for large passenger ships, in all environments, hazards, and weather conditions, independently of demographic factors. The project consortium brings together 15 European partners from industry, academia, and classification societies. The SafePASS vision and plan for a safer, faster, and smarter ship evacuation involves: i) a holistic and seamless approach to evacuation, addressing all stages from alarm to rescue, including the design of the next generation of life-saving appliances; and ii) the integration of ‘smart’ technology and Augmented Reality (AR) applications to provide individual guidance to passengers, regardless of their demographic characteristics or the hazard (flooding or fire), towards the optimal escape route.

    SafePASS Project: A Risk Modelling Tool for Passenger Ship Evacuation and Emergency Response Decision Support

    One of the biggest challenges in the field of maritime safety is the integration of all the systems related to evacuation and emergency response under one Decision Support Tool that can broadly cover all emergency cases and assist in the coordination of the evacuation process. Moreover, for a decision support tool to be useful, it must be able to calculate the Available Time to Evacuate based on real-time data, such as the passenger distribution on board and the various sensor data monitoring the damage and its propagation. To this end, the risk modelling tool developed in the SafePASS H2020 project is able to estimate potential fatalities both in the design phase and in real time, assessing the evacuation and abandonment risk dynamically, based on real-time data on passenger distribution, route, semantics, LSA availability, procedural changes, and damage-case (fire or flooding) propagation.

    The development and demonstration of an enhanced risk model for the evacuation process of large passenger vessels

    Evacuating a large and complex environment, such as a large passenger vessel, whether cruise or RoPax, is a safety-critical task that involves thousands of people in motion and a complex decision-making process. Despite the significant enhancement of maritime safety over the years, various hazards still pose threats to passengers and crew. To deal with this reality, the SafePASS project radically redefines the evacuation process by introducing novel technological solutions. In this context, this paper presents, in detail, an enhanced risk model for the ship evacuation process, intended to facilitate understanding of the actual risks of the process in fire and flooding accidents and to assess various risk control measures and options for risk mitigation. The risk model covers the entire event sequence in onboard emergencies, up to the survival-at-sea phase, and is constructed in two levels, combining event tree analysis and Bayesian networks. Results show the risk corresponding to the baseline scenarios for each accident case, which are verified against relevant IMO and EMSA studies, and an example risk control option (RCO) is introduced to the model to demonstrate its ability to assess an RCO's efficiency in terms of risk reduction.
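As a minimal, purely hypothetical sketch of how an event-tree layer of such a risk model aggregates risk (the probabilities, branch structure, and fatality counts below are illustrative placeholders, not values from the SafePASS model):

```python
def expected_fatalities(p_accident, branches):
    """Sum of path probability times consequence over one event tree.

    branches: list of (conditional_path_probability, fatalities) tuples
    whose conditional probabilities sum to 1. In a two-level model, the
    conditional probabilities themselves could be supplied by Bayesian
    networks rather than fixed numbers.
    """
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return p_accident * sum(p * n for p, n in branches)

# Hypothetical fire scenario branches (placeholder values).
fire_tree = [
    (0.90, 0),    # detected early, evacuation completed in time
    (0.08, 5),    # detected late, partial evacuation
    (0.02, 120),  # evacuation fails before abandonment
]
pll = expected_fatalities(p_accident=1e-3, branches=fire_tree)  # per ship-year
```

A risk control option would then be evaluated by re-running the tree with modified branch probabilities and comparing the resulting potential loss of life.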

    The NAD-Booster Nicotinamide Riboside Potently Stimulates Hematopoiesis through Increased Mitochondrial Clearance

    It has recently been shown that increased oxidative phosphorylation, as reflected by increased mitochondrial activity, together with impairment of the mitochondrial stress response, can severely compromise hematopoietic stem cell (HSC) regeneration. Here we show that the NAD(+)-boosting agent nicotinamide riboside (NR) reduces mitochondrial activity within HSCs through increased mitochondrial clearance, leading to increased asymmetric HSC divisions. NR dietary supplementation results in a significantly enlarged pool of progenitors, without concurrent HSC exhaustion, improves survival by 80%, and accelerates blood recovery after murine lethal irradiation and limiting-HSC transplantation. In immune-deficient mice, NR increased the production of human leucocytes from hCD34+ progenitors. Our work demonstrates for the first time a positive effect of NAD(+)-boosting strategies on the most primitive blood stem cells, establishing a link between HSC mitochondrial stress, mitophagy, and stem-cell fate decisions, and unveiling the potential of NR to improve the recovery of patients suffering from hematological failure, including after chemo- and radiotherapy.

    Proving the Effectiveness of the Fundamentals of Robotic Surgery (FRS) Skills Curriculum: A Single-blinded, Multispecialty, Multi-institutional Randomized Control Trial

    Objective: To demonstrate the noninferiority of the Fundamentals of Robotic Surgery (FRS) skills curriculum over current training paradigms and identify an ideal training platform. Summary Background Data: There is currently no validated, uniformly accepted curriculum for training in robotic surgery skills. Methods: Single-blinded parallel-group randomized trial at 12 international American College of Surgeons (ACS) Accredited Education Institutes (AEIs). Thirty-three robotic surgery experts and 123 inexperienced surgical trainees were enrolled between April 2015 and November 2016. Benchmarks (proficiency levels) on the 7 FRS Dome tasks were established based on expert performance. Participants were then randomly assigned to 4 training groups: Dome (n = 29), dV-Trainer (n = 30), and DVSS (n = 32), which trained to the benchmarks, and control (n = 32), which trained using locally available robotic skills curricula. The primary outcome was participant performance after training, based on task errors and duration on 5 basic robotic tasks (knot tying, continuous suturing, cutting, dissection, and vessel coagulation) using an avian tissue model (transfer test). Secondary outcomes included cognitive test scores, GEARS ratings, and robot familiarity checklist scores. Results: All groups demonstrated significant performance improvement after skills training (P < 0.01). Participating residents and fellows performed tasks faster (Dome and DVSS groups) and with fewer errors than controls (Dome group; P < 0.01). Inter-rater reliability was high for the checklist scores (0.82–0.97) but moderate for GEARS ratings (0.40–0.67). Conclusions: We provide evidence of the effectiveness of the FRS curriculum by demonstrating better performance of those trained following FRS compared with controls on a transfer test. We therefore argue for its implementation across training programs before surgeons apply these skills clinically.