1,654 research outputs found

    Use of evidential reasoning for eliciting Bayesian subjective probabilities in human reliability analysis: A maritime case

    Modelling the interdependencies among the factors influencing human error (e.g. the common performance conditions (CPCs) in the Cognitive Reliability Error Analysis Method (CREAM)) stimulates the use of Bayesian Networks (BNs) in Human Reliability Analysis (HRA). However, subjective probability elicitation for a BN is often a daunting and complex task. Creating conditional probability values for each variable in a BN requires a high degree of knowledge and engineering effort, often from a group of domain experts. This paper presents a novel hybrid approach that incorporates the evidential reasoning (ER) approach with BNs to facilitate HRA under incomplete data. The kernel of this approach is to develop the best and the worst possible conditional subjective probabilities of the nodes representing the factors influencing HRA when using BNs in human error probability (HEP) estimation. The proposed hybrid approach is demonstrated by using CREAM to estimate HEP in the maritime area. The findings from the hybrid ER-BN model can effectively facilitate HEP analysis in particular and decision-making under uncertainty in general.
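    As a rough illustration of the elicitation idea in this abstract, the sketch below marginalises best-case and worst-case conditional probabilities over a single influencing factor to obtain an interval for the HEP. The CPC name, its prior, and all probability values are invented for illustration and are not taken from the paper.

```python
# Minimal sketch: marginalising a human error probability (HEP) over one
# influencing factor (a CPC with two states). All numbers are hypothetical.

# Prior belief about the CPC state (e.g. "adequacy of training").
p_cpc = {"adequate": 0.7, "inadequate": 0.3}

# Conditional subjective probabilities P(error | CPC state), as elicited
# from experts. With incomplete data, a best/worst interval can be elicited
# instead of a single point value.
p_error_given_cpc = {
    "adequate":   {"best": 0.01, "worst": 0.05},
    "inadequate": {"best": 0.10, "worst": 0.30},
}

def hep_bound(bound):
    """Marginalise P(error) using the chosen bound of each CPT entry."""
    return sum(p_cpc[s] * p_error_given_cpc[s][bound] for s in p_cpc)

print(hep_bound("best"))   # lower HEP estimate
print(hep_bound("worst"))  # upper HEP estimate
```

    The gap between the two bounds reflects the incompleteness of the elicited data; the paper's ER machinery is what combines such intervals across many nodes.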

    Development of an Efficient Planned Maintenance Framework for Marine and Offshore Machinery Operating under Highly Uncertain Environment

    The constantly increasing complexity of marine and offshore machinery is a consequence of constant improvement in ship powering, automation, specialisation in cargo transport, and new ship types, as well as an effort to make sea transport more economic. Therefore, the criteria of reliability, availability and maintainability have become very important factors in the process of marine machinery design, operation and maintenance. An important finding from the literature is that failure of marine machinery can cause both direct and indirect economic damage with long-term financial consequences. Notably, many cases of machinery failure reported in databases were the result of near misses and incidents, which are potential accident indicators. Moreover, experience has shown that modelling past accident events and scenarios can provide insights into how a machinery failure can be contained even when it cannot be avoided, as well as a basis for risk analysis of the machinery in order to reveal its vulnerabilities. This research investigates the following modelling approaches in order to improve the efficiency of marine and offshore machinery operating under a highly uncertain environment. Firstly, this study makes full use of the advantages of evidential reasoning to propose a novel fuzzy evidential reasoning sensitivity analysis method (FER-SAM) to facilitate the assessment of operational uncertainties (trend analysis, family analysis, environmental analysis, design analysis, and human reliability analysis) in ship cranes. Secondly, a fuzzy rule-based sensitivity analysis methodology is proposed as a maintenance prediction model for oil-wetted gearboxes and bearings, with emphasis on ship cranes, by formulating a fuzzy logic box (diagnostic table) that provides ship crane operators with a means to predict possible impending failure without having to dismantle the crane. Thirdly, experience has shown that it is not financially feasible to employ all the maintenance strategies suggested in the literature. Thus, this study proposes a fuzzy TOPSIS approach that can help maintenance engineers select appropriate strategies aimed at enhancing the performance of marine and offshore machinery. Finally, the developed models are integrated in order to facilitate a generic planned maintenance framework for robust improvement and management, especially in situations where conventional planned maintenance techniques cannot be implemented with confidence due to data deficiency.
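    The TOPSIS step can be illustrated with a crisp TOPSIS sketch; the fuzzy variant the thesis proposes replaces the crisp scores with fuzzy numbers, but the ranking mechanics are the same. The strategies, criteria, scores and weights below are hypothetical, not taken from the thesis.

```python
import math

# Crisp TOPSIS sketch for ranking maintenance strategies.
# All scores, weights and strategy names are hypothetical.
strategies = ["corrective", "preventive", "condition-based"]
# Rows: strategies; columns: criteria (cost, reliability, availability).
scores = [[7, 5, 6],
          [5, 7, 7],
          [3, 8, 9]]
weights = [0.5, 0.3, 0.2]
benefit = [False, True, True]  # column 0 is a cost criterion (minimise)

# 1) Vector-normalise each column, then apply the criterion weight.
norms = [math.sqrt(sum(row[j] ** 2 for row in scores)) for j in range(3)]
v = [[weights[j] * scores[i][j] / norms[j] for j in range(3)]
     for i in range(3)]

# 2) Ideal and anti-ideal solutions per criterion.
ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
anti  = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]

# 3) Closeness coefficient: distance to anti-ideal over total distance.
def closeness(row):
    d_pos = math.dist(row, ideal)
    d_neg = math.dist(row, anti)
    return d_neg / (d_pos + d_neg)

ranking = sorted(strategies, reverse=True,
                 key=lambda s: closeness(v[strategies.index(s)]))
print(ranking)
```

    The strategy with the highest closeness coefficient is the recommended one; the fuzzy extension mainly changes how the distances in step 3 are computed.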

    Combination of Evidence in Dempster-Shafer Theory

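    For reference, Dempster's rule of combination, which the entry above concerns, fuses two basic probability assignments by multiplying the masses of intersecting focal elements and renormalising by the conflict mass. A minimal sketch (the frame of discernment and the sensor masses are invented):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: fuse two mass functions whose focal elements are
    frozensets over the same frame of discernment."""
    fused, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

# Hypothetical frame {fault, ok} with two partially committed sources.
F, O = frozenset({"fault"}), frozenset({"ok"})
theta = F | O
m1 = {F: 0.6, theta: 0.4}
m2 = {F: 0.5, O: 0.2, theta: 0.3}
print(combine(m1, m2))
```

    Here the conflict mass is 0.12 (the F/O cross terms), so the fused masses are renormalised by 0.88.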

    Risk Assessment and Management of Petroleum Transportation Systems Operations

    Petroleum Transportation Systems (PTSs) have a significant impact on the flow of crude oil within a Petroleum Supply Chain (PSC), due to the great demand for this natural product. Such systems are used for the safe movement of crude and/or refined products from starting points (i.e. production sites or storage tanks) to their final destinations, via land or sea transportation. PTSs are vulnerable to several risks because they often operate in a dynamic environment, which involves many potential risks and uncertainties. Besides directly affecting product flow within the PSC, PTS accidents could also have severe consequences for humans, businesses, and the environment. Therefore, safe operation of key systems such as ports, ships and pipelines is vital for the success of PTSs. This research introduces an advanced approach to ensuring the safety of PTSs. It proposes multiple network analysis, risk assessment, uncertainty treatment and decision-making techniques for dealing with potential hazards and operational issues arising within the marine port, ship, or pipeline transportation segments as one complete system. The main phases of the developed framework are formulated in six steps. In the first phase of the research, the hazards in PTS operations that can lead to a crude oil spill are identified through an extensive review of the literature and experts' knowledge. In the second phase, Fuzzy Rule-Based Bayesian Reasoning (FRBBR) and the Hugin software are applied in the new context of PTSs to assess and prioritise local PTS failures as one complete system. The third phase uses the Analytic Hierarchy Process (AHP) to determine the weights of PTS local factors. In the fourth phase, a network analysis approach is used to measure the importance of petroleum port, ship and pipeline systems globally within Petroleum Transportation Networks (PTNs). This approach can help decision makers measure and detect the critical nodes (ports and transportation routes) within PTNs. The fifth phase uses the Evidential Reasoning (ER) approach and the Intelligence Decision System (IDS) software to assess hazards influencing PTSs as one complete system. This research develops an advanced risk-based framework applying the ER approach, due to its ability to combine the local/internal and global/external risk analysis results of the PTSs. To complete the cycle of this study, the best mitigating strategies are introduced and evaluated by incorporating VIseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) and AHP to rank the risk control options. The novelty of this framework is that it provides decision makers with realistic and flexible results to ensure efficient and safe operations for PTSs.
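    The AHP weighting step (third phase) can be sketched with the standard geometric-mean approximation of the principal eigenvector of a pairwise comparison matrix. The criteria and the Saaty-scale judgements below are hypothetical, not taken from the thesis.

```python
import math

# AHP weight sketch: derive criterion weights from a pairwise comparison
# matrix via the geometric-mean (logarithmic least squares) approximation.
criteria = ["port", "ship", "pipeline"]
# pairwise[i][j]: how much more important criterion i is than j (Saaty scale).
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]

# Geometric mean of each row, then normalise to sum to one.
geo = [math.prod(row) ** (1 / len(row)) for row in pairwise]
total = sum(geo)
weights = {c: g / total for c, g in zip(criteria, geo)}
print(weights)
```

    In practice a consistency ratio check would follow before the weights feed into the later ER and VIKOR stages.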

    The Visual Social Distancing Problem

    One of the main and most effective measures to contain the recent viral outbreak is the maintenance of so-called Social Distancing (SD). To comply with this constraint, workplaces, public institutions, transport systems and schools will likely adopt restrictions on the minimum inter-personal distance between people. Given this scenario, it is crucial to massively measure compliance with such a physical constraint, in order to figure out the reasons for possible breaches of the distance limit, and to understand whether these imply a possible threat given the scene context, all while complying with privacy policies and making the measurement acceptable. To this end, we introduce the Visual Social Distancing (VSD) problem, defined as the automatic estimation of the inter-personal distance from an image, and the characterization of the related people aggregations. VSD is pivotal for a non-invasive analysis of whether people comply with the SD restriction, and for providing statistics about the level of safety of specific areas whenever this constraint is violated. We then discuss how VSD relates to previous literature in Social Signal Processing and indicate which existing Computer Vision methods can be used to address the problem. We conclude with future challenges related to the effectiveness of VSD systems, ethical implications and future application scenarios. Comment: 9 pages, 5 figures. All the authors contributed equally to this manuscript and are listed in alphabetical order. Under submission.
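    Once detected people are projected from image coordinates to the ground plane (e.g. via a homography), the core VSD measurement reduces to thresholding pairwise distances. A minimal sketch; the positions (in ground-plane metres), identifiers and threshold are invented for illustration:

```python
import math
from itertools import combinations

# Hypothetical ground-plane positions (metres) of detected people.
people = {"p1": (0.0, 0.0), "p2": (1.2, 0.5), "p3": (4.0, 3.0)}
THRESHOLD_M = 2.0  # assumed minimum inter-personal distance

# Flag every pair of people closer than the threshold.
violations = [(a, b)
              for (a, pa), (b, pb) in combinations(people.items(), 2)
              if math.dist(pa, pb) < THRESHOLD_M]
print(violations)  # → [('p1', 'p2')]
```

    A real system would add the image-to-ground calibration, tracking over time, and the grouping analysis the paper discusses; this sketch only shows the final distance check.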

    Deep Learning-Based Machinery Fault Diagnostics

    This book offers a compilation for experts, scholars, and researchers to present the most recent advancements, from theoretical methods to the applications of sophisticated fault diagnosis techniques. The deep learning methods for analyzing and testing complex mechanical systems are of particular interest. Special attention is given to the representation and analysis of system information, operating condition monitoring, the establishment of technical standards, and scientific support of machinery fault diagnosis.

    Surface motion prediction and mapping for road infrastructures management by PS-InSAR measurements and machine learning algorithms

    This paper introduces a methodology for predicting and mapping surface motion beneath road pavement structures caused by environmental factors. Persistent Scatterer Interferometric Synthetic Aperture Radar (PS-InSAR) measurements, geospatial analyses, and Machine Learning Algorithms (MLAs) are employed for achieving the purpose. Two single learners, i.e., Regression Tree (RT) and Support Vector Machine (SVM), and two ensemble learners, i.e., Boosted Regression Trees (BRT) and Random Forest (RF) are utilized for estimating the surface motion ratio in terms of mm/year over the Province of Pistoia (Tuscany Region, central Italy, 964 km2), in which strong subsidence phenomena have occurred. The interferometric process of 210 Sentinel-1 images from 2014 to 2019 allows exploiting the average displacements of 52,257 Persistent Scatterers as output targets to predict. A set of 29 environmental-related factors are preprocessed by SAGA-GIS, version 2.3.2, and ESRI ArcGIS, version 10.5, and employed as input features. Once the dataset has been prepared, three wrapper feature selection approaches (backward, forward, and bi-directional) are used for recognizing the set of most relevant features to be used in the modeling. A random splitting of the dataset in 70% and 30% is implemented to identify the training and test set. Through a Bayesian Optimization Algorithm (BOA) and a 10-Fold Cross-Validation (CV), the algorithms are trained and validated. Therefore, the Predictive Performance of MLAs is evaluated and compared by plotting the Taylor Diagram. Outcomes show that SVM and BRT are the most suitable algorithms; in the test phase, BRT has the highest Correlation Coefficient (0.96) and the lowest Root Mean Square Error (0.44 mm/year), while the SVM has the lowest difference between the standard deviation of its predictions (2.05 mm/year) and that of the reference samples (2.09 mm/year). Finally, algorithms are used for mapping surface motion over the study area. 
    We propose three case studies on critical stretches of two-lane rural roads to evaluate the reliability of the procedure. Road authorities could consider the proposed methodology for their monitoring, management, and planning activities.
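    The two test-phase metrics quoted above, the correlation coefficient and the Root Mean Square Error, can be computed as in the sketch below. The displacement rates are invented sample values, not the paper's data.

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Square Error between reference and predicted values."""
    return math.sqrt(sum((t - p) ** 2
                         for t, p in zip(y_true, y_pred)) / len(y_true))

def pearson(y_true, y_pred):
    """Pearson correlation coefficient between two samples."""
    n = len(y_true)
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)

# Hypothetical displacement rates (mm/year) on a held-out test set.
y_true = [-3.1, -1.0, 0.2, -5.4, -2.2]
y_pred = [-2.8, -1.2, 0.0, -5.0, -2.5]
print(round(rmse(y_true, y_pred), 3), round(pearson(y_true, y_pred), 3))
```

    The paper additionally compares the standard deviations of predictions and reference samples via the Taylor diagram; that comparison uses the same quantities computed here.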

    Recognising Situations Using Extended Dempster-Shafer Theory

    Weiser’s [111] vision of pervasive computing describes a world where technology seamlessly integrates into the environment, automatically responding to people’s needs. Underpinning this vision is the ability of systems to automatically track the situation of a person. The task of situation recognition is critical and complex: noisy and unreliable sensor data, dynamic situations, unpredictable human behaviour and changes in the environment all contribute to the complexity. No single recognition technique is suitable in all environments. Factors such as the availability of training data, the ability to deal with uncertain information and transparency to the user will determine which technique to use in any particular environment. In this thesis, we propose the use of Dempster-Shafer theory as a theoretically sound basis for situation recognition - an approach that can reason with uncertainty, but which does not rely on training data. We use existing operations from Dempster-Shafer theory and create new operations to establish an evidence decision network. The network is used to generate and assess situation beliefs based on processed sensor data for an environment. We also define two specific extensions to Dempster-Shafer theory to enhance the knowledge that can be used for reasoning: 1) temporal knowledge about situation time patterns; and 2) the quality of evidence sources (sensors), incorporated into the reasoning process. To validate the feasibility of our approach, this thesis creates evidence decision networks for two real-world data sets: a smart home data set and an office-based data set. We analyse situation recognition accuracy for each of the data sets, using the evidence decision networks with temporal/quality extensions. We also compare the evidence decision networks against two learning techniques: Naïve Bayes and the J48 Decision Tree.
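    One standard way to fold source quality into Dempster-Shafer reasoning is Shafer's discounting operation: a source with reliability r keeps r of its mass, and the remainder moves to the whole frame (total ignorance). Whether this matches the thesis's quality extension exactly is not stated in the abstract, so the sketch below is generic, with an invented frame, sensor mass and reliability value.

```python
def discount(mass, reliability, frame):
    """Shafer's discounting: scale every focal element by the source
    reliability and move the discounted mass onto the full frame."""
    out = {focal: reliability * v for focal, v in mass.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - reliability)
    return out

# Hypothetical situation frame and a sensor that is 80% reliable.
FRAME = frozenset({"cooking", "sleeping"})
m_sensor = {frozenset({"cooking"}): 0.9, FRAME: 0.1}
print(discount(m_sensor, 0.8, FRAME))
```

    Discounted mass functions can then be fused with Dempster's rule as usual, so unreliable sensors pull the combined belief towards ignorance rather than towards a wrong conclusion.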

    Towards Safe Artificial General Intelligence

    The field of artificial intelligence has recently experienced a number of breakthroughs thanks to progress in deep learning and reinforcement learning. Computer algorithms now outperform humans at Go, Jeopardy, image classification, and lip reading, and are becoming very competent at driving cars and interpreting natural language. The rapid development has led many to conjecture that artificial intelligence with greater-than-human ability on a wide range of tasks may not be far off. This in turn raises concerns about whether we know how to control such systems, should we succeed in building them. Indeed, if humanity were to find itself in conflict with a system of much greater intelligence than itself, then human society would likely lose. One way to make sure we avoid such a conflict is to ensure that any future AI system with potentially greater-than-human intelligence has goals that are aligned with the goals of the rest of humanity. For example, it should not wish to kill humans or steal their resources. The main focus of this thesis is therefore goal alignment, i.e. how to design artificially intelligent agents with goals coinciding with the goals of their designers. Focus is mainly directed towards variants of reinforcement learning, as reinforcement learning currently seems to be the most promising path towards powerful artificial intelligence. We identify and categorise goal misalignment problems in reinforcement learning agents as designed today, and give examples of how these agents may cause catastrophes in the future. We also suggest a number of reasonably modest modifications that can be used to avoid or mitigate each identified misalignment problem. Finally, we study various choices of decision algorithms, and conditions under which a powerful reinforcement learning system will permit us to shut it down. The central conclusion is that while reinforcement learning systems as designed today are inherently unsafe to scale to human levels of intelligence, there are ways to potentially address many of these issues without straying too far from the currently successful reinforcement learning paradigm. Much work remains, however, in turning the high-level proposals suggested in this thesis into practical algorithms.

    Multi-source heterogeneous intelligence fusion
