
    An Integrated Inverse Adaptive Neural Fuzzy System with Monte-Carlo Sampling Method for Operational Risk Management

    Operational risk refers to deficiencies in processes, systems, people, or external events that may generate losses for an organization. The Basel Committee on Banking Supervision has defined different possibilities for the measurement of operational risk, although financial institutions are allowed to develop their own models to quantify it. The advanced measurement approach, a risk-sensitive method for measuring operational risk, is the approach financial institutions prefer among those available, in the expectation of having to hold less regulatory capital to cover operational risk than under alternative approaches. The advanced measurement approach includes the loss distribution approach as one way to assess operational risk. The loss distribution approach models loss distributions for business-line/risk combinations, with the regulatory capital calculated as the 99.9% operational value at risk, a percentile of the distribution of the next year's annual loss. One of the most important issues when estimating operational value at risk is the structure (type of distribution) and shape (long tail) of the loss distribution. In many cases, the estimation of the loss distribution does not integrate risk management and the evolution of risk; consequently, assessing the effects of risk impact management on the loss distribution can take a long time. For this reason, this paper proposes a flexible integrated inverse adaptive fuzzy inference model, characterized by Monte-Carlo behavior, that integrates the estimation of the loss distribution with different risk profiles. This new model shows how an organization's risk management can evolve over time and its effects on the loss distribution used to estimate the operational value at risk.
The experimental study results reported in this paper show the flexibility of the model in identifying (1) the structure and shape of the fuzzy input sets that represent the frequency and severity of risk, and (2) the risk profile of an organization. The proposed model therefore allows organizations or financial entities to assess the evolution of their risk impact management and its effect on the loss distribution and operational value at risk in real time.
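The loss distribution approach described above can be illustrated with a minimal Monte-Carlo sketch: annual loss frequency is drawn from a Poisson distribution and individual loss severity from a lognormal, and the 99.9% operational value at risk is read off as a percentile of the simulated annual-loss distribution. This is a generic textbook LDA simulation, not the paper's fuzzy inference model; the parameter values (`lam`, `mu`, `sigma`) are hypothetical.

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's algorithm: multiply uniforms until the product drops below exp(-lam)
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_annual_losses(lam, mu, sigma, n_years=100_000, seed=42):
    """Monte-Carlo LDA: frequency ~ Poisson(lam), severity ~ Lognormal(mu, sigma).
    Returns one simulated aggregate loss per year."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_years):
        n_events = sample_poisson(rng, lam)
        losses.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n_events)))
    return losses

def op_var(losses, q=0.999):
    """Operational value at risk: the q-th percentile of the annual-loss sample."""
    s = sorted(losses)
    return s[min(int(q * len(s)), len(s) - 1)]
```

Because the lognormal severity is heavy-tailed, the 99.9% percentile sits far above the mean annual loss, which is exactly why the tail shape matters so much for the regulatory capital estimate.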

    Stochastic logistic fuzzy maps for the construction of integrated multirates scenarios in the financing of infrastructure projects

    In general, the development of economic infrastructure systems requires a comprehensive behavioural analysis of different financial variables or rates to establish long-term success with regard to the Equity Internal Rate of Return (EIRR) expectation. For this reason, several financial organizations have developed economic scenarios supported by computational techniques and models to identify the evolution of these financial rates. However, these models and techniques have shown a series of limitations with regard to the financial management process and its impact on the EIRR over time. To address these limitations in an inclusive way, researchers have developed different approaches and methodologies focused on financial models that use stochastic simulation methods and computational intelligence techniques. This paper proposes a Stochastic Fuzzy Logistic Model (S-FLM), inspired by the structure of a Fuzzy Cognitive Map (FCM), to model financial scenarios. The inputs consist of financial rates that are characterized as linguistic rates through a series of adaptive logistic functions. The stochastic process that explains the behaviour of the financial rates over time, and their partial effects on the EIRR, is based on Monte Carlo sampling carried out on the fuzzy sets that characterize each linguistic rate. The S-FLM was evaluated by applying three financing scenarios (pessimistic, moderate/base, optimistic) to an airport infrastructure system, where it was possible to show the impact of the different linguistic rates on the EIRR.
The behaviour of the S-FLM was validated against three different models: (1) a financial management tool; (2) a general FCM without pre-loaded causalities among the variables; and (3) a statistical S-FLM model (S-FLMS), where the causalities between the concepts or rates were obtained from an independent-effects analysis that applied cross modelling between variables, using a statistical multi-linear model (statistical significance level) and a multi-linear neural model (MADALINE). The results achieved by the S-FLM show a higher EIRR than expected for each scenario. This was possible due to the incorporation of an adaptive multi-linear causality matrix and a fuzzy credibility matrix into its structure, which stabilized the effects of the financial variables or rates on the EIRR throughout the financing period. The S-FLM can thus be considered a tool for modelling dynamic financial scenarios in different knowledge areas in a comprehensive manner, overcoming the limitations imposed by the traditional computational models used to design such financial scenarios.
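The fuzzy-cognitive-map structure underlying the S-FLM can be sketched in a few lines: each concept (rate) updates as the logistic function of the weighted sum of the other concepts' activations, and the map is iterated until it settles. This is a plain FCM with a fixed causality matrix, not the paper's adaptive stochastic model; the 3-concept weight matrix in the usage example is hypothetical.

```python
import math

def fcm_step(state, W, lam=1.0):
    """One FCM update: concept j takes the logistic of the weighted
    sum of incoming activations, W[i][j] being the causal weight i -> j."""
    n = len(state)
    return [
        1.0 / (1.0 + math.exp(-lam * sum(W[i][j] * state[i] for i in range(n))))
        for j in range(n)
    ]

def fcm_run(state, W, steps=50, tol=1e-6):
    """Iterate the map until the concept activations stabilize."""
    for _ in range(steps):
        nxt = fcm_step(state, W)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state
```

In the S-FLM setting, the final activation of the EIRR concept would be read off after convergence; Monte Carlo sampling over the fuzzy sets that define each input rate would then yield a distribution of such outcomes rather than a single value.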

    An Integrated Fuzzy Inference Based Monitoring, Diagnostic, and Prognostic System

    To date, the majority of research related to the development and application of monitoring, diagnostic, and prognostic systems has been exclusive in the sense that only one of the three areas is the focus of the work. While previous research advances each of the respective fields, the end result is a grab bag of techniques that address each problem independently. Also, the young field of prognostics is lacking in the sense that few proposed methods produce estimates of the remaining useful life (RUL) of a device or can realistically be applied to real-world systems. This work addresses both problems by developing the nonparametric fuzzy inference system (NFIS), which is adapted for monitoring, diagnosis, and prognosis, and then proposing the path classification and estimation (PACE) model, which can predict the RUL of a device whether or not it has a well-defined failure threshold. To test and evaluate the proposed methods, they were applied to detect, diagnose, and prognose faults and failures in the hydraulic steering system of a deep oil exploration drill. The monitoring system implementing an NFIS predictor and a sequential probability ratio test (SPRT) detector produced detection rates comparable to a monitoring system implementing an autoassociative kernel regression (AAKR) predictor and SPRT detector, specifically 80% vs. 85% for the NFIS and AAKR monitors respectively, and the NFIS monitor produced fewer false alarms. Next, the monitoring system outputs were used to generate symptom patterns for k-nearest neighbor (kNN) and NFIS classifiers trained to diagnose different fault classes. The NFIS diagnoser was shown to significantly outperform the kNN diagnoser, with overall accuracies of 96% vs. 89% respectively. Finally, the PACE implementing the NFIS was used to predict the RUL for different failure modes.
The errors of the RUL estimates produced by the PACE-NFIS prognosers ranged from 1.2 to 11.4 hours, with 95% confidence intervals (CIs) from 0.67 to 32.02 hours, significantly better than the population-based prognoser estimates, with errors of ~45 hours and 95% CIs of ~162 hours.
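The SPRT detector used in the monitoring stage can be sketched as Wald's sequential test on the predictor residuals: the log-likelihood ratio for a mean shift is accumulated sample by sample and compared against two thresholds derived from the desired error rates. This is the generic Gaussian mean-shift SPRT, not the specific configuration of the thesis; `alpha`, `beta`, and the shift magnitude `m1` are illustrative choices.

```python
import math

def sprt_mean_shift(residuals, sigma, m1, alpha=0.01, beta=0.10):
    """Wald SPRT on Gaussian residuals: H0 mean 0 vs H1 mean m1.
    alpha = false-alarm rate, beta = missed-alarm rate.
    Returns 'fault', 'normal', or 'continue' (more data needed)."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept H1 (fault)
    lower = math.log(beta / (1 - alpha))   # cross -> accept H0 (normal)
    llr = 0.0
    for r in residuals:
        # log-likelihood-ratio increment for a Gaussian mean shift
        llr += (m1 / sigma**2) * (r - m1 / 2.0)
        if llr >= upper:
            return "fault"
        if llr <= lower:
            return "normal"
    return "continue"
```

Because the decision thresholds depend only on `alpha` and `beta`, the test trades detection delay against false alarms explicitly, which is one reason SPRT detectors are popular in equipment condition monitoring.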

    ISBIS 2016: Meeting on Statistics in Business and Industry

    This book includes the abstracts of the talks presented at the 2016 International Symposium on Business and Industrial Statistics, held in Barcelona, June 8-10, 2016, and hosted by the Department of Statistics and Operations Research at the Universitat Politècnica de Catalunya - Barcelona TECH. The meeting took place in the ETSEIB building (Escola Tècnica Superior d'Enginyeria Industrial) at Avda. Diagonal 647. The meeting organizers celebrated the continued success of the ISBIS and ENBIS societies, and the meeting drew together the international community of statisticians, both academics and industry professionals, who share the goal of making statistics the foundation for decision making in business and related applications. The Scientific Program Committee consisted of: David Banks, Duke University; Amílcar Oliveira, DCeT - Universidade Aberta and CEAUL; Teresa A. Oliveira, DCeT - Universidade Aberta and CEAUL; Nalini Ravishankar, University of Connecticut; Xavier Tort Martorell, Universitat Politècnica de Catalunya, Barcelona TECH; Martina Vandebroek, KU Leuven; Vincenzo Esposito Vinzi, ESSEC Business School.

    Data-efficient machine learning for design and optimisation of complex systems


    Applications of Probabilistic Forecasting in Smart Grids : A Review

    This paper reviews recent studies dealing with probabilistic forecasting models and their applications in smart grids. Based on these studies, it introduces a roadmap towards decision-making under uncertainty in a smart grid environment. It first discusses the common methods employed to predict the distribution of variables. It then reviews how the recent literature has used these forecasting methods and for which uncertain parameters distributions were sought. Unlike existing reviews, this paper assesses several uncertain parameters for which probabilistic forecasting models have been developed. Next, it provides an overview of scenario generation for uncertain parameters using their distributions, and of how these scenarios are adopted for optimal decision-making. In this regard, it discusses three types of optimization problems aiming to capture uncertainties and reviews the related papers. Finally, we propose some future applications of probabilistic forecasting based on the flexibility challenges power systems will face in the near future.
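The scenario-generation step the review describes can be sketched very simply: given a probabilistic forecast expressed as a mean and standard deviation per period (e.g. hourly load), scenario paths are drawn from those per-period distributions and handed to a stochastic optimization model. This assumes independent Gaussian marginals for brevity; real smart-grid scenario generators typically model temporal correlation as well.

```python
import random

def generate_scenarios(point_forecast, stds, n_scenarios=100, seed=0):
    """Draw equally probable scenario paths from per-period Gaussian
    forecast distributions. point_forecast and stds are per-period lists;
    each returned scenario is one trajectory of the uncertain parameter."""
    rng = random.Random(seed)
    return [
        [rng.gauss(mu, sd) for mu, sd in zip(point_forecast, stds)]
        for _ in range(n_scenarios)
    ]
```

A stochastic unit-commitment or dispatch problem would then optimize decisions against the expected cost over these sampled trajectories, which is the link between the forecasting and optimization halves of the review.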

    Transient identification by clustering based on Integrated Deterministic and Probabilistic Safety Analysis outcomes

    In this work, we present a transient identification approach that uses clustering to retrieve scenario information from an Integrated Deterministic and Probabilistic Safety Analysis (IDPSA). The approach requires: (i) creation of a database of scenarios by IDPSA; (ii) scenario post-processing for clustering Prime Implicants (PIs), i.e., minimum combinations of failure events capable of leading the system into a fault state, and Near Misses, i.e., combinations of failure events that lead the system to a quasi-fault state; and (iii) on-line cluster assignment of an unknown developing scenario. In step (ii), we adopt a visual interactive method and risk-based clustering to identify PIs and Near Misses, respectively; in the on-line step (iii), to assign a scenario to a cluster we consider the sequence of events in the scenario and evaluate its Hamming similarity to the sequences of the previously clustered scenarios. The feasibility of the analysis is shown with respect to the accidental scenarios of a dynamic Steam Generator (SG) of a nuclear power plant (NPP). Di Maio, Francesco; Vagnoli, Matteo; Zio, Enrico
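The on-line assignment in step (iii) can be sketched as follows: compute the positionwise Hamming similarity between the developing event sequence and the sequences already stored in each cluster, and assign the scenario to the cluster with the highest average similarity. This is a simplified reading of the step; the event labels and cluster contents in the usage example are hypothetical.

```python
def hamming_similarity(seq_a, seq_b):
    """Fraction of positions where the two event sequences agree.
    Shorter sequences are padded with None so extra events count as mismatches."""
    n = max(len(seq_a), len(seq_b))
    a = list(seq_a) + [None] * (n - len(seq_a))
    b = list(seq_b) + [None] * (n - len(seq_b))
    return sum(x == y and x is not None for x, y in zip(a, b)) / n

def assign_cluster(scenario, clusters):
    """Assign a developing scenario (event sequence) to the cluster whose
    stored member sequences are, on average, most similar to it."""
    best_label, best_sim = None, -1.0
    for label, members in clusters.items():
        sim = sum(hamming_similarity(scenario, m) for m in members) / len(members)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label
```

Because the similarity is computed position by position over the event sequence, the assignment can be refreshed each time a new event is observed, which is what makes the step usable on-line.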

    Population-based algorithms for improved history matching and uncertainty quantification of Petroleum reservoirs

    In modern field management practices, there are two important steps that shed light on a multimillion-dollar investment. The first step is history matching, where the simulation model is calibrated to reproduce the historical observations from the field. In this inverse problem, different geological and petrophysical properties may provide equally good history matches. Such diverse models are likely to show different production behaviors in the future. This ties history matching to the second step, uncertainty quantification of predictions. Multiple history-matched models are essential for a realistic uncertainty estimate of future field behavior. These two steps facilitate decision making and have a direct impact on the technical and financial performance of oil and gas companies. Population-based optimization algorithms have recently enjoyed growing popularity for solving engineering problems. Population-based systems work with a group of individuals that cooperate and communicate to accomplish a task that is normally beyond the capabilities of each individual, deployed with the aim of solving the problem with maximum efficiency. This thesis introduces the application of two novel population-based algorithms for history matching and uncertainty quantification of petroleum reservoir models. Ant colony optimization and differential evolution algorithms are used to search the parameter space for multiple history-matched models and, using a Bayesian framework, the posterior probabilities of the models are evaluated for prediction of reservoir performance. It is demonstrated that by bringing in recent developments in computer science such as ant colony optimization, differential evolution, and multiobjective optimization, we can improve the history matching and uncertainty quantification frameworks.
This thesis provides insights into the performance of these algorithms in history matching and prediction and develops an understanding of their tuning parameters. The research also presents a comparative study of these methods against a benchmark technique, the Neighbourhood Algorithm. This comparison reveals the superiority of the proposed methodologies in areas such as computational efficiency and match quality.
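One of the two population-based searches the thesis applies, differential evolution, can be sketched in its classic DE/rand/1/bin form: each member of the population is challenged by a trial vector built from three other members, and the trial replaces it if it achieves a lower misfit. This is the textbook algorithm, not the thesis's tuned variant; the misfit function in the usage example is a stand-in for a reservoir-simulation mismatch.

```python
import random

def differential_evolution(misfit, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """DE/rand/1/bin: evolve parameter vectors to minimise `misfit`.
    bounds is a list of (lo, hi) per parameter; F is the mutation scale,
    CR the crossover rate. Returns (best_vector, best_misfit)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [misfit(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct donors different from the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip back into bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            f = misfit(trial)
            if f <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

In a history-matching workflow, `misfit` would wrap a reservoir simulator run and measure the mismatch between simulated and observed production data, and the surviving population itself provides the multiple history-matched models needed for uncertainty quantification.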

    Fault Diagnosis and Failure Prognostics of Lithium-ion Battery based on Least Squares Support Vector Machine and Memory Particle Filter Framework

    A novel data-driven approach is developed for fault diagnosis and remaining useful life (RUL) prognostics for lithium-ion batteries using a Least Squares Support Vector Machine (LS-SVM) and a Memory Particle Filter (M-PF). Unlike traditional data-driven models for capacity fault diagnosis and failure prognosis, which require multidimensional physical characteristics, the proposed algorithm uses only two variables: energy efficiency (EE) and work temperature. The aim of this framework is to improve the accuracy of incipient and abrupt fault diagnosis and failure prognosis. First, the LS-SVM is used to generate a residual signal based on the capacity-fade trends of the Li-ion batteries. Second, an adaptive threshold model is developed based on several factors, including input, output model error, disturbance, and a drift parameter; the adaptive threshold addresses the shortcomings of a fixed threshold. Third, the M-PF is proposed as a new failure prognostic method to determine the RUL. The M-PF is based on the assumption that real-time observations and historical data are available, so that historical failure data can be used instead of a physical failure model within the particle filter. The feasibility of the framework is validated using Li-ion battery prognostic data obtained from the National Aeronautics and Space Administration (NASA) Ames Prognostics Center of Excellence (PCoE). The experimental results show the following: (1) fewer input data dimensions are required compared to traditional empirical models; (2) the proposed diagnostic approach provides an effective way of diagnosing Li-ion battery faults; (3) the proposed prognostic approach can predict the RUL of Li-ion batteries with small error and high prediction accuracy; and (4) the proposed prognostic approach shows that historical failure data can be used instead of a physical failure model in the particle filter.
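The particle-filter RUL idea can be sketched with a bootstrap filter over a simple linear capacity-fade model: each particle carries a capacity and a fade rate, particles are reweighted by how well they match each capacity observation, and RUL is the number of future steps until the filtered capacity crosses an end-of-life threshold. This is a generic bootstrap particle filter, not the paper's Memory Particle Filter; the fade model, noise levels, and the 0.7 threshold are illustrative assumptions.

```python
import math
import random

def particle_filter_rul(observations, n_particles=500, threshold=0.7,
                        horizon=200, seed=0):
    """Bootstrap particle filter on a linear fade model c[t+1] = c[t] - rate + noise.
    observations: normalized capacity measurements. Returns the median RUL
    (steps until the threshold crossing) over the final particle cloud."""
    rng = random.Random(seed)
    # particle = (capacity, fade rate); fade rates drawn from an assumed prior
    particles = [(1.0, rng.uniform(0.001, 0.01)) for _ in range(n_particles)]
    for z in observations:
        # propagate each particle one step with small process noise
        particles = [(c - r + rng.gauss(0.0, 0.002), r) for c, r in particles]
        # weight by Gaussian likelihood of the observed capacity (sd 0.01 assumed)
        weights = [math.exp(-((c - z) ** 2) / (2 * 0.01 ** 2)) for c, _ in particles]
        total = sum(weights)
        if total == 0.0:  # degenerate cloud: fall back to uniform weights
            weights = [1.0] * n_particles
            total = float(n_particles)
        weights = [w / total for w in weights]
        # multinomial resampling keeps particles consistent with the data
        particles = rng.choices(particles, weights=weights, k=n_particles)
    # roll each particle forward until its capacity crosses the threshold
    ruls = []
    for c, r in particles:
        t = 0
        while c > threshold and t < horizon:
            c -= r
            t += 1
        ruls.append(t)
    ruls.sort()
    return ruls[len(ruls) // 2]  # median RUL of the particle cloud
```

The M-PF's contribution, per the abstract, is to shape this prediction using historical failure data rather than a physical fade model; the sketch above keeps a fixed linear model purely for illustration.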