
    Loss Distribution Approach for Operational Risk Capital Modelling under Basel II: Combining Different Data Sources for Risk Estimation

    The management of operational risk in the banking industry has undergone significant changes over the last decade due to substantial changes in the operational risk environment. Globalization, deregulation, the use of complex financial products and changes in information technology have resulted in exposure to new risks very different from market and credit risks. In response, the Basel Committee on Banking Supervision has developed a regulatory framework, referred to as Basel II, that introduced an operational risk category and corresponding capital requirements. Over the past five years, major banks in most parts of the world have received accreditation under the Basel II Advanced Measurement Approach (AMA) by adopting the loss distribution approach (LDA), despite a number of unresolved methodological challenges in its implementation. Different approaches and methods are still under active debate. In this paper, we review methods proposed in the literature for combining different data sources (internal data, external data and scenario analysis), which is one of the regulatory requirements for AMA.
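A standard way to combine an external-data or scenario-based view of loss frequency with observed internal data is conjugate Bayesian updating; the sketch below (not taken from the paper, with hypothetical parameter values) uses a Gamma prior for a Poisson annual loss-count rate:

```python
def combine_poisson_gamma(prior_alpha, prior_beta, internal_counts):
    """Conjugate Bayesian update of an annual loss-frequency rate.

    The Gamma(prior_alpha, prior_beta) prior encodes external data or
    scenario analysis; internal_counts are observed annual loss counts.
    Returns the posterior Gamma parameters and the posterior mean rate.
    """
    post_alpha = prior_alpha + sum(internal_counts)
    post_beta = prior_beta + len(internal_counts)
    return post_alpha, post_beta, post_alpha / post_beta

# External data suggests roughly 10 losses/year (prior mean alpha/beta).
# Three years of internal data show fewer losses, pulling the rate down.
a, b, rate = combine_poisson_gamma(prior_alpha=10.0, prior_beta=1.0,
                                   internal_counts=[6, 7, 5])
# posterior mean rate = (10 + 18) / (1 + 3) = 7.0
```

The posterior mean is a credibility-weighted blend of the prior mean and the internal-data average, which is the basic mechanism behind many of the combination methods the paper surveys.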

    Implementing Loss Distribution Approach for Operational Risk

    To quantify the operational risk capital charge under the current regulatory framework for banking supervision, referred to as Basel II, many banks adopt the Loss Distribution Approach. There are many modeling issues that should be resolved before the approach can be used in practice. In this paper we review the quantitative methods suggested in the literature for implementing the approach. In particular, we discuss the use of Bayesian inference, which allows expert judgement and parameter uncertainty to be taken into account, the modeling of dependence, and the inclusion of insurance.
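At its core, the LDA builds an annual loss distribution as a compound of a frequency and a severity distribution and reads the capital charge off a high quantile. A minimal Monte Carlo sketch, assuming a Poisson frequency and lognormal severity with hypothetical parameters:

```python
import math
import random

def lda_capital(lam, mu, sigma, n_sims=20000, quantile=0.999, seed=42):
    """Monte Carlo sketch of the Loss Distribution Approach.

    Annual loss = sum of N ~ Poisson(lam) severities, each drawn from
    Lognormal(mu, sigma). The capital charge is the chosen quantile
    (VaR) of the simulated annual-loss distribution.
    """
    rng = random.Random(seed)
    annual_losses = []
    for _ in range(n_sims):
        # Poisson draw by inversion (adequate for moderate lam)
        n, p, threshold = 0, 1.0, math.exp(-lam)
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            n += 1
        annual_losses.append(sum(rng.lognormvariate(mu, sigma)
                                 for _ in range(n)))
    annual_losses.sort()
    return annual_losses[int(quantile * n_sims)]

capital = lda_capital(lam=5.0, mu=10.0, sigma=2.0)
```

The 99.9% quantile over a one-year horizon mirrors the Basel II AMA soundness standard; in practice the open issues the paper reviews (dependence across risk cells, insurance, parameter uncertainty) all modify this basic simulation.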

    A literature review on the use of expert opinion in probabilistic risk analysis

    Risk assessment is part of the decision making process in many fields of discipline, such as engineering, public health, environment, program management, regulatory policy, and finance. There has been considerable debate over the philosophical and methodological treatment of risk in the past few decades, ranging from its definition and classification to methods of its assessment. Probabilistic risk analysis (PRA) specifically deals with events represented by low probabilities of occurring with high levels of unfavorable consequences. Expert judgment is often a critical source of information in PRA, since empirical data on the variables of interest are rarely available. The author reviews the literature on the use of expert opinion in PRA, in particular on the approaches to eliciting and aggregating experts' assessments. The literature suggests that the methods by which expert opinions are collected and combined have a significant effect on the resulting estimates. The author discusses two types of approaches to eliciting and aggregating expert judgments, behavioral and mathematical, with the emphasis on the latter. It is generally agreed that mathematical approaches tend to yield more accurate estimates than behavioral approaches. After a short description of behavioral approaches, the author discusses mathematical approaches in detail, presenting three aggregation models: non-Bayesian axiomatic models, Bayesian models, and psychological scaling models. She also discusses issues of stochastic dependence.
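The simplest of the mathematical aggregation schemes discussed in this literature is the linear opinion pool, a weighted average of the experts' probability assessments. A minimal sketch (the weights and probabilities here are hypothetical):

```python
def linear_opinion_pool(expert_probs, weights=None):
    """Linear opinion pool: weighted average of expert probability
    assessments for the same event (a basic non-Bayesian axiomatic
    aggregation model). Equal weights are used if none are given.
    """
    if weights is None:
        weights = [1.0 / len(expert_probs)] * len(expert_probs)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, expert_probs))

# Three experts assess the probability of a rare failure event;
# the second is judged better calibrated and receives more weight.
pooled = linear_opinion_pool([0.02, 0.05, 0.01],
                             weights=[0.25, 0.5, 0.25])
# pooled ≈ 0.0325
```

Choosing the weights (equal, calibration-based, or learned via a Bayesian model) is precisely where the aggregation methods surveyed by the author diverge.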

    Empirical Methodology for Crowdsourcing Ground Truth

    The process of gathering ground truth data through human annotation is a major bottleneck in the use of information extraction methods for populating the Semantic Web. Crowdsourcing-based approaches are gaining popularity in the attempt to solve the issues related to volume of data and lack of annotators. Typically these practices use inter-annotator agreement as a measure of quality. However, in many domains, such as event detection, there is ambiguity in the data, as well as a multitude of perspectives on the information examples. We present an empirically derived methodology for efficiently gathering ground truth data in a diverse set of use cases covering a variety of domains and annotation tasks. Central to our approach is the use of CrowdTruth metrics that capture inter-annotator disagreement. We show that measuring disagreement is essential for acquiring a high quality ground truth. We achieve this by comparing the quality of the data aggregated with CrowdTruth metrics against majority vote, over a set of diverse crowdsourcing tasks: Medical Relation Extraction, Twitter Event Identification, News Event Extraction and Sound Interpretation. We also show that an increased number of crowd workers leads to growth and stabilization in the quality of annotations, going against the usual practice of employing a small number of annotators. Comment: in publication at the Semantic Web Journal.
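The contrast between majority vote and a disagreement-aware score can be illustrated with a toy example. The sketch below is not the CrowdTruth implementation; it uses a simple fraction-of-workers score as a stand-in for CrowdTruth's graded relation scores, with hypothetical annotations:

```python
from collections import Counter

def majority_vote(annotations):
    """Collapse worker annotations to the single most frequent label."""
    return Counter(annotations).most_common(1)[0][0]

def crowd_score(annotations, label):
    """Disagreement-aware score: fraction of workers choosing `label`
    (a simplified stand-in for a CrowdTruth relation score)."""
    return annotations.count(label) / len(annotations)

# Five workers annotate the relation expressed in an ambiguous sentence.
workers = ["causes", "causes", "treats", "causes", "treats"]
mv = majority_vote(workers)               # "causes" wins outright
minority = crowd_score(workers, "treats") # 0.4 is preserved, not discarded
```

Majority vote throws away the 40% minority signal, whereas the graded score keeps the ambiguity visible, which is the behavior the paper argues is essential for high-quality ground truth.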

    Decision support model for the selection of asphalt wearing courses in highly trafficked roads

    The suitable choice of the materials forming the wearing course of highly trafficked roads is a delicate task because of their direct interaction with vehicles. Furthermore, modern roads must be planned according to sustainable development goals, which is complex because some of these might be in conflict. Under this premise, this paper develops a multi-criteria decision support model based on the analytic hierarchy process and the technique for order of preference by similarity to ideal solution to facilitate the selection of wearing courses in European countries. Variables were modelled using either fuzzy logic or Monte Carlo methods, depending on their nature. The views of a panel of experts on the problem were collected and processed using the generalized reduced gradient algorithm and a distance-based aggregation approach. The results showed a clear preponderance of stone mastic asphalt over the remaining alternatives in the different scenarios evaluated through sensitivity analysis. The research leading to these results was framed in the European FP7 Project DURABROADS and received funding from the European Union Seventh Framework Programme (FP7/2007–2013) under Grant Agreement No. 605404.
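The second technique named in the abstract, TOPSIS, ranks alternatives by their distance to an ideal and an anti-ideal solution. A minimal sketch, with hypothetical wearing-course alternatives and criteria (the paper's actual criteria and weights are not reproduced here):

```python
import math

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS ranking sketch.

    matrix: alternatives x criteria scores; weights sum to 1;
    benefit[j] is True if criterion j is to be maximized.
    Returns closeness coefficients in [0, 1] (higher is better).
    """
    n_alt, n_crit = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply its weight.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt)))
             for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - p) ** 2 for x, p in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical alternatives scored on durability (maximize),
# cost (minimize) and noise (minimize); weights from an expert panel.
scores = topsis([[8, 120, 70], [6, 90, 75], [9, 150, 68]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, False, False])
```

In the paper this ranking step is combined with AHP-derived weights and fuzzy/Monte Carlo variable modelling; the sketch shows only the core distance-to-ideal mechanism.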

    Operational Risk Management using a Fuzzy Logic Inference System

    Operational Risk (OR) results from endogenous and exogenous risk factors, as diverse and complex to assess as human resources and technology, which may not be properly measured using traditional quantitative approaches. Engineering has faced the same challenges when designing practical solutions to complex multifactor and non-linear systems where human reasoning, expert knowledge or imprecise information are valuable inputs. One of the solutions provided by engineering is a Fuzzy Logic Inference System (FLIS). Although the goal of the FLIS model for OR is its assessment, assessment is not an end in itself. The choice of a FLIS results in a convenient and sound use of qualitative and quantitative inputs, capable of effectively articulating the identification, assessment, monitoring and mitigation stages of risk management. Unlike traditional approaches, the proposed model allows mitigation efforts to be evaluated ex-ante, thus avoiding OR sources concealed by system complexity build-up and optimizing risk management resources. Furthermore, because the model contrasts effective with expected OR data, it is able to constantly validate its outcome, recognize environment shifts and issue warning signals. Keywords: Operational Risk, Fuzzy Logic, Risk Management. JEL Classification: G32, C63, D80.
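A fuzzy inference system of this kind maps qualitative risk-factor readings through membership functions and rules to a crisp risk rating. The sketch below is not the paper's model; it is a zero-order Sugeno-style toy with hypothetical input scales, membership functions and rules:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c), peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def or_risk(failures, staff_turnover):
    """Zero-order Sugeno fuzzy sketch of an OR rating on a 0-100 scale.

    Hypothetical inputs: monthly process failures (0-20) and staff
    turnover in % (0-30). Each rule fires with strength
    min(memberships) and outputs a crisp risk level; the rating is the
    firing-strength-weighted average of the rule outputs.
    """
    rules = [
        # (failures membership, turnover membership, output risk level)
        (tri(failures, -1, 0, 10),  tri(staff_turnover, -1, 0, 15), 10.0),
        (tri(failures, 0, 10, 20),  tri(staff_turnover, 0, 15, 30), 50.0),
        (tri(failures, 10, 20, 21), tri(staff_turnover, 15, 30, 31), 90.0),
    ]
    num = sum(min(mf, mt) * out for mf, mt, out in rules)
    den = sum(min(mf, mt) for mf, mt, _ in rules)
    return num / den if den else 0.0

rating = or_risk(failures=4, staff_turnover=6)
```

With these inputs the low and medium rules both fire partially, blending their outputs into an intermediate rating; this graded behavior is what lets a FLIS absorb imprecise expert knowledge that hard thresholds would discard.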
