    Implementing Loss Distribution Approach for Operational Risk

    To quantify the operational risk capital charge under the current regulatory framework for banking supervision, referred to as Basel II, many banks adopt the Loss Distribution Approach. There are many modeling issues that should be resolved before the approach can be used in practice. In this paper we review the quantitative methods suggested in the literature for implementing the approach. In particular, we discuss the use of Bayesian inference methods that allow expert judgement and parameter uncertainty to be taken into account, the modeling of dependence, and the inclusion of insurance.
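
    A minimal sketch of the simulation at the heart of the approach, assuming Poisson frequency and lognormal severity (common choices in the LDA literature, though not the only ones the paper considers). The annual loss is a compound sum of a random number of severities, and a high quantile of its simulated distribution serves as a proxy for the capital charge; all parameter values below are illustrative.

```python
import numpy as np

# Loss Distribution Approach sketch: annual operational loss is a
# compound sum of a Poisson number of lognormal severities.
# Frequency/severity families and all parameter values are illustrative.
rng = np.random.default_rng(0)

n_years = 100_000      # simulated years (Monte Carlo replications)
lam = 25.0             # Poisson frequency: expected loss events per year
mu, sigma = 10.0, 2.0  # lognormal severity parameters

counts = rng.poisson(lam, size=n_years)
annual_loss = np.array([rng.lognormal(mu, sigma, size=n).sum()
                        for n in counts])

# Basel II capital charge proxy: 99.9% quantile of the annual loss
print(f"99.9% VaR of simulated annual loss: {np.quantile(annual_loss, 0.999):,.0f}")
```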

    A Unifying review of linear gaussian models

    Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and by introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
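
    A sketch of the single generative model the review builds on: a linear Gaussian state-space model, of which factor analysis and PCA are static special cases (no dynamics) and the Kalman filter model is the dynamic case. Dimensions and parameter values below are arbitrary illustrations.

```python
import numpy as np

# The basic linear Gaussian generative model unified in the review:
#   x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)   (hidden state)
#   y_t = C x_t     + v_t,  v_t ~ N(0, R)   (observation)
# Kalman filter models keep A general; factor analysis and PCA
# correspond to the static case A = 0 with restricted R.
rng = np.random.default_rng(1)

k, p, T = 2, 5, 100              # state dim, observation dim, length
A = 0.9 * np.eye(k)              # state dynamics
C = rng.standard_normal((p, k))  # observation / loading matrix
Q = np.eye(k)                    # state noise covariance
R = 0.1 * np.eye(p)              # observation noise covariance

x = np.zeros((T, k))
y = np.zeros((T, p))
for t in range(T):
    prev = A @ x[t - 1] if t > 0 else np.zeros(k)
    x[t] = prev + rng.multivariate_normal(np.zeros(k), Q)
    y[t] = C @ x[t] + rng.multivariate_normal(np.zeros(p), R)
```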

    Using Simulation-based Inference with Panel Data in Health Economics

    Panel datasets provide a rich source of information for health economists, offering the scope to control for individual heterogeneity and to model the dynamics of individual behaviour. However, the qualitative or categorical measures of outcome often used in health economics create special problems for estimating econometric models. Allowing a flexible specification of the autocorrelation induced by individual heterogeneity leads to models involving higher-order integrals that cannot be handled by conventional numerical methods. The dramatic growth in computing power over recent years has been accompanied by the development of simulation-based estimators that solve this problem. This review uses binary choice models to show what can be done with conventional methods and how the range of models can be expanded by using simulation methods. Practical applications of the methods are illustrated using data on health from the British Household Panel Survey (BHPS).
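
    A sketch of one such simulation estimator, under assumptions of my own choosing: a random-effects probit, the simplest binary-choice panel model of the kind the review covers, with the integral over the individual effect replaced by an average over simulated draws (maximum simulated likelihood). The synthetic data below stand in for the BHPS variables.

```python
import numpy as np
from scipy.stats import norm

# Random-effects probit: y_it = 1{ x_it'beta + alpha_i + e_it > 0 },
# alpha_i ~ N(0, sigma^2). The per-individual likelihood integrates
# out alpha_i; here that integral is an average over simulated draws.
def simulated_loglik(beta, sigma, y, X, n_draws=200, seed=0):
    rng = np.random.default_rng(seed)
    loglik = 0.0
    for i in range(X.shape[0]):
        draws = sigma * rng.standard_normal(n_draws)  # alpha_i draws
        idx = X[i] @ beta                             # (T,) linear index
        sign = 2 * y[i] - 1                           # map {0,1} -> {-1,1}
        # Phi(sign * (index + alpha)) per period and draw, product over T
        probs = norm.cdf(sign[:, None] * (idx[:, None] + draws[None, :]))
        loglik += np.log(probs.prod(axis=0).mean())
    return loglik

# Illustrative synthetic panel (n individuals, T waves, k regressors).
rng = np.random.default_rng(1)
n, T, k = 200, 5, 2
beta_true = np.array([0.5, -0.3])
X = rng.standard_normal((n, T, k))
alpha = rng.standard_normal(n)  # random effects with sigma = 1
y = (X @ beta_true + alpha[:, None] + rng.standard_normal((n, T)) > 0).astype(int)

print(simulated_loglik(beta_true, 1.0, y, X))
```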

    Evaluating probabilistic forecasts with scoringRules

    Probabilistic forecasts in the form of probability distributions over future events have become popular in several fields, including meteorology, hydrology, economics, and demography. In typical applications, many alternative statistical models and data sources can be used to produce probabilistic forecasts. Hence, evaluating and selecting among competing methods is an important task. The scoringRules package for R provides functionality for comparative evaluation of probabilistic models based on proper scoring rules, covering a wide range of situations in applied work. This paper discusses implementation and usage details, presents case studies from meteorology and economics, and points to the relevant background literature.
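
    scoringRules itself is an R package; as a language-neutral illustration of what a proper scoring rule computes, the sketch below evaluates the continuous ranked probability score (CRPS) of a Gaussian forecast via its known closed form, where lower scores indicate better forecasts.

```python
import numpy as np
from scipy.stats import norm

# Closed-form CRPS of a Gaussian forecast N(mu, sigma) at observation y:
#   CRPS = sigma * ( z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ),
# with z = (y - mu) / sigma. This is the quantity a proper-scoring-rule
# package computes for the normal predictive distribution.
def crps_normal(y, mu, sigma):
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1)
                    + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# Compare two competing forecasts of the same (illustrative) observations.
obs = np.array([0.2, -1.1, 0.7])
print(crps_normal(obs, mu=0.0, sigma=1.0).mean())  # wider forecast
print(crps_normal(obs, mu=0.0, sigma=0.5).mean())  # sharper forecast
```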