
    Early Warning Models for Banking Supervision in Romania

    In this paper we propose an early warning system for the Romanian banking sector as a complement to the standardized CAAMPL rating system used by the National Bank of Romania to assess local credit institutions. We aim to identify the determinants of rating downgrades and of a bank reaching a weak overall position, to estimate the corresponding probabilities, and to produce rating predictions. To this end, we build two models with binary dependent variables and one ordered logistic model that accounts for all possible future ratings. One result is that indicators of current position, market share, profitability and asset quality determine rating downgrades, whereas capital adequacy, liquidity and the macroeconomic environment are not represented in the model. Banks that will reach a weak overall position within one year can likewise be predicted from indicators of current position, market share, profitability and asset quality, together with capital adequacy and, for the binary dependent variable model only, the macroeconomic environment; liquidity indicators are again excluded. Based on the ordered logistic model's capacity for rating prediction, we estimated one-year-horizon scores and ratings for each bank and aggregated these results into a predicted assessment of the local banking sector as a whole.
    Keywords: early warning system, CAAMPL rating system
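
As a rough illustration of the modelling approach, the sketch below fits an ordered logistic (proportional odds) model on synthetic data with statsmodels; the indicator names (market_share, roa, npl_ratio) and the five-point rating scale are illustrative assumptions, not the paper's actual variables or data.

```python
# Minimal sketch: ordered logit for one-year-ahead bank ratings.
# All variable names and data are hypothetical stand-ins.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200
banks = pd.DataFrame({
    "market_share": rng.uniform(0.0, 0.15, n),
    "roa": rng.normal(0.01, 0.01, n),        # profitability proxy
    "npl_ratio": rng.uniform(0.0, 0.20, n),  # asset-quality proxy
})
# Synthetic future rating on an ordinal 1 (best) .. 5 (worst) scale.
latent = -20 * banks["roa"] + 15 * banks["npl_ratio"] - 5 * banks["market_share"]
rating = pd.cut(latent + rng.logistic(size=n), bins=5, labels=[1, 2, 3, 4, 5])

model = OrderedModel(rating.astype(int), banks, distr="logit")
res = model.fit(method="bfgs", disp=False)

# Per-bank probabilities for each possible future rating; averaging the
# predictions gives an aggregate view of the sector, as in the paper.
probs = res.predict(banks)
print(probs.mean(axis=0))
```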

    PuMA: Bayesian analysis of partitioned (and unpartitioned) model adequacy

    The accuracy of Bayesian phylogenetic inference using molecular data depends on the use of proper models of sequence evolution. Although choosing the best model available from a pool of alternatives has become standard practice in statistical phylogenetics, assessment of the chosen model's adequacy is rare. Programs for Bayesian phylogenetic inference have recently begun to implement models of sequence evolution that account for heterogeneity across sites beyond variation in rates of evolution, yet no program exists to assess the adequacy of these models. PuMA implements a posterior predictive simulation approach to assessing the adequacy of partitioned, unpartitioned and mixture models of DNA sequence evolution in a Bayesian context. Assessment of model adequacy allows empirical phylogeneticists to have appropriate confidence in their results and guides efforts to improve models of sequence evolution. © The Author 2008. Published by Oxford University Press. All rights reserved.
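
The core of a posterior predictive adequacy check can be summarized in a few lines. The sketch below is a generic, simplified version of what PuMA computes, not its actual code: a test statistic from the observed data is compared with the same statistic computed on data simulated under posterior parameter draws.

```python
# Generic posterior predictive p-value; the statistic T and the numbers
# below are hypothetical placeholders, not PuMA's own test statistic.
import numpy as np

def posterior_predictive_pvalue(observed_stat, simulated_stats):
    """Fraction of posterior-predictive datasets whose statistic is at
    least as extreme as the observed one (one-sided check)."""
    return np.mean(np.asarray(simulated_stats) >= observed_stat)

t_observed = 4.2  # T computed from the real alignment (placeholder)
# T computed on 1000 alignments simulated from posterior draws (placeholder).
t_simulated = np.random.default_rng(1).normal(3.0, 0.8, 1000)

p = posterior_predictive_pvalue(t_observed, t_simulated)
print(f"posterior predictive p-value: {p:.3f}")
# A p-value near 0 or 1 suggests the substitution model is inadequate.
```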

    Modelling the kinetics of thermal inactivation of apple polyphenoloxidase

    The enzymatic browning of fruits and vegetables caused by mechanical injury during postharvest storage or processing is initiated by the catalytic action of polyphenoloxidase (PPO). A blanching treatment prior to processing is still considered one of the most effective ways to inhibit the catalytic activity of PPO and thus control undesirable enzymatic browning. In this work, different mathematical routines were assessed in terms of their adequacy to describe the thermal inactivation of PPO from Golden apples over a range of temperatures from 62.5 to 72.5 °C. The classical approach to kinetic modelling of the decay in apple PPO activity, commonly reported to follow a first-order model, employs a two-step procedure in which the model parameters are obtained separately for each temperature studied, using non-linear or linear regression, and the estimated parameters are then used to calculate their temperature dependence. Alternatively, a one-step method fits all experimental data sets in a single regression, with the temperature-dependence equation built directly into the kinetic model. This fitting technique thus (a) avoids the estimation of intermediate parameters and (b) substantially increases the degrees of freedom, and hence the precision of the parameter estimates. The effect of logarithmically transforming the model equations on their adequacy to describe the experimental data was also explored. In all cases non-weighted least-squares regression procedures were used. The current modelling strategies were examined and critiqued on the basis of the resulting statistics, such as the confidence intervals of the estimates, correlation coefficients, sums of squares, and the normality of the residuals.
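
To illustrate the one-step strategy, the sketch below fits a first-order inactivation model with an Arrhenius-type temperature dependence to all temperatures simultaneously via non-weighted least squares; the data, reference temperature and parameter values are synthetic assumptions, not the paper's measurements.

```python
# One-step fit: first-order PPO inactivation with k(T) built into the model.
# Synthetic data spanning the paper's 62.5-72.5 °C range.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314          # gas constant, J/(mol K)
T_REF = 340.65     # reference temperature (67.5 °C) in kelvin (assumed)

def one_step_model(X, k_ref, Ea):
    """Residual activity A/A0 at time t (s) and temperature T (K)."""
    t, T = X
    k = k_ref * np.exp(-(Ea / R) * (1.0 / T - 1.0 / T_REF))
    return np.exp(-k * t)

rng = np.random.default_rng(2)
temps = np.repeat(np.array([62.5, 67.5, 72.5]) + 273.15, 10)
times = np.tile(np.linspace(0.0, 600.0, 10), 3)
activity = one_step_model((times, temps), 5e-3, 2.5e5)
activity += rng.normal(0.0, 0.02, activity.size)

# One regression over all temperatures: no intermediate per-temperature
# parameters, and more degrees of freedom for the estimates.
popt, pcov = curve_fit(one_step_model, (times, temps), activity, p0=[1e-3, 2e5])
k_ref_hat, Ea_hat = popt
print(f"k_ref = {k_ref_hat:.2e} 1/s, Ea = {Ea_hat / 1e3:.0f} kJ/mol")
```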

    Don't know, can't know: Embracing deeper uncertainties when analysing risks

    Numerous types of uncertainty arise when using formal models in the analysis of risks. Uncertainty is best seen as a relation, allowing a clear separation of the object, source and ‘owner’ of the uncertainty, and we argue that all expressions of uncertainty are constructed from judgements based on possibly inadequate assumptions, and are therefore contingent. We consider a five-level structure for assessing and communicating uncertainties, distinguishing three within-model levels (event, parameter and model uncertainty) and two extra-model levels concerning acknowledged and unknown inadequacies in the modelling process, including possible disagreements about the framing of the problem. We consider the forms of expression of uncertainty within the five levels, providing numerous examples of the way in which inadequacies in understanding are handled, and examining criticisms of the attempts taken by the Intergovernmental Panel on Climate Change to separate the likelihood of events from the confidence in the science. Expressing our confidence in the adequacy of the modelling process requires an assessment of the quality of the underlying evidence, and we draw on a scale that is widely used within evidence-based medicine. We conclude that the contingent nature of risk-modelling needs to be explicitly acknowledged in advice given to policy-makers, and that unconditional expressions of uncertainty remain an aspiration. © 2011 The Royal Society.

    Graphical diagnostics to check model misspecification for the proportional odds regression model

    The cumulative logit or proportional odds regression model is commonly used to study covariate effects on ordinal responses. This paper provides some graphical and numerical methods for checking the adequacy of the proportional odds regression model. The methods focus on evaluating functional misspecification for specific covariate effects, but misspecification of the link function can also be dealt with under the same framework. For the logistic regression model with binary responses, Arbogast and Lin (Statist. Med. 2005; 24:229–247) developed similar graphical and numerical methods for assessing the adequacy of the model using cumulative sums of residuals. This paper generalizes their methods to ordinal responses and illustrates them using an example from the VA Normative Aging Study. Simulation studies comparing the performance of the different diagnostic methods indicate that some of the graphical methods are more powerful in detecting model misspecification than the Hosmer–Lemeshow-type goodness-of-fit statistics for the class of models studied. Copyright © 2008 John Wiley & Sons, Ltd.
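
A rough sketch of this kind of diagnostic is shown below: it fits a proportional odds model with statsmodels and plots the cumulative sum, ordered by a covariate, of a simple surrogate residual (observed minus model-expected category). This is a simplified stand-in for the paper's cumulative-sums-of-residuals method, run on fabricated data rather than the VA Normative Aging Study.

```python
# Simplified cumulative-residual diagnostic for a proportional odds fit.
# Data are synthetic; the true effect is quadratic in x, so a linear
# proportional odds model is deliberately misspecified.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)
latent = 1.5 * x**2 + rng.logistic(size=n)
y = np.digitize(latent, [0.5, 2.0, 4.0])  # ordinal categories 0..3

res = OrderedModel(y, x.reshape(-1, 1), distr="logit").fit(method="bfgs",
                                                           disp=False)

# Surrogate residual: observed category minus model-expected category.
probs = np.asarray(res.predict(x.reshape(-1, 1)))  # n x 4 probabilities
resid = y - probs @ np.arange(probs.shape[1])

# Under a correct model the cumulative sum should hover around zero;
# systematic drift signals functional misspecification in x.
order = np.argsort(x)
plt.plot(x[order], np.cumsum(resid[order]))
plt.axhline(0.0, color="grey", lw=0.5)
plt.xlabel("x")
plt.ylabel("cumulative residual")
plt.show()
```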

    TEASMA: A Practical Approach for the Test Assessment of Deep Neural Networks using Mutation Analysis

    Successful deployment of Deep Neural Networks (DNNs), particularly in safety-critical systems, requires their validation with an adequate test set to ensure a sufficient degree of confidence in test outcomes. Mutation analysis, one of the main techniques for measuring test adequacy in traditional software, has been adapted to DNNs in recent years. This technique is based on generating mutants that aim to be representative of actual faults and thus can be used for test adequacy assessment. In this paper, we investigate for the first time whether mutation operators that directly modify the trained DNN model (i.e., post-training) can be used for reliably assessing the test inputs of DNNs. We propose and evaluate TEASMA, an approach based on post-training mutation for assessing the adequacy of DNN test sets. In practice, TEASMA allows engineers to decide whether they will be able to trust test results and thus validate the DNN before its deployment. Based on a DNN model's training set, TEASMA provides a methodology to build accurate prediction models of the Fault Detection Rate (FDR) of a test set from its mutation score, thus enabling its assessment. Our large empirical evaluation, across multiple DNN models, shows that predicted FDR values have a strong linear correlation (R² ≥ 0.94) with actual values. Consequently, empirical evidence suggests that TEASMA provides a reliable basis for confidently deciding whether to trust test results or improve the test set.
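
The prediction step at the heart of this assessment can be illustrated with a few lines of regression code. The sketch below is a schematic stand-in for TEASMA's actual pipeline: the mutation scores and FDR values are fabricated, and a plain linear regression plays the role of the paper's prediction models.

```python
# Schematic: predict a test set's Fault Detection Rate (FDR) from its
# mutation score. All numbers are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
mutation_score = rng.uniform(0.2, 0.95, 50).reshape(-1, 1)
fdr = 0.9 * mutation_score.ravel() + rng.normal(0.0, 0.03, 50)

reg = LinearRegression().fit(mutation_score, fdr)
print(f"R^2 = {r2_score(fdr, reg.predict(mutation_score)):.3f}")

# In use: compute a new test set's mutation score, estimate its FDR, and
# decide whether its verdict on the DNN can be trusted.
new_score = np.array([[0.8]])
print(f"predicted FDR at mutation score 0.8: {reg.predict(new_score)[0]:.2f}")
```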