
    Some Remarks about the Complexity of Epidemics Management

    Recent outbreaks of Ebola, H1N1, and other infectious diseases have shown that the assumptions underlying the established theory of epidemics management are too idealistic. To improve the procedures and organizations involved in fighting epidemics, extended models of epidemics management are required. The necessary extensions consist of a representation of the management loop and of the potential frictions influencing that loop. The effects of the non-deterministic frictions can be taken into account by including measures of robustness and risk in the assessment of management options. Thus, besides the increased structural complexity resulting from the model extensions, the computational complexity of epidemics management - interpreted as an optimization problem - increases as well. This poses a serious obstacle to analyzing the model and may require additional pre-processing to simplify the analysis. The paper closes with an outlook discussing some forthcoming problems.
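
    A minimal sketch may help make the optimization view concrete: management options are scored not only by expected loss but also by risk and robustness measures estimated by Monte Carlo over non-deterministic frictions. The loss model, the exponential friction distribution, and the option parameters below are hypothetical illustrations, not the paper's model.

        import random

        def simulate_loss(option, friction):
            # Hypothetical loss model: an option's nominal cost plus the impact
            # of a random friction (delays, shortfalls, miscommunication).
            return option["nominal_cost"] + option["friction_sensitivity"] * friction

        def assess(option, n_runs=10_000, alpha=0.95):
            # Monte Carlo over non-deterministic frictions, here drawn from an
            # (assumed) exponential distribution.
            losses = sorted(simulate_loss(option, random.expovariate(1.0))
                            for _ in range(n_runs))
            mean_loss = sum(losses) / n_runs
            risk = losses[int(alpha * n_runs)]   # risk measure: alpha-quantile of loss
            robustness = losses[-1]              # robustness proxy: worst sampled outcome
            return mean_loss, risk, robustness

        options = [
            {"name": "quarantine",       "nominal_cost": 3.0, "friction_sensitivity": 0.5},
            {"name": "mass_vaccination", "nominal_cost": 2.0, "friction_sensitivity": 2.0},
        ]
        # Rank options by expected loss plus the risk term, not expected loss alone.
        best = min(options, key=lambda o: sum(assess(o)[:2]))
        print(best["name"])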

    Large deviations for risk measures in finite mixture models

    Due to their heterogeneity, insurance risks can properly be described as a mixture of different fixed models, where the weights assigned to each model may be estimated empirically from a sample of available data. If a risk measure is evaluated on the estimated mixture instead of the (unknown) true one, then it is important to investigate the resulting error. In this paper we study the asymptotic behaviour of estimated risk measures, as the data sample size tends to infinity, within the framework of large deviations. We obtain large deviation results by applying the contraction principle, and the rate functions are given by a suitable variational formula; explicit expressions are available for mixtures of two models. Finally, our results are applied to the most common risk measures, namely quantiles, Expected Shortfall, and shortfall risk measures.
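
    To make the setting concrete, the following sketch estimates the weight of a two-component mixture from a labelled sample and compares a quantile (VaR) and the Expected Shortfall computed from the estimated mixture with those of the true one. The normal components and the naive weight estimator are illustrative assumptions, not the paper's construction.

        import numpy as np

        rng = np.random.default_rng(0)
        true_w = 0.3                               # true weight of the first model

        def sample_mixture(w, n):
            # Finite mixture of two fixed models (normal components chosen
            # arbitrarily for illustration).
            first = rng.random(n) < w
            return np.where(first, rng.normal(0.0, 1.0, n), rng.normal(4.0, 2.0, n))

        # Empirical weight estimate from a labelled sample of 1,000 points.
        labels = rng.random(1_000) < true_w
        w_hat = labels.mean()

        def var_es(w, alpha=0.99, n=200_000):
            # Monte Carlo quantile (VaR) and Expected Shortfall of the mixture.
            x = np.sort(sample_mixture(w, n))
            k = int(alpha * n)
            return x[k], x[k:].mean()

        print("true      VaR/ES:", var_es(true_w))
        print("estimated VaR/ES:", var_es(w_hat))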

    Regulatory solvency prediction in property-liability insurance: risk-based capital, audit ratios, and cash flow simulation

    This paper analyzes the accuracy of the principal models used by U.S. insurance regulators to predict insolvencies in the property-liability insurance industry and compares these models with a relatively new solvency testing approach: cash flow simulation. Specifically, we compare the risk-based capital (RBC) system introduced by the National Association of Insurance Commissioners (NAIC) in 1994, the FAST (Financial Analysis and Surveillance Tracking) audit ratio system used by the NAIC, and a cash flow simulation model developed by the authors. Both the RBC and FAST systems are static, ratio-based approaches to solvency testing, whereas the cash flow simulation model implements dynamic financial analysis. Logistic regression analysis is used to test the models for a large sample of solvent and insolvent property-liability insurers, using data from the years 1990-1992 to predict insolvencies over three-year prediction horizons. We find that the FAST system dominates RBC as a static method for predicting insurer insolvencies. Further, we find the cash flow simulation variables add significant explanatory power to the regressions and lead to more accurate solvency prediction than the ratio-based models taken alone.
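
    The nested-model comparison at the heart of the study, logistic regression on static ratios with and without cash-flow simulation variables, can be sketched as follows. The features, coefficients, and synthetic data are hypothetical and only illustrate the test design, not the authors' variables or results.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 2_000
        # Hypothetical predictors: three static audit ratios (RBC/FAST style)
        # and two cash-flow simulation outputs.
        ratios = rng.normal(size=(n, 3))
        cashflow = rng.normal(size=(n, 2))
        logit = ratios @ np.array([0.8, -0.5, 0.3]) + cashflow @ np.array([1.2, -0.9]) - 2.0
        insolvent = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

        static_only = LogisticRegression().fit(ratios, insolvent)
        X_full = np.hstack([ratios, cashflow])
        combined = LogisticRegression().fit(X_full, insolvent)

        print("static ratios only :", roc_auc_score(insolvent, static_only.predict_proba(ratios)[:, 1]))
        print("plus cash-flow vars:", roc_auc_score(insolvent, combined.predict_proba(X_full)[:, 1]))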

    Insurance policies for monetary policy in the Euro area

    In this paper, we examine the cost of insurance against model uncertainty for the Euro area, considering four alternative reference models, all of which are used for policy analysis at the ECB. We find that maximal insurance across this model range, in the form of a Minimax policy, comes at moderate cost in terms of lower expected performance. We extract priors that would rationalize the Minimax policy from a Bayesian perspective. These priors indicate that full insurance is strongly oriented towards the model with the highest baseline losses. Furthermore, this policy is not as tolerant towards small perturbations of policy parameters as the Bayesian policy rule. We propose to strike a compromise and use preferences for policy design that allow for intermediate degrees of ambiguity aversion. These preferences allow the specification of priors but also give extra weight to the worst uncertain outcomes in a given context. JEL Classification: E52, E58, E61.
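
    The trade-off between Bayesian, Minimax, and intermediate policies can be illustrated with a toy loss grid. Everything below is hypothetical: scalar quadratic losses stand in for the four ECB reference models, and the intermediate preference is written as an epsilon-weighted blend of expected and worst-case loss, one common formalisation that may differ from the paper's exact specification.

        import numpy as np

        # Hypothetical quadratic losses of a scalar policy parameter phi under
        # four reference models; the fourth model has the highest baseline losses.
        optima = np.array([0.5, 1.0, 1.5, 3.0])     # model-specific optimal phi
        scales = np.array([1.0, 1.0, 1.0, 2.0])     # baseline loss scales

        phi = np.linspace(0.0, 4.0, 401)
        loss = scales[:, None] * (phi[None, :] - optima[:, None]) ** 2

        prior = np.full(4, 0.25)                    # flat prior over the models
        bayes = phi[np.argmin(prior @ loss)]        # Bayesian policy
        minimax = phi[np.argmin(loss.max(axis=0))]  # full insurance against the worst model

        eps = 0.5                                   # intermediate ambiguity aversion
        mixed = phi[np.argmin((1 - eps) * (prior @ loss) + eps * loss.max(axis=0))]
        print(f"Bayes {bayes:.2f}, Minimax {minimax:.2f}, intermediate {mixed:.2f}")

    In this toy example the Minimax policy lies closer to the optimum of the high-loss model than the Bayesian policy does, loosely mirroring the paper's finding that full insurance is strongly oriented towards the model with the highest baseline losses.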

    Mortality-Indexed Annuities

    Longevity risk has become a major challenge for governments, individuals, and annuity providers in most countries; especially its aggregate form, i.e. the risk of systematic changes to general mortality patterns, bears a large potential for cumulative losses for insurers. As obvious risk management tools such as (re)insurance or hedging are less suited to managing an annuity provider’s exposure to aggregate longevity risk, the current paper proposes a new type of life annuity with benefits contingent on actual mortality experience, and details actuarial aspects of its implementation. Similar adaptations to conventional product design exist in investment-linked annuities, and a role model for long-term contracts contingent on actual cost experience is found in German private health insurance; the idea is thus not novel in general, but it is new in the context of longevity risk. By not assuming, or by re-transferring, the systematic longevity risk, insurers may avoid cumulative losses; the primary focus of an extensive Monte Carlo simulation is therefore on whether, and to what extent, such products are also advantageous for policyholders compared with a conventional annuity product.
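
    One way to make benefits contingent on actual mortality experience is to index the annual payout to realized versus anticipated cohort survival. The indexing rule and the numbers below are a minimal hypothetical sketch, not the actuarial design detailed in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical cohort: survival probabilities anticipated at contract
        # inception versus realized ones drifting upward with longevity shocks.
        T = 20
        p_expected = np.full(T, 0.97)
        p_realized = np.clip(p_expected + rng.normal(0.005, 0.003, T), 0.0, 1.0)

        benefit = 100.0
        for t in range(T):
            # One plausible indexing rule: scale the benefit by the ratio of
            # anticipated to realized survival, so systematic mortality
            # deviations are passed back to the surviving policyholders.
            benefit *= p_expected[t] / p_realized[t]
        print(f"benefit after {T} years: {benefit:.2f} (initially 100.00)")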