
    Robust Estimation of Loss Models for Truncated and Censored Severity Data

    In this paper, we consider robust estimation of claim severity models in insurance when data are affected by truncation (due to deductibles), censoring (due to policy limits), and scaling (due to coinsurance). In particular, robust estimators based on the methods of trimmed moments (T-estimators) and winsorized moments (W-estimators) are pursued and fully developed. The general definitions of such estimators are formulated and their asymptotic properties are investigated. For illustrative purposes, specific formulas for T- and W-estimators of the tail parameter of a single-parameter Pareto distribution are derived. The practical performance of these estimators is then explored using the well-known Norwegian fire claims data. Our results demonstrate that T- and W-estimators offer a robust and computationally efficient alternative to likelihood-based inference for models that are affected by deductibles, policy limits, and coinsurance. Comment: 32 pages, 2 figures.
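    The paper's T-estimators accommodate truncation, censoring, and scaling; as a minimal sketch of the underlying idea only (complete data, no deductible or policy limit, with function names and trimming proportions chosen here for illustration), one can exploit the fact that for a single-parameter Pareto sample with known scale x0, Y = log(X/x0) is exponential, and match a trimmed sample mean of Y to its trimmed population counterpart:

```python
import numpy as np

def pareto_tail_t_estimator(x, x0, a=0.1, b=0.1):
    """Trimmed-moments (T-) sketch for the tail parameter alpha of a
    single-parameter Pareto distribution with known scale x0.

    Y = log(X/x0) is Exponential(alpha), so the trimmed sample mean of Y,
    matched to its trimmed population counterpart, yields alpha.  The
    trimming proportions a (lower) and b (upper) control robustness.
    """
    y = np.sort(np.log(np.asarray(x) / x0))
    n = len(y)
    lo, hi = int(np.floor(n * a)), n - int(np.floor(n * b))
    trimmed_mean = y[lo:hi].mean()
    # Population trimmed mean of Exp(alpha) is c(a, b) / alpha, where
    # c(a, b) = (1/(1-a-b)) * int_a^{1-b} -log(1-u) du, in closed form:
    G = lambda u: (1 - u) * np.log1p(-u) + u  # antiderivative of -log(1-u)
    c = (G(1 - b) - G(a)) / (1 - a - b)
    return c / trimmed_mean

# Sanity check on simulated Pareto data (alpha = 2, x0 = 1):
rng = np.random.default_rng(42)
x = 1.0 * rng.uniform(size=100_000) ** (-1 / 2.0)
alpha_hat = pareto_tail_t_estimator(x, x0=1.0)
```

    Heavier trimming buys robustness to contaminated observations at some cost in efficiency, which is the trade-off the paper quantifies via the estimators' asymptotic distributions.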

    Method of Winsorized Moments for Robust Fitting of Truncated and Censored Lognormal Distributions

    When constructing parametric models to predict the cost of future claims, several important details have to be taken into account: (i) models should be designed to accommodate deductibles, policy limits, and coinsurance factors; (ii) parameters should be estimated robustly to control the influence of outliers on model predictions; and (iii) all point predictions should be augmented with estimates of their uncertainty. The methodology proposed in this paper provides a framework for addressing all these aspects simultaneously. Using payment-per-payment and payment-per-loss variables, we construct adaptive versions of the method of winsorized moments (MWM) estimators for the parameters of the truncated and censored lognormal distribution. Further, the asymptotic distributional properties of this approach are derived and compared with those of the maximum likelihood estimator (MLE) and the method of trimmed moments (MTM) estimators, the latter being a primary competitor to MWM. Moreover, the theoretical results are validated with extensive simulation studies and risk measure sensitivity analysis. Finally, the practical performance of these methods is illustrated using the well-studied data set of 1500 U.S. indemnity losses. With this real data set, it is also demonstrated that composite models do not provide much improvement in the quality of predictive models compared to a stand-alone fitted distribution, especially for truncated and censored sample data. Comment: 35 pages, 4 figures.
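    As a simplified illustration of winsorized-moments fitting (complete data only, with no truncation or censoring, and not the paper's payment-per-payment construction; the winsorizing proportion and all names are chosen here for the sketch), one can winsorize the log-data and undo the known deflation of the winsorized variance of a normal sample:

```python
import numpy as np
from scipy.stats import norm

def lognormal_winsorized_fit(x, a=0.05):
    """Winsorized-moments (W-) sketch for a lognormal fit to complete data,
    winsorizing a proportion `a` in each tail.

    Y = log(X) is Normal(mu, sigma).  By symmetry, the winsorized mean of Y
    still estimates mu; the winsorized variance is deflated by a known
    constant k(a), which is inverted to recover sigma.
    """
    y = np.sort(np.log(np.asarray(x)))
    n = len(y)
    k_low = int(np.floor(n * a))
    # Winsorize: pull the k_low smallest/largest values to their neighbors.
    y[:k_low] = y[k_low]
    y[n - k_low:] = y[n - k_low - 1]
    mu_hat = y.mean()
    s2 = ((y - mu_hat) ** 2).mean()
    # k(a) = E[clip(Z, -c, c)^2] for Z ~ N(0,1) with c = Phi^{-1}(1-a):
    c = norm.ppf(1 - a)
    k = 2 * a * c**2 + (1 - 2 * a) - 2 * c * norm.pdf(c)
    sigma_hat = np.sqrt(s2 / k)
    return mu_hat, sigma_hat

rng = np.random.default_rng(7)
x = rng.lognormal(mean=1.0, sigma=0.5, size=100_000)
mu_hat, sigma_hat = lognormal_winsorized_fit(x)
```

    Unlike trimming, winsorizing keeps every observation in the sample but caps its influence, which is why MWM and MTM behave differently near the tails.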

    When Inflation Causes No Increase in Claim Amounts

    It is well known that when (re)insurance coverages involve a deductible, the impact of inflation of loss amounts is distorted, and the changes in claims paid by the (re)insurer cannot be assumed to reflect the rate of inflation. A particularly interesting phenomenon occurs when losses follow a Pareto distribution. In this case, the observed loss amounts (those that exceed the deductible) are identically distributed from year to year even in the presence of inflation. Nevertheless, in this paper we succeed in estimating the inflation rate from the observations. We develop appropriate statistical inferential methods to quantify the inflation rate and illustrate them using simulated data. Our solution hinges on the recognition that the distribution of the number of observed losses changes from year to year depending on the inflation rate.
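    The phenomenon can be sketched in a few lines (parameter values are illustrative assumptions, not those of the paper): inflating Pareto losses leaves the distribution of exceedances over the deductible unchanged, but multiplies the probability of exceeding the deductible by (1 + r)^alpha.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, x0, d, r = 2.0, 0.5, 1.0, 0.10  # Pareto tail, scale, deductible, inflation
n = 500_000

# Ground-up losses in year 1, and the same book inflated by (1 + r) in year 2.
year1 = x0 * rng.uniform(size=n) ** (-1 / alpha)
year2 = (1 + r) * (x0 * rng.uniform(size=n) ** (-1 / alpha))

obs1 = year1[year1 > d]  # losses reported above the deductible
obs2 = year2[year2 > d]

# Conditional on exceeding d, log(X/d) is Exponential(alpha) in BOTH years,
# so the observed severities look identical (means approx 1/alpha = 0.5)...
m1, m2 = np.log(obs1 / d).mean(), np.log(obs2 / d).mean()

# ...but the expected number of reported losses grows by (1 + r)^alpha = 1.21:
count_ratio = len(obs2) / len(obs1)
```

    This is exactly the information the paper's inferential methods exploit: the severity of observed losses is silent about inflation, while the claim counts are not.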

    Fisher information matrix for the Feller-Pareto distribution

    In this paper, the exact form of the Fisher information matrix for the Feller-Pareto (FP) distribution is determined. The FP family is a very general family of unimodal distributions which includes a variety of models as special cases, for example:
    - A hierarchy of Pareto models: Pareto (I), Pareto (II), Pareto (III), and Pareto (IV) (see Arnold, Pareto Distributions, International Cooperative Publishing House, Fairland, MD, 1983); and
    - The transformed beta family, which in turn includes such general families as Burr, generalized Pareto, and inverse Burr (see Klugman et al., Loss Models: From Data to Decisions, Wiley, New York, 1998).
    Application of these distributions covers a wide spectrum of areas ranging from actuarial science, economics, and finance to biosciences, telecommunications, and extreme value theory. Keywords: digamma function; Fisher information; Pareto models; transformed beta family; trigamma function.
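    The full FP information matrix involves digamma and trigamma functions and is not reproduced here. For the simplest member of the Pareto hierarchy, however, the calculation is elementary and easy to check numerically: for a single-parameter Pareto with known scale x0, the Fisher information about the tail parameter is I(alpha) = 1/alpha^2, which the sketch below (parameter values are our own choices) verifies as the variance of the score.

```python
import numpy as np

# For Pareto I with known scale x0, the log-density is
#   log f(x) = log(alpha) + alpha*log(x0) - (alpha + 1)*log(x),
# so the score is  d/d(alpha) log f(x) = 1/alpha - log(x/x0),
# and the Fisher information is I(alpha) = Var(score) = 1/alpha^2.
alpha, x0 = 3.0, 1.0
rng = np.random.default_rng(0)
x = x0 * rng.uniform(size=400_000) ** (-1 / alpha)

score = 1 / alpha - np.log(x / x0)
score_mean = score.mean()        # should be approx 0 (score has mean zero)
fisher_mc = score.var()          # Monte Carlo estimate of I(alpha)
fisher_exact = 1 / alpha**2      # = 1/9 here
```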

    Robust-efficient credibility models with heavy-tailed claims: A mixed linear models perspective

    In actuarial practice, regression models serve as a popular statistical tool for analyzing insurance data and tariff ratemaking. In this paper, we consider classical credibility models that can be embedded within the framework of mixed linear models. For inference about fixed effects and variance components, likelihood-based methods such as (restricted) maximum likelihood estimators are commonly pursued. However, it is well known that these standard and fully efficient estimators are extremely sensitive to small deviations from hypothesized normality of random components as well as to the occurrence of outliers. To obtain better estimators for premium calculation and prediction of future claims, various robust methods have been successfully adapted to credibility theory in the actuarial literature. The objective of this work is to develop robust and efficient methods for credibility when heavy-tailed claims are approximately log-location-scale distributed. To accomplish that, we first show how to express additive credibility models such as the Bühlmann-Straub and Hachemeister models as mixed linear models with symmetric or asymmetric errors. Then, we adjust adaptively truncated likelihood methods and compute highly robust credibility estimates for the ordinary but heavy-tailed claims part. Finally, we treat the identified excess claims separately and find robust-efficient credibility premiums. Practical performance of this approach is examined, via simulations, under several contaminating scenarios. A widely studied real-data set from workers' compensation insurance is used to illustrate functional capabilities of the new robust credibility estimators. Subject codes: IB 83, IM10, IM31, IM41, IM54. Keywords: adaptive robust-efficient estimation; asymmetric heavy-tailed residuals; credibility ratemaking; mixed linear model; treatment of excess claims.
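    For orientation, the classical (non-robust) Bühlmann credibility estimator that such robust methods refine can be sketched as follows; the toy data and function name are our own, and this is the textbook nonparametric estimator, not the paper's adaptively truncated procedure.

```python
import numpy as np

def buhlmann_premiums(claims):
    """Classical (non-robust) Buhlmann credibility premiums.

    `claims` is an (r policyholders) x (n years) array.  The credibility
    factor is Z = n / (n + k) with k = v / a, where v estimates the expected
    process variance and a the variance of the hypothetical means.
    """
    claims = np.asarray(claims, dtype=float)
    r, n = claims.shape
    ind_means = claims.mean(axis=1)              # per-policyholder means
    overall = ind_means.mean()                   # simple mean (balanced data)
    v = claims.var(axis=1, ddof=1).mean()        # expected process variance
    a = ind_means.var(ddof=1) - v / n            # variance of hypothetical means
    Z = n / (n + v / a)
    return Z, Z * ind_means + (1 - Z) * overall

claims = [[1.0, 3.0, 2.0],
          [5.0, 7.0, 6.0]]
Z, premiums = buhlmann_premiums(claims)
```

    In practice the estimate of `a` can come out nonpositive, in which case Z is set to zero; the robust variants in the paper guard the variance components themselves against outliers and excess claims.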

    Model Uncertainty in Operational Risk Modeling Due to Data Truncation: A Single Risk Case

    Over the last decade, researchers, practitioners, and regulators have had intense debates about how to treat the data collection threshold in operational risk modeling. Several approaches have been employed to fit the loss severity distribution: the empirical approach, the “naive” approach, the shifted approach, and the truncated approach. Since each approach is based on a different set of assumptions, different probability models emerge. Thus, model uncertainty arises. The main objective of this paper is to understand the impact of model uncertainty on the value-at-risk (VaR) estimators. To accomplish that, we take the bank’s perspective and study a single risk. Under this simplified scenario, we can solve the problem analytically (when the underlying distribution is exponential) and show that it uncovers similar patterns among VaR estimates to those based on the simulation approach (when data follow a Lomax distribution). We demonstrate that for a fixed probability distribution, the choice of the truncated approach yields the lowest VaR estimates, which may be viewed as beneficial to the bank, whilst the “naive” and shifted approaches lead to higher estimates of VaR. The advantages and disadvantages of each approach and the probability distributions under study are further investigated using a real data set for legal losses in a business unit (Cruz 2002).
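    One stylized reading of the exponential case can be sketched as follows (the numbers are invented for illustration, and this is a simplification of the paper's analysis): by memorylessness, excesses over the threshold t are again exponential, and the three fitting conventions place the same fitted exponential on different supports, producing the ordering described above.

```python
import numpy as np

# Simplified sketch: ground-up losses are exponential with mean theta, but
# only losses above the collection threshold t are recorded.  Memorylessness
# implies the excesses (x - t) are again exponential with mean theta, so the
# recorded sample mean xbar estimates t + theta.
t, xbar, p = 25.0, 100.0, 0.95  # threshold, observed mean, VaR level
L = -np.log(1 - p)              # = ln(1/(1-p)) for the exponential quantile

theta_excess = xbar - t         # MLE of theta from the excesses

# Truncated approach: ground-up model Exp(theta_excess); VaR on ground-up scale.
var_truncated = theta_excess * L
# Shifted approach: model the recorded data as t + Exp(theta_excess).
var_shifted = t + theta_excess * L
# "Naive" approach: fit Exp directly to the recorded data, ignoring t.
var_naive = xbar * L
```

    With these inputs the truncated approach gives the smallest VaR and the naive approach the largest, consistent with the pattern the paper reports.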

    Computing and Estimating Distortion Risk Measures: How to Handle Analytically Intractable Cases?

    In insurance data analytics and actuarial practice, distortion risk measures are used to capture the riskiness of the distribution tail. Point and interval estimates of the risk measures are then employed to price extreme events, to develop reserves, to design risk transfer strategies, and to allocate capital. Often the computation of those estimates relies on Monte Carlo simulations, which, depending upon the complexity of the problem, can be very costly in terms of required expertise and computational time. In this article, we study analytic and numerical evaluation of distortion risk measures, with the expectation that the proposed formulas or inequalities will reduce the computational burden. Specifically, we consider several distortion risk measures, namely value-at-risk (VaR), conditional tail expectation (cte), proportional hazards transform (pht), Wang transform (wt), and Gini shortfall (gs), and evaluate them when the loss severity variable follows shifted exponential, Pareto I, and shifted lognormal distributions (all chosen to have the same support), which exhibit common distributional shapes of insurance losses. For these choices of risk measures and loss models, only the VaR and cte measures always possess explicit formulas. For pht, wt, and gs, there are cases when the analytic treatment of the measure is not feasible. In the latter situations, conditions under which the measure is finite are studied rigorously. In particular, we prove several theorems that specify two-sided bounds for the analytically intractable cases. The quality of the bounds is further investigated by comparing them with numerically evaluated risk measures. Finally, a simulation study involving application of those bounds in statistical estimation of the risk measures is also provided.
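    One of the tractable cases can be sketched directly (function names and parameter values are chosen here for illustration): the pht measure of a Pareto I loss has a closed form that is finite exactly when alpha*r > 1, and it can be cross-checked against numerical integration.

```python
import numpy as np
from scipy.integrate import quad

# Proportional hazards transform (pht) of a nonnegative loss X:
#   pht_r(X) = int_0^inf S(x)^r dx,  0 < r <= 1,
# where S is the survival function.  For Pareto I with scale x0 and shape
# alpha, S(x) = 1 for x < x0 and (x0/x)^alpha for x >= x0, giving
#   pht_r = x0 * alpha * r / (alpha * r - 1),  finite iff alpha * r > 1.
def pht_pareto1_closed(x0, alpha, r):
    if alpha * r <= 1:
        return np.inf  # integral diverges: the distorted tail is too heavy
    return x0 * alpha * r / (alpha * r - 1)

def pht_pareto1_numeric(x0, alpha, r):
    # Split at x0, where S has a kink; the [0, x0] piece integrates to x0.
    tail, _ = quad(lambda x: (x0 / x) ** (alpha * r), x0, np.inf)
    return x0 + tail

closed = pht_pareto1_closed(1.0, 3.0, 0.5)   # alpha*r = 1.5 > 1, so finite
numeric = pht_pareto1_numeric(1.0, 3.0, 0.5)
```

    The finiteness condition alpha*r > 1 mirrors the kind of rigorous finiteness analysis the article carries out for the cases without closed forms.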