6,606 research outputs found

    Modelling Censored Losses Using Splicing: A Global Fit Strategy With Mixed Erlang and Extreme Value Distributions

    Full text link
    In risk analysis, a global fit that appropriately captures the body and the tail of the distribution of losses is essential. Modelling the whole range of the losses using a single standard distribution is usually very hard, and often impossible, due to the differing characteristics of the body and the tail of the loss distribution. A possible solution is to combine two distributions in a splicing model: a light-tailed distribution for the body, which covers light and moderate losses, and a heavy-tailed distribution for the tail, to capture large losses. We propose a splicing model with a mixed Erlang (ME) distribution for the body and a Pareto distribution for the tail. This combines the flexibility of the ME distribution with the ability of the Pareto distribution to model extreme values. We extend our splicing approach to censored and/or truncated data; relevant examples of such data can be found in financial risk analysis. We illustrate the flexibility of this splicing model using practical examples from risk measurement.
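
    As a minimal sketch of the model's form (not the authors' fitting procedure, which estimates the ME component under censoring and truncation), the spliced density below renormalizes a mixed-Erlang body on (0, t] and attaches a Pareto tail anchored at the splicing point t. All parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import gamma

def spliced_pdf(x, weights, shapes, scale, t, tail_index, p_body=0.9):
    """Spliced density: mixed-Erlang body on (0, t], Pareto tail on (t, inf).

    weights, shapes : hypothetical mixture weights and integer Erlang shapes
    scale           : common Erlang scale parameter
    t               : splicing point
    tail_index      : Pareto tail index
    p_body          : probability mass assigned to the body
    """
    x = np.asarray(x, dtype=float)
    # Mixed-Erlang density, and its CDF at the splicing point for truncation
    me_pdf = sum(w * gamma.pdf(x, a=k, scale=scale) for w, k in zip(weights, shapes))
    me_cdf_t = sum(w * gamma.cdf(t, a=k, scale=scale) for w, k in zip(weights, shapes))
    body = p_body * me_pdf / me_cdf_t                 # ME truncated to (0, t]
    tail = (1 - p_body) * tail_index * t**tail_index / x**(tail_index + 1)  # Pareto
    return np.where(x <= t, body, tail)

# Hypothetical parameters: two Erlang components spliced to a Pareto tail at t = 10
xs = np.linspace(0.1, 50, 500)
pdf_vals = spliced_pdf(xs, weights=[0.6, 0.4], shapes=[2, 5],
                       scale=1.5, t=10.0, tail_index=1.8)
```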

    Modeling extreme values of processes observed at irregular time steps: Application to significant wave height

    Get PDF
    This work is motivated by the analysis of the extremal behavior of buoy and satellite data describing wave conditions in the North Atlantic Ocean. The available data sets consist of time series of significant wave height (Hs) with irregular time sampling. In such a situation, the usual statistical methods for analyzing extreme values cannot be used directly. The method proposed in this paper is an extension of the peaks over threshold (POT) method, where the distribution of a process above a high threshold is approximated by a max-stable process whose parameters are estimated by maximizing a composite likelihood function. The efficiency of the proposed method is assessed on an extensive set of simulated data. It is shown, in particular, that the method is able to describe the extremal behavior of several common time series models with regular or irregular time sampling. The method is then used to analyze Hs data in the North Atlantic Ocean. The results indicate that it is possible to derive realistic estimates of the extremal properties of Hs from satellite data, despite its complex space-time sampling.

    Comment: Published at http://dx.doi.org/10.1214/13-AOAS711 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
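
    For orientation, the sketch below shows only the classical POT baseline that the paper extends: fit a generalized Pareto distribution to threshold excesses and derive a return level. It assumes regular sampling, which is exactly the assumption the proposed composite-likelihood method relaxes; the series hs is a synthetic stand-in.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
hs = rng.gumbel(loc=4.0, scale=1.0, size=5000)  # synthetic stand-in for Hs

u = np.quantile(hs, 0.95)              # high threshold
excess = hs[hs > u] - u                # peaks over the threshold
# Fit a generalized Pareto distribution to the excesses (location fixed at 0)
shape, _, scale = genpareto.fit(excess, floc=0.0)

# m-observation return level: exceeded once per m observations on average
# (formula valid for fitted shape != 0)
zeta_u = excess.size / hs.size         # empirical exceedance rate
m = 10 * 365                           # e.g. once per ten "years" of daily data
return_level = u + (scale / shape) * ((m * zeta_u) ** shape - 1.0)
```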

    Likelihood estimators for multivariate extremes

    Full text link
    The main approach to inference for multivariate extremes consists in approximating the joint upper tail of the observations by a parametric family arising in the limit for extreme events. The latter may be expressed in terms of componentwise maxima, high threshold exceedances or point processes, yielding different but related asymptotic characterizations and estimators. The present paper clarifies the connections between the main likelihood estimators, and assesses their practical performance. We investigate their ability to estimate the extremal dependence structure and to predict future extremes, using exact calculations and simulation, in the case of the logistic model.
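
    As one concrete instance, the sketch below writes down the block-maxima likelihood for the bivariate logistic model on unit Fréchet margins, one of the likelihoods such comparisons involve. The data-generation step (independent maxima, so the true dependence parameter is 1) is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def logistic_nll(alpha, z1, z2):
    """Negative log-likelihood of bivariate logistic block maxima on unit
    Frechet margins; alpha in (0, 1], smaller alpha = stronger dependence."""
    t1, t2 = z1 ** (-1.0 / alpha), z2 ** (-1.0 / alpha)
    s = t1 + t2
    V = s ** alpha                                     # exponent measure
    V1 = -s ** (alpha - 1.0) * t1 / z1                 # dV/dz1
    V2 = -s ** (alpha - 1.0) * t2 / z2                 # dV/dz2
    V12 = ((alpha - 1.0) / alpha) * s ** (alpha - 2.0) * (t1 / z1) * (t2 / z2)
    dens = np.exp(-V) * (V1 * V2 - V12)                # bivariate density
    return -np.sum(np.log(dens))

# Illustrative data: independent unit-Frechet maxima, so the true alpha is 1
rng = np.random.default_rng(1)
z1 = -1.0 / np.log(rng.uniform(size=1000))
z2 = -1.0 / np.log(rng.uniform(size=1000))
fit = minimize_scalar(logistic_nll, bounds=(0.05, 1.0), method="bounded",
                      args=(z1, z2))
```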

    Survival ensembles by the sum of pairwise differences with application to lung cancer microarray studies

    Full text link
    Lung cancer is among the most common cancers in the United States, in terms of incidence and mortality. In 2009, it is estimated that more than 150,000 deaths will result from lung cancer alone. Genetic information is an extremely valuable data source in characterizing the personal nature of cancer. Over the past several years, investigators have conducted numerous association studies where intensive genetic data is collected on relatively few patients compared to the numbers of gene predictors, with one scientific goal being to identify genetic features associated with cancer recurrence or survival. In this note, we propose high-dimensional survival analysis through a new application of boosting, a powerful tool in machine learning. Our approach is based on an accelerated lifetime model and minimizing the sum of pairwise differences in residuals. We apply our method to a recent microarray study of lung adenocarcinoma and find that our ensemble is composed of 19 genes, while a proportional hazards (PH) ensemble is composed of nine genes, a proper subset of the 19-gene panel. In one of our simulation scenarios, we demonstrate that PH boosting in a misspecified model tends to underfit and ignore moderately sized covariate effects, on average. Diagnostic analyses suggest that the PH assumption is not satisfied in the microarray data and may explain, in part, the discrepancy in the sets of active coefficients. Our simulation studies and comparative data analyses demonstrate how statistical learning by PH models alone is insufficient.

    Comment: Published at http://dx.doi.org/10.1214/10-AOAS426 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
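
    A minimal sketch of the Gehan-type objective this kind of boosting is built on: the sum of positive pairwise differences of accelerated-lifetime residuals, where only uncensored observations may contribute as the smaller member of a pair. The sketch shows the loss alone under hypothetical data, not the paper's full ensemble algorithm.

```python
import numpy as np

def gehan_loss(log_time, event, pred):
    """Gehan-type loss: sum of pairwise differences of AFT residuals.
    Only pairs whose smaller residual belongs to an uncensored observation
    contribute, since a censored residual is only a lower bound."""
    e = np.asarray(log_time) - np.asarray(pred)   # AFT residuals
    diff = e[None, :] - e[:, None]                # diff[i, j] = e_j - e_i
    return np.sum(np.asarray(event)[:, None] * np.maximum(diff, 0.0))

# Hypothetical data; a boosting step would fit its base learner to the
# negative gradient of this loss with respect to the predictions
rng = np.random.default_rng(2)
log_t = rng.normal(size=50)
delta = rng.integers(0, 2, size=50)               # 1 = event, 0 = censored
loss = gehan_loss(log_t, delta, pred=np.zeros(50))
```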

    Warranty Data Analysis: A Review

    Get PDF
    Warranty claims and supplementary data contain useful information about product quality and reliability. Analysing such data can therefore be of benefit to manufacturers in identifying early warnings of abnormalities in their products, providing useful information about failure modes to aid design modification, estimating product reliability for deciding on warranty policy and forecasting future warranty claims needed for preparing fiscal plans. In the last two decades, considerable research has been conducted in warranty data analysis (WDA) from several different perspectives. This article attempts to summarise and review the research and developments in WDA with emphasis on models, methods and applications. It concludes with a brief discussion on current practices and possible future trends in WDA.
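
    The review surveys many models; as a single hedged illustration of the claim-forecasting use case it mentions, the sketch below extrapolates expected cumulative claims under a power-law nonhomogeneous Poisson process, a common choice in reliability work. All numbers are hypothetical.

```python
import numpy as np

def expected_claims(t, alpha, beta):
    """Mean cumulative claims by time t under a power-law NHPP:
    intensity lambda(t) = alpha*beta*t**(beta-1), so Lambda(t) = alpha*t**beta."""
    return alpha * np.asarray(t, dtype=float) ** beta

# Hypothetical calibration: 240 claims seen in the first 12 months,
# extrapolated to a 36-month warranty horizon
beta = 0.8                            # beta < 1: claim rate declines with age
alpha = 240.0 / 12.0 ** beta
forecast_36m = expected_claims(36.0, alpha, beta)
```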

    Quality assurance of rectal cancer diagnosis and treatment - phase 3: statistical methods to benchmark centres on a set of quality indicators

    Get PDF
    In 2004, the Belgian Section for Colorectal Surgery, a section of the Royal Belgian Society for Surgery, decided to start PROCARE (PROject on CAncer of the REctum), a multidisciplinary, profession-driven and decentralized project with the main objectives of reducing diagnostic and therapeutic variability and improving outcomes in patients with rectal cancer. All medical specialties involved in the care of rectal cancer established a multidisciplinary steering group in 2005. They agreed to approach the stated goal by means of treatment standardization through guidelines, implementation of these guidelines and quality assurance through registration and feedback. In 2007, the PROCARE guidelines were updated (Procare Phase I, KCE report 69). In 2008, a set of 40 process and outcome quality of care indicators (QCIs) was developed and organized into 8 domains of care: general, diagnosis/staging, neoadjuvant treatment, surgery, adjuvant treatment, palliative treatment, follow-up and histopathologic examination. These QCIs were tested on the prospective PROCARE database and on an administrative (claims) database (Procare Phase II, KCE report 81). Afterwards, 4 QCIs were added by the PROCARE group. Centres have been receiving feedback from the PROCARE registry on these QCIs with a description of the distribution of the unadjusted centre-averaged observed measures and the centre’s position therein. To optimize this feedback, centres should ideally be informed of their risk-adjusted outcomes and be given some benchmarks. The PROCARE Phase III study is devoted to developing a methodology to achieve this feedback.
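
    The sketch below is a generic illustration of risk-adjusted centre feedback, not the methodology the Phase III report actually develops: it fits a patient-level case-mix model and reports each centre's observed-to-expected event ratio. The function name and the use of logistic regression are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def oe_ratios(X, y, centre):
    """Observed/expected event ratio per centre, adjusted for case mix.

    X      : patient-level case-mix covariates (n_patients, n_features)
    y      : binary outcome per patient (e.g. a complication), 0/1
    centre : centre identifier per patient
    """
    # Case-mix model fitted on all patients, deliberately ignoring centre
    expected = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    return {c: y[centre == c].sum() / expected[centre == c].sum()
            for c in np.unique(centre)}
```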

    Consistent Estimation of Longitudinal Censored Demand Systems

    Get PDF
    In this paper we derive a joint continuous/censored demand system suitable for the analysis of commodity demand relationships using panel data. Unobserved heterogeneity is controlled for using a correlated random effects specification, and a Generalized Method of Moments framework is used to estimate the model in two stages. While relatively small differences in elasticity estimates are found between a flexible specification and one that restricts the relationship between the random effect and budget shares to be time invariant, larger differences are observed between the most flexible random effects model and a pooled cross sectional estimator. The results suggest that the limited ability of such estimators to control for preference heterogeneity and unit value endogeneity leads to parameter bias.
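
    A minimal sketch of the correlated random effects (Mundlak) device such specifications rely on: augmenting the panel regressors with their within-unit time averages, so the random effect may correlate with the covariates through those averages. This illustrates only that device, not the paper's two-stage GMM estimator for the censored demand system.

```python
import numpy as np

def add_mundlak_means(X, unit_id):
    """Correlated random effects (Mundlak) device: append each regressor's
    within-unit time average, letting the random effect correlate with the
    covariates through those averages."""
    X = np.asarray(X, dtype=float)
    means = np.empty_like(X)
    for u in np.unique(unit_id):
        mask = unit_id == u
        means[mask] = X[mask].mean(axis=0)   # time average for unit u
    return np.hstack([X, means])             # augmented regressor matrix
```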