
    Robust bootstrap procedures for the chain-ladder method

    Insurers face the challenge of estimating the future reserves needed to handle historic and outstanding claims that are not yet fully settled. A well-known and widely used technique is the chain-ladder method, which is a deterministic algorithm. To include a stochastic component, one may fit generalized linear models to the run-off triangles of past claims data. Analytical expressions for the standard deviation of the resulting reserve estimates are typically difficult to derive, so a popular alternative for obtaining inference is the bootstrap technique. However, the standard procedures are very sensitive to the possible presence of outliers. These atypical observations, which deviate from the pattern of the majority of the data, may inflate or deflate traditional reserve estimates and the corresponding inference, such as their standard errors. Even when paired with a robust chain-ladder method, classical bootstrap inference may break down. We therefore discuss and implement several robust bootstrap procedures in the claims reserving framework, and we investigate and compare their performance on both simulated and real data. We also illustrate their use for obtaining the distribution of one-year risk measures.
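    The abstract above builds on the deterministic chain-ladder step. As a rough illustration only, the Python sketch below computes chain-ladder development factors and reserves for a small, made-up cumulative run-off triangle; the GLM-based residual bootstrap and the robust procedures discussed in the paper are not reproduced here.

        # Chain-ladder sketch on an illustrative cumulative run-off triangle
        # (all figures are invented for the example).
        import numpy as np

        # Rows = accident years, columns = development years; NaN = not yet observed.
        triangle = np.array([
            [100.0, 150.0, 170.0, 175.0],
            [110.0, 168.0, 185.0, np.nan],
            [120.0, 175.0, np.nan, np.nan],
            [130.0, np.nan, np.nan, np.nan],
        ])
        n = triangle.shape[0]

        # Volume-weighted development factors f_j = sum_i C[i, j+1] / sum_i C[i, j],
        # using only accident years where both development years are observed.
        factors = []
        for j in range(n - 1):
            obs = ~np.isnan(triangle[:, j + 1])
            factors.append(triangle[obs, j + 1].sum() / triangle[obs, j].sum())

        # Complete the lower-right part of the triangle with the estimated factors.
        completed = triangle.copy()
        for i in range(n):
            for j in range(n - 1):
                if np.isnan(completed[i, j + 1]):
                    completed[i, j + 1] = completed[i, j] * factors[j]

        # Reserve = projected ultimate claim minus the latest observed cumulative claim.
        latest = np.array([triangle[i, ~np.isnan(triangle[i])][-1] for i in range(n)])
        reserves = completed[:, -1] - latest
        print("Development factors:", np.round(factors, 3))
        print("Total reserve:", round(reserves.sum(), 1))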

    Generalized Spherical Principal Component Analysis

    Outliers contaminating data sets are a challenge to statistical estimators: even a small fraction of outlying observations can heavily influence most classical statistical methods. In this paper we propose generalized spherical principal component analysis, a new robust version of principal component analysis based on the generalized spatial sign covariance matrix. Supporting theoretical properties of the proposed method, including influence functions, breakdown values and asymptotic efficiencies, are studied, and a simulation study compares the new method to existing methods. We also propose an adjustment of the generalized spatial sign covariance matrix to achieve better Fisher consistency properties. We illustrate that generalized spherical principal component analysis, depending on the chosen radial function, combines strong robustness and efficiency properties with a low computational cost.
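    As a rough sketch of the idea behind spherical principal component analysis, the snippet below computes principal directions from the plain spatial sign covariance matrix, i.e. the special case of the generalized estimator with radial function w(r) = 1/r. The paper's generalized radial functions and Fisher-consistency adjustment are not reproduced here, and the data are simulated purely for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.3])   # clean data
        X[:10] += 20.0                                             # a few gross outliers

        # Center at the coordinate-wise median (a simple robust location choice).
        Z = X - np.median(X, axis=0)

        # Spatial signs: project each centered observation onto the unit sphere,
        # so an outlier's influence no longer grows with its distance from the center.
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        signs = Z / np.where(norms > 0, norms, 1.0)

        # Eigenvectors of the spatial sign covariance matrix give robust PC directions.
        S = signs.T @ signs / len(signs)
        eigvals, eigvecs = np.linalg.eigh(S)
        order = np.argsort(eigvals)[::-1]
        print("Robust principal directions (columns):\n", np.round(eigvecs[:, order], 3))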

    The Leaky Integrating Threshold and its impact on evidence accumulation models of choice RT

    A common assumption in choice response time (RT) modeling is that once evidence accumulation reaches a certain decision threshold, the choice is categorically communicated to the motor system, which then executes the response. However, neurophysiological findings suggest that motor preparation partly overlaps with evidence accumulation and is not independent of stimulus difficulty. We propose to model this entanglement by changing the nature of the decision criterion from a simple threshold to an actual process. More specifically, we propose a secondary, motor-preparation-related, leaky accumulation process that takes the accumulated evidence of the original decision process as a continuous input and triggers the actual response when it reaches its own threshold. We analytically develop this Leaky Integrating Threshold (LIT), apply it to a simple constant-drift diffusion model, and show how its parameters can be estimated with the D*M method. In reanalyses of three data sets, the LIT extension outperforms a standard drift diffusion model under multiple statistical approaches. Furthermore, the LIT leak parameter explains the speed/accuracy trade-off manipulation better than the commonly used boundary separation parameter. These improvements can also be verified using traditional diffusion model analyses, for which the LIT predicts violations of several common selective parameter influence assumptions. These predictions are consistent with what is found in the data and with what is reported experimentally in the literature. Crucially, this work offers a new benchmark against which neural data can be compared in order to provide neurobiological validation for the proposed processes.
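    To make the mechanism concrete, the sketch below simulates a constant-drift diffusion process whose accumulated evidence is fed continuously into a secondary leaky accumulator, with the response triggered only when that secondary process reaches its own threshold. Parameter values, the Euler discretization, and the function names are illustrative assumptions; the D*M estimation procedure and the data analyses from the paper are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_trial(drift=1.0, leak=2.0, motor_thresh=0.5, dt=0.001, max_t=3.0):
            """Return (response time, choice) for one simulated trial."""
            x = 0.0   # primary evidence accumulator (diffusion process)
            m = 0.0   # secondary, motor-related leaky accumulator
            for step in range(1, int(max_t / dt) + 1):
                # Constant-drift diffusion with unit noise scale.
                x += drift * dt + rng.normal(0.0, np.sqrt(dt))
                # The leaky integrator continuously reads the accumulated evidence ...
                m += (-leak * m + x) * dt
                # ... and the overt response is triggered by this secondary process.
                if abs(m) >= motor_thresh:
                    return step * dt, int(m > 0)
            return max_t, int(m > 0)   # no response within the time limit

        rts, choices = zip(*(simulate_trial() for _ in range(500)))
        print("Mean RT:", round(float(np.mean(rts)), 3),
              "P(upper boundary):", round(float(np.mean(choices)), 3))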