    "A correlation sensitivity analysis of non-life underwriting risk in solvency capital requirement estimation"

    This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
    Keywords: Solvency II, Solvency Capital Requirement, Standard Model, Internal Model, Monte Carlo simulation, Copulas. JEL classification: C53
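
    The Internal Model described above rests on simulating the one-year net underwriting result per line of business, linking the lines with a copula, and reading the SCR off the simulated distribution. The following is a minimal sketch of that idea, assuming a Gaussian copula, lognormal losses, and made-up premiums and correlations; none of these choices reflect the paper's Spanish-market calibration.

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(0)

# Illustrative assumptions (not the paper's calibration):
# three lines of business, lognormal losses, fixed premiums,
# and a Gaussian copula linking the lines.
premiums = np.array([120.0, 80.0, 50.0])
loss_sigma = np.array([0.25, 0.35, 0.30])            # lognormal shape per line
loss_scale = np.array([100.0, 70.0, 40.0])           # lognormal scale per line
corr = np.array([[1.00, 0.50, 0.25],
                 [0.50, 1.00, 0.25],
                 [0.25, 0.25, 1.00]])                 # between-line correlation

n_sim = 100_000
# Gaussian copula: correlated normals -> uniforms -> lognormal marginals.
z = rng.multivariate_normal(np.zeros(3), corr, size=n_sim)
u = norm.cdf(z)
losses = lognorm.ppf(u, s=loss_sigma, scale=loss_scale)

# Net underwriting result per simulation at the one-year horizon.
net_result = premiums.sum() - losses.sum(axis=1)

# SCR as the 99.5% Value-at-Risk, here approximated by the mean result
# minus the 0.5% quantile of the simulated net underwriting result.
scr = np.mean(net_result) - np.quantile(net_result, 0.005)
print(f"Simulated SCR: {scr:.1f}")
```

    Re-running the same simulation with a different correlation matrix or with another copula family is the kind of sensitivity analysis the paper performs.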

    Spatial and Temporal Modeling of Radar Rainfall Uncertainties

    It is widely acknowledged that radar-based estimates of rainfall are affected by uncertainties (e.g., mis-calibration, beam blockage, anomalous propagation, and ground clutter) which are both systematic and random in nature. Improving the characterization of these errors would yield better understanding and interpretation of results from studies in which these estimates are used as inputs (e.g., hydrologic modeling) or initial conditions (e.g., rainfall forecasting). Building on earlier efforts, the authors apply a data-driven multiplicative model in which the relationship between true rainfall and radar rainfall is described as the product of a systematic and a random component. The systematic component accounts for conditional biases, which are approximated by a power-law function. The random component, which represents the random fluctuations remaining after correcting for systematic uncertainties, is characterized in terms of its probability distribution as well as its spatial and temporal dependencies. The space-time dependencies are computed using the non-parametric Kendall's τ measure. For the first time, the authors present a methodology based on conditional copulas to generate ensembles of random error fields with the prescribed marginal probability distribution and spatio-temporal dependencies. The methodology is illustrated using data from Clear Creek, a densely instrumented experimental watershed in eastern Iowa. Results are based on three years of radar data from the Davenport Weather Surveillance Radar-1988 Doppler (WSR-88D) that were processed through the Hydro-NEXRAD system. The spatial and temporal resolutions are 0.5 km and hourly, respectively, and the radar data are complemented by rainfall measurements from 11 rain gages, located within the catchment, which are used to approximate true ground rainfall. © 2013 Elsevier B.V.
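
    As a rough illustration of copula-based simulation of random error fields with a prescribed marginal and spatial dependence, the sketch below uses a plain (unconditional) Gaussian copula on a hypothetical 10 x 10 grid with an assumed exponential correlation range and a gamma marginal. The paper itself estimates the dependence non-parametrically via Kendall's τ and samples from conditional copulas, which this simplified sketch does not reproduce.

```python
import numpy as np
from scipy.stats import norm, gamma
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)

# Hypothetical 10 x 10 grid of 0.5 km pixels (illustrative, not the Clear Creek setup).
nx = ny = 10
xy = np.array([(i * 0.5, j * 0.5) for i in range(nx) for j in range(ny)])

# Assumed exponential spatial correlation with a 3 km range parameter.
corr = np.exp(-cdist(xy, xy) / 3.0)

# Gaussian copula: correlated standard normals -> uniforms -> prescribed marginal.
# The marginal of the multiplicative random error is taken, for illustration,
# to be a gamma distribution with unit mean.
n_members = 50
z = rng.multivariate_normal(np.zeros(len(xy)), corr, size=n_members)
u = norm.cdf(z)
error_fields = gamma.ppf(u, a=4.0, scale=0.25)       # shape: (members, pixels)

# Each row is one equally likely random-error field that would be multiplied
# onto a bias-corrected radar rainfall field to form an ensemble member.
print(error_fields.shape, error_fields.mean())
```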

    A review of applied methods in Europe for flood-frequency analysis in a changing environment

    The report presents a review of methods used in Europe for trend analysis, climate change projections and non-stationary analysis of extreme precipitation and flood frequency. In addition, the main findings of the analyses are presented, including a comparison of trend analysis results and climate change projections. Existing guidelines in Europe on design flood and design rainfall estimation that incorporate climate change are reviewed. The report concludes with a discussion of research needs on non-stationary frequency analysis for considering the effects of climate change and inclusion in design guidelines.

    Trend analyses are reported for 21 countries in Europe, with results for extreme precipitation, extreme streamflow or both. A large number of national and regional trend studies have been carried out. Most studies are based on statistical methods applied to individual time series of extreme precipitation or extreme streamflow, using the non-parametric Mann-Kendall trend test or regression analysis. Some studies use field significance or regional consistency tests to analyse trends over larger areas, and some also include analysis of trend attribution. The studies reviewed indicate some evidence of a general increase in extreme precipitation, whereas there are no clear indications of significant increasing trends in extreme streamflow at regional or national level. For some smaller regions, increases in extreme streamflow are reported. Several studies from regions dominated by snowmelt-induced peak flows report decreases in extreme streamflow and earlier spring snowmelt peak flows.

    Climate change projections have been reported for 14 countries in Europe, with results for extreme precipitation, extreme streamflow or both. The review shows various approaches for producing climate projections of extreme precipitation and flood frequency based on alternative climate forcing scenarios, climate projections from available global and regional climate models, methods for statistical downscaling and bias correction, and alternative hydrological models. A large number of the reported studies use an ensemble modelling approach with several climate forcing scenarios and climate model projections in order to address the uncertainty in the projections of extreme precipitation and flood frequency. Some studies also include alternative statistical downscaling and bias correction methods and hydrological modelling approaches. Most studies reviewed indicate an increase in extreme precipitation under a future climate, which is consistent with the observed trend of extreme precipitation. Hydrological projections of peak flows and flood frequency show both positive and negative changes. Large increases in peak flows are reported for some catchments with rainfall-dominated peak flows, whereas a general decrease in flood magnitude and earlier spring floods are reported for catchments with snowmelt-dominated peak flows; the latter is consistent with the observed trends.

    The review of existing guidelines in Europe on design floods and design rainfalls shows that only a few countries explicitly address climate change. These design guidelines are based on climate change adjustment factors to be applied to current design estimates and may depend on the design return period and projection horizon. The review indicates a gap between the need for considering climate change impacts in design and the actual published guidelines that incorporate climate change in extreme precipitation and flood frequency. Most of the studies reported are based on frequency analysis assuming stationary conditions in a certain time window (typically 30 years) representing current and future climate. There is a need for developing more consistent non-stationary frequency analysis methods that can account for the transient nature of a changing climate.
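
    Most of the national trend studies cited above rely on the non-parametric Mann-Kendall test applied to annual series of extreme precipitation or streamflow. A minimal self-contained version of that test (two-sided, without tie or autocorrelation corrections), applied here to a synthetic annual-maximum series, might look as follows.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Two-sided Mann-Kendall trend test (no tie or autocorrelation correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs taken in time order.
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S, assuming no ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)                  # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Synthetic annual-maximum precipitation series with a mild upward drift
# (illustrative data only, not taken from any of the reviewed studies).
rng = np.random.default_rng(2)
years = np.arange(1980, 2020)
amax = 40 + 0.2 * (years - years[0]) + rng.normal(0, 5, size=years.size)
print(mann_kendall(amax))
```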

    Non-linear dependences in finance

    The thesis is composed of three parts. Part I introduces the mathematical and statistical tools that are relevant for the study of dependences, as well as statistical goodness-of-fit tests for empirical probability distributions. I propose two extensions of the usual tests for the cases where dependence is present in the sample data and where observations have a fat-tailed distribution. The financial content of the thesis starts in Part II. I present there my studies regarding the "cross-sectional" dependences among the time series of daily stock returns, i.e. the instantaneous forces that link several stocks together and make them behave somewhat collectively rather than purely independently. A calibration of a new factor model is presented here, together with a comparison to measurements on real data. Finally, Part III investigates the temporal dependences of single time series, using the same tools and measures of correlation. I propose two contributions to the study of the origin and description of "volatility clustering": one is a generalization of the ARCH-like feedback construction where the returns are self-exciting, and the other is a more original description of self-dependences in terms of copulas. The latter can be formulated model-free and is not specific to financial time series. In fact, I also show how concepts like recurrences, records, aftershocks and waiting times, which characterize the dynamics in a time series, can be written in the unifying framework of the copula.
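
    For the volatility-clustering theme of Part III, a plain ARCH(1) simulation already illustrates the self-exciting feedback that the thesis generalizes: raw returns are close to uncorrelated while squared returns stay correlated across lags. The sketch below uses illustrative parameters and is not the thesis's extended model or its copula formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal ARCH(1) simulation: returns feed back into the conditional variance,
# producing volatility clustering. Parameter values are illustrative only.
n = 5000
omega, alpha = 0.1, 0.5
r = np.zeros(n)
sigma2 = np.full(n, omega / (1 - alpha))
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

def autocorr(x, lag):
    """Sample autocorrelation at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# Raw returns are nearly uncorrelated, while squared returns remain
# positively correlated -- the signature of volatility clustering.
print("lag-1 acf of r:   ", round(autocorr(r, 1), 3))
print("lag-1 acf of r^2: ", round(autocorr(r ** 2, 1), 3))
```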

    A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium

    When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must always take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor pair correlation and, in addition, this parameter can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
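
    For concreteness, the proper CAR model referenced above specifies the latent spatial effect through a precision matrix built from the neighbourhood structure. The sketch below constructs that precision matrix for a toy four-site map (not the Belgian districts); the DAGAR construction is different, being built from a directed acyclic ordering of the sites, and is what gives its ρ the average neighbor-pair correlation interpretation.

```python
import numpy as np

def car_precision(adjacency, rho, tau=1.0):
    """Precision matrix of a proper CAR model: Q = tau * (D - rho * W),
    where W is the adjacency matrix and D its diagonal of row sums."""
    W = np.asarray(adjacency, dtype=float)
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Toy map: four sites arranged on a line (illustrative only).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

Q = car_precision(W, rho=0.9)
cov = np.linalg.inv(Q)
# Implied correlation between neighbouring sites 0 and 1 under this CAR prior;
# note that rho itself is not directly this correlation, unlike in DAGAR.
print(cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1]))
```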

    A Statistical Approach to the Alignment of fMRI Data

    Multi-subject functional Magnetic Resonance Imaging (fMRI) studies are critical. Since anatomical and functional structure varies across subjects, image alignment is necessary. We define a probabilistic model to describe functional alignment. By imposing a prior distribution, such as the matrix von Mises-Fisher distribution, on the orthogonal transformation parameter, anatomical information is embedded in the estimation of the parameters, i.e., by penalizing the combination of spatially distant voxels. Real applications show an improvement in the classification and interpretability of the results compared to various functional alignment methods.
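
    Stripped of the matrix von Mises-Fisher prior, estimating an orthogonal transformation that functionally aligns two subjects reduces to an orthogonal Procrustes problem solved with an SVD. The sketch below shows that unpenalized special case on toy data; it is not the anatomically informed, penalized estimation proposed in the paper.

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Orthogonal matrix R minimising ||X @ R - Y||_F (orthogonal Procrustes).
    X and Y are (time points x voxels) response matrices for two subjects."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(4)
# Toy data: subject 2 is a rotated, slightly noisy copy of subject 1.
X = rng.standard_normal((200, 10))
true_R, _ = np.linalg.qr(rng.standard_normal((10, 10)))
Y = X @ true_R + 0.01 * rng.standard_normal((200, 10))

R = procrustes_rotation(X, Y)
# Relative residual after alignment should be close to the noise level.
print(np.linalg.norm(X @ R - Y) / np.linalg.norm(Y))
```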

    On sample selection models and skew distributions

    This thesis is concerned with methods for dealing with missing data in non-random samples and recurrent events data. The first part of this thesis is motivated by scores arising from questionnaires, which often follow asymmetric distributions on a fixed range. This can be due to scores clustering at one end of the scale or to selective reporting. Sometimes the scores are further subjected to sample selection, resulting in partial observability. Thus, methods based on complete cases for skew data are inadequate for the analysis of such data and a general sample selection model is required. Heckman proposed a full maximum likelihood estimation method under the normality assumption for sample selection problems, and parametric and non-parametric extensions have been proposed. A general selection distribution for a vector $Y \in \mathbb{R}^p$ has a PDF $f_Y$ given by $f_Y(y) = f_{Y^\star}(y)\, P(S^\star \in C \mid Y^\star = y) / P(S^\star \in C)$, where $S^\star \in \mathbb{R}^q$ and $Y^\star \in \mathbb{R}^p$ are two random vectors and $C$ is a measurable subset of $\mathbb{R}^q$. We use this generalization to develop a sample selection model with an underlying skew-normal distribution. A link is established between the continuous component of our model log-likelihood function and an extended version of a generalized skew-normal distribution. This link is used to derive the expected value of the model, which extends Heckman's two-step method. The general selection distribution is also used to establish the closed skew-normal distribution as the continuous component of the usual multilevel sample selection models. Finite sample performances of the maximum likelihood estimators of the models are studied via Monte Carlo simulation. The model parameters are more precisely estimated under the new models, even in the presence of moderate to extreme skewness, than under the Heckman selection models. Application to data from a study of neck injuries, where the responses are substantially skewed, successfully discriminates between selection and inherent skewness, and the multilevel model is used to jointly analyze unit and item non-response. We also discuss computational and identification issues, and provide an extension of the model using copula-based sample selection models with truncated marginals.

    The second part of this thesis is motivated by studies that seek to analyze processes that generate events repeatedly over time. We consider the number of events per subject within a specified study period as the primary outcome of interest. One considerable challenge in the analysis of this type of data is the large proportion of patients who might discontinue before the end of the study, leading to partially observed data. Sophisticated sensitivity analysis tools are therefore necessary for the analysis of such data. We propose the use of two frequentist imputation methods for dealing with missing data in the recurrent-events framework. The recurrent events are modeled as over-dispersed Poisson data with a constant rate function. Different assumptions about the future behavior of dropouts, depending on the reasons for dropout and the treatment received, are made and evaluated in a simulation study. We illustrate our approach with a clinical trial in patients who suffer from bladder cancer.
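
    As background for the skew-normal extensions developed in the first part, the classical Heckman two-step estimator under joint normality can be sketched as follows, on simulated data with hypothetical covariates and parameter values. This is the normal-theory baseline that the thesis extends, not the thesis's own estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# Simulated sample-selection data under joint normality (illustrative only).
n = 5000
w = rng.standard_normal(n)                       # selection covariate
x = rng.standard_normal(n)                       # outcome covariate
e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
s_star = 0.5 + 1.0 * w + e[:, 0]                 # latent selection equation
y_star = 1.0 + 2.0 * x + e[:, 1]                 # latent outcome equation
observed = s_star > 0
y = np.where(observed, y_star, np.nan)           # outcome missing when not selected

# Step 1: probit selection equation fitted by maximum likelihood.
def probit_nll(beta):
    idx = beta[0] + beta[1] * w
    p = np.clip(norm.cdf(idx), 1e-10, 1 - 1e-10)
    return -(observed * np.log(p) + (~observed) * np.log(1 - p)).sum()

b = minimize(probit_nll, x0=np.zeros(2)).x
idx = b[0] + b[1] * w
mills = norm.pdf(idx) / norm.cdf(idx)            # inverse Mills ratio

# Step 2: OLS on the selected observations, augmented with the Mills ratio.
sel = observed
Xmat = np.column_stack([np.ones(sel.sum()), x[sel], mills[sel]])
coef, *_ = np.linalg.lstsq(Xmat, y[sel], rcond=None)
print("intercept, slope, lambda:", np.round(coef, 3))
```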