Recommended from our members
Measuring the propagation of financial distress with Granger-causality tail risk networks
Using the test of Granger-causality in tail of Hong et al. (2009), we define and construct Granger-causality tail risk networks between 33 global systemically important banks (G-SIBs) and 36 sovereign bonds worldwide. Our purpose is to exploit the structure of the Granger-causality tail risk networks to identify periods of distress in financial markets and possible channels of systemic risk propagation. Combining measures of connectedness of these networks with the ratings of the sovereign bonds, we propose a flight-to-quality indicator to identify periods of turbulence in the market. Our measure clearly peaks at the onset of the European sovereign debt crisis, signaling the instability of the financial system. Finally, we use the connectedness measures of the networks to forecast the quality of sovereign bonds. We find that connectedness is a significant predictor of the cross-section of bond quality.
Non-Standard Errors
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for better-reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
Ground-level ozone: Evidence of increasing serial dependence in the extremes
As exposure to successive episodes of high ground-level ozone concentrations can result in larger changes in respiratory function than occasional exposure buffered by lengthy recovery periods, the analysis of extreme values in a series of ozone concentrations requires careful consideration not only of the levels of the extremes but also of any dependence appearing in the extremes of the series. Increased dependence represents increased health risk, and it is thus important to detect any changes in the temporal dependence of extreme values. In this paper we establish the first test for a change point in the extremal dependence of a stationary time series. The test is flexible, easy to use and can be extended along several lines. The asymptotic distributions of our estimators and our test are established. A large simulation study verifies the good finite-sample properties. The test allows us to show that there has been a significant increase in the serial dependence of the extreme levels of ground-level ozone concentrations in Bloomsbury (UK) in recent years.
A characteristic function-based approach to approximate maximum likelihood estimation
The choice of the summary statistics in approximate maximum likelihood is often a crucial issue. We develop a criterion for choosing the most effective summary statistic and then focus on the empirical characteristic function. In the i.i.d. setting, the approximating posterior distribution converges to the approximate distribution of the parameters conditional upon the empirical characteristic function. Simulation experiments suggest that the method is often preferable to numerical maximum likelihood. In a time-series framework, no optimality result can be proved, but the simulations indicate that the method is effective in small samples.
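The summary statistic this abstract builds on is the empirical characteristic function of a sample. A minimal sketch of how it can be computed (the function name `ecf` and its interface are illustrative, not from the paper):

```python
import numpy as np

def ecf(x, t):
    """Empirical characteristic function of sample x evaluated at
    frequencies t: (1/n) * sum_j exp(i * t * x_j)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    x = np.asarray(x, dtype=float)
    # Outer product gives a (len(t), n) grid of t_k * x_j values;
    # averaging over the sample axis yields one complex value per frequency.
    return np.exp(1j * np.outer(t, x)).mean(axis=1)
```

In an approximate-likelihood scheme, `ecf` evaluated on a fixed grid of frequencies would serve as the vector of summary statistics compared between observed and simulated data.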
A simple approach to the estimation of Tukey's gh distribution
Tukey's gh distribution is widely used in situations where skewness and elongation are important features of the data. As the distribution is defined through a quantile transformation of the normal, the likelihood function cannot be written in closed form and exact maximum likelihood estimation is infeasible. In this paper we exploit a novel approach based on a frequentist reinterpretation of Approximate Bayesian Computation for approximating the maximum likelihood estimates of the gh distribution. This method is appealing because it only requires the ability to sample the distribution. We discuss the choice of the input parameters by means of simulation experiments and provide evidence of superior performance in terms of root-mean-square error with respect to the standard quantile estimator. Finally, we give an application to operational risk measurement.
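The abstract notes that the method only requires the ability to sample the distribution, which is straightforward because the gh distribution is a quantile transformation of a standard normal. A minimal sketch of that sampler, using the standard transform X = a + b * ((exp(gZ) - 1)/g) * exp(hZ²/2) (the helper name `sample_gh` and default parameter values are our own illustration):

```python
import numpy as np

def sample_gh(n, a=0.0, b=1.0, g=0.5, h=0.1, rng=None):
    """Draw n samples from Tukey's g-and-h distribution by
    transforming standard normal draws (a: location, b: scale,
    g: skewness, h: elongation)."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(n)
    if g == 0.0:
        # Limiting case g -> 0: (exp(gz) - 1)/g -> z.
        core = z
    else:
        core = (np.exp(g * z) - 1.0) / g
    return a + b * core * np.exp(h * z**2 / 2.0)
```

With g = h = 0 the transform reduces to the normal itself, which gives a quick sanity check on any implementation.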
Estimating large losses in insurance analytics and operational risk using the g-and-h distribution
In this paper, we study the estimation of parameters for g-and-h distributions. These distributions find applications in modeling highly skewed and fat-tailed data, like extreme losses in the banking and insurance sector. We first introduce two estimation methods: a numerical maximum likelihood technique, and an indirect inference approach with a bootstrap weighting scheme. In a realistic simulation study, we show that indirect inference is computationally more efficient and provides better estimates than the maximum likelihood method in the case of extreme features in the data. Empirical illustrations on insurance and operational losses confirm these findings.
Estimating Value-at-Risk for the g-and-h distribution: an indirect inference approach
The g-and-h distribution handles the complex behavior of loss data well. Applied to operational losses, it suggests that indirect inference estimators of Value-at-Risk (VaR) outperform quantile-based estimators.