
    Change-Point Testing and Estimation for Risk Measures in Time Series

    We investigate methods of change-point testing and confidence interval construction for nonparametric estimators of expected shortfall and related risk measures in weakly dependent time series. A key aspect of our work is the ability to detect general multiple structural changes in the tails of time series marginal distributions. Unlike extant approaches that detect tail structural changes via quantities such as the tail index, our approach does not require parametric modeling of the tail and detects more general changes in the tail. Additionally, our methods are based on the recently introduced self-normalization technique for time series, allowing for statistical analysis without the issues of consistent standard error estimation. The theoretical foundation for our methods is a set of functional central limit theorems, which we develop under weak assumptions. An empirical study of S&P 500 returns and US 30-Year Treasury bonds illustrates the practical use of our methods in detecting and quantifying market instability via the tails of financial time series during times of financial crisis.
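    The two ingredients described above are easy to prototype: a nonparametric expected-shortfall estimator and a CUSUM-type contrast scaled by a self-normalizer rather than an estimated long-run variance. The Python sketch below illustrates the idea under assumed conventions (a 1-D array of losses, upper-tail ES at level alpha); the function names and the particular normalizer are illustrative and do not reproduce the paper's exact statistic or critical values.

        import numpy as np

        def expected_shortfall(losses, alpha=0.95):
            # Nonparametric ES: average of losses at or beyond the empirical alpha-quantile.
            losses = np.asarray(losses, dtype=float)
            q = np.quantile(losses, alpha)
            return losses[losses >= q].mean()

        def sn_es_change_stat(losses, alpha=0.95, trim=20):
            # CUSUM-type contrast of ES before/after each candidate split, divided by a
            # self-normalizer built from recursive (expanding-window) ES estimates,
            # so no consistent long-run variance estimate is needed.  Illustrative only.
            x = np.asarray(losses, dtype=float)
            n = len(x)
            forward = np.array([expected_shortfall(x[:t], alpha) for t in range(trim, n + 1)])
            backward = np.array([expected_shortfall(x[t:], alpha) for t in range(0, n - trim + 1)])
            stats = []
            for k in range(trim, n - trim):
                contrast = k * (n - k) / n ** 1.5 * abs(forward[k - trim] - backward[k])
                scale = np.std(forward[: k - trim + 1]) + np.std(backward[k:]) + 1e-12
                stats.append(contrast / scale)
            return max(stats)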

    Untenable nonstationarity: An assessment of the fitness for purpose of trend tests in hydrology

    The detection and attribution of long-term patterns in hydrological time series have been important research topics for decades. A significant portion of the literature regards such patterns as ‘deterministic components’ or ‘trends’ even though the complexity of hydrological systems does not allow easy deterministic explanations and attributions. Consequently, trend estimation techniques have been developed to make and justify statements about tendencies in the historical data, which are often used to predict future events. Testing trend hypotheses on observed time series is widespread in the hydro-meteorological literature mainly due to the interest in detecting consequences of human activities on the hydrological cycle. This analysis usually relies on the application of null hypothesis significance tests (NHSTs) for slowly varying and/or abrupt changes, such as Mann-Kendall, Pettitt, or similar, to summary statistics of hydrological time series (e.g., annual averages, maxima, minima, etc.). However, the reliability of this application has seldom been explored in detail. This paper discusses misuse, misinterpretation, and logical flaws of NHST for trends in the analysis of hydrological data from three different points of view: historic-logical, semantic-epistemological, and practical. Based on a review of NHST rationale, and basic statistical definitions of stationarity, nonstationarity, and ergodicity, we show that even if the empirical estimation of trends in hydrological time series is always feasible from a numerical point of view, it is uninformative and does not allow the inference of nonstationarity without assuming a priori additional information on the underlying stochastic process, according to deductive reasoning. This prevents the use of trend NHST outcomes to support nonstationary frequency analysis and modeling. We also show that the correlation structures characterizing hydrological time series might easily be underestimated, further compromising the attempt to draw conclusions about trends spanning the period of record. Moreover, even though adjusting procedures accounting for correlation have been developed, some of them are insufficient or are applied only to some tests, while some others are theoretically flawed but still widely applied. In particular, using 250 unimpacted stream flow time series across the conterminous United States (CONUS), we show that the test results can dramatically change if the sequences of annual values are reproduced starting from daily stream flow records, whose larger sizes enable a more reliable assessment of the correlation structures.
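    For context on the kind of NHST the paper scrutinizes, a minimal sketch of the classical Mann-Kendall trend test applied to a series of annual summary values is given below (Python with numpy/scipy). It uses the standard no-ties variance and normal approximation and deliberately includes none of the serial-correlation adjustments whose adequacy the paper questions; names and defaults are illustrative.

        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x):
            # Classical Mann-Kendall test on a series of annual values:
            # S counts concordant minus discordant pairs; Z uses the no-ties
            # variance and a continuity correction.  No adjustment is made for
            # serial correlation, which the paper argues is easily underestimated.
            x = np.asarray(x, dtype=float)
            n = len(x)
            s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0
            z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
            p_value = 2 * (1 - norm.cdf(abs(z)))
            return s, z, p_value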

    Bridging stylized facts in finance and data non-stationarities

    Employing a recent technique which allows the representation of nonstationary data as a juxtaposition of locally stationary patches of different lengths, we introduce a comprehensive analysis of the key observables in a financial market: the trading volume and the price fluctuations. From the segmentation procedure we are able to give a quantitative description of a group of statistical features (stylized facts) of the trading volume and price fluctuations, namely the tails of each distribution, the U-shaped profile of the volume in a trading session, and the evolution of the trading volume autocorrelation function. The segmentation of the trading volume series provides evidence of slow evolution of the fluctuating parameters of each patch, pointing to a mixing scenario. Assuming that long-term features are the outcome of a statistical mixture of simple local forms, we test and compare different probability density functions for the long-term distribution of the trading volume, concluding that the log-normal gives the best agreement with the empirical distribution. Moreover, the segmentation of the magnitude of the price fluctuations yields results quite different from those for the trading volume, indicating that changes in the statistics of price fluctuations occur on a faster scale than for the trading volume.
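    The comparison of candidate long-term laws for the trading volume can be prototyped with standard maximum-likelihood fits. The Python sketch below fits a few positive-support distributions and ranks them by log-likelihood; it is illustrative only and does not reproduce the segmentation into locally stationary patches or the mixture construction used in the paper (the distribution choices and the fixed zero location are assumptions).

        import numpy as np
        from scipy import stats

        def rank_volume_distributions(volume):
            # Fit candidate long-term distributions to (positive) trading volume by
            # maximum likelihood and rank them by total log-likelihood.
            volume = np.asarray(volume, dtype=float)
            candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma,
                          "weibull": stats.weibull_min}
            scores = {}
            for name, dist in candidates.items():
                params = dist.fit(volume, floc=0)          # location fixed at zero
                scores[name] = np.sum(dist.logpdf(volume, *params))
            return sorted(scores.items(), key=lambda kv: -kv[1])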

    Testing for change-points in long-range dependent time series by means of a self-normalized Wilcoxon test

    We propose a testing procedure based on the Wilcoxon two-sample test statistic in order to test for change-points in the mean of long-range dependent data. We show that the corresponding self-normalized test statistic converges in distribution to a non-degenerate limit under the hypothesis that no change occurred and that it diverges to infinity under the alternative of a change-point with constant height. Furthermore, we derive the asymptotic distribution of the self-normalized Wilcoxon test statistic under local alternatives, that is, under the assumption that the height of the level shift decreases as the sample size increases. Regarding the finite sample performance, simulation results confirm that the self-normalized Wilcoxon test discriminates consistently between hypothesis and alternative and that its empirical size is already close to the significance level for moderate sample sizes.
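    The building block of the test is the two-sample Wilcoxon contrast evaluated at every candidate change point. The Python sketch below computes that process; the self-normalization step, which divides a functional of this process by a quantity built from recursive subsample statistics, is only described in a comment and not implemented (names are illustrative).

        import numpy as np

        def wilcoxon_change_process(x):
            # G(k) = sum_{i<=k} sum_{j>k} ( 1{x_i <= x_j} - 1/2 ); a large |G(k)|
            # points to a level shift near k.  The paper's statistic divides a
            # functional of this process by a self-normalizer built from recursive
            # subsample versions of the same quantity; that step is omitted here.
            x = np.asarray(x, dtype=float)
            n = len(x)
            return np.array([np.sum(x[:k, None] <= x[None, k:]) - 0.5 * k * (n - k)
                             for k in range(1, n)])

        # Crude use: the most pronounced shift is located where |G(k)| peaks.
        # k_hat = int(np.argmax(np.abs(wilcoxon_change_process(series)))) + 1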

    European Securitisation: a GARCH model of CDO, MBS and Pfandbrief spreads

    Asset-backed securitisation (ABS) is an asset funding technique that involves the issuance of structured claims on the cash flow performance of a designated pool of underlying receivables. Efficient risk management and asset allocation in this growing segment of fixed income markets require both investors and issuers to thoroughly understand the longitudinal properties of spread prices. We present a multi-factor GARCH process in order to model the heteroskedasticity of secondary market spreads for valuation and forecasting purposes. In particular, accounting for the variance of errors is instrumental in deriving more accurate estimators of time-varying forecast confidence intervals. On the basis of CDO, MBS and Pfandbrief transactions, the most important asset classes of off-balance sheet and on-balance sheet securitisation in Europe, we find that expected spread changes for these asset classes tend to be level stationary, with model estimates indicating asymmetric mean reversion. Furthermore, spread volatility (conditional variance) is found to follow an asymmetric stochastic process contingent on the value of past residuals. This ABS spread behaviour implies negative investor sentiment during cyclical downturns, which is likely to escape stationary approximation the longer this market situation lasts.
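    The asymmetric conditional-variance behaviour described above can be illustrated with a GJR-type GARCH(1,1) recursion, in which negative residuals raise next-period variance more than positive ones. The Python sketch below is a single-factor illustration under assumed parameter names; the paper's multi-factor specification and its estimation are not reproduced.

        import numpy as np

        def gjr_garch_variance(resid, omega, alpha, gamma, beta):
            # h_t = omega + (alpha + gamma * 1{e_{t-1} < 0}) * e_{t-1}^2 + beta * h_{t-1}
            # With gamma > 0, negative past residuals raise the conditional variance
            # more than positive residuals of the same magnitude.
            resid = np.asarray(resid, dtype=float)
            h = np.empty_like(resid)
            h[0] = resid.var()                      # simple initialisation
            for t in range(1, len(resid)):
                leverage = gamma * (resid[t - 1] < 0)
                h[t] = omega + (alpha + leverage) * resid[t - 1] ** 2 + beta * h[t - 1]
            return h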

    Range unit root tests

    Since the seminal paper by Dickey and Fuller in 1979, unit-root tests have conditioned the standard approaches to the analysis of time series with strong serial dependence, the focus being placed on the detection of possible unit roots in an autoregressive model fitted to the series. In this paper we propose a completely different method to test for the type of "long-wave" patterns observed not only in unit-root time series but also in series following more complex data-generating mechanisms. To this end, our testing device analyses the trend exhibited by the data, without imposing any constraint on the generating mechanism. We call our device the Range Unit Root (RUR) test since it is constructed from running ranges of the series. These statistics allow a more general characterization of strong serial dependence in the mean behavior, thus endowing our test with a number of desirable properties. Among these properties are invariance to nonlinear monotonic transformations of the series and robustness to the presence of level shifts and additive outliers. In addition, the RUR test outperforms standard unit root tests in power on near-unit-root stationary time series.
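    A running-range statistic in the spirit of the RUR test is straightforward to compute. The Python sketch below counts how often the running range of the series expands, scaled by the square root of the sample size; a near-unit-root series keeps setting new extremes, so the count stays large, whereas a stationary series does not. The exact statistic and its critical values follow the original paper and are not reproduced here.

        import numpy as np

        def range_unit_root_stat(x):
            # Running range R_t = max(x_1..x_t) - min(x_1..x_t); count its expansions.
            # Strictly monotonic transformations of x leave the expansion times
            # unchanged, which is the source of the invariance property noted above.
            x = np.asarray(x, dtype=float)
            rng = np.maximum.accumulate(x) - np.minimum.accumulate(x)
            expansions = np.sum(np.diff(rng) > 0)
            return expansions / np.sqrt(len(x))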

    Studentized U-quantile processes under dependence with applications to change-point analysis

    Many popular robust estimators are U-quantiles, most notably the Hodges-Lehmann location estimator and the Q_n scale estimator. We prove a functional central limit theorem for the sequential U-quantile process without any moment assumptions and under weak short-range dependence conditions. We further devise an estimator for the long-run variance and show its consistency, from which the convergence of the studentized version of the sequential U-quantile process to a standard Brownian motion follows. This result can be used to construct CUSUM-type change-point tests based on U-quantiles, which do not rely on bootstrapping procedures. We demonstrate this approach in detail using the example of the Hodges-Lehmann estimator for robustly detecting changes in the central location. A simulation study confirms the very good robustness and efficiency properties of the test. Two real-life data sets are analyzed.
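    The Hodges-Lehmann estimator itself is compact to implement, and a CUSUM-type contrast of its pre- and post-split values conveys the flavour of the change-point test. The Python sketch below is illustrative: the studentization by a long-run variance estimator for U-quantiles, which the paper develops, is not included, and the O(n^2) pairwise construction is kept for clarity rather than speed.

        import numpy as np

        def hodges_lehmann(x):
            # Median of all Walsh averages (x_i + x_j) / 2 with i <= j.
            x = np.asarray(x, dtype=float)
            i, j = np.triu_indices(len(x))
            return np.median((x[i] + x[j]) / 2.0)

        def hl_cusum_contrast(x):
            # Contrast of Hodges-Lehmann estimates before and after each candidate
            # change point, with the usual CUSUM weighting.  Studentization omitted.
            x = np.asarray(x, dtype=float)
            n = len(x)
            return np.array([k * (n - k) / n ** 1.5 *
                             abs(hodges_lehmann(x[:k]) - hodges_lehmann(x[k:]))
                             for k in range(2, n - 1)])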

    Deformed SPDE models with an application to spatial modeling of significant wave height

    A non-stationary Gaussian random field model is developed based on a combination of the stochastic partial differential equation (SPDE) approach and the classical deformation method. With the deformation method, a stationary field is defined on a domain which is then deformed so that the field becomes non-stationary. We show that if the stationary field is a Matérn field defined as the solution to a fractional SPDE, the resulting non-stationary model can be represented as the solution to another fractional SPDE on the deformed domain. Defining the model in this way combines the computational advantages of the SPDE approach with the deformation method's more intuitive parameterisation of non-stationarity. In particular, it allows for independent control over the non-stationary practical correlation range and the variance, which has not been possible with previously proposed non-stationary SPDE models. The model is tested on spatial data of significant wave height, a characteristic of ocean surface conditions which is important when estimating the wear and risks associated with a planned journey of a ship. The model parameters are estimated from North Atlantic data using a maximum likelihood approach. The fitted model is used to compute wave height exceedance probabilities and the distribution of accumulated fatigue damage for ships traveling a popular shipping route. The model results agree well with the data, indicating that the model could be used for route optimization in naval logistics.
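    The deformation idea can be prototyped directly on the covariance scale: warp the coordinates and evaluate a stationary Matérn covariance on the deformed distances. The Python sketch below does exactly that and only illustrates the classical deformation method; the paper's contribution, representing the resulting field as the solution of a fractional SPDE on the deformed domain, is not reproduced here, and the deformation map and parameter names are assumptions.

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.special import gamma, kv

        def matern_cov(d, sigma2=1.0, rho=1.0, nu=1.5):
            # Stationary Matern covariance as a function of distance d.
            d = np.asarray(d, dtype=float)
            scaled = np.where(d == 0.0, 1.0, np.sqrt(2 * nu) * d / rho)  # dummy value at d = 0
            c = sigma2 * 2 ** (1 - nu) / gamma(nu) * scaled ** nu * kv(nu, scaled)
            return np.where(d == 0.0, sigma2, c)

        def deformed_cov(coords, deform, **matern_kwargs):
            # Non-stationary covariance via the deformation method: map the sites
            # with `deform` and evaluate the stationary Matern model on the
            # deformed distances.
            warped = np.array([deform(p) for p in np.asarray(coords, dtype=float)])
            return matern_cov(cdist(warped, warped), **matern_kwargs)

        # Hypothetical usage with a made-up deformation map:
        # cov = deformed_cov(sites, lambda p: np.array([p[0], p[1] * (1.0 + 0.5 * p[0] ** 2)]))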