
    Large-sample tests of extreme-value dependence for multivariate copulas

    Starting from the characterization of extreme-value copulas based on max-stability, large-sample tests of extreme-value dependence for multivariate copulas are studied. The two key ingredients of the proposed tests are the empirical copula of the data and a multiplier technique for obtaining approximate p-values for the derived statistics. The asymptotic validity of the multiplier approach is established, and the finite-sample performance of a large number of candidate test statistics is studied through extensive Monte Carlo experiments for data sets of dimension two to five. In the bivariate case, the rejection rates of the best versions of the tests are compared with those of the test of Ghoudi, Khoudraji and Rivest (1998) recently revisited by Ben Ghorbal, Genest and Neslehova (2009). The proposed procedures are illustrated on bivariate financial data and trivariate geological data.
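
    The max-stability property underlying these tests can be checked directly on the empirical copula. The sketch below is only an illustration, not the authors' procedure: it evaluates a Cramér-von Mises-type discrepancy between C_n(u^(1/r))^r and C_n(u), which should be small under extreme-value dependence. The multiplier technique used in the paper to obtain p-values is not reproduced, and the choice of r and of the evaluation grid is arbitrary.

    ```python
    import numpy as np

    def empirical_copula(u, pseudo_obs):
        # C_n(u) = (1/n) * #{i : U_i <= u component-wise}
        return np.mean(np.all(pseudo_obs <= u, axis=1))

    def max_stability_statistic(x, r=3, n_grid=20, seed=0):
        # Discrepancy between C_n(u^(1/r))^r and C_n(u); under max-stability
        # (i.e. extreme-value dependence) the two quantities coincide asymptotically.
        n, d = x.shape
        # pseudo-observations: component-wise ranks rescaled to (0, 1)
        pseudo = (np.argsort(np.argsort(x, axis=0), axis=0) + 1) / (n + 1)
        rng = np.random.default_rng(seed)
        grid = rng.uniform(size=(n_grid, d))  # random evaluation points in (0, 1)^d
        diffs = [empirical_copula(u ** (1.0 / r), pseudo) ** r - empirical_copula(u, pseudo)
                 for u in grid]
        return n * np.mean(np.square(diffs))

    # Toy usage: independent uniforms follow an extreme-value copula,
    # so the statistic should be close to zero.
    x = np.random.default_rng(1).uniform(size=(500, 3))
    print(max_stability_statistic(x))
    ```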

    Combining cumulative sum change-point detection tests for assessing the stationarity of univariate time series

    We derive tests of stationarity for univariate time series by combining change-point tests sensitive to changes in the contemporary distribution with tests sensitive to changes in the serial dependence. The proposed approach relies on a general procedure for combining dependent tests based on resampling. After proving the asymptotic validity of the combining procedure under the conjunction of null hypotheses and investigating its consistency, we study rank-based tests of stationarity by combining cumulative sum change-point tests based on the contemporary empirical distribution function and on the empirical autocopula at a given lag. Extensions based on tests solely focusing on second-order characteristics are proposed next. The finite-sample behaviors of all the derived statistical procedures for assessing stationarity are investigated in large-scale Monte Carlo experiments, and illustrations on two real data sets are provided. Extensions to multivariate time series are briefly discussed as well.
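
    As a rough illustration of the first ingredient only, the sketch below computes a cumulative sum change-point statistic built on the empirical distribution function. The autocopula-based component, the combination of dependent tests, and the resampling calibration described in the abstract are not reproduced; the function name and the toy example are illustrative.

    ```python
    import numpy as np

    def cusum_ecdf_statistic(x):
        # Max over break points k and evaluation points t = X_j of
        #   D_n(k, t) = n^{-1/2} * ( sum_{i<=k} 1{X_i <= t} - (k/n) * sum_{i<=n} 1{X_i <= t} ).
        # Large values suggest a change in the contemporary distribution.
        x = np.asarray(x, dtype=float)
        n = x.size
        ind = (x[:, None] <= x[None, :]).astype(float)  # ind[i, j] = 1{X_i <= X_j}
        partial = np.cumsum(ind, axis=0)                # partial sums over i <= k
        k = np.arange(1, n + 1)[:, None]
        d = (partial - (k / n) * partial[-1]) / np.sqrt(n)
        return np.abs(d).max()

    # Toy usage: a mean shift halfway through the series inflates the statistic.
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
    print(cusum_ecdf_statistic(y))
    ```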

    Mixed Signals Among Panel Cointegration Tests

    Time series cointegration tests, even with large sample sizes, often yield conflicting conclusions (“mixed signals”) as measured by, inter alia, a low correlation of empirical p-values [see Gregory et al., 2004, Journal of Applied Econometrics]. Using their methodology, we present evidence suggesting that the problem of mixed signals persists for popular panel cointegration tests. As expected, the correlation is weaker between residual- and system-based tests than between tests of the same group. Keywords: Panel cointegration tests, Monte Carlo comparison
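
    The “mixed signals” diagnostic is simply a (rank) correlation between the p-values that two tests assign to the same data sets. A minimal sketch with placeholder p-value arrays (in the study these would come from residual- and system-based panel cointegration tests applied to the same simulated panels):

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical per-replication p-values from two different cointegration tests
    # applied to the same panels (placeholders; uniform draws stand in for real results).
    p_test_a = np.random.default_rng(0).uniform(size=1000)
    p_test_b = np.random.default_rng(1).uniform(size=1000)

    # A low rank correlation is the "mixed signals" symptom discussed above.
    rho, _ = spearmanr(p_test_a, p_test_b)
    print(f"Spearman correlation of empirical p-values: {rho:.3f}")
    ```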

    Effects of Applying Linear and Nonlinear Filters on Tests for Unit Roots with Additive Outliers

    Conventional univariate Dickey-Fuller tests tend to produce spurious stationarity when additive outlying observations are present in the time series. Correct critical values are usually obtained by adding dummy variables to the Dickey-Fuller regression. This is a nice theoretical result but unattractive from an empirical point of view, since almost any result can be obtained by a convenient selection of dummy variables. In this paper we suggest a robust procedure based on running Dickey-Fuller tests on the trend component instead of the original series. We provide both finite-sample and large-sample justifications. Practical implementation is illustrated through an empirical example based on the US/Finland real exchange rate series. Keywords: Additive outliers, Dickey-Fuller test, Linear and nonlinear filtering, Bootstrap
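
    A minimal sketch of the idea of testing a filtered component rather than the raw series, assuming a running median as the nonlinear filter and the standard augmented Dickey-Fuller test from statsmodels. The paper's actual filters, critical values, and bootstrap justification are not reproduced, so the reported p-values are only indicative.

    ```python
    import numpy as np
    from scipy.signal import medfilt
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(0)

    # Random walk contaminated with a few large additive outliers.
    n = 300
    y = np.cumsum(rng.normal(size=n))
    outlier_idx = rng.choice(n, size=5, replace=False)
    y_obs = y.copy()
    y_obs[outlier_idx] += rng.choice([-8.0, 8.0], size=5)

    # Dickey-Fuller test on the raw series vs. on a robustly filtered component
    # (a running median is used here as a simple illustrative nonlinear filter).
    trend = medfilt(y_obs, kernel_size=7)
    print("raw series p-value:", adfuller(y_obs)[1])
    print("filtered   p-value:", adfuller(trend)[1])
    ```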

    The Quality of the KombiFiD-Sample of Business Services Enterprises: Evidence from a Replication Study

    This study tests whether the KombiFiD sample can be regarded as a high-quality data set for empirical research on enterprises from business services industries. It performs an empirical investigation using the original data in a first step and then replicates exactly this investigation using the KombiFiD sample. We find that large business services firms are oversampled in the KombiFiD agreement sample, which leads to a higher share of exporting business services firms compared to the original data. After controlling for firm size and industries, results based on the original data and on the KombiFiD sample are highly similar for West German firms. Therefore, the KombiFiD sample can be regarded as a sound basis for empirical studies on West German firms from business services industries. For East Germany, however, the number of business services firms seems to be too small for empirical analyses, at least in the field of firms’ export participation. Keywords: KombiFiD, Germany, business services firms

    Tests for exponentiality against NBUE alternatives: a Monte Carlo comparison

    Testing of various classes of life distributions has been addressed in the literature for more than 45 years. In this paper, we consider the problem of testing exponentiality (which essentially implies no ageing) against positive ageing, which is captured by the fairly large class of new better than used in expectation (NBUE) distributions. These tests of exponentiality against NBUE alternatives are discussed and compared. The empirical size of the tests is obtained by simulation, and power comparisons for different popular alternatives are carried out using Monte Carlo simulations. These comparisons are made for both small and large sample sizes. The paper concludes with a discussion in which suggestions are made regarding the choice of test when a particular alternative is suspected.
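
    As an illustration of the Monte Carlo size/power methodology described above, the sketch below calibrates an exponentiality test by simulation under the null and estimates its power against a Weibull alternative (IFR, hence NBUE). The scaled total-time-on-test statistic used here is one classical choice and is not necessarily among the specific tests compared in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ttt_statistic(x):
        # Scaled total-time-on-test statistic: sum_{k<n} TTT_k / TTT_n.
        # Under exponentiality the scaled TTT values behave like uniform order
        # statistics; ageing (e.g. IFR) alternatives push the statistic upward.
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        d = np.diff(np.concatenate(([0.0], x))) * np.arange(n, 0, -1)  # normalized spacings
        ttt = np.cumsum(d)
        return np.sum(ttt[:-1] / ttt[-1])

    def critical_value(n, n_sim=5000, alpha=0.05):
        # Upper alpha critical value under H0 (unit exponential), by simulation.
        stats = [ttt_statistic(rng.exponential(size=n)) for _ in range(n_sim)]
        return np.quantile(stats, 1 - alpha)

    n = 30
    cv = critical_value(n)
    # Power against a Weibull(shape=1.5) alternative, which is IFR and hence NBUE.
    power = np.mean([ttt_statistic(rng.weibull(1.5, size=n)) > cv for _ in range(2000)])
    print(f"critical value: {cv:.2f}, estimated power: {power:.2f}")
    ```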

    Goodness-of-fit testing based on a weighted bootstrap: A fast large-sample alternative to the parametric bootstrap

    The process comparing the empirical cumulative distribution function of the sample with a parametric estimate of the cumulative distribution function is known as the empirical process with estimated parameters and has been extensively employed in the literature for goodness-of-fit testing. The simplest way to carry out such goodness-of-fit tests, especially in a multivariate setting, is to use a parametric bootstrap. Although very easy to implement, the parametric bootstrap can become very computationally expensive as the sample size, the number of parameters, or the dimension of the data increase. An alternative resampling technique based on a fast weighted bootstrap is proposed in this paper, and is studied both theoretically and empirically. The outcome of this work is a generic and computationally efficient multiplier goodness-of-fit procedure that can be used as a large-sample alternative to the parametric bootstrap. In order to approximately determine how large the sample size needs to be for the parametric and weighted bootstraps to have roughly equivalent powers, extensive Monte Carlo experiments are carried out in dimension one, two and three, and for models containing up to nine parameters. The computational gains resulting from the use of the proposed multiplier goodness-of-fit procedure are illustrated on trivariate financial data. A by-product of this work is a fast large-sample goodness-of-fit procedure for the bivariate and trivariate t distribution whose degrees of freedom are fixed.
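
    For context, here is a minimal sketch of the parametric-bootstrap baseline that the weighted (multiplier) bootstrap is designed to speed up, shown for a Cramér-von Mises test of univariate normality with estimated parameters; the multiplier procedure itself is not reproduced, and the example model and names are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def cvm_stat(x):
        # Cramer-von Mises distance between the ECDF and the fitted normal CDF.
        x = np.sort(x)
        n = x.size
        mu, sigma = x.mean(), x.std(ddof=1)  # estimated parameters
        u = stats.norm.cdf(x, loc=mu, scale=sigma)
        return np.sum((u - (2 * np.arange(1, n + 1) - 1) / (2 * n)) ** 2) + 1 / (12 * n)

    def parametric_bootstrap_pvalue(x, n_boot=1000, seed=0):
        # Parametric bootstrap: refit the model on each simulated sample and
        # recompute the statistic; this is the expensive step the weighted
        # (multiplier) bootstrap of the paper is designed to avoid.
        rng = np.random.default_rng(seed)
        t0 = cvm_stat(x)
        mu, sigma = x.mean(), x.std(ddof=1)
        t_boot = [cvm_stat(rng.normal(mu, sigma, size=x.size)) for _ in range(n_boot)]
        return np.mean(np.array(t_boot) >= t0)

    # Toy usage: heavy-tailed data, so normality should be rejected.
    x = np.random.default_rng(1).standard_t(df=4, size=200)
    print(parametric_bootstrap_pvalue(x))
    ```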

    The Exchange Rate Exposure Puzzle

    Based on basic financial models and reports in the business press, exchange rate movements are generally believed to affect the value of nonfinancial firms. In contrast, the empirical research on nonfinancial firms typically produces fewer significant exposure estimates than researchers expect, independent of the sample studied and the methodology used, giving rise to a situation known as “the exposure puzzle”. This paper provides a survey of the existing research on the exposure phenomenon for nonfinancial firms. We suggest that the exposure puzzle may not be a problem of empirical methodology or sample selection as previous research has suggested, but is simply the result of the endogeneity of operative and financial hedging at the firm level. Given that empirical tests estimate exchange exposures net of corporate hedging, both firms with low gross exposures that do not need to hedge and firms with large gross exposures that employ one or several forms of hedging may exhibit only weak exchange rate exposures net of hedging. Consequently, empirical tests yield only small percentages of firms with significant stock price exposures in almost any sample. Keywords: Exposure, risk management, derivatives, corporate finance, exchange rates
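
    A minimal sketch of the kind of exposure regression such empirical tests rely on, assuming a standard two-factor specification (firm return regressed on the market return and the exchange-rate change) and simulated data; the coefficient on the exchange-rate factor is the exposure net of hedging discussed above.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Simulated monthly data: market return, exchange-rate change, and a firm
    # return whose true exchange-rate coefficient is small, as if hedging had
    # offset most of the gross exposure.
    T = 120
    r_market = rng.normal(0.005, 0.04, T)
    fx_change = rng.normal(0.0, 0.03, T)
    r_firm = 0.002 + 1.1 * r_market + 0.05 * fx_change + rng.normal(0, 0.05, T)

    # Two-factor exposure regression: the coefficient on fx_change is the
    # exchange-rate exposure net of operative and financial hedging.
    X = sm.add_constant(np.column_stack([r_market, fx_change]))
    res = sm.OLS(r_firm, X).fit()
    print(res.params)      # [alpha, market beta, net FX exposure]
    print(res.pvalues[2])  # significance of the estimated net exposure
    ```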

    Exchange Rate Behavior under Full Monetary Equilibrium: An Empirical Analysis

    This paper aims to remedy difficulties with some extant empirical tests of the monetary approach to exchange rate determination. Four problems are addressed: explication of and allowance for real exchange rate changes; imposition of interest parity; use of the forward rate as an unbiased predictor of the spot rate; and modeling implications of official intervention in foreign exchange markets and of possible efforts to sterilize effects of intervention in the monetary base. Empirical tests conducted with monthly data on the dollar-DM exchange rate from March 1973 to December 1979 do not permit rejection of the complex joint hypothesis represented by the equations estimated to test the monetary approach. Still, a large portion of the behavior of the dollar-DM exchange rate in the 1973-79 monthly sample remains unexplained. This result suggests that exchange rates may be viewed as prices determined in asset markets where a large and unsystematic flow of information, not captured by monetary or other variables, produces large, unsystematic movements.
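
    A stylized sketch of the form of a monetary-model exchange-rate regression, on placeholder data; the paper's actual specification additionally handles real exchange rate changes, interest parity, the forward rate, and intervention, none of which is reproduced here.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Placeholder monthly series for a stylized regression of the (log) exchange
    # rate on the relative money supply, relative output, and the interest
    # differential (all names and values are illustrative).
    T = 82  # roughly March 1973 to December 1979
    money_diff = np.cumsum(rng.normal(0, 0.01, T))
    output_diff = np.cumsum(rng.normal(0, 0.005, T))
    interest_diff = rng.normal(0, 0.02, T)
    log_spot = (0.5 + 1.0 * money_diff - 0.8 * output_diff + 0.3 * interest_diff
                + np.cumsum(rng.normal(0, 0.02, T)))  # large unexplained component

    X = sm.add_constant(np.column_stack([money_diff, output_diff, interest_diff]))
    res = sm.OLS(log_spot, X).fit()
    print(res.summary().tables[1])     # coefficient estimates and t-statistics
    print("R-squared:", res.rsquared)  # much variation remains unexplained
    ```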
