
    Long-run real exchange rate changes and the properties of the variance of k-differences

    Engel (1999) computes the variance of k-differences for each time horizon using the method of Cochrane (1988) in order to measure the importance of the traded-goods component in U.S. real exchange rate movements. If the law of one price holds for traded goods in the long run, the importance of traded goods should decrease as the horizon increases. However, Engel finds that the variance of k-differences decreases only initially and then increases as k approaches the sample size. He interprets the increasing variance as evidence of an increase in the long-run importance of the traded-goods component. By contrast, we show that the variance of k-differences tends to return to its initial value as k approaches the sample size, whether the variable is stationary or unit-root nonstationary. Our results imply that the increasing variances for k-values close to the sample size cannot be interpreted as evidence of an increase in the importance of the traded-goods component in the long run. We find that our test results regarding the variance of k-differences are consistent with a smaller importance of the traded-goods component in the longer run.
    Keywords: Real exchange rate; Variance ratio; Traded and nontraded goods
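    As an illustration of the statistic at issue, here is a minimal numpy sketch of the Cochrane (1988)-style variance of k-differences, Var(y_{t+k} - y_t)/k, applied to simulated series; the data, function name, and parameter values are hypothetical, not Engel's.

```python
import numpy as np

def k_difference_variance(y, k):
    """Sample variance of the overlapping k-differences y_{t+k} - y_t,
    divided by k (the Cochrane (1988)-style long-run variance measure)."""
    d = y[k:] - y[:-k]
    return d.var(ddof=1) / k

rng = np.random.default_rng(0)
T = 300
rw = np.cumsum(rng.standard_normal(T))   # unit-root (random walk) series
ar = np.zeros(T)                          # stationary AR(1) series
for t in range(1, T):
    ar[t] = 0.8 * ar[t - 1] + rng.standard_normal()

# Behavior as k approaches the sample size, for both types of series
for k in (1, 10, 50, 150, 290):
    print(k, k_difference_variance(rw, k), k_difference_variance(ar, k))
```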

    Detection of Sparse Positive Dependence

    In a bivariate setting, we consider the problem of detecting a sparse contamination or mixture component, where the effect manifests itself as a positive dependence between the variables, which are otherwise independent in the main component. We first look at this problem in the context of a normal mixture model. In essence, the situation reduces to a univariate setting where the effect is a decrease in variance. In particular, a higher criticism test based on the pairwise differences is shown to achieve the detection boundary defined by the (oracle) likelihood ratio test. We then turn to a Gaussian copula model where the marginal distributions are unknown. Standard invariance considerations lead us to consider rank tests. In fact, a higher criticism test based on the pairwise rank differences achieves the detection boundary in the normal mixture model, although not in the very sparse regime. We do not know of any rank test that has any power in that regime.
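    A minimal sketch of the higher criticism statistic applied to pairwise differences, assuming lower-tail p-values under the null D_i = X_i - Y_i ~ N(0, 2), so that a reduced variance shows up as small p-values; all names and parameter values here are hypothetical.

```python
import numpy as np
from scipy import stats

def higher_criticism(pvals, alpha0=0.5):
    """HC statistic (Donoho & Jin 2004): maximal standardized discrepancy
    between the ordered p-values and the uniform quantiles."""
    p = np.sort(pvals)
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    keep = (p > 1.0 / n) & (p < alpha0)   # usual search range
    return hc[keep].max()

# Hypothetical contaminated pairs with sparse positive dependence
rng = np.random.default_rng(1)
n, eps, rho = 10_000, 0.01, 0.9
x = rng.standard_normal(n)
y = rng.standard_normal(n)
mix = rng.random(n) < eps
y[mix] = rho * x[mix] + np.sqrt(1 - rho**2) * y[mix]

d = x - y                                 # under the null: N(0, 2)
p = stats.chi2.cdf(d**2 / 2.0, df=1)      # small |d| => small p-value
print("HC statistic:", higher_criticism(p))
```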

    The power of unit root tests under local-to-finite variance errors

    We study the power of four popular unit root tests in the presence of a local-to-finite variance DGP. We characterize the asymptotic distribution of these tests under a sequence of local alternatives, considering both stationary and explosive ones. We supplement the theoretical analysis with a small simulation study to assess the finite-sample power of the tests. Our results suggest that the finite-sample power is affected by the α-stable component for low values of α and that, in the presence of this component, the DW test has the highest power under stationary alternatives. We also document a strange behavior of the DW test which, under the explosive alternative, suddenly falls from one to zero for very small changes in the autoregressive parameter, suggesting a discontinuity in the power function of the DW test.
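    A schematic Monte Carlo sketch of the kind of power experiment described, using the simplest Dickey-Fuller regression as a stand-in for the four tests and a Gaussian-plus-scaled-α-stable error as a stand-in for the local-to-finite variance DGP; parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

def df_tstat(y):
    """t-statistic for rho in Delta y_t = rho * y_{t-1} + e_t
    (no constant), the simplest Dickey-Fuller regression."""
    x, dy = y[:-1], np.diff(y)
    rho = x @ dy / (x @ x)
    resid = dy - rho * x
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (x @ x))

rng = np.random.default_rng(2)
T, alpha, c, phi = 200, 1.5, 0.5, 0.95   # phi < 1: stationary alternative
reps, crit = 500, -1.95                  # approx. 5% DF critical value
rejections = 0
for _ in range(reps):
    # Gaussian noise plus a scaled alpha-stable component
    e = rng.standard_normal(T) + c * stats.levy_stable.rvs(
        alpha, 0, size=T, random_state=rng)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + e[t]
    rejections += df_tstat(y) < crit
print("empirical power:", rejections / reps)
```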

    The great moderation: updated evidence with joint tests for multiple structural changes in variance and persistence

    We assess the empirical evidence about the Great Moderation using a comprehensive framework, provided by Perron, Yamamoto, and Zhou (2019), to test for multiple structural changes in the coefficients and in the variance of the error term of a linear regression model. We apply it to U.S. real GDP and its major components for the period 1960:1 to 2018:4. A notable feature of our approach is that we adopt an unobserved component model, allowing for two breaks in the trend function in 1973:1 and 2008:1, in order to obtain a stationary or cyclical component modelled as an autoregressive process. First, we confirm the evidence about the Great Moderation, i.e., a structural change in the variance of the errors in the mid-1980s for the various series. Second, additional breaks in variance are found in 1970:3 for GDP and production (goods), after which the sample standard deviation increased threefold. Hence, a part of the Great Moderation can be viewed as a reversion to the pre-1970s level of volatility. Third, the evidence about systematic changes in the sum of the autoregressive coefficients (a measure of persistence) is weak over the whole sample period. Finally, we find little evidence of structural changes occurring in both the variance and the coefficients following the Great Recession (2007-2008). These results support views emphasizing the good-luck hypothesis as a source of the Great Moderation, which continues even after the Great Recession.
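    As a toy illustration of testing for a variance break (far simpler than the multiple-break framework of Perron, Yamamoto, and Zhou used in the paper), here is a sup-likelihood-ratio scan for a single break in the variance of a mean-zero Gaussian series; the data are simulated, not the paper's GDP series.

```python
import numpy as np

def sup_lr_variance_break(e, trim=0.15):
    """Sup-LR scan over candidate break dates k for a single change in the
    variance of a mean-zero Gaussian series e_t. Returns (statistic, date)."""
    T = len(e)
    lo, hi = int(trim * T), int((1 - trim) * T)
    s_all = e @ e
    best_lr, best_k = -np.inf, None
    for k in range(lo, hi):
        s1, s2 = e[:k] @ e[:k], e[k:] @ e[k:]
        # 2 * (unrestricted - restricted) Gaussian log-likelihood
        lr = (T * np.log(s_all / T)
              - k * np.log(s1 / k)
              - (T - k) * np.log(s2 / (T - k)))
        if lr > best_lr:
            best_lr, best_k = lr, k
    return best_lr, best_k

rng = np.random.default_rng(3)
# A "moderation" at t = 120: the error standard deviation drops to 0.4
e = np.r_[rng.standard_normal(120), 0.4 * rng.standard_normal(120)]
print(sup_lr_variance_break(e))
```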

    Variance component score test for time-course gene set analysis of longitudinal RNA-seq data

    As gene expression measurement technology shifts from microarrays to sequencing, the statistical tools available for analysis must be adapted, since RNA-seq data are measured as counts. Recently, it has been proposed to tackle the count nature of these data by modeling log-count reads per million as continuous variables, using nonparametric regression to account for their inherent heteroscedasticity. Adopting such a framework, we propose tcgsaseq, a principled, model-free and efficient top-down method for detecting longitudinal changes in RNA-seq gene sets. Considering gene sets defined a priori, tcgsaseq identifies those whose expression varies over time, based on an original variance component score test accounting for both covariates and heteroscedasticity without assuming any specific parametric distribution for the transformed counts. We demonstrate that despite the presence of a nonparametric component, our test statistic has a simple form and limiting distribution, and both may be computed quickly. A permutation version of the test is additionally proposed for very small sample sizes. Applied to both simulated data and two real datasets, the proposed method is shown to exhibit very good statistical properties, with an increase in stability and power when compared to state-of-the-art methods ROAST, edgeR and DESeq2, which can fail to control the type I error under certain realistic settings. We have made the method available for the community in the R package tcgsaseq.
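    A schematic Python stand-in for the kind of variance-component score test described (the actual method lives in the R package tcgsaseq): Q = r'GG'r on covariate-adjusted residuals, with a permutation p-value echoing the small-sample option; the data and names are hypothetical.

```python
import numpy as np

def vc_score_test(y, X, G, n_perm=1000, rng=None):
    """Generic variance-component score test of H0: no gene-set effect.
    Q = r' G G' r, with r the OLS residuals of y on covariates X.
    Permutation p-value. Schematic only, not the tcgsaseq implementation."""
    rng = rng or np.random.default_rng()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    q_obs = np.sum((G.T @ r) ** 2)
    q_perm = np.array([np.sum((G.T @ rng.permutation(r)) ** 2)
                       for _ in range(n_perm)])
    return q_obs, (1 + np.sum(q_perm >= q_obs)) / (1 + n_perm)

# Hypothetical data: 40 samples, 2 covariates, a 10-gene set
rng = np.random.default_rng(4)
n, p = 40, 10
X = np.c_[np.ones(n), rng.standard_normal(n)]
G = rng.standard_normal((n, p))            # transformed log-counts
y = X @ np.array([1.0, 0.5]) + 0.8 * G[:, 0] + rng.standard_normal(n)
print(vc_score_test(y, X, G, rng=rng))
```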

    Conditional Correlation Models of Autoregressive Conditional Heteroskedasticity with Nonstationary GARCH Equations

    In this paper we investigate the effects of carefully modelling the long-run dynamics of the volatilities of stock market returns on the conditional correlation structure. To this end we allow the individual unconditional variances in Conditional Correlation GARCH models to change smoothly over time by incorporating a nonstationary component in the variance equations. The modelling technique used to determine the parametric structure of this time-varying component is based on a sequence of Lagrange multiplier-type specification tests derived in Amado and Teräsvirta (2011). The variance equations combine the long-run and the short-run dynamic behaviour of the volatilities. The structure of the conditional correlation matrix is assumed to be either time independent or to vary over time. We apply our model to pairs of seven daily stock returns belonging to the S&P 500 composite index and traded at the New York Stock Exchange. The results suggest that accounting for deterministic changes in the unconditional variances considerably improves the fit of the multivariate Conditional Correlation GARCH models to the data. The effect of careful specification of the variance equations on the estimated correlations is variable: in some cases rather small, in others more discernible. As a by-product, we generalize news impact surfaces to the situation in which both the GARCH equations and the conditional correlations contain a deterministic component that is a function of time.
    Keywords: Multivariate GARCH model; Time-varying unconditional variance; Lagrange multiplier test; Modelling cycle; Nonlinear time series
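    A schematic simulation of the multiplicative structure the abstract describes: a deterministic logistic-transition component multiplying a GARCH(1,1) conditional variance. The exact Amado-Teräsvirta specification differs, and all parameter values here are hypothetical.

```python
import numpy as np

def logistic_transition(t, gamma, c):
    """Smooth transition function G(t; gamma, c) on scaled time t in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-gamma * (t - c)))

def simulate_tv_garch(T, omega, a, b, delta1, gamma, c, rng):
    """Simulate r_t = sqrt(g_t * h_t) * z_t with a multiplicative variance:
    g_t = 1 + delta1 * G(t/T) deterministic, h_t a GARCH(1,1) driven by the
    rescaled returns r / sqrt(g). Schematic sketch only."""
    z = rng.standard_normal(T)
    h, r = np.empty(T), np.empty(T)
    h[0] = omega / (1 - a - b)            # unconditional GARCH variance
    for t in range(T):
        if t > 0:
            g_prev = 1 + delta1 * logistic_transition((t - 1) / T, gamma, c)
            h[t] = omega + a * (r[t - 1] ** 2 / g_prev) + b * h[t - 1]
        g = 1 + delta1 * logistic_transition(t / T, gamma, c)
        r[t] = np.sqrt(g * h[t]) * z[t]
    return r

rng = np.random.default_rng(5)
r = simulate_tv_garch(3000, omega=0.05, a=0.08, b=0.9,
                      delta1=3.0, gamma=25.0, c=0.5, rng=rng)
# The deterministic component raises volatility in the second half
print(r[:1500].std(), r[1500:].std())
```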

    Statistical modeling of ground motion relations for seismic hazard analysis

    We introduce a new approach for ground motion relations (GMR) in probabilistic seismic hazard analysis (PSHA), influenced by the extreme value theory of mathematical statistics. Therein, we understand a GMR as a random function. We mathematically derive the principle of area equivalence, wherein two alternative GMRs have an equivalent influence on the hazard if they have equivalent area functions. This includes local biases. An interpretation of the difference between these GMRs (an actual and a modeled one) as a random component leads to a general overestimation of residual variance and hazard. Besides this, we discuss important aspects of classical approaches and uncover discrepancies with the state of the art in stochastics and statistics (model selection and significance, testing of distributional assumptions, extreme value statistics). We criticize especially the assumption of log-normally distributed residuals of maxima such as the peak ground acceleration (PGA). The natural distribution of its individual random component (equivalent to exp(epsilon_0) of Joyner and Boore 1993) is the generalized extreme value. We show by numerical investigation that the actual distribution can be hidden and that a wrong distributional assumption can affect the PSHA as negatively as neglecting area equivalence does. Finally, we suggest an estimation concept for GMRs in PSHA with a regression-free variance estimation of the individual random component. We demonstrate the advantages of event-specific GMRs by analyzing data sets from the PEER strong motion database and estimating event-specific GMRs. Therein, the majority of the best models are based on an anisotropic point-source approach. The residual variance of the logarithmized PGA is significantly smaller than in previous models. We validate the estimations for the event with the largest sample by empirical area functions.
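    A minimal sketch of the distributional point: fit both a GEV and a normal law to (simulated, hypothetical) log-PGA residuals and compare log-likelihoods, since a lognormal assumption on PGA is a normal assumption on its logarithm.

```python
import numpy as np
from scipy import stats

# Hypothetical log-PGA residuals drawn from a GEV law
rng = np.random.default_rng(7)
x = stats.genextreme.rvs(c=-0.2, loc=0.0, scale=0.3,
                         size=500, random_state=rng)

# Fit the GEV by maximum likelihood and compare with a normal fit
c_hat, loc_hat, scale_hat = stats.genextreme.fit(x)
ll_gev = stats.genextreme.logpdf(x, c_hat, loc_hat, scale_hat).sum()
ll_norm = stats.norm.logpdf(x, x.mean(), x.std(ddof=1)).sum()
print("GEV log-lik:", ll_gev, " normal log-lik:", ll_norm)
```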

    CLASSIFICATION OF FEATURE SELECTION BASED ON ARTIFICIAL NEURAL NETWORK

    Pattern recognition (PR) is central to a variety of engineering applications. For this reason, it is vital to develop efficient pattern recognition systems that facilitate decision making automatically and reliably. In this study, a PR system based on a computational intelligence approach, namely an artificial neural network (ANN), is implemented after selection of the best feature vectors. A framework has been developed to determine the best eigenvectors, which we name ‘eigenpostures’, of four main human postures (standing, squatting/sitting, bending and lying), based on the rules of thumb of Principal Component Analysis (PCA). All three PCA rules, namely the KG rule, cumulative variance and the scree test, suggest retaining only 35 principal components or ‘eigenpostures’. Next, these ‘eigenpostures’ are statistically analyzed via Analysis of Variance (ANOVA) prior to classification, so that the most relevant components of the selected eigenpostures can be determined. Both categories of ‘eigenpostures’, before and after ANOVA, served as inputs to the ANN classifier to verify the effectiveness of feature selection based on statistical analysis. The results confirm that the statistical analysis enabled effective selection of eigenpostures for the classification of four types of human postures.
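    A minimal sketch of two of the PCA retention rules of thumb named above, the Kaiser-Guttman rule and the cumulative-variance rule (the scree test is a visual judgment and is omitted), applied to hypothetical data.

```python
import numpy as np

def pca_retention(X, cum_threshold=0.95):
    """Count components to retain under two PCA rules of thumb:
    KG rule (eigenvalues above the average eigenvalue of the covariance
    matrix) and the cumulative-variance rule (smallest count explaining
    at least cum_threshold of total variance)."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)   # singular values
    eig = s ** 2 / (len(X) - 1)               # covariance eigenvalues
    kg = int(np.sum(eig > eig.mean()))
    cum = np.cumsum(eig) / eig.sum()
    cv = int(np.searchsorted(cum, cum_threshold) + 1)
    return kg, cv

rng = np.random.default_rng(8)
# Toy feature matrix standing in for posture images (rows = samples)
X = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 50))
print(pca_retention(X))
```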

    A Variance Component Based Multi-marker Association Test Using Family and Unrelated Data

    Background: Incorporating family data in genetic association studies has become increasingly appreciated, especially for its potential value in testing rare variants. We introduce here a variance-component based association test that can test multiple common or rare variants jointly using both family and unrelated samples. Results: The proposed approach, implemented in our R package, aggregates or collapses the information across a region based on genetic similarity instead of genotype scores, which avoids the power loss that occurs when the effects are in different directions or have different association strengths. The method is also able to effectively leverage the LD information in a region, and it produces a test statistic with an adaptively estimated number of degrees of freedom. Our method readily allows for the adjustment of non-genetic contributions to the familial similarity, as well as multiple covariates. Conclusions: We demonstrate through simulations that the proposed method achieves good performance in terms of Type I error control and statistical power. The method is implemented in the R package “fassoc”, which provides a useful tool for data analysis and exploration.
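    A schematic sketch of the score-statistic idea with an adaptively estimated degrees-of-freedom (Satterthwaite-style) reference distribution, using a genetic-similarity kernel K = GG'. This is not the fassoc implementation, and it ignores the familial-covariance adjustment; all data are hypothetical.

```python
import numpy as np
from scipy import stats

def satterthwaite_score_test(r, K, sigma2):
    """Score statistic Q = r' K r for a variance component, with a scaled
    chi-square (Satterthwaite) approximation: mean and variance of Q under
    H0 are matched to scale * chi2(df). Schematic stand-in only."""
    q = r @ K @ r
    mu = sigma2 * np.trace(K)                 # E[Q] under H0
    var = 2 * sigma2**2 * np.trace(K @ K)     # Var[Q] under H0
    df = 2 * mu**2 / var                      # adaptively estimated df
    scale = var / (2 * mu)
    return q, stats.chi2.sf(q / scale, df)

rng = np.random.default_rng(9)
n, p = 100, 8
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)  # genotype scores
K = G @ G.T                                           # genetic similarity
r = rng.standard_normal(n)                            # null residuals
print(satterthwaite_score_test(r, K, sigma2=1.0))
```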

    Modelling Volatility by Variance Decomposition

    In this paper, we propose two parametric alternatives to the standard GARCH model. They allow the variance of the model to have a smooth time-varying structure of either additive or multiplicative type. The suggested parameterisations describe both nonlinearity and structural change in the conditional and unconditional variances, where the transition between regimes over time is smooth. The main focus is on the multiplicative decomposition, which decomposes the variance into an unconditional and a conditional component. A modelling strategy for the time-varying GARCH model based on the multiplicative decomposition of the variance is developed. It relies heavily on Lagrange multiplier-type misspecification tests. Finite-sample properties of the strategy and tests are examined by simulation. An empirical application to daily stock returns and another to daily exchange rate returns illustrate the functioning and properties of our modelling strategy in practice. The results show that the long-memory-type behaviour of the sample autocorrelation functions of the absolute returns can also be explained by deterministic changes in the unconditional variance.
    Keywords: Conditional heteroskedasticity; Structural change; Lagrange multiplier test; Misspecification test; Nonlinear time series; Time-varying parameter model
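    A minimal sketch of an LM-type misspecification test in the spirit described: regress squared standardized residuals on polynomials of scaled time and use T·R² as a chi-square statistic under the null of a constant unconditional variance. This is a textbook auxiliary-regression version, not the paper's exact statistic; the data are simulated.

```python
import numpy as np
from scipy import stats

def lm_time_variation_test(std_resid, order=3):
    """LM-type test for a deterministic time component in the variance:
    auxiliary regression of (z_t^2 - 1) on an intercept and powers of
    scaled time; T * R^2 ~ chi2(order) under the null of constancy."""
    T = len(std_resid)
    u = std_resid ** 2 - 1.0
    tau = np.arange(1, T + 1) / T
    Z = np.c_[np.ones(T)] if order == 0 else np.column_stack(
        [np.ones(T)] + [tau ** j for j in range(1, order + 1)])
    b, *_ = np.linalg.lstsq(Z, u, rcond=None)
    fit = Z @ b
    r2 = 1 - np.sum((u - fit) ** 2) / np.sum((u - u.mean()) ** 2)
    lm = T * r2
    return lm, stats.chi2.sf(lm, df=order)

rng = np.random.default_rng(6)
# Residuals whose true variance drifts upward over the sample
z = rng.standard_normal(2000) * np.sqrt(1 + 2 * np.arange(2000) / 2000)
print(lm_time_variation_test(z))
```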