
    Modelling multiple time series via common factors

    We propose a new method for estimating common factors of multiple time series. One distinctive feature of the new approach is that it is applicable to some nonstationary time series. The unobservable (nonstationary) factors are identified by expanding the white noise space step by step, thereby reducing a high-dimensional optimization problem to several low-dimensional subproblems. Asymptotic properties of the estimation are investigated, and the proposed methodology is illustrated with both simulated and real data sets.
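
    The step-by-step expansion can be pictured as repeatedly solving one low-dimensional subproblem: find a unit vector w, orthogonal to the directions already found, such that the projected series w'y_t is as close to white noise as possible. Below is a minimal sketch of that single step, using a Ljung-Box-type portmanteau statistic as an illustrative whiteness criterion; the paper's actual criterion and identification conditions are not reproduced here, and all function names are ours.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def portmanteau(series, max_lag=5):
        # Ljung-Box-type statistic; small values are consistent with white noise
        n = len(series)
        s = series - series.mean()
        acov = np.array([s[k:] @ s[:n - k] for k in range(max_lag + 1)])
        acf = acov[1:] / acov[0]
        return n * (n + 2) * np.sum(acf**2 / (n - np.arange(1, max_lag + 1)))

    def next_direction(y, found, restarts=10):
        # One low-dimensional subproblem: search the unit sphere, orthogonal
        # to the directions already found, for the "most white" projection.
        n, p = y.shape
        def objective(v):
            w = v.copy()
            for u in found:
                w = w - (w @ u) * u      # enforce orthogonality to found directions
            nrm = np.linalg.norm(w)
            return 1e12 if nrm < 1e-8 else portmanteau(y @ (w / nrm))
        best = min((minimize(objective, np.random.randn(p), method="Nelder-Mead")
                    for _ in range(restarts)), key=lambda res: res.fun)
        w = best.x
        for u in found:
            w = w - (w @ u) * u
        return w / np.linalg.norm(w), best.fun
    ```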

    Factor modeling for high-dimensional time series: Inference for the number of factors

    This paper deals with factor modeling for high-dimensional time series from a dimension-reduction viewpoint. Under stationary settings the inference is simple, in the sense that both the number of factors and the factor loadings are estimated via an eigenanalysis of a nonnegative definite matrix; the method is therefore applicable when the dimension of the time series is on the order of a few thousand. Asymptotic properties of the proposed method are investigated under two settings: (i) the sample size goes to infinity while the dimension of the time series is fixed; and (ii) both the sample size and the dimension of the time series go to infinity together. In particular, our estimators for zero eigenvalues enjoy faster convergence (or slower divergence) rates, making the estimation of the number of factors easier. When the sample size and the dimension of the time series go to infinity together, the estimators for the eigenvalues are no longer consistent; however, our estimator for the number of factors, which is based on the ratios of the estimated eigenvalues, still works well. Furthermore, the estimation exhibits the so-called "blessing of dimensionality" property, in the sense that its performance may improve as the dimension of the time series increases. A two-step procedure is investigated for the case where the factors are of different degrees of strength. Numerical illustration with both simulated and real data is also reported. (Published in the Annals of Statistics (http://www.imstat.org/aos/), http://dx.doi.org/10.1214/12-AOS970, by the Institute of Mathematical Statistics, http://www.imstat.org.)
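
    As a hedged illustration of the eigenanalysis and the eigenvalue-ratio estimator described above, here is a sketch assuming the nonnegative definite matrix is formed from products of lagged sample autocovariance matrices, a standard construction in this line of work; the function name, the default number of lags k0, and the cap on candidate values are our illustrative choices.

    ```python
    import numpy as np

    def estimate_num_factors(y, k0=5, max_r=None):
        # y: (n, p) observed series. Returns the estimated number of factors
        # as the minimizer of the ratio of consecutive eigenvalues.
        n, p = y.shape
        yc = y - y.mean(axis=0)
        M = np.zeros((p, p))
        for k in range(1, k0 + 1):
            Sigma_k = yc[k:].T @ yc[:n - k] / n   # lag-k sample autocovariance
            M += Sigma_k @ Sigma_k.T               # nonnegative definite by construction
        eigvals = np.sort(np.linalg.eigvalsh(M))[::-1]
        if max_r is None:
            max_r = max(1, p // 2)                 # search only over moderate r
        ratios = eigvals[1:max_r + 1] / eigvals[:max_r]
        return int(np.argmin(ratios)) + 1          # r-hat = argmin of the ratios
    ```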

    A bootstrap detection for operational determinism.

    We propose a bootstrap method for detecting operationally deterministic, as opposed to stochastic, nonlinear modelling, and illustrate the method with both simulated and real data sets.

    Inference in components of variance models with low replication.

    In components of variance models the data are viewed as arising through a sum of two random variables, representing between- and within-group variation, respectively. The former is generally interpreted as a group effect, and the latter as error. It is assumed that these variables are stochastically independent and that the distributions of the group effect and the error do not vary from one instance to another. If each group effect can be replicated a large number of times, then standard methods can be used to estimate the distributions of both the group effect and the error. This cannot be achieved without replication, however. How feasible is distribution estimation if it is not possible to replicate prolifically? Can the distributions of random effects and errors be estimated consistently from a small number of replications of each of a large number of noisy group effects, for example, in a nonparametric setting? Often extensive replication is practically infeasible, in particular, if inherently small numbers of individuals exhibit any given group effect. Yet it is quite unclear how to conduct inference in this case. We show that inference is possible, even if the number of replications is as small as 2. Two methods are proposed, both based on Fourier inversion. One, which is substantially more computer intensive than the other, exhibits better performance in numerical experiments.
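
    The Fourier-inversion idea with only two replications can be sketched as follows: within-group differences X_i1 - X_i2 involve only the errors, so they identify the error characteristic function, which can then be divided out of the characteristic function of the observations. Below is a minimal sketch assuming the classical model X_ij = U_i + e_ij with a symmetric error distribution; the truncation level t_max acts as a regularization parameter, and this is an illustration rather than either of the paper's two estimators.

    ```python
    import numpy as np

    def deconvolve_group_effect(x1, x2, grid, t_max=3.0, m=200):
        # x1, x2: the two replications per group (arrays of length N).
        # Returns an estimate of the group-effect density on `grid`.
        t = np.linspace(-t_max, t_max, m)
        d = x1 - x2                                       # differences carry only error
        phi_d = np.mean(np.cos(np.outer(t, d)), axis=1)   # E cos(tD) = |phi_e(t)|^2
        phi_e = np.sqrt(np.clip(phi_d, 1e-3, None))       # symmetric error => phi_e >= 0
        x = np.concatenate([x1, x2])
        phi_x = np.mean(np.exp(1j * np.outer(t, x)), axis=1)  # phi_X = phi_U * phi_e
        phi_u = phi_x / phi_e
        # truncated (hence regularized) inverse Fourier transform on the grid
        dt = t[1] - t[0]
        f = np.real(np.exp(-1j * np.outer(grid, t)) @ phi_u) * dt / (2 * np.pi)
        return np.clip(f, 0, None)
    ```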

    Principal component analysis for second-order stationary vector time series

    We extend principal component analysis (PCA) to second-order stationary vector time series, in the sense that we seek a contemporaneous linear transformation of a p-variate time series such that the transformed series is segmented into several lower-dimensional subseries, and those subseries are uncorrelated with each other both contemporaneously and serially. Those lower-dimensional series can therefore be analysed separately as far as the linear dynamic structure is concerned. Technically it boils down to an eigenanalysis of a positive definite matrix. When p is large, an additional step is required to perform a permutation in terms of either maximum cross-correlations or false discovery rate (FDR) based on multiple tests. The asymptotic theory is established for both fixed p and diverging p as the sample size n tends to infinity. Numerical experiments with both simulated and real data sets indicate that the proposed method is an effective initial step in analysing multiple time series data, leading to substantial dimension reduction in modelling and forecasting high-dimensional linear dynamical structures. Unlike PCA for independent data, there is no guarantee that the required linear transformation exists. When it does not, the proposed method provides an approximate segmentation, which still offers advantages in, for example, forecasting future values. The method can also be adapted to segment multiple volatility processes. (The original title, dating back to October 2014, was "Segmenting Multiple Time Series by Contemporaneous Linear Transformation: PCA for Time Series".)
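
    A hedged sketch of the core eigenanalysis step: standardize the series, build a positive definite matrix from lagged autocovariances of the standardized series, and rotate by its eigenvectors. This is one construction consistent with the description above; the subsequent grouping of the transformed components by maximum cross-correlations or FDR is omitted, and all names are ours.

    ```python
    import numpy as np
    from scipy.linalg import sqrtm

    def ts_pca_transform(y, k0=5):
        # y: (n, p) series. Returns the transformed series (whose components
        # are candidates for segmentation into uncorrelated subseries) and
        # the linear transformation applied to y.
        n, p = y.shape
        yc = y - y.mean(axis=0)
        S = yc.T @ yc / n
        S_inv_half = np.linalg.inv(np.real(sqrtm(S)))   # standardize to unit variance
        z = yc @ S_inv_half.T
        W = np.eye(p)                                    # lag-0 term of the p.d. matrix
        for k in range(1, k0 + 1):
            Sigma_k = z[k:].T @ z[:n - k] / n            # lag-k autocovariance of z
            W += Sigma_k @ Sigma_k.T
        _, Gamma = np.linalg.eigh(W)                     # eigenvectors of the p.d. matrix
        return z @ Gamma, Gamma.T @ S_inv_half           # transformed series, transformation
    ```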

    High dimensional stochastic regression with latent factors, endogeneity and nonlinearity

    We consider a multivariate time series model which represents a high-dimensional vector process as a sum of three terms: a linear regression on some observed regressors, a linear combination of some latent and serially correlated factors, and a vector white noise. We investigate the inference without imposing stationarity conditions on the target multivariate time series, the regressors or the underlying factors. Furthermore, we deal with the endogeneity arising from correlations between the observed regressors and the unobserved factors. We also consider a model with a nonlinear regression term which can be approximated by a linear regression function with a large number of regressors. The convergence rates for the estimators of the regression coefficients, the number of factors, the factor loading space and the factors are established under settings where the dimension of the time series and the number of regressors may both tend to infinity together with the sample size. The proposed method is illustrated with both simulated and real data examples.
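
    To make the three-term decomposition concrete, here is a small simulation from one instance of such a model, y_t = D z_t + A x_t + eps_t, with stationary AR(1) factors and regressors deliberately correlated with the factors to mimic the endogeneity. All numerical choices are illustrative, not the paper's design.

    ```python
    import numpy as np

    # sample size, series dimension, number of regressors, number of factors
    n, p, m, r = 400, 20, 3, 2
    rng = np.random.default_rng(0)

    D = rng.normal(size=(p, m))            # regression coefficient matrix
    A = rng.normal(size=(p, r))            # factor loading matrix
    x = np.zeros((n, r))                   # latent, serially correlated factors
    for t in range(1, n):
        x[t] = 0.7 * x[t - 1] + rng.normal(size=r)
    # endogeneity: regressors correlated with the latent factors
    z = rng.normal(size=(n, m)) + 0.5 * x @ rng.normal(size=(r, m))
    eps = rng.normal(size=(n, p))          # vector white noise
    y = z @ D.T + x @ A.T + eps            # observed high-dimensional series
    ```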

    Counterparty credit risk management: estimating extreme quantiles for a bank

    Counterparty credit risk (CCR) is a complex risk to assess, and banks have lacked scientifically robust methods for calculating their level of potential exposure. Qiwei Yao, together with his collaborators, developed an innovative methodology for estimating counterparty credit risk, which can help banks meet regulatory requirements and calculate appropriate capital reserves.