
    Improved subspace estimation for multivariate observations of high dimension: the deterministic signals case

    We consider the problem of subspace estimation in situations where the number of available snapshots and the observation dimension are comparable in magnitude. In this context, traditional subspace methods tend to fail because the eigenvectors of the sample correlation matrix are heavily biased with respect to the true ones. It has recently been suggested that this situation (where the sample size is small compared to the observation dimension) can be very accurately modeled by considering the asymptotic regime where the observation dimension M and the number of snapshots N converge to +∞ at the same rate. Using large random matrix theory results, it can be shown that traditional subspace estimates are not consistent in this asymptotic regime. Furthermore, new consistent subspace estimates can be proposed, which outperform the standard subspace methods for realistic values of M and N. The work carried out so far in this area has always been based on the assumption that the observations are random, independent and identically distributed in the time domain. The goal of this paper is to propose new consistent subspace estimators for the case where the source signals are modelled as unknown deterministic signals. In practice, this allows the proposed approach to be used regardless of the statistical properties of the source signals. In order to construct the proposed estimators, new technical results concerning the almost sure location of the eigenvalues of sample covariance matrices of Information plus Noise complex Gaussian models are established. These results are believed to be of independent interest.
    Comment: New version with minor corrections. The present paper is an extended version of a paper (same title) to appear in IEEE Trans. on Information Theory
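
    The failure mode described in this abstract is easy to reproduce numerically. The sketch below (Python/NumPy, not the paper's estimator) generates observations Y = AS + noise for an assumed mixing matrix A and measures the largest principal angle between the true signal subspace and the span of the top eigenvectors of the sample covariance matrix; the angle stays large while M and N are comparable and shrinks only when N is much larger than M. The SNR value and the Gaussian draw of the source samples are illustrative assumptions.

```python
# Illustration only (not the paper's estimator): how the signal subspace of the
# sample covariance matrix degrades when the observation dimension M is
# comparable to the number of snapshots N.
import numpy as np

rng = np.random.default_rng(0)
M, K = 200, 2                                   # observation dimension, number of sources
A = rng.standard_normal((M, K)) / np.sqrt(M)    # assumed mixing matrix

def largest_principal_angle(N, snr=10.0):
    """Largest principal angle (degrees) between the true signal subspace and
    the span of the top-K eigenvectors of the sample covariance matrix."""
    S = np.sqrt(snr) * rng.standard_normal((K, N))   # source samples (drawn once, treated as fixed)
    Y = A @ S + rng.standard_normal((M, N))          # observations: signal plus white noise
    R_hat = (Y @ Y.T) / N                            # sample covariance
    _, V = np.linalg.eigh(R_hat)                     # eigenvalues in ascending order
    V_sig = V[:, -K:]                                # "signal" eigenvectors
    Q, _ = np.linalg.qr(A)                           # orthonormal basis of the true subspace
    cosines = np.linalg.svd(Q.T @ V_sig, compute_uv=False)
    return np.degrees(np.arccos(np.clip(cosines.min(), -1.0, 1.0)))

for N in (200, 1000, 20000):
    print(f"N = {N:6d}  (M/N = {M/N:5.3f})  largest principal angle ~ "
          f"{largest_principal_angle(N):4.1f} deg")
```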

    IMPROVED ESTIMATION STRATEGIES IN MULTIVARIATE MULTIPLE REGRESSION MODELS

    The objective of this dissertation is to study properties of improved estimators of the parameters of interest in two different multivariate regression models, analogous to the fixed-X and random-X scenarios of multiple regression, and to compare the performance of these estimators with the usual least squares estimator. In general, we study restricted versions of the multivariate regression problem based upon constraining the relationship between Y and X in some way, where the constraints may be known or unknown to the researcher prior to statistical analysis. Chapter two contains a study of the properties of improved estimation strategies for the parameters of interest in a capital asset pricing model under a general linear constraint. Asymptotic results for the suggested estimators include derivation of the asymptotic bias, asymptotic mean square error, and asymptotic distributional risk. The asymptotic results demonstrate the superiority of the suggested estimation technique. A simulation study is conducted to assess the performance of the suggested estimators for large samples. Both the simulation study and a data example corroborate the theoretical results. In Chapter three, we consider a multivariate multiple regression model when X is a fixed matrix. Here, we propose shrinkage and preliminary test estimation strategies for the matrix of regression parameters in the presence of a natural linear constraint, and we examine the relative performance of the suggested estimators under the candidate subspace based on a quadratic risk function. A simulation study is conducted to compare the performance of the suggested estimators, and two data examples are also presented. Our analytical and numerical results show that the suggested estimators perform better than the unrestricted estimator under the candidate subspace. In Chapter four, we consider a multivariate reduced rank regression model when X is random, and we propose preliminary test and shrinkage estimation strategies. We investigate the asymptotic properties of the shrinkage and pretest estimators under a quadratic loss function and compare the performance of the suggested estimators under the candidate subspace and beyond. The methods are applied to a real data set for illustrative purposes, and a simulation study is also presented.
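
    As a generic illustration of the shrinkage idea described here (shrinking an unrestricted least squares fit toward an estimator restricted by a linear constraint), the sketch below implements a textbook-style positive-part Stein estimator for a multivariate regression Y = X B + E subject to R B = 0. The constraint form, the Wald-type statistic T, and the constant c are assumptions chosen for illustration; this is not the dissertation's exact estimator or its risk analysis.

```python
# Generic positive-part Stein-type shrinkage toward a restricted estimator in a
# multivariate regression Y = X B + E subject to R B = 0 (illustrative only).
import numpy as np

def shrinkage_estimators(X, Y, R, c):
    """Return (B_unrestricted, B_restricted, B_shrinkage).

    X : (n, p) design, Y : (n, q) responses, R : (r, p) constraint matrix for R B = 0,
    c : shrinkage constant (in the theory, tied to the dimensions r and q).
    """
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    B_un = XtX_inv @ X.T @ Y                                  # unrestricted least squares
    RXR_inv = np.linalg.inv(R @ XtX_inv @ R.T)
    B_re = B_un - XtX_inv @ R.T @ RXR_inv @ (R @ B_un)        # restricted least squares
    # Wald-type distance between the two fits, standardized by the error covariance
    resid = Y - X @ B_un
    Sigma_hat = resid.T @ resid / (n - p)
    D = R @ B_un
    T = np.trace(np.linalg.solve(Sigma_hat, D.T @ RXR_inv @ D))
    shrink = max(0.0, 1.0 - c / max(T, 1e-12))                # positive-part Stein factor
    B_sh = B_re + shrink * (B_un - B_re)                      # shrink toward the restricted fit
    return B_un, B_re, B_sh
```

    The shrinkage factor keeps the unrestricted fit when the data contradict the constraint (large T) and moves toward the restricted fit when they do not (small T).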

    Estimating sufficient reductions of the predictors in abundant high-dimensional regressions

    We study the asymptotic behavior of a class of methods for sufficient dimension reduction in high-dimensional regressions, as the sample size and number of predictors grow in various alignments. It is demonstrated that these methods are consistent in a variety of settings, particularly in abundant regressions where most predictors contribute some information on the response, and that oracle rates are possible. Simulation results are presented to support the theoretical conclusions.
    Comment: Published at http://dx.doi.org/10.1214/11-AOS962 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
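
    For readers unfamiliar with sufficient dimension reduction, the sketch below shows one standard estimator of a sufficient reduction, sliced inverse regression (SIR). It is included only to make concrete what a reduction B'X that carries the regression information means; it is not the class of estimators analyzed in this paper, and it assumes more observations than predictors so the sample covariance can be inverted.

```python
# Sliced inverse regression (SIR): one standard estimator of a sufficient
# dimension-reduction subspace, shown only as a generic illustration (it is not
# the class of methods studied in the paper). Assumes n > p with full-rank X.
import numpy as np

def sir(X, y, n_slices=10, d=1):
    """Estimate a basis B (p x d) such that, approximately, y depends on X only through B.T @ x."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(Xc, rowvar=False))   # Sigma = L L'
    Z = np.linalg.solve(L, Xc.T).T                     # whitened predictors, Cov(Z) ~ I
    # Slice the sample on the response and average the whitened predictors per slice
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of the slice-mean "kernel", mapped back to the X scale
    _, vecs = np.linalg.eigh(M)
    B = np.linalg.solve(L.T, vecs[:, -d:])
    return B / np.linalg.norm(B, axis=0)
```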

    Algorithms for envelope estimation

    Envelopes were recently proposed as methods for reducing estimative variation in multivariate linear regression. Estimation of an envelope usually involves optimization over Grassmann manifolds. We propose a fast and widely applicable one-dimensional (1D) algorithm for estimating an envelope in general. We reveal an important structural property of envelopes that facilitates our algorithm, and we prove both Fisher consistency and root-n consistency of the algorithm.
    Comment: 30 pages, 2 figures, 2 tables
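
    A minimal sketch of a sequential, one-direction-at-a-time procedure of the kind the abstract calls a 1D algorithm is given below. The specific objective f(w) = log(w'Mw) + log(w'(M+U)⁻¹w), minimized over unit vectors in the orthogonal complement of the directions already found, follows the envelope literature and is an assumption here rather than a transcription of this paper; M and U would typically be a residual covariance and a covariance of fitted values, respectively.

```python
# Sequential "one direction at a time" sketch of an envelope-type estimator.
# The objective below follows the envelope literature and is an assumption here,
# not a transcription of the paper's algorithm.
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import minimize

def one_d_envelope(M, U, u, n_starts=5, seed=0):
    """Return a (p x u) orthonormal basis of an estimated u-dimensional envelope,
    built one direction at a time; M and U are symmetric positive definite."""
    p = M.shape[0]
    rng = np.random.default_rng(seed)
    G = np.zeros((p, 0))
    for _ in range(u):
        # Work in the orthogonal complement of the directions found so far
        G0 = np.eye(p) if G.shape[1] == 0 else null_space(G.T)
        Mk = G0.T @ M @ G0
        Ak = np.linalg.inv(Mk + G0.T @ U @ G0)

        def f(x):
            v = x / np.linalg.norm(x)                  # optimize over unit vectors
            return np.log(v @ Mk @ v) + np.log(v @ Ak @ v)

        best = None
        for _ in range(n_starts):                      # a few random restarts
            res = minimize(f, rng.standard_normal(G0.shape[1]))
            if best is None or res.fun < best.fun:
                best = res
        w = G0 @ (best.x / np.linalg.norm(best.x))     # lift back to R^p
        G = np.column_stack([G, w])
    return G
```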

    Using Subspace Methods for Estimating ARMA Models for Multivariate Time Series with Conditionally Heteroskedastic Innovations

    This paper deals with the estimation of linear dynamic models of the ARMA type for the conditional mean of time series with a conditionally heteroskedastic innovation process, as widely used in modelling financial time series. Estimation is performed using subspace methods, which are known to have computational advantages compared to prediction error methods based on criterion minimization; these advantages are especially strong for high dimensional time series. The subspace methods are shown to provide consistent estimators. Moreover, asymptotic equivalence to prediction error estimators in terms of the asymptotic variance is proved. Order estimation techniques are also proposed and analyzed. The estimators are not efficient as they do not model the conditional variance. Nevertheless, they can be used to obtain consistent estimators of the innovations. In a second step these estimated residuals can be used in order to alleviate the problem of specifying the variance model, in particular in the multi-output case. This is demonstrated in an ARCH setting, where it is proved that the estimated innovations can be used in place of the true innovations for testing in a linear least squares context in order to specify the structure of the ARCH model without changing the asymptotic distribution.
    Keywords: multivariate models, conditional heteroskedasticity, ARMA systems, subspace methods
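
    The two-step idea in this abstract (recover innovations from the conditional-mean fit, then use the estimated innovations in a linear least squares regression to specify the ARCH structure) can be sketched as follows. A plain VAR(1) least squares fit stands in for the paper's subspace ARMA estimation step, and the lag order q and the regression layout are illustrative assumptions.

```python
# Two-step sketch: recover innovations from a conditional-mean fit, then use the
# estimated innovations in a linear least-squares regression to look for ARCH
# structure. A VAR(1) fit stands in for the paper's subspace ARMA estimation.
import numpy as np

def var1_residuals(Y):
    """Least-squares VAR(1) fit Y_t = A Y_{t-1} + e_t; returns the estimated innovations."""
    X_next, X_lag = Y[1:], Y[:-1]
    A = np.linalg.lstsq(X_lag, X_next, rcond=None)[0].T
    return X_next - X_lag @ A.T

def arch_ls(eps, q=1):
    """Regress squared residuals on an intercept and q lags of all squared residuals.
    Returns the (1 + q*k, k) coefficient matrix; lag coefficients close to zero
    suggest no ARCH effects of that order."""
    E2 = eps ** 2
    n, k = E2.shape
    lags = [E2[q - j : n - j] for j in range(1, q + 1)]     # lag-j block aligned with rows q..n-1
    Xreg = np.column_stack([np.ones((n - q, 1))] + lags)
    coef, *_ = np.linalg.lstsq(Xreg, E2[q:], rcond=None)
    return coef
```

    For example, eps = var1_residuals(returns) followed by arch_ls(eps, q=2) gives a quick least squares check for second-order ARCH effects in every component of the estimated innovations.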