
    Aggregation of Nonparametric Estimators for Volatility Matrix

    An aggregated method of nonparametric estimators based on time-domain and state-domain estimators is proposed and studied. To attenuate the curse of dimensionality, we propose a factor modeling strategy. We first investigate the asymptotic behavior of nonparametric estimators of the volatility matrix in the time domain and in the state domain. Asymptotic normality is separately established for nonparametric estimators in the time domain and state domain. These two estimators are asymptotically independent. Hence, they can be combined, through a dynamic weighting scheme, to improve the efficiency of volatility matrix estimation. The optimal dynamic weights are derived, and it is shown that the aggregated estimator uniformly dominates volatility matrix estimators using time-domain or state-domain smoothing alone. A simulation study, based on an essentially affine model for the term structure, is conducted, and it demonstrates convincingly that the newly proposed procedure outperforms both time- and state-domain estimators. Empirical studies further endorse the advantages of our aggregated method. Comment: 46 pages, 11 PostScript figures
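
    The aggregation step rests on a standard fact: two (asymptotically) independent unbiased estimators of the same quantity can be combined with inverse-variance weights, and the combination has smaller variance than either alone. A minimal numerical sketch of that weighting (the function name and toy variances are ours, not the paper's):

```python
import numpy as np

def aggregate_estimates(est_time, est_state, var_time, var_state):
    """Combine two (asymptotically) independent estimators of the same
    quantity using inverse-variance weights, which minimize the variance
    of the convex combination w*est_time + (1 - w)*est_state."""
    w = var_state / (var_time + var_state)      # optimal weight on the time-domain estimator
    combined = w * est_time + (1.0 - w) * est_state
    combined_var = var_time * var_state / (var_time + var_state)
    return combined, combined_var

# Toy check: both estimators target sigma^2 = 1 with known variances.
rng = np.random.default_rng(0)
est_t = 1.0 + 0.3 * rng.standard_normal(10_000)     # variance 0.09
est_s = 1.0 + 0.4 * rng.standard_normal(10_000)     # variance 0.16
agg, _ = aggregate_estimates(est_t, est_s, 0.09, 0.16)
print(np.var(est_t), np.var(est_s), np.var(agg))    # aggregated variance is the smallest
```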

    Asymptotic properties for combined $L_1$ and concave regularization

    Two important goals of high-dimensional modeling are prediction and variable selection. In this article, we consider regularization with combined $L_1$ and concave penalties, and study the sampling properties of the global optimum of the suggested method in ultra-high dimensional settings. The $L_1$-penalty provides the minimum regularization needed for removing noise variables in order to achieve oracle prediction risk, while the concave penalty imposes additional regularization to control model sparsity. In the linear model setting, we prove that the global optimum of our method enjoys the same oracle inequalities as the lasso estimator and admits an explicit bound on the false sign rate, which can be asymptotically vanishing. Moreover, we establish oracle risk inequalities for the method and the sampling properties of computable solutions. Numerical studies suggest that our method yields more stable estimates than using a concave penalty alone. Comment: 16 pages
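
    The paper analyzes the global optimum of the combined penalty; purely as a computational sketch (not the paper's algorithm), a common surrogate is the local linear approximation (LLA), where each step majorizes the concave part and solves a weighted Lasso. A minimal version with a SCAD concave component, assuming scikit-learn is available (function names and tuning values are ours):

```python
import numpy as np
from sklearn.linear_model import Lasso

def scad_deriv(t, lam, a=3.7):
    """Derivative of the SCAD penalty (Fan and Li, 2001) at |t|."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def combined_l1_concave(X, y, lam1, lam2, n_iter=5):
    """Fit (1/(2n))||y - Xb||^2 + lam1*||b||_1 + SCAD_lam2(b) by local
    linear approximation: each step solves a weighted Lasso, implemented
    by rescaling the columns of X."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        w = lam1 + scad_deriv(beta, lam2)   # per-coordinate L1 weights (>= lam1 > 0)
        fit = Lasso(alpha=1.0, fit_intercept=False, max_iter=10_000).fit(X / w, y)
        beta = fit.coef_ / w                # undo the reparametrization b_j = c_j / w_j
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.5 * rng.standard_normal(200)
print(np.flatnonzero(np.abs(combined_l1_concave(X, y, lam1=0.05, lam2=0.2)) > 0.1))
```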

    Asymptotic equivalence of regularization methods in thresholded parameter space

    High-dimensional data analysis has motivated a spectrum of regularization methods for variable selection and sparse modeling, with two popular classes: the convex and the concave. Whether one class dominates the other has been the subject of a long debate, an important question both in theory and for practitioners. In this paper, we characterize the asymptotic equivalence of regularization methods, with general penalty functions, in a thresholded parameter space under the generalized linear model setting, where the dimensionality can grow exponentially with the sample size. To assess their performance, we establish the oracle inequalities, as in Bickel, Ritov and Tsybakov (2009), of the global minimizer for these methods under various prediction and variable selection losses. These results reveal an interesting phase transition phenomenon. For polynomially growing dimensionality, the $L_1$-regularization method, the Lasso, and concave methods are asymptotically equivalent, having the same convergence rates in the oracle inequalities. For exponentially growing dimensionality, concave methods are asymptotically equivalent but have faster convergence rates than the Lasso. We also establish a stronger property of the oracle risk inequalities of the regularization methods, as well as the sampling properties of computable solutions. Our new theoretical results are illustrated and justified by simulation and real data examples. Comment: 39 pages, 3 figures
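
    One way to picture the thresholded parameter space is as the set of coefficient vectors whose nonzero components are bounded away from zero by a threshold; the comparison of estimators is then carried out over this set. A toy illustration of the projection onto such a space (the function name and threshold value are ours):

```python
import numpy as np

def threshold_projection(beta, tau):
    """Project a coefficient vector onto the thresholded parameter space:
    components with magnitude below tau are set to zero, so every retained
    signal is bounded away from zero by tau."""
    beta = np.asarray(beta, dtype=float).copy()
    beta[np.abs(beta) < tau] = 0.0
    return beta

print(threshold_projection([0.02, -0.4, 0.1, 1.3], tau=0.15))  # [ 0.  -0.4  0.   1.3]
```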

    Non-Concave Penalized Likelihood with NP-Dimensionality

    Penalized likelihood methods are fundamental to ultra-high dimensional variable selection. How high a dimensionality such methods can handle remains largely unknown. In this paper, we show that in the context of generalized linear models, such methods possess model selection consistency with oracle properties even for dimensionality of Non-Polynomial (NP) order of sample size, for a class of penalized likelihood approaches using folded-concave penalty functions, which were introduced to ameliorate the bias problems of convex penalty functions. This fills a long-standing gap in the literature where the dimensionality is allowed to grow slowly with the sample size. Our results are also applicable to penalized likelihood with the $L_1$-penalty, which is a convex function at the boundary of the class of folded-concave penalty functions under consideration. Coordinate optimization is implemented for finding the solution paths, whose performance is evaluated by a few simulation examples and a real data analysis. Comment: 37 pages, 2 figures
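
    The abstract mentions coordinate optimization for computing the solution paths. A minimal sketch for the least-squares case with the SCAD penalty, using the closed-form univariate SCAD update of Fan and Li (2001); standardized columns (X_j'X_j / n = 1) and the conventional a = 3.7 are our assumptions:

```python
import numpy as np

def scad_univariate(z, lam, a=3.7):
    """Closed-form minimizer of 0.5*(b - z)^2 + SCAD_lam(b) (Fan and Li, 2001)."""
    az = np.abs(z)
    if az <= 2 * lam:
        return np.sign(z) * max(az - lam, 0.0)          # soft-thresholding region
    if az <= a * lam:
        return ((a - 1) * z - np.sign(z) * a * lam) / (a - 2)
    return z                                            # no shrinkage for large signals

def scad_coordinate_descent(X, y, lam, n_sweeps=50):
    """Cyclic coordinate descent for (1/(2n))||y - Xb||^2 + SCAD_lam(b),
    assuming the columns of X are standardized so that X_j'X_j / n = 1."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                                        # current residual y - X @ beta
    for _ in range(n_sweeps):
        for j in range(p):
            z = beta[j] + X[:, j] @ r / n               # univariate least-squares update
            b_new = scad_univariate(z, lam)
            r += X[:, j] * (beta[j] - b_new)            # maintain the residual
            beta[j] = b_new
    return beta
```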

    Innovated scalable efficient estimation in ultra-large Gaussian graphical models

    Large-scale precision matrix estimation is of fundamental importance yet challenging in many contemporary applications for recovering Gaussian graphical models. In this paper, we suggest a new approach of innovated scalable efficient estimation (ISEE) for estimating large precision matrices. Motivated by the innovated transformation, we convert the original problem into that of large covariance matrix estimation. The suggested method combines the strengths of recent advances in high-dimensional sparse modeling and large covariance matrix estimation. Compared to existing approaches, our method is scalable and can deal with much larger precision matrices with simple tuning. Under mild regularity conditions, we establish that this procedure can recover the underlying graphical structure with significant probability and provide efficient estimation of link strengths. Both computational and theoretical advantages of the procedure are evidenced through simulation and real data examples. Comment: to appear, The Annals of Statistics (2016)
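
    The identity behind the innovated transformation is that if X has covariance Sigma = Omega^{-1}, then X @ Omega has covariance Omega, which is what lets precision matrix estimation be recast as covariance matrix estimation. A numerical check of the identity with a known tridiagonal Omega (the example matrix is ours; ISEE itself estimates the transformation block-wise, since Omega is unknown):

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 5, 100_000
# A tridiagonal (AR(1)-type) precision matrix: a classic sparse example.
Omega = np.eye(p) + np.diag([-0.45] * (p - 1), 1) + np.diag([-0.45] * (p - 1), -1)
Sigma = np.linalg.inv(Omega)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

X_tilde = X @ Omega          # the innovated transformation
# cov(X @ Omega) = Omega' Sigma Omega = Omega, so estimating the covariance
# of the transformed data recovers the precision matrix of the original data.
print(np.round(np.cov(X_tilde, rowvar=False), 2))
print(np.round(Omega, 2))
```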

    Sure Independence Screening for Ultra-High Dimensional Feature Space

    Variable selection plays an important role in high dimensional statistical modeling, which nowadays appears in many areas and is key to various scientific discoveries. For problems of large scale or dimensionality $p$, estimation accuracy and computational cost are two top concerns. In a recent paper, Candes and Tao (2007) propose the Dantzig selector using $L_1$ regularization and show that it achieves the ideal risk up to a logarithmic factor $\log p$. Their innovative procedure and remarkable result are challenged when the dimensionality is ultra high, as the factor $\log p$ can be large and their uniform uncertainty principle can fail. Motivated by these concerns, we introduce the concept of sure screening and propose a sure screening method based on correlation learning, called Sure Independence Screening (SIS), to reduce dimensionality from high to a moderate scale that is below the sample size. In a fairly general asymptotic framework, correlation learning is shown to have the sure screening property even for exponentially growing dimensionality. As a methodological extension, an iterative SIS (ISIS) is also proposed to enhance its finite sample performance. With dimension reduced accurately from high to below the sample size, variable selection can be improved in both speed and accuracy, and can then be accomplished by a well-developed method such as SCAD, the Dantzig selector, the Lasso, or the adaptive Lasso. The connections of these penalized least-squares methods are also elucidated. Comment: 43 pages, 6 PostScript figures
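
    The screening step itself is simple to state: rank features by absolute marginal correlation with the response and keep the top d, for some d below the sample size, before handing the survivors to SCAD, the Dantzig selector, or the Lasso. A minimal sketch (the default d = n / log n and the toy model are our choices):

```python
import numpy as np

def sis(X, y, d=None):
    """Sure Independence Screening: rank features by absolute marginal
    correlation with the response and keep the top d of them."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))                 # one common default choice
    Xc = (X - X.mean(0)) / X.std(0)
    yc = (y - y.mean()) / y.std()
    omega = np.abs(Xc.T @ yc) / n              # marginal correlation magnitudes
    return np.argsort(omega)[::-1][:d]         # indices of the retained features

rng = np.random.default_rng(3)
n, p = 200, 5000                               # ultra-high dimensional: p >> n
X = rng.standard_normal((n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + X[:, 2] + rng.standard_normal(n)
kept = sis(X, y)
print(len(kept), {0, 1, 2} <= set(kept))       # the true variables should survive screening
```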

    Asymptotic Theory of Eigenvectors for Large Random Matrices

    Characterizing the exact asymptotic distributions of high-dimensional eigenvectors for large structured random matrices poses important challenges yet can provide useful insights into a range of applications. To this end, in this paper we introduce a general framework of asymptotic theory of eigenvectors (ATE) for large structured symmetric random matrices with heterogeneous variances, and establish the asymptotic properties of the spiked eigenvectors and eigenvalues for the scenario of generalized Wigner matrix noise, where the mean matrix is assumed to have a low-rank structure. Under some mild regularity conditions, we provide asymptotic expansions for the spiked eigenvalues and show that they are asymptotically normal after some normalization. For the spiked eigenvectors, we establish novel asymptotic expansions for a general linear combination and further show that it is asymptotically normal after some normalization, where the weight vector can be arbitrary. We also provide a more general asymptotic theory for the spiked eigenvectors using the bilinear form. Simulation studies verify the validity of our new theoretical results. Our family of models encompasses many popular ones, such as stochastic block models with or without overlapping communities for network analysis and topic models for text analysis, and our general theory can be exploited for statistical inference in these large-scale applications. Comment: 67 pages, 3 figures
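
    A toy simulation in the spirit of this framework: a rank-one mean matrix plus Wigner noise, comparing the spiked eigenvalue with its classical first-order expansion theta + 1/theta and checking that the linear form of the top eigenvector concentrates. The spike strength and normalization below are our choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 2000, 200.0                                  # matrix size and spike strength
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                              # population spiked eigenvector
W = rng.standard_normal((n, n))
W = (W + W.T) / np.sqrt(2 * n)                      # symmetric Wigner-type noise
H = d * np.outer(u, u) + W                          # low-rank mean plus noise

vals, vecs = np.linalg.eigh(H)
v = vecs[:, -1] * np.sign(vecs[:, -1] @ u)          # top empirical eigenvector, sign-aligned
print(vals[-1], d + 1 / d)                          # spiked eigenvalue vs first-order expansion
print(v @ u)                                        # linear form <a, v> with a = u, close to 1
```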

    High dimensional thresholded regression and shrinkage effect

    High-dimensional sparse modeling via regularization provides a powerful tool for analyzing large-scale data sets and obtaining meaningful, interpretable models. The use of nonconvex penalty functions shows advantage in selecting important features in high dimensions, but the global optimality of such methods still demands more understanding. In this paper, we consider sparse regression with a hard-thresholding penalty, which we show to give rise to thresholded regression. This approach is motivated by its close connection with $L_0$-regularization, which can be unrealistic to implement in practice but has appealing sampling properties, and by its computational advantage. Under some mild regularity conditions allowing possibly exponentially growing dimensionality, we establish the oracle inequalities of the resulting regularized estimator, as the global minimizer, under various prediction and variable selection losses, as well as the oracle risk inequalities of the hard-thresholded estimator followed by a further $L_2$-regularization. The risk properties exhibit interesting shrinkage effects under both estimation and prediction losses. We identify the optimal choice of the ridge parameter, which is shown to benefit both the $L_2$-loss and the prediction loss simultaneously. These new results and phenomena are evidenced by simulation and real data examples. Comment: 23 pages, 3 figures, 5 tables
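
    As a caricature only of "hard-thresholding followed by a further $L_2$-regularization" (the paper analyzes the global minimizer, which this two-step shortcut does not compute), one can select a support by hard thresholding and then ridge-refit on it:

```python
import numpy as np

def hard_threshold_ridge(X, y, lam, ridge=0.0):
    """Two-step caricature: (1) hard-threshold marginal estimates to pick a
    support, (2) refit on the support with an optional L2 (ridge) penalty.
    With ridge=0 this reduces to a least-squares refit on the support."""
    n, p = X.shape
    beta_init = X.T @ y / n                    # crude per-coordinate proxy estimates
    S = np.flatnonzero(np.abs(beta_init) > lam)
    Xs = X[:, S]
    beta_S = np.linalg.solve(Xs.T @ Xs + ridge * np.eye(len(S)), Xs.T @ y)
    beta = np.zeros(p)
    beta[S] = beta_S
    return beta
```

    The abstract's point about the ridge parameter is visible even in this caricature: a positive `ridge` shrinks the refit, which can help both the $L_2$-loss and the prediction loss relative to an unshrunken refit.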

    Dynamic Principles of Center of Mass in Human Walking

    We present results of an analytic and numerical calculation that studies the relationship between the time of initial foot contact and the ground reaction force of human gait and explores the dynamic principle of the center of mass. Assuming the ground reaction force of both feet to be the same in the same phase of a stride cycle, we establish the relationships between the time of initial foot contact and the ground reaction force, acceleration, velocity, displacement and average kinetic energy of the center of mass. We use the dispersion of these physical quantities to analyze how the time of initial foot contact affects them. Our study reveals that these quantities reach extrema when the time of one foot's initial contact falls exactly in the middle of the other foot's stride cycle. Identifying an action function with the dispersion of the physical quantities, we use an optimization analysis to establish a least-action principle for gait. Besides its significance for research domains such as clinical diagnosis and biped robot gait control, this principle can simplify our understanding of the basic properties of gait. Comment: 16 pages, 5 figures
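
    The mid-cycle extremum claim can be probed in a toy force model: give each foot an identical half-sine ground-reaction pulse per stride and vary the second foot's contact time; the dispersion (variance) of the net force, and hence of the center-of-mass acceleration, is smallest when the offset is half a cycle. The pulse shape and duty factor below are our assumptions, not the paper's:

```python
import numpy as np

# Toy model: each foot's vertical ground reaction force over one stride
# cycle T is an identical half-sine pulse during stance (duty factor 0.6);
# the second foot's pulse is shifted by the initial-contact time s.
T, duty, N = 1.0, 0.6, 2000
t = np.linspace(0.0, T, N, endpoint=False)

def grf(t, shift=0.0):
    phase = (t - shift) % T
    return np.where(phase < duty * T, np.sin(np.pi * phase / (duty * T)), 0.0)

shifts = np.linspace(0.0, T, 200, endpoint=False)
# Dispersion (variance) of the net force, i.e. of the COM acceleration.
disp = [np.var(grf(t) + grf(t, s)) for s in shifts]
print(shifts[np.argmin(disp)])   # minimized near s = T/2: mid-cycle contact
```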

    Nonuniformity of P-values Can Occur Early in Diverging Dimensions

    Evaluating the joint significance of covariates is of fundamental importance in a wide range of applications. To this end, p-values are frequently employed and produced by algorithms that are powered by classical large-sample asymptotic theory. It is well known that the conventional p-values in the Gaussian linear model are valid even when the dimensionality is a non-vanishing fraction of the sample size, but can break down when the design matrix becomes singular in higher dimensions or when the error distribution deviates from Gaussianity. A natural question is when the conventional p-values in generalized linear models become invalid in diverging dimensions. We establish that such a breakdown can occur early in nonlinear models. Our theoretical characterizations are confirmed by simulation studies. Comment: 23 pages including 8 figures
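
    A small simulation in the spirit of this question, using statsmodels' Wald p-values for logistic regression under the global null; the particular n and p grid is ours. Classical theory predicts the rejection fraction at nominal level 0.05 stays near 0.05, and the paper's message is that this can fail already at moderate p/n:

```python
import numpy as np
import statsmodels.api as sm

def null_rejection_rate(n, p, reps=100, level=0.05, seed=0):
    """Simulate logistic regression under the global null and report how
    often the Wald p-value of the first coefficient falls below `level`;
    classical asymptotics predict a fraction close to `level`."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        y = rng.integers(0, 2, size=n)         # response independent of the design
        res = sm.Logit(y, X).fit(disp=0)
        hits += res.pvalues[0] < level
    return hits / reps

n = 1000
for p in (10, 100, 300):                       # p/n kept below the MLE-existence threshold
    print(p, null_rejection_rate(n, p))
```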