
    Principal Fitted Components for Dimension Reduction in Regression

    We provide a remedy for two concerns that have dogged the use of principal components in regression: (i) principal components are computed from the predictors alone and do not make apparent use of the response, and (ii) principal components are not invariant or equivariant under full-rank linear transformations of the predictors. The development begins with principal fitted components [Cook, R. D. (2007). Fisher lecture: Dimension reduction in regression (with discussion). Statist. Sci. 22 1--26] and uses normal models for the inverse regression of the predictors on the response to gain reductive information for the forward regression of interest. This approach includes methodology for testing hypotheses about the number of components and about conditional independencies among the predictors.

    Comment: Published at http://dx.doi.org/10.1214/08-STS275 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
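
    A minimal numerical sketch of the isotropic-error special case of this approach, in Python: the inverse regression of the predictors on a polynomial basis f(y) is fitted by least squares, and the reduction is spanned by the top eigenvectors of the covariance of the fitted values. The basis degree, the function name pfc_isotropic, and the toy data are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def pfc_isotropic(X, y, d=1, degree=3):
            """Rank-d principal fitted components, isotropic-error case.

            Under X | y ~ N(mu + Gamma beta f(y), sigma^2 I), span(Gamma) is
            estimated by the top-d eigenvectors of the covariance of the
            fitted values from regressing the predictors on f(y).
            """
            n, p = X.shape
            Xc = X - X.mean(axis=0)                      # center the predictors
            F = np.column_stack([y**k for k in range(1, degree + 1)])
            F = F - F.mean(axis=0)                       # centered basis f(y)
            B, *_ = np.linalg.lstsq(F, Xc, rcond=None)   # multivariate OLS of Xc on F
            fitted = F @ B                               # projection of Xc onto span(F)
            Sigma_fit = fitted.T @ fitted / n            # covariance of fitted values
            eigval, eigvec = np.linalg.eigh(Sigma_fit)   # eigenvalues in ascending order
            Gamma_hat = eigvec[:, -d:]                   # top-d eigenvectors
            return Gamma_hat, Xc @ Gamma_hat             # basis and reduced predictors

        # Toy forward regression: y relates to X through one linear combination.
        rng = np.random.default_rng(0)
        n, p = 500, 10
        y = rng.normal(size=n)
        gamma = np.zeros(p); gamma[0] = 1.0
        X = np.outer(y + 0.5 * y**2, gamma) + 0.3 * rng.normal(size=(n, p))
        Gamma_hat, R = pfc_isotropic(X, y, d=1)
        print(np.abs(Gamma_hat[:, 0]))                   # mass concentrates on coordinate 0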

    On the level-slope-curvature effect in yield curves and eventual total positivity

    Principal components analysis has become widely used in a variety of fields. In finance and, more specifically, in the theory of interest rate derivative modeling, its use was pioneered by Litterman and Scheinkman [J. Fixed Income, 1 (1991), pp. 54--61]. Their key finding was that a few components explain most of the variance of treasury zero-coupon rates and that the first three eigenvectors represent level, slope, and curvature (LSC) changes on the curve. This result has since been observed in various markets. Over the years, there have been several attempts at modeling correlation matrices displaying the observed effects, as well as at understanding which properties of those matrices are responsible for them. Using recent results from the theory of total positivity [O. Kushel, Matrices with Totally Positive Powers and Their Generalizations, 2014], we characterize these matrices and, as an application, we shed light on the critique of the methodology raised by Lekkos [J. Derivatives, 8 (2000), pp. 72--83].
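
    A quick hedged illustration of the effect in Python: the exponentially decaying correlation matrix exp(-beta |t_i - t_j|) is a classical example of an oscillatory (eventually totally positive) matrix, so by Gantmacher--Krein theory its k-th eigenvector has exactly k-1 sign changes, which is what "level, slope, curvature" encodes. The maturity grid and beta below are arbitrary choices, not calibrated market data.

        import numpy as np

        t = np.linspace(0.5, 10.0, 12)                  # hypothetical maturities, in years
        beta = 0.3
        C = np.exp(-beta * np.abs(np.subtract.outer(t, t)))

        eigval, eigvec = np.linalg.eigh(C)
        eigval, eigvec = eigval[::-1], eigvec[:, ::-1]  # sort eigenvalues descending

        # Count sign changes in the leading eigenvectors: 0 (level), 1 (slope), 2 (curvature).
        for k in range(3):
            v = eigvec[:, k]
            changes = np.sum(np.diff(np.sign(v)) != 0)
            print(f"eigenvector {k + 1}: {changes} sign change(s), "
                  f"explains {eigval[k] / eigval.sum():.1%} of variance")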

    Estimating sufficient reductions of the predictors in abundant high-dimensional regressions

    We study the asymptotic behavior of a class of methods for sufficient dimension reduction in high-dimensional regressions, as the sample size and the number of predictors grow in various alignments. It is demonstrated that these methods are consistent in a variety of settings, particularly in abundant regressions, where most predictors contribute some information on the response, and that oracle rates are possible. Simulation results are presented to support the theoretical conclusions.

    Comment: Published at http://dx.doi.org/10.1214/11-AOS962 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
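
    A toy simulation sketch of the abundance phenomenon in Python, using a deliberately simple coordinatewise least-squares estimator rather than any estimator from the paper: with the sample size held fixed, the estimated one-dimensional reduction tracks the true reduction better as the number of individually weak but informative predictors grows, because signal accumulates across coordinates faster than estimation noise.

        import numpy as np

        rng = np.random.default_rng(1)

        def reduction_correlation(p, n=50):
            """Fit the abundant inverse model X = gamma * y + eps by coordinatewise
            OLS on n samples, then measure how well the estimated reduction
            gamma_hat' X tracks the true gamma' X on fresh data."""
            gamma = rng.uniform(0.1, 0.3, size=p)        # every predictor weakly informative
            y = rng.normal(size=n)
            X = np.outer(y, gamma) + rng.normal(size=(n, p))
            yc = y - y.mean()
            gamma_hat = X.T @ yc / (yc @ yc)             # per-coordinate OLS slopes
            y_new = rng.normal(size=2000)                # fresh evaluation sample
            X_new = np.outer(y_new, gamma) + rng.normal(size=(2000, p))
            return np.corrcoef(X_new @ gamma_hat, X_new @ gamma)[0, 1]

        # With n fixed at 50, accuracy of the reduction improves as p grows.
        for p in (10, 100, 1000):
            print(p, f"{reduction_correlation(p):.4f}")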

    On the maximal function for the generalized Ornstein-Uhlenbeck semigroup

    In this note we consider the maximal function for the generalized Ornstein-Uhlenbeck semigroup in $\mathbb{R}$ associated with the generalized Hermite polynomials $\{H_n^{\mu}\}$ and prove that it is of weak type (1,1) with respect to $d\lambda_{\mu}(x) = |x|^{2\mu} e^{-|x|^2}\, dx$ for $\mu > -1/2$, as well as bounded on $L^p(d\lambda_\mu)$ for $p > 1$.

    Comment: 10 pages. See also http://euler.ciens.ucv.ve/~wurbina/preprints.htm
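
    For reference, the two boundedness statements asserted above, written out using the standard definitions (the symbol $\mathcal{M}_\mu$ for the maximal operator is our notation; $\{T_t^{\mu}\}$ denotes the semigroup associated with $\{H_n^{\mu}\}$):

        \[
          \mathcal{M}_\mu f(x) \;=\; \sup_{t>0} \bigl| T_t^{\mu} f(x) \bigr|,
        \]
        \[
          \lambda_\mu\bigl(\{x \in \mathbb{R} : \mathcal{M}_\mu f(x) > \alpha\}\bigr)
          \;\le\; \frac{C}{\alpha}\, \|f\|_{L^1(d\lambda_\mu)} \quad (\alpha > 0),
          \qquad
          \|\mathcal{M}_\mu f\|_{L^p(d\lambda_\mu)} \;\le\; C_p\, \|f\|_{L^p(d\lambda_\mu)} \quad (p > 1).
        \]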