
    A new algorithm for estimating the effective dimension-reduction subspace

    The statistical problem of estimating the effective dimension-reduction (EDR) subspace in the multi-index regression model with deterministic design and additive noise is considered. A new procedure for recovering the directions of the EDR subspace is proposed. Under mild assumptions, √n-consistency of the proposed procedure is proved (up to a logarithmic factor) in the case when the structural dimension is not larger than 4. The empirical behavior of the algorithm is studied through numerical simulations.
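    The abstract leaves the procedure itself unstated, so the following is a minimal sketch of a related gradient-based estimator, the outer product of gradients (OPG), for the same multi-index model y = g(Bᵀx) + noise; it is not the paper's algorithm, and the bandwidth h, dimensions, and link function are illustrative choices only.

```python
import numpy as np

# Multi-index model: y = g(B^T x) + noise, structural dimension m.
# OPG sketch (not the paper's procedure): estimate the regression gradient
# at each design point by a kernel-weighted local linear fit, then take the
# top eigenvectors of the averaged gradient outer products as the EDR estimate.
rng = np.random.default_rng(0)
n, d, m = 400, 10, 2
B = np.linalg.qr(rng.normal(size=(d, m)))[0]      # true EDR directions
X = rng.normal(size=(n, d))
Z = X @ B
y = np.sin(Z[:, 0]) + Z[:, 1] ** 2 + 0.1 * rng.normal(size=n)

h = 2.0                                           # crude, fixed bandwidth
M = np.zeros((d, d))
for i in range(n):
    w = np.exp(-np.sum((X - X[i]) ** 2, axis=1) / (2 * h ** 2))
    sw = np.sqrt(w)
    A = np.column_stack([np.ones(n), X - X[i]]) * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
    g = coef[1:]                                  # local gradient estimate
    M += np.outer(g, g) / n

vals, vecs = np.linalg.eigh(M)
B_hat = vecs[:, -m:]                              # estimated EDR subspace
print("projection error:", np.linalg.norm(B_hat @ B_hat.T - B @ B.T))
```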

    Non-Gaussian component analysis: New ideas, new proofs, new applications

    In this article, we present new ideas concerning Non-Gaussian Component Analysis (NGCA). We use the structural assumption that a high-dimensional random vector X can be represented as a sum of two components: a low-dimensional signal S and a noise component N. We show that this assumption yields a special representation for the density function of X. Similar facts were proven in the original papers on NGCA, but our representation differs from the previous versions. The new form helps us provide strong theoretical support for the algorithm; moreover, it suggests new approaches in multidimensional statistical analysis. In this paper, we establish important results for the NGCA procedure using the new representation, and show the benefits of our method.
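    As a concrete illustration of the structural assumption X = S + N, the sketch below implements the classical NGCA estimation idea from the original papers, not this article's new representation: after whitening, the vectors β(h) = E[X h(X)] − E[∇h(X)] vanish for Gaussian data by Stein's identity, so they lie approximately in the non-Gaussian signal subspace, and principal directions of a collection of them estimate that subspace. The test functions h and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 2000, 8, 2
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, m))   # non-Gaussian signal
N = rng.normal(size=(n, d - m))                         # Gaussian component
X = np.hstack([S, N]) @ rng.normal(size=(d, d))         # mixed observation

# Whiten so the data have identity covariance
X = X - X.mean(axis=0)
L = np.linalg.cholesky(np.cov(X.T))
Xw = X @ np.linalg.inv(L).T

betas = []
for _ in range(50):
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    hx = np.tanh(Xw @ w)                      # test function h(x) = tanh(w^T x)
    grad = (1 - hx ** 2)[:, None] * w         # its gradient (1 - tanh^2) w
    betas.append((Xw * hx[:, None]).mean(axis=0) - grad.mean(axis=0))

# The beta vectors concentrate in the m-dimensional non-Gaussian subspace:
# expect a gap after the m-th singular value
_, svals, Vt = np.linalg.svd(np.array(betas))
subspace = Vt[:m]
print("leading singular values:", np.round(svals[: m + 2], 4))
```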

    Minimax testing of a composite null hypothesis defined via a quadratic functional in the model of regression

    We consider the problem of testing a particular type of composite null hypothesis under a nonparametric multivariate regression model. For a given quadratic functional Q, the null hypothesis states that the regression function f satisfies the constraint Q[f] = 0, while the alternative corresponds to the functions for which Q[f] is bounded away from zero. On the one hand, we provide minimax rates of testing and the exact separation constants, along with a sharp-optimal testing procedure, for diagonal and nonnegative quadratic functionals. We consider smoothness classes of ellipsoidal form and check that our conditions are fulfilled in the particular case of ellipsoids corresponding to anisotropic Sobolev classes; in this case, we present a closed form of the minimax rate and the separation constant. On the other hand, minimax rates for quadratic functionals which are neither positive nor negative exhibit two different regimes: "regular" and "irregular". In the "regular" case the minimax rate is equal to n^{-1/4}, while in the "irregular" case the rate depends on the smoothness class and is slower than in the "regular" case. We apply this to the problem of testing the equality of norms of two functions observed in noisy environments.
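    To make the "neither positive nor negative" case concrete, here is a toy sketch in an idealized Gaussian sequence model for Q[f] = ||f1||² − ||f2||², the equality-of-norms problem mentioned above; the plug-in normal threshold is a crude stand-in, not the paper's sharp-optimal procedure or exact separation constant, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, D = 10_000, 200                    # effective sample size, truncation level
sigma = 1 / np.sqrt(n)                # sequence-model noise level

theta = 1.0 / np.arange(1, D + 1)     # coefficients of f1 (Sobolev-type decay)
eta = theta.copy()                    # under H0: equal norms, Q[f] = 0
Y1 = theta + sigma * rng.normal(size=D)
Y2 = eta + sigma * rng.normal(size=D)

# The noise variance cancels in the difference of squares, so T is an
# unbiased estimate of Q = sum_j (theta_j^2 - eta_j^2)
T = np.sum(Y1 ** 2 - Y2 ** 2)

# Var(T) = sum_j [4 sigma^4 + 4 sigma^2 (theta_j^2 + eta_j^2)]; plugging in
# Y1, Y2 for the unknown coefficients is slightly conservative but fine here
var_hat = 4 * sigma ** 4 * D + 4 * sigma ** 2 * np.sum(Y1 ** 2 + Y2 ** 2)
reject = abs(T) > 1.96 * np.sqrt(var_hat)
print(f"T = {T:.4f}, reject H0: {reject}")
```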

    Test function: A new approach for covering the central subspace

    In this paper we offer a complete methodology for sufficient dimension reduction called the test function (TF) approach. TF provides a new family of methods for the estimation of the central subspace (CS) based on the introduction of a nonlinear transformation of the response. The theoretical background of TF is developed under weaker conditions than those of existing methods. By considering order-1 and order-2 conditional moments of the predictor given the response, we divide TF into two classes. In each class we provide conditions that guarantee an exhaustive estimation of the CS. Moreover, the optimal members are obtained by minimizing the asymptotic mean squared error derived from the distance between the CS and its estimate. This leads to two plug-in methods, which are evaluated with several simulations.
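    The sketch below illustrates the first-order idea with a handful of ad hoc transformations ψ of the response, chosen here for illustration rather than the optimal members derived in the paper: under a linearity condition on the predictor, each vector Σ⁻¹ Cov(X, ψ(Y)) lies in the CS, so the principal directions of several such vectors estimate it.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 2000, 6, 2
B = np.linalg.qr(rng.normal(size=(d, m)))[0]      # true CS basis
X = rng.normal(size=(n, d))                       # Gaussian X: linearity holds
Z = X @ B
y = Z[:, 0] ** 3 + Z[:, 1] + 0.2 * rng.normal(size=n)

Xc = X - X.mean(axis=0)
Sigma_inv = np.linalg.inv(np.cov(X.T))

# Ad hoc test functions of the response; each yields one vector in the CS
psis = [y, y ** 2, np.sin(y), np.tanh(y), np.abs(y)]
V = np.column_stack([
    Sigma_inv @ (Xc * (p - p.mean())[:, None]).mean(axis=0) for p in psis
])

# Principal directions of the collected vectors span the CS estimate;
# whether a first-order family is exhaustive is exactly the kind of
# condition the paper makes precise
U, _, _ = np.linalg.svd(V)
B_hat = U[:, :m]
print("projection error:", np.linalg.norm(B_hat @ B_hat.T - B @ B.T))
```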

    Slice inverse regression with score functions

    We consider non-linear regression problems where we assume that the response depends non-linearly on a linear projection of the covariates. We propose score function extensions to sliced inverse regression problems, for both first-order and second-order score functions. We show that they provably improve estimation in the population case over the non-sliced versions, and we study finite-sample estimators and their consistency given the exact score functions. We also propose to learn the score function as well, either in two steps, i.e., first learning the score function and then learning the effective dimension-reduction space, or directly, by solving a convex optimization problem regularized by the nuclear norm. We illustrate our results on a series of experiments.
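    A minimal sketch of the first-order sliced estimator follows, under the simplifying assumption (for illustration only) that X is Gaussian with estimated mean and covariance, so its score is linear, s(x) = Σ⁻¹(x − μ) up to sign; slice means of the score then span the dimension-reduction space. Learned scores, the second-order variant, and the nuclear-norm formulation from the abstract are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m, n_slices = 3000, 8, 1, 10
B = np.linalg.qr(rng.normal(size=(d, m)))[0]       # true EDR direction
X = rng.normal(size=(n, d))
y = np.exp(X @ B[:, 0]) + 0.1 * rng.normal(size=n)

mu, Sigma_inv = X.mean(axis=0), np.linalg.inv(np.cov(X.T))
score = (X - mu) @ Sigma_inv                       # Gaussian score (up to sign)

# Slice the response into quantile bins and average the score within slices
edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
sl = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_slices - 1)
M = np.zeros((d, d))
for s in range(n_slices):
    mask = sl == s
    mbar = score[mask].mean(axis=0)                # slice mean of the score
    M += mask.mean() * np.outer(mbar, mbar)

vals, vecs = np.linalg.eigh(M)
B_hat = vecs[:, -m:]                               # estimated EDR direction
print("alignment |<B, B_hat>|:", abs((B.T @ B_hat).item()))
```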