3 research outputs found

    Tobit Model Estimation and Sliced Inverse Regression

    It is not unusual for the response variable in a regression model to be subject to censoring or truncation. Tobit regression models are a specific example of such a situation: for some observations the observed response is not the actual response but rather the censoring value (often zero), together with an indicator that censoring from below has occurred. It is well known that the maximum likelihood estimator for such a linear model (assuming Gaussian errors) is not consistent if the errors are heteroscedastic or non-normal. In this paper we consider estimation in the Tobit regression context when those conditions do not hold, as well as when the true response is an unspecified nonlinear function of linear terms, using sliced inverse regression (SIR). The properties of SIR estimation for Tobit models are explored both theoretically and through Monte Carlo simulations. It is shown that the SIR estimator has good properties when the usual linear model assumptions hold, and can be much more effective than other estimators when they do not. An example involving household charitable donations demonstrates the usefulness of the estimator.
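    As a point of reference, the sketch below shows the basic SIR estimator (Li, 1991) on which such an approach builds: standardize the predictors, slice the observations by the response, and eigen-decompose the weighted covariance of the within-slice means. The function name sir_directions and the equal-count slicing on the observed response are illustrative assumptions; the abstract does not specify how the paper handles censored observations in the slicing step (one natural variant, not shown, places all censored cases in a slice of their own).

    import numpy as np

    def sir_directions(X, y, n_slices=10, n_directions=1):
        """Minimal sliced inverse regression (Li, 1991) sketch:
        standardize X, slice by the order of y, and take the top
        eigenvectors of the weighted covariance of slice means."""
        n, p = X.shape
        # Standardize the predictors: Z = (X - mean) Sigma^{-1/2}
        # (assumes the sample covariance is nonsingular)
        Xc = X - X.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
        Z = Xc @ inv_sqrt

        # Weighted covariance of within-slice means over
        # equal-count slices along the sorted response
        M = np.zeros((p, p))
        for idx in np.array_split(np.argsort(y), n_slices):
            m = Z[idx].mean(axis=0)              # within-slice mean of Z
            M += (len(idx) / n) * np.outer(m, m)

        # Leading eigenvectors of M, mapped back to the X scale
        _, v = np.linalg.eigh(M)
        beta = inv_sqrt @ v[:, ::-1][:, :n_directions]
        return beta / np.linalg.norm(beta, axis=0)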

    Collaborative Sliced Inverse Regression

    Sliced Inverse Regression (SIR) is an effective method for dimensionality reduction in high-dimensional regression problems. However, the method imposes requirements on the distribution of the predictors that are hard to check, since they depend on unobserved variables. It has been shown that these requirements are satisfied if the distribution of the predictors is elliptical. In the case of mixture models, ellipticity is violated, and in addition there is no assurance of a single underlying regression model across the different components. Our approach clusters the predictor space to force the condition to hold on each cluster, and includes a merging technique to look for different underlying models in the data. A study on simulated data and two real applications are provided. It appears that SIR, unsurprisingly, is not capable of dealing with a mixture of Gaussians involving different underlying models, whereas our approach is able to correctly identify the mixture.
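    A hedged sketch of the cluster-then-SIR idea described above follows, reusing the sir_directions helper from the previous sketch. The Gaussian-mixture clustering step and the name collaborative_sir are assumptions for illustration; the merging technique the abstract mentions (comparing estimated directions across clusters and pooling those that agree) is omitted.

    from sklearn.mixture import GaussianMixture

    def collaborative_sir(X, y, n_clusters=2, n_slices=10):
        """Cluster the predictor space so the design condition can
        plausibly hold within each cluster, then run ordinary SIR
        separately per cluster (merging step omitted)."""
        labels = GaussianMixture(n_components=n_clusters,
                                 random_state=0).fit_predict(X)
        directions = {}
        for k in range(n_clusters):
            mask = labels == k
            directions[k] = sir_directions(X[mask], y[mask],
                                           n_slices=n_slices)
        return labels, directions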

    Sparse group sufficient dimension reduction and covariance cumulative slicing estimation

    This dissertation contains two main parts. In Part One, for regression problems with grouped covariates, we adapt the idea of the sparse group lasso (Friedman et al., 2010) to the framework of sufficient dimension reduction. We propose a method called sparse group sufficient dimension reduction (sgSDR) to conduct group and within-group variable selection simultaneously without assuming a specific model structure on the regression function. Simulation studies show that our method is comparable to the sparse group lasso under the regular linear model setting, and outperforms the sparse group lasso with higher true positive rates and substantially lower false positive rates when the regression function is nonlinear and/or the error distributions are non-Gaussian. One immediate application of our method is to gene pathway data analysis, where genes naturally fall into groups (pathways). An analysis of a glioblastoma microarray data set is included to illustrate the method.

    In Part Two, for many-valued or continuous Y, the standard practice of replacing the response Y by a discrete version usually results in a loss of power because intra-slice information is ignored. Most existing slicing methods also rely heavily on the selection of the number of slices h. Zhu et al. (2010) proposed a method called cumulative slicing estimation (CUME) which avoids the otherwise subjective selection of h. In this dissertation, we revisit CUME from a different perspective to gain more insight, and then refine its performance by incorporating the intra-slice covariances. The resulting new method, which we call covariance cumulative slicing estimation (COCUM), is comparable to CUME when the predictors are normally distributed, and outperforms CUME when the predictors are non-Gaussian, especially in the presence of outliers. Asymptotic results for COCUM are also established.
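    For concreteness, here is a minimal sketch of the CUME kernel of Zhu et al. (2010) under the same standardization as ordinary SIR: rather than fixing a number of slices h, it averages the outer product of m(c) = E[Z 1(Y <= c)] over all observed cut points c. The function name cume_directions is an assumption, and COCUM's intra-slice covariance refinement is not shown.

    import numpy as np

    def cume_directions(X, y, n_directions=1):
        """Cumulative slicing estimation (CUME) sketch: average the
        kernel m(c) m(c)^T over every observed cut point c, where
        m(c) = E[Z 1(Y <= c)] and Z is the standardized predictor."""
        n, p = X.shape
        # Standardize the predictors (assumes nonsingular covariance)
        Xc = X - X.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
        Z = Xc @ inv_sqrt

        M = np.zeros((p, p))
        for cut in y:
            m = Z[y <= cut].sum(axis=0) / n  # empirical E[Z 1(Y <= cut)]
            M += np.outer(m, m) / n          # average over cut points

        _, v = np.linalg.eigh(M)
        beta = inv_sqrt @ v[:, ::-1][:, :n_directions]
        return beta / np.linalg.norm(beta, axis=0)

    The loop over all cut points makes this O(n^2 p); the sketch is meant to show the estimator's structure, not an efficient implementation.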