
    Quantile regression with group lasso for classification

    Applications of regression models for binary responses are very common, and models specific to these problems are widely used. Quantile regression for binary response data has recently attracted attention, and regularized quantile regression methods have been proposed for high-dimensional problems. When the predictors have a natural group structure, as when categorical predictors are converted into dummy variables, a group lasso penalty is used in regularized methods. In this paper, we present a Bayesian Gibbs sampling procedure to estimate the parameters of a quantile regression model under a group lasso penalty for classification problems with a binary response. Simulated and real data show that the proposed method performs well in comparison to mean-based approaches and to quantile-based approaches that do not exploit the group structure of the predictors.
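    The penalized objective that such a model targets combines the quantile check loss with a group-lasso penalty over the predictor groups. A minimal numerical sketch follows (function names are ours, and this shows only the plain penalized objective; the paper itself estimates the model with a Bayesian Gibbs sampler):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def group_lasso_objective(beta, X, y, tau, groups, lam):
    """Check-loss fit plus a group-lasso penalty lam * sum_g ||beta_g||_2,
    where `groups` lists the indices of each predictor group (e.g. the
    dummy variables derived from one categorical predictor)."""
    loss = np.sum(check_loss(y - X @ beta, tau))
    penalty = lam * sum(np.linalg.norm(beta[g]) for g in groups)
    return loss + penalty
```

    The group penalty shrinks whole blocks of coefficients toward zero together, so a categorical predictor is kept or dropped as a unit rather than dummy by dummy.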

    Variational Inference for Quantile Regression

    Quantile regression (QR; Koenker and Bassett, 1978) is an alternative to classic linear regression with extensive applications in many fields. This thesis studies Bayesian quantile regression (Yu and Moyeed, 2001) using variational inference, one of the alternatives to Markov chain Monte Carlo (MCMC) for approximating intractable posterior distributions. Lasso regularization has been shown to improve the accuracy of quantile regression (Li and Zhu, 2008). This thesis develops variational inference for quantile regression and for regularized quantile regression with the lasso penalty. Simulation results show that variational inference is a computationally more efficient alternative to MCMC while providing comparable accuracy.

    Bayesian adaptive lasso quantile regression

    Recently, variable selection by penalized likelihood has attracted much research interest. In this paper, we propose Bayesian adaptive Lasso quantile regression (BALQR). The method extends Bayesian Lasso quantile regression by allowing a different penalization parameter for each regression coefficient. Inverse gamma prior distributions are placed on the penalty parameters. We treat the hyperparameters of the inverse gamma prior as unknowns and estimate them along with the other parameters. A Gibbs sampler is developed to simulate the parameters from their posterior distributions. Through simulation studies and the analysis of a prostate cancer dataset, we compare the proposed BALQR method with six existing Bayesian and non-Bayesian methods. Both the simulation studies and the prostate cancer data analysis indicate that BALQR performs well in comparison to the other approaches.
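    The distinguishing feature of the adaptive Lasso is that every coefficient carries its own penalty parameter, instead of one shared parameter. As a sketch of the penalty term only (function name ours; in BALQR these per-coefficient parameters are given inverse gamma priors and sampled in the Gibbs scheme rather than fixed):

```python
import numpy as np

def adaptive_lasso_penalty(beta, lam):
    """Adaptive lasso penalty sum_j lam[j] * |beta[j]|: each coefficient
    has its own penalty parameter.  The plain lasso is recovered when all
    entries of lam are equal."""
    return float(np.sum(np.asarray(lam) * np.abs(np.asarray(beta))))
```

    Larger lam[j] shrinks beta[j] more aggressively, which lets the method penalize apparently unimportant coefficients harder than important ones.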

    Penalized flexible Bayesian quantile regression

    Copyright © 2012 SciRes. This article has been made available through the Brunel Open Access Publishing Fund. The selection of predictors plays a crucial role in building a multiple regression model: the choice of a suitable subset of predictors can improve both prediction accuracy and interpretation. In this paper, we propose flexible Bayesian Lasso and adaptive Lasso quantile regression, introducing a hierarchical model framework to enable exact inference and shrinkage of unimportant coefficients to zero. The error distribution is assumed to be an infinite mixture of Gaussian densities. We theoretically investigate and numerically compare the proposed methods with flexible Bayesian quantile regression (FBQR), Lasso quantile regression (LQR) and quantile regression (QR). Simulations and real data studies are conducted under different settings to assess the performance of the proposed methods, which perform well in comparison to the other methods in terms of median mean squared error and the mean and variance of the absolute correlation criteria. We believe the proposed methods are practically useful.
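    The "flexible" part is the error model: instead of a fixed parametric error density, the errors follow a mixture of Gaussians. Evaluating such a density is straightforward; a sketch for a finite truncation (function name ours; the paper works with an infinite mixture, but any truncation to K components evaluates the same way):

```python
import math
from statistics import NormalDist

def gaussian_mixture_density(e, weights, means, sds):
    """Error density modelled as a mixture of Gaussians,
    f(e) = sum_k w_k * N(e; m_k, s_k^2), with weights summing to one."""
    return sum(w * NormalDist(m, s).pdf(e)
               for w, m, s in zip(weights, means, sds))
```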

    Implicit Copulas from Bayesian Regularized Regression Smoothers

    We show how to extract the implicit copula of a response vector from a Bayesian regularized regression smoother with Gaussian disturbances. The copula can be used to compare smoothers that employ different shrinkage priors and function bases. We illustrate with three popular choices of shrinkage prior (a pairwise prior, the horseshoe prior, and a g prior augmented with a point mass, as employed for Bayesian variable selection) and with both univariate and multivariate function bases. The implicit copulas are high-dimensional, have flexible dependence structures far from that of a Gaussian copula, and are unavailable in closed form. However, we show how they can be evaluated by first constructing a Gaussian copula conditional on the regularization parameters and then integrating over those parameters. Combined with nonparametric margins, the regularized smoothers can be used to model the distribution of non-Gaussian univariate responses conditional on the covariates. Efficient Markov chain Monte Carlo schemes for evaluating the copula are given for this case. Using both simulated and real data, we show how such copula smoothing models can improve the quality of the resulting function estimates and predictive distributions.
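    The conditional copula in the first step is Gaussian. For intuition only, here is the closed-form density of a bivariate Gaussian copula (function name ours; the implicit copulas in the paper are high-dimensional and require the integration over regularization parameters, typically by averaging over MCMC draws):

```python
import math
from statistics import NormalDist

def bivariate_gaussian_copula_density(u1, u2, rho):
    """Density of a bivariate Gaussian copula with correlation rho:
    c(u1, u2) = phi2(z1, z2; rho) / (phi(z1) * phi(z2)), z_i = Phi^{-1}(u_i).
    Written out, this equals
    exp(-(rho^2 * (z1^2 + z2^2) - 2*rho*z1*z2) / (2*(1 - rho^2))) / sqrt(1 - rho^2)."""
    nd = NormalDist()
    z1, z2 = nd.inv_cdf(u1), nd.inv_cdf(u2)
    quad = rho * rho * (z1 * z1 + z2 * z2) - 2.0 * rho * z1 * z2
    return math.exp(-quad / (2.0 * (1.0 - rho * rho))) / math.sqrt(1.0 - rho * rho)
```

    At rho = 0 the density is identically one (independence), which is a handy sanity check for any copula implementation.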

    Penalized single-index quantile regression

    This article is made available through the Brunel Open Access Publishing Fund. Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). The single-index (SI) regression and single-index quantile (SIQ) estimation methods produce linear combinations of all the original predictors. However, many of the original predictors may be unimportant, and their presence affects both the precision of parameter estimation and the accuracy of prediction when these methods are used. In this article, we propose an extension of the SIQ method of Wu et al. (2010) that uses the Lasso and adaptive Lasso for estimation and variable selection. Computational algorithms are developed to calculate the penalized SIQ estimates. A simulation study and a real data application assess the performance of the methods under consideration.
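    A single-index model reduces the predictors to one linear combination, the "index", on which an unknown univariate function is then estimated. A sketch of the index step (function name and the sign convention shown are ours, reflecting the usual identifiability constraint of unit-norm beta with positive first nonzero entry):

```python
import numpy as np

def single_index(X, beta):
    """Project predictors onto a single direction, with beta normalized to
    unit length and sign-fixed so its first nonzero entry is positive.
    An SIQ model then estimates the conditional quantile as an unknown
    univariate function g of this index; the lasso-penalized versions
    shrink entries of beta to exactly zero."""
    beta = np.asarray(beta, dtype=float)
    beta = beta / np.linalg.norm(beta)
    if beta[np.nonzero(beta)[0][0]] < 0:
        beta = -beta
    return X @ beta, beta
```

    Zeroing an entry of beta removes that predictor from the index entirely, which is how the penalized variants perform variable selection.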