Bayesian Tobit quantile regression using g-prior distribution with ridge parameter
A Bayesian approach is proposed for coefficient estimation in the Tobit quantile regression model. The proposed approach is based on placing a g-prior distribution that depends on the quantile level on the regression coefficients. The prior is generalized by introducing a ridge parameter to address important challenges that may arise with censored data, such as multicollinearity and overfitting. A stochastic search variable selection approach is then proposed for the Tobit quantile regression model based on the g-prior. An expression for the hyperparameter g is proposed to calibrate the modified g-prior with a ridge parameter to the corresponding g-prior. Some possible extensions of the proposed approach are discussed, including continuous and binary responses in quantile regression. The methods are illustrated using several simulation studies and a microarray study, both of which indicate that the proposed approach performs well.
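To fix ideas (the notation here is standard, not taken from the paper): a Type I Tobit response observes y_i = max(0, y_i*) for a latent y_i* = x_i^T beta + eps_i, and a plausible form for the modified g-prior with ridge parameter is
\[
\beta \sim N\!\big(0,\; g\,(X^\top X + \lambda I_p)^{-1}\big), \qquad \lambda \ge 0,
\]
where lambda = 0 recovers the usual g-prior and lambda > 0 keeps the prior covariance well defined when X^T X is ill conditioned, as under multicollinearity or p > n in microarray data. The paper's exact scaling, including any quantile-dependent factors, may differ.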
Prior elicitation and variable selection for Bayesian quantile regression
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Bayesian subset selection suffers from three important difficulties: assigning priors over model space, assigning priors to all components of the regression coefficient vector given a specific model, and Bayesian computational efficiency (Chen et al., 1999). These difficulties become more challenging in the Bayesian quantile regression framework when one is interested in assigning priors that depend on different quantile levels. The objective of Bayesian quantile regression (BQR), a newly proposed tool, is to deal with unknown parameters and model uncertainty in quantile regression (QR). However, Bayesian subset selection in quantile regression models is usually a difficult issue due to the computational challenges and the non-availability of conjugate prior distributions that depend on the quantile level. These challenges are rarely addressed via either the penalised likelihood function or stochastic search variable selection (SSVS). These methods typically use symmetric prior distributions for regression coefficients, such as the Gaussian and Laplace, which may be suitable for median regression. However, an extreme quantile regression should have different regression coefficients from the median regression, and thus the priors for quantile regression coefficients should depend on the quantile. This thesis focuses on three challenges: assigning standard quantile-dependent prior distributions for the regression coefficients, assigning suitable quantile-dependent priors over model space, and achieving computational efficiency. The first of these challenges is studied in Chapter 2, in which a quantile-dependent prior elicitation scheme is developed. In particular, an extension of Zellner's prior which allows for a conditionally conjugate and quantile-dependent prior in Bayesian quantile regression is proposed. The prior is generalised in Chapter 3 by introducing a ridge parameter to address important challenges that may arise in some applications, such as multicollinearity and overfitting. The proposed prior is also used in Chapter 4 for subset selection of the fixed and random coefficients in a linear mixed-effects QR model. In Chapter 5, we specify normal-exponential prior distributions for the regression coefficients, which can provide adaptive shrinkage and represent an alternative to the Bayesian Lasso quantile regression model. For the second challenge, we assign a quantile-dependent prior over model space in Chapter 2. The prior is based on the percentage bend correlation, which depends on the quantile level. This prior is novel and is used in Bayesian regression for the first time. For the third challenge of computational efficiency, Gibbs samplers are derived and set up to facilitate the computation of the proposed methods. In addition to the three major challenges above, this thesis also addresses other important issues, such as regularisation in quantile regression and the selection of both random and fixed effects in mixed quantile regression models.
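For context, the standard working likelihood in Bayesian quantile regression (Yu and Moyeed, 2001), which underlies the conjugacy difficulties described above, is the asymmetric Laplace density
\[
f_\tau(y \mid \mu) \propto \exp\{-\rho_\tau(y - \mu)\}, \qquad \rho_\tau(u) = u\,\{\tau - I(u < 0)\},
\]
so that maximising this likelihood coincides with minimising the check loss at quantile level tau; the absence of a conjugate prior for this likelihood is precisely what makes quantile-dependent prior elicitation difficult.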
Regularized and robust regression methods for high-dimensional data
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Recently, variable selection in high-dimensional data has attracted much research interest. Classical stepwise subset selection methods are widely used in practice, but when the number of predictors is large these methods are difficult to implement. In these cases, modern regularization methods have become a popular choice, as they perform variable selection and parameter estimation simultaneously. However, the estimation procedure becomes more difficult and challenging when the data suffer from outliers or when the assumption of normality is violated, as in the case of heavy-tailed errors. In these cases, quantile regression is the most appropriate method to use. In this thesis we combine these two classical approaches to produce regularized quantile regression methods. Chapter 2 presents a comparative simulation study of regularized and robust regression methods when the response variable is continuous. In Chapter 3, we develop a quantile regression model with a group lasso penalty for binary response data when the predictors have a grouped structure and the data suffer from outliers. In Chapter 4, we extend this method to the case of censored response variables. Numerical examples on simulated and real data are used to evaluate the performance of the proposed methods in comparison with other existing methods.
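As a sketch of the kind of estimator involved (a common formulation, not necessarily the thesis's exact one), group-lasso-penalised quantile regression solves
\[
\min_{\beta}\; \sum_{i=1}^{n} \rho_\tau\big(y_i - x_i^\top \beta\big) \;+\; \lambda \sum_{g=1}^{G} \sqrt{p_g}\,\lVert \beta_g \rVert_2,
\]
where beta_g collects the p_g coefficients of group g. The unsquared l2 norm shrinks whole groups to zero together, while the check loss rho_tau provides robustness to outliers and heavy-tailed errors.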
Penalized flexible Bayesian quantile regression
The selection of predictors plays a crucial role in building a multiple regression model. Indeed, the choice of a suitable subset of predictors can help to improve prediction accuracy and interpretation. In this paper, we propose flexible Bayesian Lasso and adaptive Lasso quantile regression by introducing a hierarchical model framework that enables exact inference and shrinkage of unimportant coefficients to zero. The error distribution is assumed to be an infinite mixture of Gaussian densities. We have theoretically investigated and numerically compared our proposed methods with flexible Bayesian quantile regression (FBQR), Lasso quantile regression (LQR) and quantile regression (QR). Simulation and real data studies are conducted under different settings to assess the performance of the proposed methods. The proposed methods perform well in comparison to the other methods in terms of median mean squared error and the mean and variance of the absolute correlation criteria. We believe that the proposed methods are practically useful.
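For background, the Bayesian Lasso device that such hierarchies build on (Park and Casella, 2008) represents the Laplace prior as a scale mixture of normals,
\[
\beta_j \mid s_j \sim N(0, s_j), \qquad s_j \sim \mathrm{Exp}(\lambda^2/2),
\]
which gives beta_j the marginal density (lambda/2) exp(-lambda |beta_j|) and makes exact Gibbs updates available; the adaptive variant replaces the common lambda with coefficient-specific lambda_j. The flexible element in this paper is the error model, an infinite mixture of Gaussians in place of a single parametric density.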
Bayesian adaptive lasso quantile regression
Recently, variable selection by penalized likelihood has attracted much research interest. In this paper, we propose Bayesian adaptive Lasso quantile regression (BALQR). The method extends Bayesian Lasso quantile regression by allowing different penalization parameters for different regression coefficients. Inverse gamma prior distributions are placed on the penalty parameters. We treat the hyperparameters of the inverse gamma prior as unknowns and estimate them along with the other parameters. A Gibbs sampler is developed to simulate the parameters from the posterior distributions. Through simulation studies and the analysis of a prostate cancer dataset, we compare the performance of the proposed BALQR method with six existing Bayesian and non-Bayesian methods. The simulation studies and the prostate cancer data analysis indicate that the BALQR method performs well in comparison to the other approaches.
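Schematically, the hierarchy is as follows (the exact parameterisation, e.g. whether the inverse gamma prior sits on lambda_j or lambda_j^2, follows the paper itself):
\[
\beta_j \mid \lambda_j \sim \tfrac{\lambda_j}{2}\, e^{-\lambda_j |\beta_j|}, \qquad \lambda_j^2 \sim \mathrm{InvGamma}(a, b),
\]
with the hyperparameters a and b treated as unknown and sampled alongside the other parameters in the Gibbs scheme.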
A partially collapsed Gibbs sampler for Bayesian quantile regression
We introduce a new Gibbs sampler for Bayesian analysis of the quantile regression model. The new algorithm, which partially collapses an ordinary Gibbs sampler, is called the Partially Collapsed Gibbs (PCG) sampler. Although the Metropolis-Hastings algorithm has been employed in Bayesian quantile regression, including median regression, the PCG sampler has superior convergence properties to an ordinary Gibbs sampler. Moreover, our PCG sampler, which is based on a theoretical derivation of the asymmetric Laplace distribution as a scale mixture of normal distributions, requires less computation than the ordinary Gibbs sampler and can significantly reduce the computation involved in approximating the Bayes factor and marginal likelihood. Like the ordinary Gibbs sampler, the PCG sampler can also be used to calculate any associated marginal and predictive distributions. The quantile regression PCG sampler is illustrated by analysing simulated data and data on length of stay in hospital; the latter provides new insight into hospital performance. C-code along with an R interface for our algorithms is publicly available on request from the first author.
JEL classification: C11, C14, C21, C31, C52, C53
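The scale-mixture fact underlying samplers of this kind (standard in the literature; see Kozumi and Kobayashi, 2011) is that an asymmetric Laplace error at quantile level tau and unit scale can be written
\[
\varepsilon = \theta v + \kappa \sqrt{v}\, z, \qquad v \sim \mathrm{Exp}(1), \quad z \sim N(0,1), \quad \theta = \frac{1 - 2\tau}{\tau(1-\tau)}, \quad \kappa^2 = \frac{2}{\tau(1-\tau)},
\]
so that, conditional on v, the quantile regression model becomes a weighted Gaussian regression with conjugate updates; partial collapsing then marginalises selected components out of these conditional distributions to improve convergence.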
Prior elicitation in Bayesian quantile regression for longitudinal data
In this paper, we introduce Bayesian quantile regression for longitudinal data in terms of informative priors and Gibbs sampling. We develop methods for eliciting a prior distribution that incorporates historical data gathered from similar previous studies. The methods can be used either with no prior data or with complete prior data. The advantage of the methods is that the prior distribution changes automatically when the quantile changes. We propose Gibbs sampling methods which are computationally efficient and easy to implement. The methods are illustrated with both simulated and real data.
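The abstract does not specify the elicitation scheme, so purely as an illustration of one standard device for borrowing from historical data D_0 (an assumption on my part, not necessarily the authors' construction), a power prior takes the form
\[
\pi(\beta \mid D_0, a_0) \propto L(\beta \mid D_0)^{a_0}\, \pi_0(\beta), \qquad a_0 \in [0, 1],
\]
where a_0 = 0 discards the historical data and a_0 = 1 pools it fully; built on a quantile-specific working likelihood, such a prior changes automatically with the quantile level, matching the property described above.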
Type I Tobit Bayesian Additive Regression Trees for Censored Outcome Regression
Censoring occurs when an outcome is unobserved beyond some threshold value. Methods that do not account for censoring produce biased predictions of the unobserved outcome. This paper introduces Type I Tobit Bayesian Additive Regression Tree (TOBART-1) models for censored outcomes. Simulation results and real data applications demonstrate that TOBART-1 produces accurate predictions of censored outcomes. TOBART-1 provides posterior intervals for the conditional expectation and other quantities of interest. The error term distribution can have a large impact on the expectation of the censored outcome; therefore, the error is flexibly modeled as a Dirichlet process mixture of normal distributions.
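A minimal numerical sketch of this bias (illustrative only: the Gaussian error, threshold, and names below are my own assumptions; TOBART-1 itself models the regression function with BART and the error with a Dirichlet process mixture):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    mu, sigma, c = 1.0, 2.0, 0.0               # latent mean, error sd, censoring threshold
    y_star = rng.normal(mu, sigma, 100_000)    # latent (uncensored) outcome
    y = np.maximum(y_star, c)                  # Type I censoring from below at c

    # Tobit-adjusted expectation of the censored outcome:
    # E[max(c, y*)] = c*Phi(z) + mu*(1 - Phi(z)) + sigma*phi(z),  z = (c - mu)/sigma
    z = (c - mu) / sigma
    e_cens = c * norm.cdf(z) + mu * (1 - norm.cdf(z)) + sigma * norm.pdf(z)

    print(f"latent mean (naive prediction): {mu:.3f}")       # biased for the observed y
    print(f"Tobit-adjusted mean:            {e_cens:.3f}")   # ~1.396
    print(f"Monte Carlo mean of y:          {y.mean():.3f}") # agrees with the adjusted mean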