    Asymptotic Properties of Bayes Risk of a General Class of Shrinkage Priors in Multiple Hypothesis Testing Under Sparsity

    Consider the problem of simultaneous testing for the means of independent normal observations. In this paper, we study some asymptotic optimality properties of certain multiple testing rules induced by a general class of one-group shrinkage priors in a Bayesian decision-theoretic framework, where the overall loss is taken as the number of misclassified hypotheses. We assume a two-groups normal mixture model for the data and consider the asymptotic framework adopted in Bogdan et al. (2011), who introduced the notion of asymptotic Bayes optimality under sparsity in the context of multiple testing. The general class of one-group priors under study is rich enough to include, among others, the families of three-parameter beta and generalized double Pareto priors, and in particular the horseshoe, the normal-exponential-gamma and the Strawderman-Berger priors. We establish that within our chosen asymptotic framework, the multiple testing rules under study asymptotically attain the risk of the Bayes Oracle up to a multiplicative factor, with the constant in the risk close to the constant in the Oracle risk. This is similar to a result obtained in Datta and Ghosh (2013) for the multiple testing rule based on the horseshoe estimator introduced in Carvalho et al. (2009, 2010). We further show that, under a very mild assumption on the underlying sparsity parameter, the induced decision rules based on an empirical Bayes estimate of the corresponding global shrinkage parameter proposed by van der Pas et al. (2014) attain the optimal Bayes risk up to the same multiplicative factor asymptotically. We provide a unifying argument applicable to the general class of priors under study. In the process, we settle a conjecture regarding the optimality property of the generalized double Pareto priors made in Datta and Ghosh (2013). Our work also shows that the result in Datta and Ghosh (2013) can be improved further.
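    As a concrete illustration of how such a one-group rule operates, here is a minimal sketch (not from the paper; the fixed value of tau, the unit error variance, and all names are assumptions) that estimates the posterior shrinkage weight 1 - E[kappa | y] under the horseshoe prior by importance sampling and rejects a null whenever the weight exceeds 1/2, the thresholding rule studied in Datta and Ghosh (2013):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def shrinkage_weight(y_i, tau=0.1, n_mc=50_000):
        """Importance-sampling estimate of 1 - E[kappa | y_i] under the
        horseshoe prior, where kappa = 1 / (1 + tau^2 * lambda^2) and
        lambda ~ half-Cauchy(0, 1); unit error variance is assumed."""
        lam = np.abs(rng.standard_cauchy(n_mc))       # draws from the local-scale prior
        var = 1.0 + (tau * lam) ** 2                  # marginal Var(y_i | lambda)
        w = stats.norm.pdf(y_i, scale=np.sqrt(var))   # weights proportional to p(y_i | lambda)
        kappa = 1.0 / var
        return 1.0 - (kappa * w).sum() / w.sum()

    # Reject the i-th null hypothesis when the shrinkage weight exceeds 1/2.
    y = np.array([0.3, -1.1, 4.2, 0.8, 6.5])
    rejections = [shrinkage_weight(v) > 0.5 for v in y]
    ```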

    Bayesian global-local shrinkage methods for regularisation in the high-dimensional linear model

    This paper reviews global-local prior distributions for Bayesian inference in high-dimensional regression problems, including important properties of the priors and efficient Markov chain Monte Carlo methods for inference. A chemometric example in drug discovery is used to compare the predictive performance of these methods with that of popular methods such as ridge and LASSO regression.
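    As a rough sketch of the kind of predictive comparison the paper describes (on synthetic sparse data rather than the chemometric example, with all settings assumed), the two frequentist baselines can be fit with scikit-learn as follows; a full global-local Bayesian fit would instead run one of the reviewed MCMC schemes:

    ```python
    import numpy as np
    from sklearn.linear_model import RidgeCV, LassoCV
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(1)

    # Synthetic sparse high-dimensional problem: p > n, few nonzero coefficients.
    n, p, k = 100, 200, 10
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:k] = rng.normal(0.0, 3.0, k)
    y = X @ beta + rng.normal(size=n)
    X_test = rng.normal(size=(n, p))
    y_test = X_test @ beta + rng.normal(size=n)

    for name, model in [("ridge", RidgeCV()), ("LASSO", LassoCV(cv=5))]:
        fit = model.fit(X, y)
        print(name, "test MSE:", mean_squared_error(y_test, fit.predict(X_test)))
    ```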

    Horseshoe priors for edge-preserving linear Bayesian inversion

    In many large-scale inverse problems, such as computed tomography and image deblurring, characterization of sharp edges in the solution is desired. Within the Bayesian approach to inverse problems, edge-preservation is often achieved using Markov random field priors based on heavy-tailed distributions. Another strategy, popular in statistics, is the application of hierarchical shrinkage priors. An advantage of this formulation lies in expressing the prior as a conditionally Gaussian distribution depending on global and local hyperparameters, which are endowed with heavy-tailed hyperpriors. In this work, we revisit the horseshoe shrinkage prior and introduce a formulation of it for edge-preserving settings. We discuss a sampling framework based on the Gibbs sampler to solve the resulting hierarchical formulation of the Bayesian inverse problem. In particular, one of the conditional distributions is a high-dimensional Gaussian, and the rest are derived in closed form by using a scale-mixture representation of the heavy-tailed hyperpriors. Applications from imaging science show that our computational procedure is able to compute sharp edge-preserving posterior point estimates with reduced uncertainty.
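    To make the conditionally Gaussian structure concrete, the sketch below (not the authors' code: it places the horseshoe directly on the unknown x rather than on its edges or increments, and treats the noise variance as known) implements a Gibbs sampler using the inverse-gamma scale-mixture representation of the half-Cauchy hyperpriors, so every scale update has a closed-form conditional:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def horseshoe_gibbs(A, y, sigma2=0.01, n_iter=2000):
        """Gibbs sampler for y = A x + noise with a horseshoe prior on x:
        x_i | lam_i, tau ~ N(0, tau^2 lam_i^2), lam_i, tau ~ half-Cauchy(0, 1),
        with the half-Cauchys written as inverse-gamma scale mixtures."""
        n, p = A.shape
        lam2, tau2 = np.ones(p), 1.0       # local and global scales (squared)
        nu, xi = np.ones(p), 1.0           # inverse-gamma auxiliary variables
        AtA, Aty = A.T @ A, A.T @ y
        samples = np.zeros((n_iter, p))
        for t in range(n_iter):
            # x | rest: the high-dimensional Gaussian conditional
            prec = AtA / sigma2 + np.diag(1.0 / (tau2 * lam2))
            L = np.linalg.cholesky(prec)
            mean = np.linalg.solve(prec, Aty / sigma2)
            x = mean + np.linalg.solve(L.T, rng.standard_normal(p))
            # local scales and auxiliaries: closed-form inverse-gamma updates
            lam2 = 1.0 / rng.gamma(1.0, 1.0 / (1.0 / nu + x**2 / (2.0 * tau2)))
            nu = 1.0 / rng.gamma(1.0, 1.0 / (1.0 + 1.0 / lam2))
            # global scale and its auxiliary
            tau2 = 1.0 / rng.gamma((p + 1) / 2.0, 1.0 / (1.0 / xi + np.sum(x**2 / lam2) / 2.0))
            xi = 1.0 / rng.gamma(1.0, 1.0 / (1.0 + 1.0 / tau2))
            samples[t] = x
        return samples

    # Tiny demo: recover a sparse signal from noisy direct observations (A = I).
    p = 50
    x_true = np.zeros(p)
    x_true[20:25] = 2.0
    y_obs = x_true + 0.1 * rng.standard_normal(p)
    x_hat = horseshoe_gibbs(np.eye(p), y_obs)[500:].mean(axis=0)  # posterior mean after burn-in
    ```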

    Incorporating Historical Models with Adaptive Bayesian Updates

    This paper considers Bayesian approaches for incorporating information from a historical model into a current analysis when the historical model includes only a subset of the covariates currently of interest. The statistical challenge is twofold. First, the parameters in the nested historical model are not generally equal to their counterparts in the larger current model, in either value or interpretation. Second, because the historical information will not be equally informative for all parameters in the current analysis, additional regularization may be required beyond that provided by the historical information. We propose several novel extensions of the so-called power prior that adaptively combine a prior based upon the historical information with a variance-reducing prior that shrinks parameter values toward zero. The ideas are directly motivated by our work building mortality risk prediction models for pediatric patients receiving extracorporeal membrane oxygenation (ECMO). We have developed a model on a registry-based cohort of ECMO patients and now seek to expand this model with additional biometric measurements, not available in the registry, collected on a small auxiliary cohort. Our adaptive priors are able to leverage the efficiency of the original model and identify novel mortality risk factors. We support this with a simulation study, which demonstrates the potential for efficiency gains in estimation under a variety of scenarios.
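    A minimal caricature of the adaptive power-prior idea (everything here is hypothetical: the data, the dimensions, and the fixed weights a0 and lam, which the paper's adaptive variants would instead choose from the data) is a MAP estimate for logistic regression whose penalty pulls the coefficients shared with the historical model toward the historical fit while a ridge term shrinks the new coefficients toward zero:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)

    # Hypothetical setup: 3 covariates carried over from the historical
    # (registry) model plus 2 new biometric covariates from the auxiliary cohort.
    n, p_old, p_new = 150, 3, 2
    X = rng.normal(size=(n, p_old + p_new))
    beta_true = np.array([1.0, -0.8, 0.5, 0.7, 0.0])
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

    beta_hist = np.array([0.9, -0.7, 0.6])  # coefficient estimates from the historical model
    V_inv = 25.0 * np.eye(p_old)            # assumed historical precision (1 / SE^2)

    def neg_log_posterior(beta, a0=0.5, lam=1.0):
        """Logistic negative log-likelihood plus a power-prior penalty toward
        the historical fit and a ridge penalty on the new coefficients."""
        eta = X @ beta
        nll = np.sum(np.logaddexp(0.0, eta) - y * eta)
        d = beta[:p_old] - beta_hist
        return nll + 0.5 * a0 * d @ V_inv @ d + 0.5 * lam * np.sum(beta[p_old:] ** 2)

    beta_map = minimize(neg_log_posterior, np.zeros(p_old + p_new)).x
    ```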

    A modular framework for early-phase seamless oncology trials

    Background: As our understanding of the etiology and mechanisms of cancer becomes more sophisticated and the number of therapeutic options increases, phase I oncology trials today have multiple primary objectives. Many such designs are now 'seamless', meaning that the trial estimates both the maximum tolerated dose and the efficacy at this dose level. Sponsors often proceed with further study only with this additional efficacy evidence. However, with this increasing complexity in trial design, it becomes challenging to articulate fundamental operating characteristics of these trials, such as (i) what is the probability that the design will identify an acceptable, i.e. safe and efficacious, dose level? or (ii) how many patients will be assigned to an acceptable dose level on average? Methods: In this manuscript, we propose a new modular framework for designing and evaluating seamless oncology trials. Each module comprises either a dose-assignment step or a dose-response evaluation, and multiple such modules can be implemented sequentially. We develop modules from existing phase I/II designs as well as a novel module for evaluating dose-response using a Bayesian isotonic regression scheme. Results: We also demonstrate a freely available R package called seamlesssim that numerically estimates, by means of simulation, the operating characteristics of these modular trials. Conclusions: Together, this design framework and its accompanying simulator allow the clinical trialist to compare multiple candidate designs, more rigorously assess performance, better justify sample sizes, and ultimately select a higher-quality design.
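    The two operating characteristics in (i) and (ii) are exactly what simulation can estimate. The toy sketch below is plain Python rather than the seamlesssim API, and every design rule and dose-level probability in it is invented for illustration: it chains a rule-based dose-assignment module to an efficacy-evaluation module and tallies both quantities over repeated simulated trials.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Invented truth: toxicity and response probabilities at four dose levels.
    p_tox = np.array([0.05, 0.15, 0.30, 0.50])
    p_eff = np.array([0.10, 0.25, 0.40, 0.45])
    acceptable = (p_tox <= 0.30) & (p_eff >= 0.25)  # "safe and efficacious"

    def one_trial(cohort=3, n_cohorts=8, n_expand=20):
        """Module 1: rule-based escalation picks a tentative MTD.
        Module 2: an expansion cohort at that dose evaluates efficacy."""
        dose, n_at_dose = 0, np.zeros(4, dtype=int)
        for _ in range(n_cohorts):
            tox = rng.binomial(cohort, p_tox[dose])
            n_at_dose[dose] += cohort
            if tox == 0 and dose < 3:
                dose += 1      # no toxicities: escalate
            elif tox >= 2 and dose > 0:
                dose -= 1      # too many toxicities: de-escalate
        resp = rng.binomial(n_expand, p_eff[dose])
        n_at_dose[dose] += n_expand
        selected = dose if resp / n_expand >= 0.25 else None  # efficacy gate
        return selected, n_at_dose

    n_sims = 5000
    hits, pts_ok = 0, 0.0
    for _ in range(n_sims):
        sel, n_at = one_trial()
        hits += sel is not None and acceptable[sel]
        pts_ok += n_at[acceptable].sum()
    print("P(identify an acceptable dose):", hits / n_sims)
    print("E[# patients at acceptable doses]:", pts_ok / n_sims)
    ```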