6 research outputs found
Laplacian P-splines for Bayesian inference in the mixture cure model
The mixture cure model for analyzing survival data is characterized by the
assumption that the population under study is divided into a group of subjects
who will experience the event of interest over some finite time horizon and
another group of cured subjects who will never experience the event
irrespective of the duration of follow-up. When using the Bayesian paradigm for
inference in survival models with a cure fraction, it is common practice to
rely on Markov chain Monte Carlo (MCMC) methods to sample from posterior
distributions. Although computationally feasible, the iterative nature of MCMC
often implies long sampling times to explore the target space with chains that
may suffer from slow convergence and poor mixing. Furthermore, extra effort
has to be invested in diagnostic checks to monitor the reliability of the
generated posterior samples. An alternative strategy for fast and flexible
sampling-free Bayesian inference in the mixture cure model is suggested in this
paper by combining Laplace approximations and penalized B-splines. A logistic
regression model is assumed for the cure proportion and a Cox proportional
hazards model with a P-spline approximated baseline hazard is used to specify
the conditional survival function of susceptible subjects. Laplace
approximations to the conditional posterior of the latent vector are based on
analytical formulas for the gradient and Hessian of the log-likelihood,
resulting in a substantial speed-up in approximating posterior distributions.
Results show that the proposed Laplacian-P-splines mixture cure (LPSMC)
approach is an appealing alternative to classic MCMC for approximate Bayesian
inference in standard mixture cure models.
Comment: 34 pages, 6 figures, 5 tables
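The model structure described in the abstract can be sketched numerically: a logistic model for the incidence part, a cubic B-spline log-baseline hazard (evaluated here with the Cox-de Boor recursion), and the population survival S(t) = (1 - p) + p * S_u(t), where 1 - p is the cure proportion. This is a minimal illustration under assumed conventions, not the authors' implementation; all names and toy values are hypothetical.

```python
import numpy as np

def logistic(eta):
    """Logistic link; 1 - logistic(eta) plays the role of the cure proportion."""
    return 1.0 / (1.0 + np.exp(-eta))

def bspline_basis(x, knots, degree=3):
    """Evaluate all B-spline basis functions at x (Cox-de Boor recursion)."""
    x = np.asarray(x, dtype=float)
    B = np.zeros((x.size, len(knots) - 1))
    for i in range(len(knots) - 1):
        B[:, i] = (knots[i] <= x) & (x < knots[i + 1])
    for d in range(1, degree + 1):
        Bn = np.zeros((x.size, len(knots) - d - 1))
        for i in range(len(knots) - d - 1):
            num = np.zeros(x.size)
            if knots[i + d] > knots[i]:
                num += (x - knots[i]) / (knots[i + d] - knots[i]) * B[:, i]
            if knots[i + d + 1] > knots[i + 1]:
                num += (knots[i + d + 1] - x) / (knots[i + d + 1] - knots[i + 1]) * B[:, i + 1]
            Bn[:, i] = num
        B = Bn
    return B

def population_survival(t, theta, knots, beta, z, gamma, x):
    """S(t) = (1 - p) + p * S_u(t): cure fraction plus susceptible survival."""
    h0 = np.exp(bspline_basis(t, knots) @ theta)   # P-spline baseline hazard
    H0 = np.concatenate([[0.0], np.cumsum(0.5 * (h0[1:] + h0[:-1]) * np.diff(t))])
    Su = np.exp(-H0 * np.exp(z @ beta))            # Cox PH survival of susceptibles
    p = logistic(x @ gamma)                        # probability of being susceptible
    return (1.0 - p) + p * Su

# Toy evaluation on [0, 5] with equidistant knots extending past the domain
t = np.linspace(0.0, 5.0, 101)
knots = np.linspace(-1.5, 6.5, 17)     # 13 cubic B-splines
theta = np.full(13, np.log(0.2))       # flat baseline hazard of 0.2
S = population_survival(t, theta, knots,
                        beta=np.array([0.3]), z=np.array([0.5]),
                        gamma=np.array([0.4, 1.0]), x=np.array([1.0, -0.2]))
```

As the abstract emphasizes, S(t) levels off at the cure fraction 1 - p rather than decaying to zero, which is the defining feature of the mixture cure model.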
Twenty years of P-splines
P-splines first appeared in the limelight twenty years ago. Since then they have become popular in applications and in theoretical work. The combination of a rich B-spline basis and a simple difference penalty lends itself well to a variety of generalizations, because it is based on regression. In effect, P-splines allow the building of a “backbone” for the “mixing and matching” of a variety of additive smooth structure components, while inviting all sorts of extensions: varying-coefficient effects, signal (functional) regressors, two-dimensional surfaces, non-normal responses, quantile (expectile) modelling, among others. Strong connections with mixed models and Bayesian analysis have been established. We give an overview of many of the central developments during the first two decades of P-splines.
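The central P-spline ingredient, a simple difference penalty added to a least-squares fit, can be sketched in a few lines. The example below uses an identity basis, i.e. the Whittaker-smoother special case of a P-spline (minimize ||y - mu||^2 + lam * ||D_d mu||^2); the function names and the smoothing parameter value are illustrative.

```python
import numpy as np

def difference_matrix(n, d):
    """d-th order difference matrix D_d of shape (n - d, n)."""
    D = np.eye(n)
    for _ in range(d):
        D = np.diff(D, axis=0)   # each pass takes first differences of the rows
    return D

def whittaker_smooth(y, lam=10.0, d=2):
    """Solve (I + lam * D'D) mu = y, the penalized least-squares normal equations."""
    n = len(y)
    D = difference_matrix(n, d)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Smooth a noisy sine signal
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
z = whittaker_smooth(y, lam=100.0, d=2)
```

Replacing the identity with a B-spline basis B (and penalizing differences of the spline coefficients instead of the fitted values) gives the P-spline estimator proper; the penalty order d and the smoothing parameter lam control the trade-off between fidelity and roughness.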
Smooth semiparametric and nonparametric Bayesian estimation of bivariate densities from bivariate histogram data
Penalized B-splines combined with the composite link model are used to estimate a bivariate density from a histogram with wide bins. The goals are multiple: they include the visualization of the dependence between the two variates, but also the estimation of derived quantities like Kendall’s tau, conditional moments and quantiles. Two strategies are proposed: the first one is semiparametric with flexible margins modeled using B-splines and a parametric copula for the dependence structure; the second one is nonparametric and is based on Kronecker products of the marginal B-spline bases. Frequentist and Bayesian estimations are described. A large simulation study quantifies the performances of the two methods under different dependence structures and for varying strengths of dependence, sample sizes and amounts of grouping. It suggests that Schwarz’s BIC is a good tool for classifying the competing models. The density estimates are used to evaluate conditional quantiles in two applications in social and in medical sciences.
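The Kronecker-product construction behind the nonparametric strategy can be illustrated directly. The sketch below uses random matrices as stand-ins for the two marginal B-spline basis matrices (not actual spline evaluations) and demonstrates the standard identity that the large Kronecker basis applied to the vectorized coefficients equals a cheap bilinear evaluation on the grid.

```python
import numpy as np

rng = np.random.default_rng(1)
B1 = rng.random((6, 4))     # stand-in for the first marginal B-spline basis (6 grid points, 4 splines)
B2 = rng.random((5, 3))     # stand-in for the second marginal B-spline basis (5 grid points, 3 splines)
Theta = rng.random((4, 3))  # tensor-product spline coefficients

# Full bivariate basis on the 6 x 5 grid: one row per grid point
big_basis = np.kron(B1, B2)                     # shape (30, 12)
surface_kron = big_basis @ Theta.reshape(-1)    # evaluate via the Kronecker basis
surface_fast = (B1 @ Theta @ B2.T).reshape(-1)  # same surface, without the 30 x 12 matrix
```

The second form is why the Kronecker structure matters in practice: the bivariate surface is evaluated (and penalized, with row-wise and column-wise difference penalties) through the small marginal bases, so the full Kronecker matrix never needs to be built.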