
    Accelerating Bayesian synthetic likelihood with the graphical lasso

    Simulation-based Bayesian inference methods are useful when the statistical model of interest does not possess a computationally tractable likelihood function. One such likelihood-free method is approximate Bayesian computation (ABC), which approximates the likelihood of a carefully chosen summary statistic via model simulation and nonparametric density estimation. ABC is known to suffer a curse of dimensionality with respect to the size of the summary statistic. When the model summary statistic is roughly normally distributed in regions of the parameter space of interest, Bayesian synthetic likelihood (BSL), which uses a normal likelihood approximation for a summary statistic, is a useful method that can be more computationally efficient than ABC. However, BSL requires estimation of the covariance matrix of the summary statistic for each proposed parameter, which requires a large number of simulations to estimate precisely using the sample covariance matrix when the summary statistic is high dimensional. In this article, we propose to use the graphical lasso to provide a sparse estimate of the precision matrix. This approach can estimate the covariance matrix accurately with significantly fewer model simulations. We discuss the nontrivial issue of tuning parameter choice in the context of BSL and demonstrate on several complex applications that our method, which we call BSLasso, provides significant improvements in computational efficiency whilst maintaining the ability to produce similar posterior distributions to BSL. The BSL and BSLasso methods applied to the examples of this article are implemented in the BSL package in R, which is available on the Comprehensive R Archive Network. Supplemental materials for this article are available online.
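    The core idea above can be sketched in a few lines: simulate summary statistics at a proposed parameter, fit a sparse precision matrix with the graphical lasso, and evaluate a Gaussian synthetic log-likelihood. This is a minimal illustration, not the BSL R package's implementation; the simulator, summary dimension, and penalty value `alpha` are placeholder assumptions.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)

    def simulate_summaries(theta, n_sim=50, dim=10):
        # Stand-in simulator: in practice this would run the model at
        # theta and compute the summary statistic for each simulated
        # dataset. Here we draw iid normal summaries for illustration.
        return rng.normal(loc=theta, scale=1.0, size=(n_sim, dim))

    def synthetic_loglik(s_obs, theta, alpha=0.1, n_sim=50):
        S = simulate_summaries(theta, n_sim, dim=s_obs.size)
        mu = S.mean(axis=0)
        # Sparse precision-matrix estimate via the graphical lasso;
        # alpha is the l1 penalty (the tuning parameter whose choice
        # the article discusses).
        gl = GraphicalLasso(alpha=alpha).fit(S)
        prec = gl.precision_
        diff = s_obs - mu
        _, logdet = np.linalg.slogdet(prec)
        # Gaussian log-density up to the -d/2 * log(2*pi) constant,
        # which cancels in MCMC acceptance ratios.
        return 0.5 * logdet - 0.5 * diff @ prec @ diff

    s_obs = rng.normal(size=10)
    print(synthetic_loglik(s_obs, theta=0.0))
    ```

    The appeal is that the l1-penalized precision estimate remains stable with far fewer simulations per proposed parameter than the sample covariance would need when the summary statistic is high dimensional.
    
    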

    The effects of daily cold-water recovery and postexercise hot-water immersion on training-load tolerance during 5 days of heat-based training

    PURPOSE: To examine the effects of daily cold- and hot-water recovery on training load (TL) during 5 days of heat-based training. METHODS: Eight men completed 5 days of cycle training for 60 minutes (50% peak power output) in 4 different conditions in a block counter-balanced-order design. Three conditions were completed in the heat (35°C) and 1 in a thermoneutral environment (24°C; CON). Each day after cycling, participants completed 20 minutes of seated rest (CON and heat training [HT]) or cold- (14°C; HTCWI) or hot-water (39°C; HTHWI) immersion. Heart rate, rectal temperature, and rating of perceived exertion (RPE) were collected during cycling. Session-RPE was collected 10 minutes after recovery for the determination of session-RPE TL. Data were analyzed using hierarchical regression in a Bayesian framework; Cohen d was calculated, and for session-RPE TL, the probability that d > 0.5 was also computed. RESULTS: There was evidence that session-RPE TL was increased in HTCWI (d = 2.90) and HTHWI (d = 2.38) compared with HT. The probabilities that d > 0.5 were .99 and .96, respectively. The higher session-RPE TL observed in HTCWI coincided with a greater cardiovascular (d = 2.29) and thermoregulatory (d = 2.68) response during cycling than in HT. This result was not observed for HTHWI. CONCLUSION: These findings suggest that cold-water recovery may negatively affect TL during 5 days of heat-based training, hot-water recovery could increase session-RPE TL, and the session-RPE method can detect environmental temperature-mediated increases in TL in the context of this study.

    Bayesian Computation with Intractable Likelihoods

    This article surveys computational methods for posterior inference with intractable likelihoods, that is, where the likelihood function is unavailable in closed form or where evaluation of the likelihood is infeasible. We review recent developments in pseudo-marginal methods, approximate Bayesian computation (ABC), the exchange algorithm, thermodynamic integration, and composite likelihood, paying particular attention to advancements in scalability for large datasets. We also point to R and MATLAB source code for implementations of these algorithms, where available.

    A framework for parameter estimation and model selection from experimental data in systems biology using approximate Bayesian computation.

    Get PDF
    As modeling becomes a more widespread practice in the life sciences and biomedical sciences, researchers need reliable tools to calibrate models against ever more complex and detailed data. Here we present an approximate Bayesian computation (ABC) framework and software environment, ABC-SysBio, a Python package that runs on Linux and Mac OS X systems and enables parameter estimation and model selection in the Bayesian formalism using sequential Monte Carlo (SMC) approaches. We outline the underlying rationale, discuss the computational and practical issues, and provide detailed guidance on how the important tasks of parameter inference and model selection can be performed in practice. Unlike other available packages, ABC-SysBio is particularly suited to the challenging problem of fitting stochastic models to data. To demonstrate the use of ABC-SysBio, in this protocol we postulate the existence of an imaginary reaction network composed of seven interrelated biological reactions (involving a specific mRNA, the protein it encodes and a post-translationally modified version of the protein), a network that is defined by two files containing 'observed' data that we provide as supplementary information. In the first part of the PROCEDURE, ABC-SysBio is used to infer the parameters of this system, whereas in the second part we use ABC-SysBio's relevant functionality to discriminate between two different reaction network models, one of them being the 'true' one. Although computationally expensive, the additional insights gained in the Bayesian formalism more than make up for this cost, especially in complex problems.
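    The ABC idea underlying packages like this can be sketched in its simplest (rejection) form: draw parameters from the prior, simulate data, and keep draws whose summary statistics fall within a tolerance of the observed summaries. This toy sketch is not ABC-SysBio's SMC algorithm or API; the Poisson model, prior, and tolerance are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(theta, n=100):
        # Toy stochastic model: Poisson counts with rate theta.
        return rng.poisson(theta, size=n)

    def summary(x):
        # Summary statistic: the sample mean (sufficient for this toy model).
        return x.mean()

    def abc_rejection(y_obs, prior_sample, n_draws=5000, eps=0.1):
        s_obs = summary(y_obs)
        accepted = []
        for _ in range(n_draws):
            theta = prior_sample()
            s_sim = summary(simulate(theta, n=y_obs.size))
            if abs(s_sim - s_obs) < eps:  # keep draws whose summaries match
                accepted.append(theta)
        return np.array(accepted)

    y_obs = simulate(3.0)  # "observed" data generated at theta = 3
    post = abc_rejection(y_obs, lambda: rng.uniform(0, 10))
    print(post.mean(), post.size)
    ```

    SMC-based ABC, as used in the package, improves on this by propagating a particle population through a decreasing sequence of tolerances rather than rejecting against a single fixed one.
    
    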

    Using approximate bayesian computation to estimate transmission rates of nosocomial pathogens

    In this paper, we apply a simulation-based approach for estimating transmission rates of nosocomial pathogens. In particular, the objective is to infer the transmission rate between colonised health-care practitioners and uncolonised patients (and vice versa) solely from routinely collected incidence data. The method, using approximate Bayesian computation, is substantially less computationally intensive and easier to implement than the likelihood-based approaches we refer to here. We find that replacing the likelihood with a comparison of efficient summary statistics between observed and simulated data costs little in the precision of the estimated transmission rates. Furthermore, we investigate the impact of incorporating uncertainty in previously fixed parameters on the precision of the estimated transmission rates.

    Approximate Bayesian computation using indirect inference

    We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms by using indirect inference. ABC methods are useful for posterior inference in the presence of an intractable likelihood function. In the indirect inference approach to ABC, the parameters of an auxiliary model fitted to the data become the summary statistics. Although applicable to any ABC technique, we embed this approach within a sequential Monte Carlo algorithm that is completely adaptive and requires very little tuning. This methodological development was motivated by an application involving data on macroparasite population evolution modelled by a trivariate stochastic process for which there is no tractable likelihood function. The auxiliary model here is based on a beta–binomial distribution. The main objective of the analysis is to determine which parameters of the stochastic model are estimable from the observed data on mature parasite worms.
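    The indirect-inference idea above can be illustrated compactly: fit a simple auxiliary model to each dataset and use its fitted parameters as the ABC summary statistic. As a hedged sketch, this uses a Gaussian auxiliary model (whose maximum-likelihood estimates are just the sample mean and standard deviation) in place of the article's beta–binomial, and a gamma simulator as a stand-in for the intractable stochastic process.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def auxiliary_summaries(x):
        # Indirect inference: the fitted parameters of an auxiliary
        # model serve as the summary statistic. Here the auxiliary
        # model is a Gaussian, so its MLEs are the sample mean and
        # standard deviation (the article uses a beta-binomial).
        return np.array([x.mean(), x.std()])

    def simulate(theta, n=200):
        # Stand-in for the intractable stochastic model.
        return rng.gamma(shape=theta, scale=1.0, size=n)

    def abc_distance(theta, s_obs, weights):
        # Weighted Euclidean distance between observed and simulated
        # auxiliary-model summaries, as used inside an ABC accept step.
        s_sim = auxiliary_summaries(simulate(theta))
        return np.sqrt(((s_sim - s_obs) ** 2 * weights).sum())

    y_obs = simulate(4.0)  # "observed" data generated at theta = 4
    s_obs = auxiliary_summaries(y_obs)
    weights = 1.0 / (s_obs ** 2)  # crude scaling of the two summaries
    print(abc_distance(4.0, s_obs, weights))
    ```

    The auxiliary model need not be correctly specified; it only needs to be easy to fit and informative enough that its parameter estimates discriminate between values of the model parameters of interest.
    
    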

    A sequential Monte Carlo algorithm to incorporate model uncertainty in Bayesian sequential design

    Here we present a sequential Monte Carlo (SMC) algorithm that can be used for any one-at-a-time Bayesian sequential design problem in the presence of model uncertainty where discrete data are encountered. Our focus is on adaptive design for model discrimination, but the methodology is applicable to other design objectives such as parameter estimation or prediction. An SMC algorithm is run in parallel for each model, and the algorithm relies on a convenient estimator of the evidence of each model, which is essentially a function of importance sampling weights. Other methods for this task, such as quadrature, which is often used in design, suffer from the curse of dimensionality. Approximating posterior model probabilities in this way allows us to use model discrimination utility functions derived from information theory that were previously difficult to compute except for conjugate models. A major benefit of the algorithm is that it requires very little problem-specific tuning. We demonstrate the methodology on three applications, including discriminating between models for decline in motor neuron numbers in patients suffering from neurological diseases such as Motor Neuron disease.
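    The evidence-from-importance-weights idea can be illustrated on a conjugate toy problem where the exact answer is known. This is not the article's algorithm (resampling and move steps are omitted): after each new observation, the weighted average of the incremental weights estimates the one-step-ahead predictive probability, and the running product estimates the evidence. The Bernoulli likelihood and Beta(1, 1) prior are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def log_evidence_smc(y, particles):
        # Evidence estimate from SMC importance weights: after each
        # observation y_t, the weighted mean incremental weight
        # estimates p(y_t | y_1:t-1); summing the logs gives an
        # estimate of log p(y_1:n), the model evidence.
        log_Z = 0.0
        w = np.ones(particles.size) / particles.size
        for y_t in y:
            incr = particles ** y_t * (1 - particles) ** (1 - y_t)
            log_Z += np.log((w * incr).sum())
            w = w * incr
            w = w / w.sum()  # reweight (resampling omitted in this sketch)
        return log_Z

    particles = rng.uniform(size=10000)  # draws from the Beta(1, 1) prior
    y = np.array([1, 1, 0, 1])  # Bernoulli observations, 3 successes
    # Exact evidence for this conjugate model is B(4, 2) = 1/20 = 0.05.
    print(log_evidence_smc(y, particles))
    ```

    Because the estimate is a by-product of weights the filter already carries, running one such SMC filter per candidate model yields the posterior model probabilities needed by the information-theoretic discrimination utilities at essentially no extra cost.
    
    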