172 research outputs found

    Approximate Bayesian Computational methods

    Also known as likelihood-free methods, approximate Bayesian computational (ABC) methods have emerged over the past ten years as the most satisfactory approach to intractable likelihood problems, first in genetics and then in a broader spectrum of applications. However, these methods suffer to some degree from calibration difficulties that make them rather volatile in their implementation and thus render them suspect to users of more traditional Monte Carlo methods. In this survey, we study the various improvements and extensions made to the original ABC algorithm over recent years. Comment: 7 figures
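    As a point of reference for the calibration issues mentioned above, a minimal sketch of the basic ABC rejection scheme is given below in Python; the Gaussian model, flat prior, summary statistic, and tolerance are hypothetical choices made for illustration and are not taken from the survey.

        import numpy as np

        rng = np.random.default_rng(0)

        # Observed data and its summary statistic (here, the sample mean).
        y_obs = rng.normal(loc=2.0, scale=1.0, size=100)
        s_obs = y_obs.mean()

        def simulate(theta, size=100):
            """Simulate a dataset from the model for a given parameter value."""
            return rng.normal(loc=theta, scale=1.0, size=size)

        # ABC rejection: draw theta from the prior, simulate data, and keep theta
        # whenever the simulated summary falls within `tol` of the observed one.
        tol = 0.1
        accepted = []
        for _ in range(20000):
            theta = rng.uniform(-10.0, 10.0)    # flat prior on theta
            s_sim = simulate(theta).mean()      # summary of the simulated dataset
            if abs(s_sim - s_obs) < tol:        # distance and tolerance to be calibrated
                accepted.append(theta)

        print(f"{len(accepted)} draws accepted; posterior mean ~ {np.mean(accepted):.3f}")

    The accepted draws approximate the posterior given the summary statistic; the choices of summary, distance, and tolerance are precisely the calibration issues the survey discusses.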

    Efficient learning in ABC algorithms

    Approximate Bayesian Computation has been successfully used in population genetics to bypass the calculation of the likelihood. These methods provide accurate estimates of the posterior distribution by comparing the observed dataset to a sample of datasets simulated from the model. Although parallelization is easily achieved, the computation time needed to ensure a suitable approximation quality of the posterior distribution is still high. To alleviate the computational burden, we propose an adaptive, sequential algorithm that runs faster than other ABC algorithms while maintaining the accuracy of the approximation. This proposal relies on the sequential Monte Carlo sampler of Del Moral et al. (2012) but is calibrated to reduce the number of simulations from the model. The paper concludes with numerical experiments on a toy example and on a population genetics study of Apis mellifera, in which our algorithm is shown to be faster than traditional ABC schemes.
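    To convey the general flavour of such sequential schemes, the Python sketch below runs a basic population Monte Carlo variant of ABC on a hypothetical Gaussian toy model with a fixed, decreasing tolerance schedule; it only illustrates the resample-perturb-reweight idea and is not the adaptive, calibrated algorithm proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical toy model: data ~ N(theta, 1), summary = sample mean, flat prior on [-10, 10].
        y_obs = rng.normal(2.0, 1.0, size=100)
        s_obs = y_obs.mean()

        def distance(theta):
            """Distance between the observed summary and a freshly simulated one."""
            return abs(rng.normal(theta, 1.0, size=100).mean() - s_obs)

        n_particles = 500
        tolerances = [1.0, 0.5, 0.25, 0.1]      # fixed decreasing schedule (adaptive in the paper)

        # Generation 0: plain rejection sampling at the loosest tolerance.
        particles = []
        while len(particles) < n_particles:
            theta = rng.uniform(-10.0, 10.0)
            if distance(theta) < tolerances[0]:
                particles.append(theta)
        particles = np.array(particles)
        weights = np.full(n_particles, 1.0 / n_particles)

        # Later generations: resample, perturb with a Gaussian kernel, reweight, tighten the tolerance.
        for eps in tolerances[1:]:
            tau = 2.0 * np.var(particles)       # kernel variance (simplified, unweighted)
            new_particles, new_weights = [], []
            while len(new_particles) < n_particles:
                theta = rng.normal(rng.choice(particles, p=weights), np.sqrt(tau))
                if not -10.0 <= theta <= 10.0:  # outside the prior support
                    continue
                if distance(theta) < eps:
                    kernel = np.exp(-(theta - particles) ** 2 / (2 * tau)) / np.sqrt(2 * np.pi * tau)
                    new_particles.append(theta)
                    new_weights.append((1.0 / 20.0) / np.sum(weights * kernel))  # prior / proposal
            particles = np.array(new_particles)
            weights = np.array(new_weights)
            weights /= weights.sum()

        print(f"posterior mean ~ {np.sum(weights * particles):.3f}")

    The paper's contribution lies in choosing the tolerance schedule and the number of model simulations adaptively rather than fixing them in advance as done above.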

    ABC random forests for Bayesian parameter inference

    This preprint has been reviewed and recommended by Peer Community In Evolutionary Biology (http://dx.doi.org/10.24072/pci.evolbiol.100036). Approximate Bayesian computation (ABC) has grown into a standard methodology that handles Bayesian inference for models associated with intractable likelihood functions. Most ABC implementations require the preliminary selection of a vector of informative statistics summarizing raw data. Furthermore, in almost all existing implementations, the tolerance level that separates acceptance from rejection of simulated parameter values needs to be calibrated. We propose to conduct likelihood-free Bayesian inference about parameters with no prior selection of the relevant components of the summary statistics and bypassing the derivation of the associated tolerance level. The approach relies on the random forest methodology of Breiman (2001) applied in a (nonparametric) regression setting. We advocate the derivation of a new random forest for each component of the parameter vector of interest. When compared with earlier ABC solutions, this method offers significant gains in robustness to the choice of the summary statistics, does not depend on any type of tolerance level, and offers a good trade-off between point-estimator precision and credible-interval estimation for a given computing time. We illustrate the performance of our methodological proposal and compare it with earlier ABC methods on a Normal toy example and a population genetics example dealing with human population evolution. All methods designed here have been incorporated in the R package abcrf (version 1.7) available on CRAN. Comment: Main text 24 pages, 6 figures; Supplementary Information 14 pages, 5 figures
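    The methods are implemented in the abcrf R package, as stated above. Purely as an illustration of the regression idea, the following Python sketch trains a random forest on a simulated reference table of summary statistics for a hypothetical Gaussian toy model, using scikit-learn's RandomForestRegressor as a stand-in for abcrf; it is a sketch of the general principle, not a reproduction of the paper's method.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(2)

        # Reference table: draw parameters from the prior, simulate datasets, record summary statistics.
        n_ref, n_obs = 10000, 50
        theta = rng.uniform(-5.0, 5.0, size=n_ref)                      # prior draws
        data = rng.normal(theta[:, None], 1.0, size=(n_ref, n_obs))     # simulated datasets
        summaries = np.column_stack([data.mean(axis=1),                 # several candidate summaries;
                                     data.std(axis=1),                  # the forest sorts out which
                                     np.median(data, axis=1)])          # of them are informative

        # One regression forest per component of the parameter vector (here a single scalar),
        # trained to map summary statistics to the parameter value that generated them.
        forest = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, n_jobs=-1)
        forest.fit(summaries, theta)

        # Observed dataset: compute the same summaries and query the forest, with no tolerance level.
        y_obs = rng.normal(1.5, 1.0, size=n_obs)
        s_obs = np.array([[y_obs.mean(), y_obs.std(), np.median(y_obs)]])
        print("random forest point estimate of theta:", forest.predict(s_obs)[0])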

    Reliable ABC model choice via random forests

    Approximate Bayesian computation (ABC) methods provide an elaborate approach to Bayesian inference on complex models, including model choice. Both theoretical arguments and simulation experiments indicate, however, that model posterior probabilities may be poorly evaluated by standard ABC techniques. We propose a novel approach based on a machine learning tool known as random forests to conduct selection among the highly complex models covered by ABC algorithms. We thus modify the way Bayesian model selection is both understood and operated, in that we rephrase the inferential goal as a classification problem, first predicting the model that best fits the data with random forests and postponing the approximation of the posterior probability of the predicted MAP to a second stage, also relying on random forests. Compared with earlier implementations of ABC model choice, the ABC random forest approach offers several potential improvements: (i) it often has a larger discriminative power among the competing models, (ii) it is more robust against the number and choice of statistics summarizing the data, (iii) the computing effort is drastically reduced (with a gain in computational efficiency of at least a factor of fifty), and (iv) it includes an approximation of the posterior probability of the selected model. The use of random forests will undoubtedly extend the range of dataset sizes and model complexities that ABC can handle. We illustrate the power of this novel methodology by analyzing controlled experiments as well as genuine population genetics datasets. The proposed methodologies are implemented in the R package abcrf available on CRAN. Comment: 39 pages, 15 figures, 6 tables
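    As a rough Python illustration of this classification reformulation (again relying on scikit-learn rather than the abcrf package, and on two hypothetical toy models), the reference table below maps simulated summary statistics to the index of the model that generated them:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(3)

        # Two hypothetical competing models: M0 = Normal(0, 1) and M1 = Student-t with 3 degrees of freedom.
        def simulate(model, n=50):
            return rng.normal(0.0, 1.0, n) if model == 0 else rng.standard_t(3, n)

        def summaries(x):
            return [x.mean(), x.std(), np.abs(x).max(), np.mean(np.abs(x) > 2)]

        # Reference table: model indices drawn uniformly, summaries recorded for each simulated dataset.
        n_ref = 10000
        models = rng.integers(0, 2, size=n_ref)
        table = np.array([summaries(simulate(m)) for m in models])

        # Model choice recast as classification: predict the model index from the summary statistics.
        clf = RandomForestClassifier(n_estimators=500, n_jobs=-1)
        clf.fit(table, models)

        # Classify the observed dataset; the forest vote gives the predicted (MAP) model.
        y_obs = rng.standard_t(3, 50)
        s_obs = np.array([summaries(y_obs)])
        print("predicted model index:", clf.predict(s_obs)[0])

    The second stage described in the abstract, estimating the posterior probability of the selected model with a further regression forest based on out-of-bag classification errors, is not shown in this sketch.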

    Likelihood-free model choice

    This is a chapter written for the Handbook of Approximate Bayesian Computation edited by Sisson, Fan, and Beaumont (2017). Beyond exposing the potential pitfalls of ABC approximations to posterior probabilities, it mostly emphasizes the solution proposed by [25], namely the use of random forests for aggregating summary statistics and for estimating the posterior probability of the most likely model via a secondary random forest.

    Nowe substancje psychoaktywne w Polsce — co lekarz powinien wiedzieć w 2019 roku? [New psychoactive substances in Poland: what should a physician know in 2019?]

    Psychoactive substances have been around for a very long time. In the 1980s, chemical engineers created new psychoactive substances as an answer to worldwide drug prohibition. So far, these legal highs have proven to be worse than the "classical drugs". In this article the authors present the most important new psychoactive substances, including their groups, effects, side effects, and mechanisms of action, all from the point of view of a medical doctor practicing in Poland. The article also presents the problem from the legal point of view and explains the most liberal approaches to illegal drugs in the European Union. The need for further research is pressing.

    Bayesian computation via empirical likelihood

    Approximate Bayesian computation (ABC) has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the ABC parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The BCel algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models. Comment: 21 pages, 12 figures; revised version with a new title
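    A minimal sketch of the construction, assuming the parameter of interest is the mean of the observations and a flat prior: prior draws are weighted by their empirical likelihood, computed by solving the standard Lagrange-multiplier equation with SciPy. This only illustrates the general idea and is not the paper's BCel implementation for time series or population genetics models.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(4)
        x = rng.normal(2.0, 1.0, size=100)       # observed data (simulated here for the toy example)

        def log_el(theta):
            """Log empirical likelihood ratio under the mean constraint E[X] = theta."""
            z = x - theta
            if z.min() >= 0 or z.max() <= 0:     # theta outside the convex hull of the data
                return -np.inf
            # Optimal weights are p_i = 1 / (len(x) * (1 + lam * z_i)), with lam solving g(lam) = 0.
            g = lambda lam: np.sum(z / (1.0 + lam * z))
            lam = brentq(g, -1.0 / z.max() + 1e-8, -1.0 / z.min() - 1e-8)
            return -np.sum(np.log1p(lam * z))

        # BCel-style importance sampling: prior draws weighted by their empirical likelihood.
        thetas = rng.uniform(-10.0, 10.0, size=5000)          # flat prior (hypothetical choice)
        logw = np.array([log_el(t) for t in thetas])
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)                            # effective sample size diagnostic
        print(f"posterior mean ~ {np.sum(w * thetas):.3f}, ESS ~ {ess:.1f}")

    The effective sample size printed at the end corresponds to the self-evaluation mentioned in the abstract: it measures how much the empirical-likelihood weights concentrate the prior sample.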

    Quality of life in patients with coronary artery disease treated with coronary artery bypass grafting and hybrid coronary revascularization

    Background: Patients with stable coronary artery disease (CAD) have a worse quality of life (QoL) than patients without stable CAD. Standardized questionnaires are used to evaluate QoL. Hybrid coronary revascularization (HCR) is a recently introduced, minimally invasive option for patients requiring revascularization for coronary lesions. The aim of this study was to assess health-related quality of life (HRQoL) in patients with multivessel CAD (MVCAD) according to the mode of revascularization, coronary artery bypass grafting (CABG) or HCR, using the generic SF-36 v.2 questionnaire. Methods: From November 2009 to July 2012, 200 patients from the POLMIDES study with diagnosed MVCAD who were referred for conventional CABG were randomized in a 1:1 ratio to the HCR (n = 98) or CABG (n = 102) group. HRQoL was measured at two time points: hospital admission and 12-month follow-up. The primary endpoint was the difference in HRQoL after the procedure. Results: Both groups showed a similar improvement in HRQoL: HCR group 13.5 (3.82–22.34) vs. CABG group 10.48 (2.46–31.07); p = 0.76. Conclusions: HRQoL significantly improved in all domains 12 months after both modes of revascularization.
    • …