Approximate Bayesian Computational methods
Also known as likelihood-free methods, approximate Bayesian computational
(ABC) methods have emerged over the past ten years as the most satisfactory
approach to intractable likelihood problems, first in genetics and then in a
broader spectrum of applications. However, these methods suffer to some degree
from calibration difficulties that make them rather volatile in their
implementation and thus suspect to users of more traditional Monte Carlo
methods. In this survey, we review the various improvements and extensions
made to the original ABC algorithm in recent years.
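For context, the original ABC rejection algorithm that this survey builds on can be sketched in a few lines. Everything here (the Normal toy model, the prior, the tolerance) is an illustrative assumption, not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setting: observed data from N(theta, 1), prior theta ~ N(0, 10),
# summary statistic = sample mean.
observed = rng.normal(2.0, 1.0, size=100)
s_obs = observed.mean()

def abc_rejection(n_draws, tolerance):
    """Basic ABC rejection: keep prior draws whose simulated summary
    falls within `tolerance` of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.normal(0.0, np.sqrt(10.0))   # draw from the prior
        sim = rng.normal(theta, 1.0, size=100)   # simulate from the model
        if abs(sim.mean() - s_obs) < tolerance:  # compare summaries
            accepted.append(theta)
    return np.array(accepted)

posterior_sample = abc_rejection(n_draws=20000, tolerance=0.1)
print(posterior_sample.mean(), len(posterior_sample))
```

The calibration difficulties the survey mentions are visible even here: the tolerance, the summary statistic, and the number of simulations all have to be chosen by the user.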
Efficient learning in ABC algorithms
Approximate Bayesian Computation has been successfully used in population
genetics to bypass the calculation of the likelihood. These methods provide
accurate estimates of the posterior distribution by comparing the observed
dataset to a sample of datasets simulated from the model. Although
parallelization is easily achieved, computation times for ensuring a suitable
approximation quality of the posterior distribution are still high. To
alleviate the computational burden, we propose an adaptive, sequential
algorithm that runs faster than other ABC algorithms while maintaining the
accuracy of the approximation. The proposal relies on the sequential Monte
Carlo sampler of Del Moral et al. (2012) but is calibrated to reduce the
number of simulations from the model. The paper concludes with numerical
experiments on a toy example and on a population genetics study of Apis
mellifera, in which our algorithm proves faster than traditional ABC schemes.
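The sequential idea can be illustrated with a simplified ABC-SMC loop that shrinks the tolerance adaptively via a quantile of the current particles' distances. This is a toy sketch, not the calibration of Del Moral et al. (2012) or of the paper's algorithm; the model, prior, and all settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model: data ~ N(theta, 1), prior theta ~ U(-10, 10),
# summary = sample mean.
y_obs = rng.normal(3.0, 1.0, size=50)
s_obs = y_obs.mean()

def distance(theta):
    sim = rng.normal(theta, 1.0, size=50)
    return abs(sim.mean() - s_obs)

def abc_smc(n_particles=500, n_rounds=5, quantile=0.5):
    # Round 0: particles drawn from the prior, no tolerance yet.
    particles = rng.uniform(-10, 10, size=n_particles)
    dists = np.array([distance(t) for t in particles])
    for _ in range(n_rounds):
        eps = np.quantile(dists, quantile)   # shrink the tolerance
        survivors = particles[dists < eps]
        # Resample survivors and perturb with a Gaussian kernel whose
        # scale follows the surviving population's spread.
        scale = survivors.std() + 1e-6
        new_p, new_d = [], []
        while len(new_p) < n_particles:
            t = rng.choice(survivors) + rng.normal(0, scale)
            d = distance(t)
            if d < eps:                      # accept within tolerance
                new_p.append(t)
                new_d.append(d)
        particles, dists = np.array(new_p), np.array(new_d)
    return particles

post = abc_smc()
print(post.mean())
```

Each round reuses the previous population instead of restarting from the prior, which is where the savings in model simulations come from.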
ABC random forests for Bayesian parameter inference
This preprint has been reviewed and recommended by Peer Community In
Evolutionary Biology (http://dx.doi.org/10.24072/pci.evolbiol.100036).
Approximate Bayesian computation (ABC) has grown into a standard methodology
that manages Bayesian inference for models associated with intractable
likelihood functions. Most ABC implementations require the preliminary
selection of a vector of informative statistics summarizing raw data.
Furthermore, in almost all existing implementations, the tolerance level that
separates acceptance from rejection of simulated parameter values needs to be
calibrated. We propose to conduct likelihood-free Bayesian inferences about
parameters with no prior selection of the relevant components of the summary
statistics and bypassing the derivation of the associated tolerance level. The
approach relies on the random forest methodology of Breiman (2001) applied in a
(nonparametric) regression setting. We advocate fitting a separate random
forest for each component of the parameter vector of interest. Compared
with earlier ABC solutions, this method offers significant gains in
robustness to the choice of summary statistics, does not depend on any
tolerance level, and achieves a good trade-off between point-estimator
precision and credible-interval estimation for a given computing time. We
illustrate the performance of our methodological proposal and compare it
with earlier ABC methods on a Normal toy example and a population genetics
example dealing with human population evolution. All methods designed here
have been incorporated in the R package abcrf (version 1.7), available on
CRAN.
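The core idea, regressing each parameter on a table of simulated summaries with a random forest, can be sketched as follows. Note that abcrf relies on quantile regression forests and out-of-bag machinery beyond this toy; here scikit-learn's plain random forest, the Normal model, and the chosen summaries are all illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Assumed toy setting: data ~ N(theta, 1), prior theta ~ N(0, 4).
# Reference table: prior draws paired with several summary statistics,
# some informative (mean) and one pure noise.
n_ref = 5000
theta = rng.normal(0.0, 2.0, size=n_ref)
sims = rng.normal(theta[:, None], 1.0, size=(n_ref, 30))
summaries = np.column_stack([
    sims.mean(axis=1),          # informative
    sims.std(axis=1),           # weakly informative
    rng.normal(size=n_ref),     # irrelevant noise statistic
])

# One forest per scalar parameter of interest, as advocated in the text.
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(summaries, theta)

# Observed data and its summaries; the forest's prediction is the
# likelihood-free point estimate of theta.
y_obs = rng.normal(1.5, 1.0, size=30)
s_obs = np.array([[y_obs.mean(), y_obs.std(), 0.0]])
print(forest.predict(s_obs))
```

No tolerance is chosen and the irrelevant summary is simply down-weighted by the forest's variable selection, which is the robustness property the abstract claims.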
Reliable ABC model choice via random forests
Approximate Bayesian computation (ABC) methods provide an elaborate approach
to Bayesian inference on complex models, including model choice. Both
theoretical arguments and simulation experiments indicate, however, that model
posterior probabilities may be poorly evaluated by standard ABC techniques. We
propose a novel approach based on a machine learning tool named random forests
to conduct selection among the highly complex models covered by ABC algorithms.
We thus modify the way Bayesian model selection is both understood and
operated, in that we rephrase the inferential goal as a classification problem,
first predicting the model that best fits the data with random forests and
postponing the approximation of the posterior probability of the predicted MAP
for a second stage also relying on random forests. Compared with earlier
implementations of ABC model choice, the ABC random forest approach offers
several potential improvements: (i) it often has greater discriminative
power among the competing models, (ii) it is more robust to the number and
choice of statistics summarizing the data, (iii) the computing effort is
drastically reduced (with a gain in computational efficiency of at least a
factor of fifty), and (iv) it includes an approximation of the posterior
probability of the selected model. The call to random forests will
substantially extend the range of dataset sizes and model complexities that
ABC can handle. We illustrate the power of this novel methodology by
analyzing controlled experiments as well as genuine population genetics
datasets. The proposed methodologies are implemented in the R package
abcrf, available on CRAN.
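The first stage, rephrasing model choice as classification on simulated summaries, can be sketched as below. The two competing models, the summaries, and the use of scikit-learn are illustrative assumptions; the second-stage forest that approximates the posterior probability of the predicted MAP model is omitted here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Assumed toy model choice: Model 0 = N(0, 1), Model 1 = Laplace(0, 1/sqrt(2))
# (same mean and variance), equal prior weights. Summaries: a few moments.
def summarize(x):
    return [x.mean(), x.std(), np.mean(np.abs(x)), np.mean(x**4)]

n_ref, n_obs = 4000, 200
labels = rng.integers(0, 2, size=n_ref)
table = []
for m in labels:
    x = rng.normal(0, 1, n_obs) if m == 0 else rng.laplace(0, 1/np.sqrt(2), n_obs)
    table.append(summarize(x))
table = np.array(table)

# Stage 1: classify the model index with a random forest trained on the
# reference table of (summaries, model label) pairs.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(table, labels)

y_obs = rng.laplace(0, 1/np.sqrt(2), n_obs)   # truth here: Model 1
s_obs = np.array([summarize(y_obs)])
print(clf.predict(s_obs))                     # predicted MAP model
```

The forest's vote is only the predicted MAP model; in the paper's two-stage scheme a second regression forest is then trained on the classification error to approximate the posterior probability of that model.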
Likelihood-free model choice
…Fan, and Beaumont (2017). Beyond exposing the potential pitfalls of ABC approximations to posterior probabilities, the review mostly emphasizes the solution proposed by [25]: using random forests both to aggregate summary statistics and to estimate the posterior probability of the most likely model via a secondary random forest.
Bayesian computation via empirical likelihood
Approximate Bayesian computation (ABC) has become an essential tool for the
analysis of complex stochastic models when the likelihood function is
numerically unavailable. However, the well-established statistical method of
empirical likelihood provides another route to such settings that bypasses
simulations from the model and the choices of the ABC parameters (summary
statistics, distance, tolerance), while being convergent in the number of
observations. Furthermore, bypassing model simulations may lead to significant
time savings in complex models, for instance those found in population
genetics. The BCel algorithm we develop in this paper also provides an
evaluation of its own performance through an associated effective sample size.
The method is illustrated using several examples, including estimation of
standard distributions, time series, and population genetics models.
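The BCel idea can be illustrated in its simplest instance, the empirical likelihood of a mean: draw parameters from the prior, weight them by their empirical likelihood, and report an effective sample size as the self-diagnostic the abstract mentions. The toy data, prior, and importance-sampling wrapper are assumptions for illustration; the paper's algorithm is more general:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)
x = rng.normal(1.0, 1.0, size=100)   # observed data (assumed toy setting)

def log_el(mu):
    """Log empirical likelihood (up to a constant) for the mean
    constraint sum_i w_i (x_i - mu) = 0."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:   # mu outside the convex hull: EL = 0
        return -np.inf
    # Solve for the Lagrange multiplier on its admissible interval,
    # where all implied weights 1/(n(1 + lam*z_i)) stay positive.
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    score = lambda lam: np.sum(z / (1 + lam * z))
    lam = brentq(score, lo, hi)
    return -np.sum(np.log1p(lam * z))

# BCel-style importance sampling: draw mu from the prior, weight by the
# empirical likelihood, then normalize.
prior = rng.normal(0.0, 3.0, size=2000)
logw = np.array([log_el(mu) for mu in prior])
w = np.exp(logw - logw.max())
w /= w.sum()
post_mean = np.sum(w * prior)
ess = 1.0 / np.sum(w**2)   # effective sample size diagnostic
print(post_mean, ess)
```

No simulation from the model is needed, which is the source of the time savings claimed in the abstract: the empirical likelihood replaces both the simulator and the ABC tuning parameters.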
KINEMATIC AND DYNAMIC ANALYSIS OF THE ROWER'S GESTURE ON CONCEPT II ERGOMETER
INTRODUCTION: Biomechanical studies of rowing mostly remain global and consider the gesture as an indivisible whole (classification by style (DAL MONTE 89), efficiency coefficient (ZATSIORSKY 91), peak force on the handle (HARTMANN 93)). We instead consider the gesture as the result of a succession of elementary movements (movement of the legs, of the trunk, and of the arms), so that the evaluation of the gesture's efficiency depends on the study of the organization of these movements. The method used was the morphological analysis of kinematic and dynamic variables. An original experimental device was developed, consisting of an optoelectronic system and a Concept II ergometer fitted with force and torque transducers. The population was a group of three rowers: a beginner, a regional-level rower, and a female rower from the French national team. After a warm-up of a few minutes, the experiment consisted of rowing for 20 minutes, with the instruction to row as far as possible. Data were acquired over the first 5 minutes.
RESULTS: The first results show, for all three subjects, that the force developed on the handle vanishes before the end of the propulsion, which corresponds to an inefficient phase of the rower's gesture. A thorough morphological analysis shows that this phase is synchronized with a drop in the speed of the handle. Nevertheless, during this phase the elbow angular speed is maximal; consequently, the contribution of the arms is inefficient and the rower can no longer increase the speed of the handle. In addition, a comparative analysis of the three rowers is presented, based on the study of inter-limb angular variables and on the efforts delivered by the feet and the hands. The angular-variable analysis shows a stereotyped movement for the skilled rower, confirming that the expert's gesture is an automatism.
Moreover, the increase of the force applied on the foot stretchers during the recovery was delayed for the female rower compared with the other rowers: the female rower controls her recovery. As this force does not propel the boat, the analysis of this variable reveals an inefficient phase for the beginner and the regional rower.
CONCLUSION: Kinematic and dynamic analysis of the rower's gesture revealed two ineffective phases: the first at the end of the propulsion and the second at the end of the recovery.
REFERENCES:
DAL MONTE 89: Dal Monte A., Komor A., Rowing and Sculling Mechanics, in Biomechanics of Sport, Vaughan C.L. (ed.), ISBN 0-8493-6820-0, 1989.
ZATSIORSKY 91: Zatsiorsky V., Yakunin N., Mechanics and Biomechanics of Rowing: A Review, International Journal of Sport Biomechanics, p. 229-281, 1991.
HARTMANN 93: Hartmann U., Mader A., Wasser K., Klauer I., Peak Forces, Velocity, and Power During Five and Maximal Ten Rowing Ergometer Strokes by World-Class Female and Male Rowers, Int. J. Sports Med., Vol. 14, Suppl. 1, p. 42-45, 1993.
Some discussions of D. Fearnhead and D. Prangle's Read Paper "Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation"
This report is a collection of comments on the Read Paper of Fearnhead and
Prangle (2011), to appear in the Journal of the Royal Statistical Society
Series B, along with a reply from the authors.
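The semi-automatic construction discussed in the Read Paper can be sketched as follows: a pilot regression of the parameter on candidate statistics produces a fitted linear predictor, which then serves as the (low-dimensional) summary in a standard ABC run. The toy model, features, and tolerance are illustrative assumptions, not Fearnhead and Prangle's exact setup:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed toy model: data ~ N(theta, 1), prior theta ~ U(-5, 5). Candidate
# features: a handful of raw statistics of each simulated dataset.
def features(x):
    return [x.mean(), np.median(x), x.std(), np.mean(x**3)]

# Pilot stage: regress theta on the candidate features; the fitted linear
# predictor of E[theta | s] becomes the one-dimensional summary statistic.
n_pilot = 3000
theta_pilot = rng.uniform(-5, 5, size=n_pilot)
feats = np.array([features(rng.normal(t, 1, 50)) for t in theta_pilot])
X = np.column_stack([np.ones(n_pilot), feats])
beta, *_ = np.linalg.lstsq(X, theta_pilot, rcond=None)

def summary(x):
    return np.concatenate([[1.0], features(x)]) @ beta

# Main stage: plain ABC rejection using the learned summary.
y_obs = rng.normal(2.0, 1.0, size=50)
s_obs = summary(y_obs)
accepted = []
for _ in range(20000):
    t = rng.uniform(-5, 5)
    if abs(summary(rng.normal(t, 1, 50)) - s_obs) < 0.05:
        accepted.append(t)
accepted = np.array(accepted)
print(accepted.mean(), len(accepted))
```

Collapsing many candidate statistics into one regression-based summary per parameter is the paper's route around the curse of dimensionality in the ABC comparison step.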