61 research outputs found

    A comparison of inferential methods for highly non-linear state space models in ecology and epidemiology

    Highly non-linear, chaotic or near-chaotic, dynamic models are important in fields such as ecology and epidemiology: for example, pest species and diseases often display highly non-linear dynamics. However, such models are problematic from the point of view of statistical inference. The defining feature of chaotic and near-chaotic systems is extreme sensitivity to small changes in system states and parameters, and this can interfere with inference. There are two main classes of methods for circumventing these difficulties: information reduction approaches, such as Approximate Bayesian Computation or Synthetic Likelihood, and state space methods, such as Particle Markov chain Monte Carlo, Iterated Filtering or Parameter Cascading. The purpose of this article is to compare the methods in order to reach conclusions about how to approach inference with such models in practice. We show that neither class of methods is universally superior to the other. State space methods can suffer multimodality problems in settings with low process noise or model mis-specification, leading to bias toward stable dynamics and high process noise. Information reduction methods avoid this problem but, under the correct model and with sufficient process noise, state space methods lead to substantially sharper inference than information reduction methods. More practically, there are also differences in the tuning requirements of the different methods. Our overall conclusion is that model development and checking should probably be performed using an information reduction method with low tuning requirements, while for final inference it is likely to be better to switch to a state space method, checking results against the information reduction approach.
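
    As a rough illustration of the state space class of methods compared above, the sketch below implements a bootstrap particle filter, the likelihood-estimation engine inside Particle Markov chain Monte Carlo, for a stochastic Ricker map with Poisson observations, a standard near-chaotic test model in this literature. The model, parameter values, and function names are illustrative assumptions, not the article's actual setup.

```python
# A minimal sketch (not taken from the article): a bootstrap particle filter
# estimating the log-likelihood of a stochastic Ricker map with Poisson
# observations. Model, parameter values and data are illustrative assumptions.
from math import lgamma

import numpy as np

rng = np.random.default_rng(0)

def simulate_ricker(r, sigma, phi, T):
    """Simulate latent states N_t and Poisson observations y_t from a noisy Ricker map."""
    N = np.empty(T)
    y = np.empty(T, dtype=int)
    N[0] = 1.0
    for t in range(T):
        if t > 0:
            N[t] = r * N[t - 1] * np.exp(-N[t - 1] + sigma * rng.normal())
        y[t] = rng.poisson(phi * N[t])
    return N, y

def particle_log_lik(y, r, sigma, phi, n_particles=2000):
    """Bootstrap particle filter estimate of log p(y | r, sigma, phi)."""
    particles = np.full(n_particles, 1.0)   # initial state N_0 = 1, matching the simulator
    log_lik = 0.0
    for t in range(len(y)):
        if t > 0:
            # Propagate particles through the stochastic Ricker map
            particles = r * particles * np.exp(-particles + sigma * rng.normal(size=n_particles))
        # Poisson observation log-weights log p(y_t | N_t)
        log_w = y[t] * np.log(phi * particles + 1e-300) - phi * particles - lgamma(y[t] + 1)
        m = log_w.max()
        w = np.exp(log_w - m)
        log_lik += m + np.log(w.mean())
        # Multinomial resampling proportional to the weights
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]
    return log_lik

_, y = simulate_ricker(r=44.7, sigma=0.3, phi=10.0, T=50)
print(particle_log_lik(y, r=44.7, sigma=0.3, phi=10.0))
```

    An information reduction method such as Synthetic Likelihood would instead compare summary statistics of simulated and observed series, trading sharpness of inference for robustness to the sensitivity that the filter above must handle directly.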

    A comparison of 7 random-effects models for meta-analyses that estimate the summary odds ratio

    Comparative trials that report binary outcome data are commonly pooled in systematic reviews and meta-analyses. This type of data can be presented as a series of 2-by-2 tables. The pooled odds ratio is often presented as the outcome of primary interest in the resulting meta-analysis. We examine the use of 7 random-effects models that have been proposed for this purpose. The first of these models is the conventional one, which uses normal within-study approximations and a 2-stage approach. The other models are generalised linear mixed models that perform the analysis in 1 stage and have the potential to provide more accurate inference. We explore the implications of using these 7 models in the context of a Cochrane Review, and we also perform a simulation study. We conclude that generalised linear mixed models can result in better statistical inference than the conventional 2-stage approach but that this type of model also presents challenges. These include more demanding numerical methods and deciding how best to model study-specific baseline risks. One possible approach for analysts is to specify a primary model before performing the systematic review and to present results from the other models in a sensitivity analysis. Only one of the models that we investigate is found to perform poorly, so any of the other models could be considered for either the primary or the sensitivity analysis.
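
    As a point of reference for the conventional 2-stage approach described above, the sketch below pools invented 2-by-2 tables by computing each study's log odds ratio and its approximate variance, estimating the between-study variance with the DerSimonian-Laird estimator (one common choice of 2-stage method), and combining the studies with inverse-variance weights. None of this is the article's own analysis or data.

```python
# A minimal sketch of the conventional 2-stage random-effects meta-analysis of an
# odds ratio: normal within-study approximation plus a DerSimonian-Laird estimate
# of the between-study variance. The 2-by-2 tables are invented for illustration;
# they are not the Cochrane Review data analysed in the article.
import numpy as np

# Each row: events, non-events in the treatment arm; events, non-events in the control arm
tables = np.array([
    [12, 88, 20, 80],
    [ 5, 45,  9, 41],
    [30, 70, 38, 62],
    [ 8, 92, 15, 85],
])

a, b, c, d = (tables + 0.5).T           # add 0.5 to every cell to guard against zero counts
y = np.log((a * d) / (b * c))           # per-study log odds ratios
v = 1 / a + 1 / b + 1 / c + 1 / d       # approximate within-study variances

# Stage 2: DerSimonian-Laird estimate of the between-study variance tau^2
w = 1 / v
mu_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / C)

# Random-effects pooling with the heterogeneity folded into the weights
w_star = 1 / (v + tau2)
mu = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))

print(f"pooled OR = {np.exp(mu):.2f}, "
      f"95% CI ({np.exp(mu - 1.96 * se):.2f}, {np.exp(mu + 1.96 * se):.2f}), "
      f"tau^2 = {tau2:.3f}")
```

    The generalised linear mixed models compared in the article would instead model the binomial counts directly in a single stage, for example via logistic regression with a random treatment effect, rather than relying on the normal approximation above.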

    Vol. 4, No. 2 (Full Issue)

    Vol. 15, No. 1 (Full Issue)

    Vol. 1, No. 2 (Full Issue)

    Vol. 13, No. 1 (Full Issue)

    Vol. 5, No. 2 (Full Issue)

    Reproducible Aggregation of Sample-Split Statistics

    Statistical inference is often simplified by sample-splitting. This simplification comes at the cost of introducing randomness not native to the data. We propose a simple procedure for sequentially aggregating statistics constructed from multiple splits of the same sample. The user specifies a bound and a nominal error rate. If the procedure is run twice on the same data, the nominal error rate approximates the chance that the two results differ by more than the bound. We analyze the accuracy of the nominal error rate and illustrate the application of the procedure to several widely used statistical methods.
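
    The abstract does not spell out the aggregation rule, so the following sketch is only a generic illustration of the idea rather than the authors' procedure: statistics from repeated random splits are averaged, and splitting stops once the split-to-split variability implies that two independent runs would differ by more than a user-chosen bound with probability roughly equal to the nominal error rate. The toy data, the out-of-sample MSE statistic, and the stopping rule are all assumptions made for illustration.

```python
# A generic sketch only: it is not the procedure proposed in the paper. Statistics
# from repeated random splits of the same data are averaged, and splitting stops
# once a normal approximation suggests that two independent runs of this loop
# would differ by more than a user-chosen bound with probability roughly alpha.
from statistics import NormalDist

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)      # toy data set, invented for illustration

def split_statistic(x, y, rng):
    """One sample-split statistic: out-of-sample MSE of a line fitted on a random half."""
    idx = rng.permutation(len(x))
    train, test = idx[: len(x) // 2], idx[len(x) // 2:]
    slope, intercept = np.polyfit(x[train], y[train], 1)
    resid = y[test] - (slope * x[test] + intercept)
    return np.mean(resid ** 2)

def aggregate(bound=0.02, alpha=0.05, min_splits=10, max_splits=10_000):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    stats = []
    while len(stats) < max_splits:
        stats.append(split_statistic(x, y, rng))
        k = len(stats)
        if k >= min_splits:
            # Two independent aggregates of k splits differ by roughly N(0, 2 * var / k)
            half_width = z * np.sqrt(2 * np.var(stats, ddof=1) / k)
            if half_width <= bound:
                break
    return np.mean(stats), k

estimate, n_splits = aggregate()
print(f"aggregated out-of-sample MSE = {estimate:.3f} after {n_splits} splits")
```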
    • …