
    Bayesian and Markov chain Monte Carlo methods for identifying nonlinear systems in the presence of uncertainty

    In this paper, the authors outline the general principles behind an approach to Bayesian system identification and highlight the benefits of adopting a Bayesian framework when attempting to identify models of nonlinear dynamical systems in the presence of uncertainty. Through a summary of some key algorithms, they then describe how many of the potential difficulties associated with a Bayesian approach can be overcome through the use of Markov chain Monte Carlo (MCMC) methods. The paper concludes with a case study in which an MCMC algorithm is used to facilitate the Bayesian system identification of a nonlinear dynamical system from experimentally observed acceleration time histories.
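    The abstract names the technique but not the code; as a minimal sketch of the MCMC machinery described, the following uses a random-walk Metropolis sampler to recover the linear and cubic stiffness coefficients of a hypothetical cubic (Duffing-type) restoring force from noisy synthetic data. The model, parameter names and tuning constants are illustrative assumptions, not the paper's case study.

        # Minimal random-walk Metropolis sampler for a nonlinear model.
        # Illustrative only: the model y = k1*x + k3*x**3 + noise and all
        # parameter values are assumptions, not taken from the paper.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "experimental" data from a cubic-stiffness restoring force.
        x = np.linspace(-1.0, 1.0, 50)
        k1_true, k3_true, sigma = 2.0, 5.0, 0.2
        y = k1_true * x + k3_true * x**3 + rng.normal(0.0, sigma, x.size)

        def log_posterior(theta):
            """Gaussian log-likelihood with a flat prior on (k1, k3)."""
            k1, k3 = theta
            resid = y - (k1 * x + k3 * x**3)
            return -0.5 * np.sum(resid**2) / sigma**2

        theta = np.array([0.0, 0.0])
        lp = log_posterior(theta)
        samples = []
        for _ in range(20_000):
            prop = theta + rng.normal(0.0, 0.05, size=2)  # symmetric proposal
            lp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < lp_prop - lp:      # Metropolis accept step
                theta, lp = prop, lp_prop
            samples.append(theta)

        samples = np.array(samples[5_000:])               # discard burn-in
        print("posterior means:", samples.mean(axis=0))   # ~ [2.0, 5.0]

    A Gaussian likelihood and flat prior keep the sketch short; the algorithms the paper summarises address harder settings that this toy omits.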

    The Communicability of Graphical Alternatives to Tabular Displays of Statistical Simulation Studies

    Simulation studies are often used to assess the frequency properties and optimality of statistical methods. They are typically reported in tables, which may contain hundreds of figures to be contrasted over multiple dimensions. To assess the degree to which these tables are fit for purpose, we performed a randomised cross-over experiment in which statisticians were asked to extract information from (i) such a table sourced from the literature and (ii) a graphical adaptation designed by the authors, and were timed and assessed for accuracy. We developed hierarchical models accounting for differences between individuals of different experience levels (under- and post-graduate), within experience levels, and between different table-graph pairs. In our experiment, information could be extracted more quickly and, for less experienced participants, more accurately from graphical presentations than from tabular displays. We also performed a literature review to assess the prevalence of hard-to-interpret design features in tables of simulation studies in three popular statistics journals, finding that many are presented innumerately. We recommend that simulation studies be presented in graphical form.
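    As a rough sketch of the kind of hierarchical analysis described, the snippet below simulates timed extraction tasks with a per-participant random intercept and fits a mixed-effects model; the variable names, effect sizes and single random effect are simplifying assumptions, not the authors' model.

        # Mixed-effects sketch: log extraction time with a fixed display
        # effect (table vs. graph) and a random intercept per participant.
        # All data are simulated; effect sizes are arbitrary assumptions.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_subj, n_items = 30, 8

        rows = []
        for s in range(n_subj):
            subj_effect = rng.normal(0.0, 0.3)     # between-participant variation
            for i in range(n_items):
                for display, shift in [("table", 0.0), ("graph", -0.25)]:
                    log_t = 3.0 + shift + subj_effect + rng.normal(0.0, 0.2)
                    rows.append({"subject": s, "display": display,
                                 "log_time": log_t})
        df = pd.DataFrame(rows)

        # Random intercept per participant; fixed effect of display type.
        fit = smf.mixedlm("log_time ~ display", df, groups=df["subject"]).fit()
        print(fit.summary())                       # display[T.table] ≈ +0.25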

    Extensions to the Visual Predictive Check to facilitate model performance evaluation

    The Visual Predictive Check (VPC) is a valuable and supportive instrument for evaluating model performance. However, in its most commonly applied form, the method largely depends on a subjective comparison of the distribution of the simulated data with the observed data, without explicitly quantifying and relating the information in both. Recent adaptations to the VPC take this drawback into consideration by presenting the observed and predicted data as percentiles. In addition, some of these adaptations represent the uncertainty in the predictions visually. However, it is not assessed whether the expected random distribution of the observations around the predicted median trend is realised in relation to the number of observations. Moreover, the influence of, and the information residing in, missing data at each time point is not taken into consideration. Therefore, in this investigation the VPC is extended with two methods to support a less subjective and thereby more adequate evaluation of model performance: (i) the Quantified Visual Predictive Check (QVPC) and (ii) the Bootstrap Visual Predictive Check (BVPC). The QVPC presents the distribution of the observations as a percentage above and below the predicted median at each time point, regardless of the density of the data, while also visualising the percentage of unavailable data. The BVPC weighs the predicted median against the 5th, 50th and 95th percentiles resulting from a bootstrap of the observed data median at each time point, while accounting for the number and the theoretical position of unavailable data. The proposed extensions to the VPC are illustrated by a pharmacokinetic simulation example and applied to a pharmacodynamic disease progression example.
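    A minimal sketch of the bootstrap step behind the BVPC, under assumed data: at each time point the observed values are resampled to give 5th, 50th and 95th percentiles of the observed median, against which a model-predicted median can be weighed (the real BVPC's handling of unavailable data is omitted here).

        # Bootstrap bands for the observed median at each time point.
        # Data and the stand-in "model prediction" are fabricated.
        import numpy as np

        rng = np.random.default_rng(2)
        times = np.arange(6)
        obs = {t: rng.lognormal(mean=1.0 - 0.1 * t, sigma=0.4, size=40)
               for t in times}

        def bootstrap_median_bands(values, n_boot=2000):
            """5th, 50th and 95th percentiles of the bootstrapped median."""
            medians = [np.median(rng.choice(values, size=values.size,
                                            replace=True))
                       for _ in range(n_boot)]
            return np.percentile(medians, [5, 50, 95])

        for t in times:
            lo, mid, hi = bootstrap_median_bands(obs[t])
            pred_median = np.exp(1.0 - 0.1 * t)    # stand-in model prediction
            flag = "ok" if lo <= pred_median <= hi else "check model"
            print(f"t={t}: obs median band [{lo:.2f}, {hi:.2f}], "
                  f"predicted {pred_median:.2f} -> {flag}")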

    Climate warming, marine protected areas and the ocean-scale integrity of coral reef ecosystems

    Coral reefs have emerged as one of the ecosystems most vulnerable to climate variation and change. While the contribution of a warming climate to the loss of live coral cover has been well documented across large spatial and temporal scales, the associated effects on fish have not. Here, we respond to recent and repeated calls to assess the importance of local management in conserving coral reefs in the context of global climate change. Such information is important, as coral reef fish assemblages are the most species-dense vertebrate communities on earth, contributing critical ecosystem functions and providing crucial ecosystem services to human societies in tropical countries. Our assessment of the impacts of the 1998 mass bleaching event on coral cover, reef structural complexity, and reef-associated fishes spans 7 countries, 66 sites and 26 degrees of latitude in the Indian Ocean. Using Bayesian meta-analysis we show that changes in the size structure, diversity and trophic composition of the reef fish community have followed coral declines. Although the ocean-scale integrity of these coral reef ecosystems has been lost, it is encouraging that the effects are spatially variable at multiple scales, with impacts and vulnerability affected by geography but not management regime. Existing no-take marine protected areas still support high biomass of fish; however, they had no positive effect on the ecosystem response to large-scale disturbance. This suggests a need for future conservation and management efforts to identify and protect regional refugia, which should be integrated into existing management frameworks and combined with policies to improve system-wide resilience to climate variation and change.

    New approaches to measuring anthelminthic drug efficacy: parasitological responses of childhood schistosome infections to treatment with praziquantel

    Background: By 2020, the global health community aims to control and eliminate human helminthiases, including schistosomiasis in selected African countries, principally by preventive chemotherapy (PCT) through mass drug administration (MDA) of anthelminthics. Quantitative monitoring of anthelminthic responses is crucial for promptly detecting changes in efficacy, potentially indicative of emerging drug resistance. Statistical models offer a powerful means to delineate and compare efficacy among individuals, among groups of individuals and among populations. Methods: We illustrate a variety of statistical frameworks that offer different levels of inference by analysing data from nine previous studies on egg counts collected from African children before and after administration of praziquantel. Results: We quantify responses to praziquantel as egg reduction rates (ERRs), using different frameworks to estimate ERRs among population strata, as average responses, and within strata, as individual responses. We compare our model-based average ERRs to corresponding model-free estimates, using as reference the World Health Organization (WHO) 90% threshold of optimal efficacy. We estimate distributions of individual responses and summarize the variation among these responses as the fraction of ERRs falling below the WHO threshold. Conclusions: Generic models for evaluating responses to anthelminthics deepen our understanding of variation among populations, sub-populations and individuals. We discuss the future application of statistical modelling approaches for monitoring and evaluation of PCT programmes targeting human helminthiases in the context of the WHO 2020 control and elimination goals.
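    For concreteness, a minimal sketch of the model-free reference estimate mentioned above: the egg reduction rate ERR = 1 - (mean post-treatment count / mean pre-treatment count), together with individual ERRs and the fraction falling below the WHO 90% threshold. The counts are fabricated for illustration.

        # Model-free egg reduction rate (ERR) estimates on synthetic counts.
        import numpy as np

        rng = np.random.default_rng(3)
        pre = rng.negative_binomial(n=2, p=0.02, size=200)  # eggs before treatment
        post = rng.binomial(pre, 0.08)                      # ~92% individual reduction

        # Population-level (arithmetic-mean-based) ERR.
        err_group = 1.0 - post.mean() / pre.mean()

        # Individual-level ERRs and the fraction below the WHO threshold.
        mask = pre > 0                           # ERR undefined when pre == 0
        err_ind = 1.0 - post[mask] / pre[mask]
        frac_below = np.mean(err_ind < 0.90)

        print(f"group ERR: {err_group:.3f} (WHO optimal-efficacy threshold: 0.90)")
        print(f"fraction of individual ERRs below threshold: {frac_below:.2f}")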

    iQuantitator: A tool for protein expression inference using iTRAQ

    Background: Isobaric Tags for Relative and Absolute Quantitation (iTRAQ™) [Applied Biosystems] have seen increased application in differential protein expression analysis. To facilitate the growing need to analyze iTRAQ data, especially for cases involving multiple iTRAQ experiments, we have developed a modeling approach, statistical methods, and tools for estimating the relative changes in protein expression under various treatments and experimental conditions. Results: This modeling approach provides a unified analysis of data from multiple iTRAQ experiments and links the observed quantity (reporter ion peak area) to the experiment design and the calculated quantity of interest (treatment-dependent protein and peptide fold change) through an additive model under log transformation. Others have demonstrated this modeling approach through a case study and noted the computational challenges of parameter inference in the unbalanced data sets typical of multiple iTRAQ experiments. Here we present the development of an inference approach, based on hierarchical regression with batching of regression coefficients and Markov chain Monte Carlo (MCMC) methods, that overcomes some of these challenges. In addition to our discussion of the underlying method, we also present our implementation of the software, simulation results, experimental results, and sample output from the resulting analysis report. Conclusion: iQuantitator's process-based modeling approach overcomes limitations in current methods and allows for application in a variety of experimental designs. Additionally, hypertext-linked documents produced by the tool aid in the interpretation and exploration of results.
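    A minimal sketch of the additive model under log transformation that the abstract describes, with the hierarchical MCMC inference replaced by ordinary least squares to keep the example short; peptide counts, effects and channel layout are fabricated assumptions, not iQuantitator's implementation.

        # Additive log-scale model: log2(peak area) = peptide effect
        # + treatment effect + noise, fit by OLS on simulated data.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        rows = []
        for pep in range(6):                   # peptides of one protein
            pep_eff = rng.normal(10.0, 1.0)    # peptide-specific ionisation efficiency
            for trt, fold in [("control", 0.0), ("treated", np.log2(1.8))]:
                for rep in range(3):           # iTRAQ channels / replicates
                    rows.append({"peptide": f"p{pep}", "treatment": trt,
                                 "log2_area": pep_eff + fold
                                              + rng.normal(0.0, 0.2)})
        df = pd.DataFrame(rows)

        fit = smf.ols("log2_area ~ C(peptide) + C(treatment)", df).fit()
        log2_fc = fit.params["C(treatment)[T.treated]"]
        print(f"estimated protein fold change: {2**log2_fc:.2f}x (true: 1.80x)")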

    Merging Resource Availability with Isotope Mixing Models: The Role of Neutral Interaction Assumptions

    Background: Bayesian mixing models have allowed for the inclusion of uncertainty and prior information in the analysis of trophic interactions using stable isotopes. Formulating prior distributions is relatively straightforward when incorporating dietary data. However, the use of data that are related, but not directly proportional, to diet (such as prey availability data) is often problematic because such information is not necessarily predictive of diet, and the information required to build a reliable prior distribution for all prey species is often unavailable. Omitting prey availability data impacts the estimation of a predator's diet and introduces the strong assumption of consumer ultrageneralism (where all prey are consumed in equal proportions), particularly when multiple prey have similar isotope values. Methodology: We develop a procedure to incorporate prey availability data into Bayesian mixing models conditional on the similarity of isotope values between two prey. If a pair of prey have similar isotope values (resulting in highly uncertain mixing model results), our model increases the weight of availability data in estimating the contribution of prey to a predator's diet. We test the utility of this method in an intertidal community against independently measured feeding rates. Conclusions: Our results indicate that our weighting procedure increases the accuracy with which consumer diets can be inferred in situations where multiple prey have similar isotope values. This suggests that the exchange of formalism for predictive power is merited, particularly when the relationship between prey availability and a predator's diet cannot be assumed for all species in a system.
    Funding: National Science Foundation (NSF) [DEB-0608178]; U.S. Environmental Protection Agency; Department of Education; Sigma Xi; University of Chicago; Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP); Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
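    A minimal sketch of the conditional weighting idea, under assumed functional forms: as two prey's isotope values become more similar, availability data receive more weight in a Dirichlet prior on diet proportions. The exponential weighting function and all values are illustrative, not the authors' procedure.

        # Similarity-weighted blend of a flat prior and availability data,
        # expressed as a Dirichlet concentration vector over diet proportions.
        import numpy as np

        def diet_prior(prey_iso, availability, scale=1.0, alpha0=1.0):
            """Blend a flat prior with availability, weighted by prey similarity."""
            prey_iso = np.asarray(prey_iso, float)
            availability = np.asarray(availability, float)
            availability = availability / availability.sum()

            # Similarity weight in [0, 1]: 1 when prey isotope values coincide.
            spread = np.ptp(prey_iso)
            w = np.exp(-spread / scale)

            flat = np.full(prey_iso.size, 1.0 / prey_iso.size)
            mean = (1.0 - w) * flat + w * availability  # prior mean of proportions
            return alpha0 * prey_iso.size * mean        # Dirichlet concentration

        # Distinct prey: availability barely matters; similar prey: it dominates.
        print(diet_prior(prey_iso=[-18.0, -12.0], availability=[0.8, 0.2]))
        print(diet_prior(prey_iso=[-15.1, -15.0], availability=[0.8, 0.2]))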

    Projection of the year 2050 burden of diabetes in the US adult population: dynamic modeling of incidence, mortality, and prediabetes prevalence

    Background: People with diabetes can suffer from diverse complications that seriously erode quality of life. Diabetes, which cost the United States more than $174 billion in 2007, is expected to take an increasingly large financial toll in subsequent years. Accurate projections of diabetes burden are essential to policymakers planning for future health care needs and costs. Methods: Using data on prediabetes and diabetes prevalence in the United States, forecasted incidence, and current US Census projections of mortality and migration, the authors constructed a series of dynamic models employing systems of difference equations to project the future burden of diabetes among US adults. A three-state model partitions the US population into no diabetes, undiagnosed diabetes, and diagnosed diabetes. A four-state model divides the state of "no diabetes" into high-risk (prediabetes) and low-risk (normal glucose) states. A five-state model incorporates an intervention designed to prevent or delay diabetes in adults at high risk. Results: The authors project that annual diagnosed diabetes incidence (new cases) will increase from about 8 cases per 1,000 in 2008 to about 15 in 2050. Assuming low incidence and relatively high diabetes mortality, total diabetes prevalence (diagnosed and undiagnosed cases) is projected to increase from 14% in 2010 to 21% of the US adult population by 2050. However, if recent increases in diabetes incidence continue and diabetes mortality is relatively low, prevalence will increase to 33% by 2050. A middle-ground scenario projects a prevalence of 25% to 28% by 2050. Intervention can reduce, but not eliminate, increases in diabetes prevalence. Conclusions: These projected increases are largely attributable to the aging of the US population, increasing numbers of members of higher-risk minority groups in the population, and people with diabetes living longer. Effective strategies will need to be undertaken to moderate the impact of these factors on national diabetes burden. Our analysis suggests that widespread implementation of reasonably effective preventive interventions focused on high-risk subgroups of the population can considerably reduce, but not eliminate, future increases in diabetes prevalence.
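    A minimal sketch of the three-state difference-equation structure described (no diabetes, undiagnosed, diagnosed), iterated yearly; every rate below is a placeholder, not one of the paper's calibrated inputs.

        # Three-state yearly difference-equation model with toy rates.
        import numpy as np

        # State vector: [no diabetes, undiagnosed, diagnosed], millions of adults.
        state = np.array([200.0, 6.0, 18.0])

        incidence = 0.008   # diagnosed cases per susceptible per year (placeholder)
        onset = 0.004       # undiagnosed onset rate (placeholder)
        diagnosis = 0.20    # yearly probability an undiagnosed case is diagnosed
        mort = np.array([0.010, 0.015, 0.020])  # state-specific mortality
        inflow = 3.0        # net growth + migration into "no diabetes", millions

        for _ in range(2050 - 2008):             # iterate yearly to 2050
            no_dm, undx, dx = state
            new_undx = onset * no_dm
            new_dx = incidence * no_dm + diagnosis * undx
            state = np.array([
                no_dm - new_undx - incidence * no_dm - mort[0] * no_dm + inflow,
                undx + new_undx - diagnosis * undx - mort[1] * undx,
                dx + new_dx - mort[2] * dx,
            ])

        prev = 100 * (state[1] + state[2]) / state.sum()
        print(f"projected 2050 diabetes prevalence (toy inputs): {prev:.1f}%")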

    A hierarchical Bayesian model for understanding the spatiotemporal dynamics of the intestinal epithelium

    Our work addresses two key challenges, one biological and one methodological. First, we aim to understand how proliferation and cell migration rates in the intestinal epithelium are related under healthy, damaged (Ara-C treated) and recovering conditions, and how these relations can be used to identify mechanisms of repair and regeneration. We analyse new data, presented in more detail in a companion paper, in which BrdU/IdU cell-labelling experiments were performed under these respective conditions. Second, in considering how to more rigorously process these data and interpret them using mathematical models, we use a probabilistic, hierarchical approach. This provides a best-practice approach for systematically modelling and understanding the uncertainties that can otherwise undermine the generation of reliable conclusions: uncertainties in experimental measurement and treatment, difficult-to-compare mathematical models of underlying mechanisms, and unknown or unobserved parameters. Both spatially discrete and continuous mechanistic models are considered and related via hierarchical conditional probability assumptions. We perform model checks on both in-sample and out-of-sample datasets and use them to show how to test possible model improvements and assess the robustness of our conclusions. We conclude, for the present set of experiments, that a primarily proliferation-driven model suffices to predict labelled cell dynamics over most time-scales.
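    As a loose illustration of a spatially discrete, proliferation-driven model of the kind compared in the paper, the sketch below advects a labelled-cell fraction up a crypt-villus axis at a velocity set by cumulative division below each position; the geometry, rates and omission of label dilution are all simplifying assumptions.

        # Proliferation-driven advection of a labelled-cell fraction along
        # a discretised crypt-villus axis (first-order upwind scheme).
        import numpy as np

        n_cells = 60          # positions along the crypt-villus axis
        prolif_zone = 20      # cells below this index divide
        lam = 0.5             # divisions per cell per day (placeholder)
        dt = 0.05             # time step in days (keeps the scheme stable)

        label = np.zeros(n_cells)
        label[:prolif_zone] = 1.0   # initial BrdU pulse labels dividing cells

        for _ in range(int(2 / dt)):    # simulate 2 days
            # Velocity at position i = cumulative division rate below i.
            velocity = lam * np.minimum(np.arange(n_cells), prolif_zone)
            shift = velocity * dt       # expected displacement per step (cells)
            # Upwind advection of the labelled fraction.
            new = label.copy()
            new[1:] += shift[1:] * (label[:-1] - label[1:])
            label = np.clip(new, 0.0, 1.0)

        print("labelled fraction by position:", np.round(label[::10], 2))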

    Optimal Compensation for Temporal Uncertainty in Movement Planning

    Motor control requires the generation of a precise temporal sequence of control signals sent to the skeletal musculature. We describe an experiment that, for good performance, requires human subjects to plan movements taking into account uncertainty in their movement duration and the increase in that uncertainty with increasing movement duration. We do this by rewarding movements performed within a specified time window, and penalizing slower movements in some conditions and faster movements in others. Our results indicate that subjects compensated for their natural duration-dependent temporal uncertainty as well as for an overall increase in temporal uncertainty that was imposed experimentally. Their compensation for temporal uncertainty, both the natural duration-dependent and imposed overall components, was nearly optimal in the sense of maximizing expected gain in the task. The motor system is able to model its temporal uncertainty and compensate for that uncertainty so as to optimize the consequences of movement.
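    A minimal sketch of the optimality computation implied here, under an assumed reward structure and noise law: choose the planned duration that maximises Monte Carlo expected gain when the realised duration is Gaussian around the plan with a standard deviation proportional to it.

        # Expected-gain maximisation with duration-dependent temporal noise.
        # Window bounds, payoffs and the noise coefficient are assumptions.
        import numpy as np

        rng = np.random.default_rng(5)

        def expected_gain(planned, lo=0.55, hi=0.65, reward=1.0, penalty=-1.0,
                          cv=0.08, n=100_000):
            """Monte Carlo expected gain; only late movements are penalised."""
            sd = cv * planned                 # uncertainty grows with duration
            t = rng.normal(planned, sd, n)
            gain = np.where((t >= lo) & (t <= hi), reward,
                            np.where(t > hi, penalty, 0.0))
            return gain.mean()

        plans = np.linspace(0.45, 0.70, 26)
        gains = [expected_gain(p) for p in plans]
        best = plans[int(np.argmax(gains))]
        print(f"optimal planned duration: {best:.2f} s "
              f"(earlier than the 0.60 s window centre, hedging late penalties)")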