
    Calibrating Agent-Based Models Using Uncertainty Quantification Methods

    Agent-based models (ABMs) can be found across a number of diverse application areas, ranging from simulating consumer behaviour to infectious disease modelling. Part of their popularity is due to their ability to simulate individual behaviours and decisions over space and time. However, whilst there are plentiful examples within the academic literature, these models are only beginning to make an impact within policy areas. Whilst frameworks such as NetLogo make the creation of ABMs relatively easy, a number of key methodological issues, including the quantification of uncertainty, remain. In this paper we draw on state-of-the-art approaches from the fields of uncertainty quantification and model optimisation to describe a novel framework for the calibration of ABMs using History Matching and Approximate Bayesian Computation. The utility of the framework is demonstrated on three example models of increasing complexity: (i) Sugarscape, to illustrate the approach on a toy example; (ii) a model of the movement of birds, to explore the efficacy of our framework and compare it to alternative calibration approaches; and (iii) the RISC model of farmer decision making, to demonstrate its value in a real application. The results highlight the efficiency and accuracy with which this approach can be used to calibrate ABMs. The method can readily be applied to local or national-scale ABMs, such as those linked to the creation or tailoring of key policy decisions.
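    The abstract names History Matching and Approximate Bayesian Computation (ABC) but gives no implementation detail; the sketch below is a minimal rejection-ABC loop against a toy simulator, in which `run_abm`, the uniform prior, and the tolerance are hypothetical stand-ins rather than the paper's setup:

```python
# Minimal rejection-ABC sketch for calibrating a simulator (illustrative only;
# the History Matching step described in the paper is not reproduced here).
import numpy as np

def run_abm(params, rng):
    # Hypothetical stand-in for an ABM run: returns one summary statistic
    # that depends noisily on two parameters.
    return params[0] + 2.0 * params[1] + rng.normal(scale=0.5)

def abc_rejection(observed, n_draws=10_000, tol=0.2, seed=0):
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 5.0, size=2)  # assumed prior: Uniform(0, 5)
        if abs(run_abm(theta, rng) - observed) <= tol:
            accepted.append(theta)             # keep draws whose output is near the data
    return np.array(accepted)                  # approximate posterior sample

posterior = abc_rejection(observed=7.0)
print(len(posterior), posterior.mean(axis=0))
```

    In practice History Matching would first discard implausible parameter regions, so the more expensive ABC stage runs only over parameter values that have not already been ruled out.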

    Assessment of learning curves in complex surgical interventions: a consecutive case-series study

    Background: Surgical interventions are complex, which complicates their rigorous assessment through randomised clinical trials. An important component of complexity relates to surgeon experience and the rate at which the required level of skill is achieved, known as the learning curve. There is considerable evidence that operator performance for surgical innovations will change with increasing experience. Such learning effects complicate evaluations; the start of the trial might be delayed, resulting in loss of surgeon equipoise, or, if an assessment is undertaken before performance has stabilised, the true impact of the intervention may be distorted. Methods: Formal estimation of learning parameters is necessary to characterise the learning curve, model its evolution and adjust for its presence during assessment. Current methods are either descriptive or model the learning curve through three main features: the initial skill level, the learning rate and the final skill level achieved. We introduce a fourth characterising feature, the duration of the learning period, which provides an estimate of the point at which learning has stabilised. We propose a two-phase model to formally estimate all four learning curve features. Results: We demonstrate that the two-phase model can be used to estimate the end of the learning period by incorporating a parameter for the duration of learning. This is achieved by breaking the model down into a phase describing the learning period and one describing cases after the final skill level is reached, with the break point representing the length of learning. We illustrate the method using cardiac surgery data. Conclusions: This modelling extension is useful as it provides a measure of the potential cost of learning an intervention and enables statisticians to accommodate cases undertaken during the learning phase and to assess the intervention after the optimal skill level is reached. The limitations of the method and implications for the optimal timing of a definitive randomised controlled trial are also discussed.
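    A minimal broken-stick formulation consistent with the four features described might take the following form; the notation here is illustrative rather than the authors' own:

```latex
% Two-phase (broken-stick) learning-curve sketch: outcome y_i on case i.
% beta_0 = initial skill level, beta_1 = learning rate, tau = duration of
% learning, beta_0 + beta_1*tau = final skill level. Illustrative notation.
\[
  y_i =
  \begin{cases}
    \beta_0 + \beta_1 i + \varepsilon_i, & i \le \tau \quad \text{(learning phase)} \\
    \beta_0 + \beta_1 \tau + \varepsilon_i, & i > \tau \quad \text{(post-learning plateau)}
  \end{cases}
\]
```

    Estimating the break point \(\tau\) jointly with the intercept and slope is what yields the duration of the learning period directly; cases with \(i > \tau\) can then be analysed at the stabilised skill level.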

    The challenges faced in the design, conduct and analysis of surgical randomised controlled trials

    Randomised evaluations of surgical interventions are rare; some interventions have been widely adopted without rigorous evaluation. Unlike other medical areas, the randomised controlled trial (RCT) design has not become the default study design for the evaluation of surgical interventions. Surgical trials are difficult to undertake successfully and pose particular practical and methodological challenges. However, RCTs have played a role in the assessment of surgical innovations, and there is scope and need for greater use. This article considers the design, conduct and analysis of an RCT of a surgical intervention. The issues are reviewed under three headings: the timing of the evaluation, defining the research question, and trial design issues. Recommendations on the conduct of future surgical RCTs are made. Collaboration between the research and surgical communities is needed to address the distinct issues raised by the assessment of surgical interventions and to enable the conduct of appropriate and well-designed trials. The Health Services Research Unit is funded by the Scottish Government Health Directorates.

    Search algorithms as a framework for the optimization of drug combinations

    Combination therapies are often needed for effective clinical outcomes in the management of complex diseases, but presently they are generally based on empirical clinical experience. Here we suggest a novel application of search algorithms, originally developed for digital communication, modified to optimize combinations of therapeutic interventions. In biological experiments measuring the restoration of the decline with age in heart function and exercise capacity in Drosophila melanogaster, we found that search algorithms correctly identified optimal combinations of four drugs with only one third of the tests performed in a fully factorial search. In experiments identifying combinations of three doses of up to six drugs for selective killing of human cancer cells, search algorithms resulted in a highly significant enrichment of selective combinations compared with random searches. In simulations using a network model of cell death, we found that the search algorithms identified the optimal combinations of 6-9 interventions in 80-90% of tests, compared with 15-30% for an equivalent random search. These findings suggest that modified search algorithms from information theory have the potential to enhance the discovery of novel therapeutic drug combinations. This report also helps to frame a biomedical problem that will benefit from an interdisciplinary effort and suggests a general strategy for its solution.
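    The abstract does not specify the search algorithm itself; as one hypothetical illustration of exploring a 3^6 dose space with far fewer assays than the 729-run full factorial, the sketch below uses random-restart local search with a made-up `assay` score standing in for a biological readout:

```python
# Illustrative local search over discrete drug-dose combinations; the scoring
# function `assay` is a synthetic placeholder, not an experimental readout.
import itertools
import random

DRUGS, DOSES = 6, 3  # six drugs at three dose levels, as in the abstract

def assay(combo):
    # Hypothetical fitness: peaks at an arbitrary 'best' combination.
    target = (2, 0, 1, 2, 1, 0)
    return -sum((c - t) ** 2 for c, t in zip(combo, target))

def local_search(n_restarts=20, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_restarts):
        combo = tuple(rng.randrange(DOSES) for _ in range(DRUGS))
        improved = True
        while improved:  # greedily change one drug's dose while it helps
            improved = False
            for i, d in itertools.product(range(DRUGS), range(DOSES)):
                cand = combo[:i] + (d,) + combo[i + 1:]
                if assay(cand) > assay(combo):
                    combo, improved = cand, True
        if assay(combo) > best_score:
            best, best_score = combo, assay(combo)
    return best, best_score

print(local_search())  # typically evaluates far fewer combinations than 3**6 = 729
```

    The paper's algorithms are adapted from digital communication and differ from this generic hill-climber; the point here is only that guided search can locate good combinations without exhaustively assaying the full factorial space.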

    Holocene deglaciation and glacier readvances on the Fildes Peninsula and King George Island (Isla 25 de Mayo), South Shetland Islands, NW Antarctic Peninsula

    To provide insights into glacier-climate dynamics of the South Shetland Islands (SSI), NW Antarctic Peninsula, we present a new deglaciation and readvance model for the Bellingshausen Ice Cap (BIC) on Fildes Peninsula and for King George Island/Isla 25 de Mayo (KGI), ~62°S. Deglaciation on KGI began after c. 15 ka cal BP and had progressed to within present-day limits on the Fildes Peninsula, its largest ice-free peninsula, by c. 6.6–5.3 ka cal BP. Probability density phase analysis of chronological data constraining Holocene glacier advances on KGI revealed up to eight 95% probability 'gaps' during which readvances could have occurred. These are grouped into four stages. Stage 1: a readvance and marine transgression, well-constrained by field data, between c. 7.4–6.6 ka cal BP; Stage 2: four probability 'gaps', less well-constrained by field data, between c. 5.3–2.2 ka cal BP; Stage 3: a well-constrained but restricted 'readvance' between c. 1.7–1.5 ka; Stage 4: two further minor 'readvances', one less well-constrained by field data between c. 1.3–0.7 ka cal BP (68% probability), and a 'final' well-constrained 'readvance' after 1950 CE that is associated with recent warming and more positive SAM-like conditions.

    Subanesthetic ketamine treatment promotes abnormal interactions between neural subsystems and alters the properties of functional brain networks

    Acute treatment with subanesthetic ketamine, a non-competitive N-methyl-D-aspartic acid (NMDA) receptor antagonist, is widely utilized as a translational model for schizophrenia. However, how acute NMDA receptor blockade impacts on brain functioning at a systems level, to elicit translationally relevant symptomatology and behavioral deficits, has not yet been determined. Here, for the first time, we apply established and recently validated topological measures from network science to brain imaging data gained from ketamine-treated mice to elucidate how acute NMDA receptor blockade impacts on the properties of functional brain networks. We show that the effects of acute ketamine treatment on the global properties of these networks are divergent from those widely reported in schizophrenia. Where acute NMDA receptor blockade promotes hyperconnectivity in functional brain networks, pronounced dysconnectivity is found in schizophrenia. We also show that acute ketamine treatment increases the connectivity and importance of prefrontal and thalamic brain regions in brain networks, a finding also divergent to alterations seen in schizophrenia. In addition, we characterize how ketamine impacts on bipartite functional interactions between neural subsystems. A key feature includes the enhancement of prefrontal cortex (PFC)-neuromodulatory subsystem connectivity in ketamine-treated animals, a finding consistent with the known effects of ketamine on PFC neurotransmitter levels. Overall, our data suggest that, at a systems level, acute ketamine-induced alterations in brain network connectivity do not parallel those seen in chronic schizophrenia. Hence, the mechanisms through which acute ketamine treatment induces translationally relevant symptomatology may differ from those in chronic schizophrenia. Future effort should therefore be dedicated to resolving the conflicting observations between this putative translational model and schizophrenia.
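    As a rough illustration of the kind of topological analysis described (thresholding functional connectivity into a network, then computing graph measures), the sketch below uses synthetic data with `networkx`; the region count, threshold, and choice of metrics are placeholders rather than the study's pipeline:

```python
# Sketch: build a functional network by thresholding a correlation matrix,
# then compute global and nodal graph measures. All values are synthetic.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
activity = rng.normal(size=(120, 30))       # 120 timepoints x 30 regions (synthetic)
corr = np.corrcoef(activity, rowvar=False)  # region-by-region functional connectivity

threshold = 0.3                             # assumed edge-inclusion threshold
adj = (np.abs(corr) > threshold) & ~np.eye(30, dtype=bool)
G = nx.from_numpy_array(adj.astype(int))

# Global properties: overall connectivity and clustering of the network.
print("mean degree:", 2 * G.number_of_edges() / G.number_of_nodes())
print("average clustering:", nx.average_clustering(G))

# Nodal 'importance' (e.g., of prefrontal or thalamic regions) via centrality.
hub = max(nx.betweenness_centrality(G).items(), key=lambda kv: kv[1])
print("most central node:", hub)
```

    Comparing such measures between ketamine-treated and control groups is what would support statements like 'hyperconnectivity' or increased regional importance; the specific metrics the study used are not stated in the abstract.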