
    A comparison of two methods for expert elicitation in health technology assessments.

    BACKGROUND: When data needed to inform parameters in decision models are lacking, formal elicitation of expert judgement can be used to characterise parameter uncertainty. Although numerous methods for eliciting expert opinion as probability distributions exist, there is little research to suggest whether one method is more useful than any other. This study had three objectives: (i) to obtain subjective probability distributions characterising parameter uncertainty in the context of a health technology assessment; (ii) to compare two elicitation methods by eliciting the same parameters in different ways; (iii) to collect the experts' subjective preferences for the different elicitation methods used.

    METHODS: Twenty-seven clinical experts were invited to participate in an elicitation exercise to inform a published model-based cost-effectiveness analysis of alternative treatments for prostate cancer. Participants were individually asked to express their judgements as probability distributions using two different methods - the histogram and hybrid elicitation methods - presented in a random order. Individual distributions were mathematically aggregated across experts, with and without weighting. The resulting combined distributions were used in the probabilistic analysis of the decision model; mean incremental cost-effectiveness ratios (ICERs) and the expected value of perfect information (EVPI) were calculated for each method and compared with the original cost-effectiveness analysis. Scores on the ease of use of the two methods, and the extent to which the probability distributions obtained from each method accurately reflected each expert's opinion, were also recorded.

    RESULTS: Six experts completed the task. Mean ICERs from the probabilistic analysis ranged between £162,600 and £175,500 per quality-adjusted life year (QALY), depending on the elicitation and weighting methods used. Compared to having no information, use of expert opinion decreased decision uncertainty: the EVPI value at the £30,000 per QALY threshold decreased by 74-86% from the original cost-effectiveness analysis. Experts indicated that the histogram method was easier to use, but perceived the hybrid method as more accurate.

    CONCLUSIONS: Inclusion of expert elicitation can decrease decision uncertainty. Here, the choice of method did not affect the overall cost-effectiveness conclusions, but researchers intending to use expert elicitation need to be aware of the impact different methods could have. This paper presents independent research funded by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care (CLAHRC) for the South West Peninsula.
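
    The aggregation step described above - combining the experts' elicited distributions with and without weighting - can be illustrated with a linear opinion pool. The sketch below is a minimal illustration, assuming each expert's judgement has already been encoded as a histogram over the same bins; the bin edges, probabilities, and weights are hypothetical, not the study's data.

        import numpy as np

        # Hypothetical histogram elicitation: each expert assigns probability
        # mass to the same bins for an uncertain model parameter.
        bin_edges = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
        expert_probs = np.array([
            [0.05, 0.20, 0.45, 0.25, 0.05],   # expert 1
            [0.10, 0.30, 0.40, 0.15, 0.05],   # expert 2
            [0.00, 0.15, 0.50, 0.30, 0.05],   # expert 3
        ])

        def linear_pool(probs, weights=None):
            """Aggregate expert histograms as a weighted mixture (linear opinion pool)."""
            if weights is None:  # unweighted case: equal weight per expert
                weights = np.ones(len(probs)) / len(probs)
            weights = np.asarray(weights, dtype=float) / np.sum(weights)
            return weights @ probs

        pooled = linear_pool(expert_probs)                        # equal weighting
        pooled_w = linear_pool(expert_probs, weights=[2, 1, 1])   # weighted pool

        # Sample the pooled histogram for use in a probabilistic analysis:
        # draw a bin by its pooled mass, then a value uniformly within it.
        rng = np.random.default_rng(0)
        bins = rng.choice(len(pooled), size=10_000, p=pooled)
        draws = rng.uniform(bin_edges[bins], bin_edges[bins + 1])
        print(pooled, draws.mean())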

    Genes of intestinal Escherichia coli and their relation to the inflammatory activity in patients with ulcerative colitis and Crohn’s disease

    The Escherichia coli gene fimA was the most frequent gene found in the intestines of all investigated groups. Subjects carrying the fimA gene had significantly higher values of tumor necrosis factor alpha (TNF-α) and C-reactive protein (CRP) than those with other E. coli genes. There was also a tendency towards increased serum interleukin (IL)-6 levels in patients carrying the fimA gene; however, no relation was observed for serum IL-8 and IL-10. Patients with Crohn's disease had significantly higher IL-6 than those with ulcerative colitis (UC) and controls. The highest levels of TNF-α were detected in the UC group. There were no significant differences in serum IL-8 and IL-10 among the three groups. The presence of the E. coli gene fimA in the large bowel of patients with inflammatory bowel disease (IBD) is related to the immunological activity of the disease, which may be important when planning therapeutic strategy.
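
    As a rough illustration of the kind of comparison reported above - cytokine levels in fimA carriers versus non-carriers - the sketch below runs a Mann-Whitney U test, a common choice for skewed serum measurements. The TNF-α values are invented for illustration only and do not come from the study.

        import numpy as np
        from scipy.stats import mannwhitneyu

        # Hypothetical serum TNF-alpha values (pg/mL), illustration only.
        tnf_fima_positive = np.array([18.2, 22.5, 19.7, 25.1, 21.3, 23.8])
        tnf_fima_negative = np.array([12.4, 14.1, 11.9, 15.3, 13.0, 12.7])

        # Nonparametric two-sided comparison of the two groups.
        stat, p_value = mannwhitneyu(tnf_fima_positive, tnf_fima_negative,
                                     alternative="two-sided")
        print(f"U = {stat:.1f}, p = {p_value:.4f}")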

    A comparative analysis of multi-level computer-assisted decision making systems for traumatic injuries

    BACKGROUND: This paper focuses on the creation of a predictive computer-assisted decision making system for traumatic injury using machine learning algorithms. Trauma experts must make several difficult decisions based on a large number of patient attributes, usually in a short period of time. The aim is to compare the existing machine learning methods available for medical informatics, and develop reliable, rule-based computer-assisted decision-making systems that provide recommendations for the course of treatment for new patients, based on previously seen cases in trauma databases. Datasets of traumatic brain injury (TBI) patients are used to train and test the decision making algorithm. The work is also applicable to patients with traumatic pelvic injuries.

    METHODS: Decision-making rules are created by processing patterns discovered in the datasets, using machine learning techniques. More specifically, CART and C4.5 are used, as they provide grammatical expressions of knowledge extracted by applying logical operations to the available features. The resulting rule sets are tested against other machine learning methods, including AdaBoost and SVM. The rule creation algorithm is applied to multiple datasets, both with and without prior filtering to discover significant variables. This filtering is performed via logistic regression prior to the rule discovery process.

    RESULTS: For survival prediction using all variables, CART outperformed the other machine learning methods. When using only significant variables, neural networks performed best. A reliable rule base was generated using combined C4.5/CART. The average predictive rule performance was 82% when using all variables, and approximately 84% when using significant variables only. The average performance of the combined C4.5 and CART system using significant variables was 89.7% in predicting the exact outcome (home or rehabilitation), and 93.1% in predicting the ICU length of stay for airlifted TBI patients.

    CONCLUSION: This study creates an efficient computer-aided rule-based system that can be employed in decision making in TBI cases. The rule bases apply methods that combine CART and C4.5 with logistic regression to improve rule performance and quality. For final outcome prediction in TBI cases, the resulting rule bases outperform systems that utilize all available variables.
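
    A minimal sketch of the pipeline described above - logistic regression to filter significant variables, then a CART-style tree whose branches read as decision rules - is shown below using scikit-learn, whose DecisionTreeClassifier implements CART. The synthetic data and variable names are placeholders, not the trauma datasets used in the paper.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectFromModel
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Synthetic stand-in for a trauma dataset: rows are patients,
        # columns are clinical attributes, y is the outcome to predict.
        X, y = make_classification(n_samples=500, n_features=10,
                                   n_informative=4, random_state=0)

        # Step 1: filter variables via logistic regression coefficients.
        selector = SelectFromModel(LogisticRegression(max_iter=1000)).fit(X, y)
        X_sig = selector.transform(X)

        # Step 2: fit a CART tree on the significant variables and print
        # its branches as human-readable if/then rules.
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_sig, y)
        names = [f"var_{i}" for i in np.where(selector.get_support())[0]]
        print(export_text(tree, feature_names=names))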

    Understanding Uncertainties in Model-Based Predictions of Aedes aegypti Population Dynamics

    Dengue is one of the most important insect-vectored human viral diseases. The principal vector is Aedes aegypti, a mosquito that lives in close association with humans. Currently, there is no effective vaccine available, and the only means of limiting dengue outbreaks is vector control. To help design vector control strategies, spatial models of Ae. aegypti population dynamics have been developed. However, the usefulness of such models depends on the reliability of their predictions, which can be affected by different sources of uncertainty, including uncertainty in the model parameter estimation, uncertainty in the model structure, measurement errors in the data fed into the model, individual variability, and stochasticity in the environment. This study quantifies uncertainties in the mosquito population dynamics predicted by Skeeter Buster, a spatial model of Ae. aegypti, for the city of Iquitos, Peru. The uncertainty quantification should enable us to better understand the reliability of model predictions, and to improve Skeeter Buster and other similar models by targeting those parameters with high uncertainty contributions for further empirical research, thereby decreasing uncertainty in model predictions.
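
    One common way to quantify the parameter-estimation component of the uncertainty listed above is Monte Carlo sampling: draw parameter sets from their uncertainty distributions, run the model for each draw, and summarise the spread of predictions. The sketch below does this for a deliberately simple logistic population model standing in for a full simulator like Skeeter Buster; the parameter distributions are hypothetical.

        import numpy as np

        def population_trajectory(r, K, n0=100.0, weeks=52):
            """Toy discrete-time logistic model standing in for a full simulator."""
            n = np.empty(weeks)
            n[0] = n0
            for t in range(1, weeks):
                n[t] = n[t-1] + r * n[t-1] * (1.0 - n[t-1] / K)
            return n

        rng = np.random.default_rng(42)
        n_draws = 1000

        # Hypothetical parameter uncertainty: growth rate and carrying capacity.
        r_draws = rng.normal(loc=0.3, scale=0.05, size=n_draws)
        K_draws = rng.lognormal(mean=np.log(5000), sigma=0.2, size=n_draws)

        runs = np.array([population_trajectory(r, K)
                         for r, K in zip(r_draws, K_draws)])

        # Summarise prediction uncertainty as a median and 95% interval.
        median = np.median(runs, axis=0)
        lo, hi = np.percentile(runs, [2.5, 97.5], axis=0)
        print(f"week 52 abundance: {median[-1]:.0f} ({lo[-1]:.0f}-{hi[-1]:.0f})")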

    Potential range of impact of an ecological trap network: the case of timber stacks and the Rosalia longicorn

    Although the negative impact of timber stacks on populations of saproxylic beetles is a well-known phenomenon, there are relatively few data on the scale of this impact and its spatial aspect. Beech timber stored in the vicinity of the forest can act as an ecological trap for the Rosalia longicorn (Rosalia alpina), so in this study we attempted to determine the spatial range of the impact of a network of timber stacks. Timber stacks within the species' range in the study area were listed and monitored during the adult emergence period in 2014–2016. Based on published data on the species' dispersal capabilities, buffers of four radii (500, 1000, 1600 and 3000 m) were delineated around the stacks, and the ranges of potential impact were calculated. The results show that the percentage of currently known localities of the Rosalia longicorn impacted by stacks varies from 19.7 to 81.6%, depending on the assumed impact radius. The percentage of forest influenced by timber stacks was 77% for the largest-radius buffer. The overall impact of the ecological trap network is exacerbated by fragmentation of the impact-free area. It was also found that forests situated close to timber stacks where the Rosalia longicorn was recorded were older and more homogeneous in age and species composition than those around stacks where the species was absent. These results suggest that timber stacks act as an ecological trap in the source area of the local population.
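
    The buffer analysis described above can be reproduced in outline with standard geometry tools. The sketch below, using shapely, delineates buffers of the four study radii around hypothetical stack coordinates (assumed to be in a projected, metre-based CRS) and counts how many species localities fall inside their union; all coordinates are invented for illustration.

        from shapely.geometry import Point
        from shapely.ops import unary_union

        # Hypothetical projected coordinates (metres) for timber stacks and
        # known Rosalia alpina localities.
        stacks = [Point(1000, 2000), Point(4500, 3200), Point(8000, 1500)]
        localities = [Point(1200, 2300), Point(5200, 3900),
                      Point(9000, 9500), Point(3000, 7000)]

        # The four impact radii considered in the study.
        for radius in (500, 1000, 1600, 3000):
            # Merge the individual circular buffers into one impact zone.
            impact_zone = unary_union([s.buffer(radius) for s in stacks])
            hit = sum(impact_zone.contains(p) for p in localities)
            share = 100.0 * hit / len(localities)
            print(f"radius {radius:>4} m: {hit}/{len(localities)} "
                  f"localities impacted ({share:.1f}%)")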

    Data Descriptor: A global multiproxy database for temperature reconstructions of the Common Era

    Reproducible climate reconstructions of the Common Era (1 CE to present) are key to placing industrial-era warming into the context of natural climatic variability. Here we present a community-sourced database of temperature-sensitive proxy records from the PAGES2k initiative. The database gathers 692 records from 648 locations, including all continental regions and major ocean basins. The records are from trees, ice, sediment, corals, speleothems, documentary evidence, and other archives. They range in length from 50 to 2000 years, with a median of 547 years, while temporal resolution ranges from biweekly to centennial. Nearly half of the proxy time series are significantly correlated with HadCRUT4.2 surface temperature over the period 1850-2014. Global temperature composites show a remarkable degree of coherence between high- and low-resolution archives, with broadly similar patterns across archive types, terrestrial versus marine locations, and screening criteria. The database is suited to investigations of global and regional temperature variability over the Common Era, and is shared in the Linked Paleo Data (LiPD) format, including serializations in Matlab, R and Python.

    Since the pioneering work of D'Arrigo and Jacoby [1-3], as well as Mann et al. [4,5], temperature reconstructions of the Common Era have become a key component of climate assessments [6-9]. Such reconstructions depend strongly on the composition of the underlying network of climate proxies [10], and it is therefore critical for the climate community to have access to a community-vetted, quality-controlled database of temperature-sensitive records stored in a self-describing format. The Past Global Changes (PAGES) 2k consortium, a self-organized, international group of experts, recently assembled such a database, and used it to reconstruct surface temperature over continental-scale regions [11] (hereafter, 'PAGES2k-2013').

    This data descriptor presents version 2.0.0 of the PAGES2k proxy temperature database (Data Citation 1). It augments the PAGES2k-2013 collection of terrestrial records with marine records assembled by the Ocean2k working group at centennial [12] and annual [13] time scales. In addition to these previously published data compilations, this version includes substantially more records, extensive new metadata, and validation. Furthermore, the selection criteria for records included in this version are applied more uniformly and transparently across regions, resulting in a more cohesive data product.

    This data descriptor describes the contents of the database and the criteria for inclusion, and quantifies the relation of each record with instrumental temperature. In addition, the paleotemperature time series are summarized as composites to highlight the most salient decadal- to centennial-scale behaviour of the dataset and to check mutual consistency between paleoclimate archives. We provide extensive Matlab code to probe the database - processing, filtering and aggregating it in various ways to investigate temperature variability over the Common Era. The unique approach to data stewardship and code-sharing employed here is designed to enable an unprecedented scale of investigation of the temperature history of the Common Era, by the scientific community and citizen-scientists alike.
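
    The screening step mentioned above - testing whether each proxy series correlates significantly with instrumental temperature over 1850-2014 - can be sketched as a Pearson correlation per record. The series below are synthetic placeholders, not records from the database; the released database itself ships in LiPD format with Matlab, R and Python serializations.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(1)
        years = np.arange(1850, 2015)

        # Synthetic stand-ins: an instrumental temperature series and a few
        # annually resolved proxy records with varying temperature sensitivity.
        instrumental = 0.005 * (years - 1850) + rng.normal(0, 0.15, years.size)
        proxies = {
            "tree_ring_A": instrumental * 0.8 + rng.normal(0, 0.10, years.size),
            "coral_B":     instrumental * 0.3 + rng.normal(0, 0.30, years.size),
            "sediment_C":  rng.normal(0, 0.25, years.size),  # no signal
        }

        # Screen each record: keep it if its correlation with the
        # instrumental series is significant.
        for name, series in proxies.items():
            r, p = pearsonr(series, instrumental)
            status = "pass" if p < 0.05 else "fail"
            print(f"{name}: r = {r:+.2f}, p = {p:.3g} -> {status}")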

    Application and evaluation of classification trees for screening unwanted plants

    Risk assessment systems for introduced species are being developed and applied globally, but methods for rigorously evaluating them are still in their infancy. We explore classification and regression tree models as an alternative to the current Australian Weed Risk Assessment system, and demonstrate how the performance of screening tests for unwanted alien species may be quantitatively compared using receiver operating characteristic (ROC) curve analysis. The optimal classification tree model for predicting weediness included just four of a possible 44 attributes of introduced plants examined, namely: (i) intentional human dispersal of propagules; (ii) evidence of naturalization beyond the native range; (iii) evidence of being a weed elsewhere; and (iv) a high level of domestication. Intentional human dispersal of propagules combined with evidence of naturalization beyond a plant's native range led to the strongest prediction of weediness. A high level of domestication combined with no evidence of naturalization mitigated the likelihood of an introduced plant becoming a weed through intentional human dispersal of propagules. Unlikely intentional human dispersal of propagules combined with no evidence of being a weed elsewhere led to the lowest predicted probability of weediness. The failure of the model to include intrinsic plant attributes suggests either that these attributes are not useful general predictors of weediness, or that the data and analysis were inadequate to elucidate the underlying relationship(s). This concurs with the historical pessimism about whether we will ever be able to accurately predict invasive plants. Given the apparent importance of propagule pressure (the number of individuals of a species released), future attempts at evaluating screening model performance for identifying unwanted plants need to account for propagule pressure when collating and/or analysing datasets. The classification tree had a cross-validated sensitivity of 93.6% and specificity of 36.7%. Based on the area under the ROC curve, the performance of the classification tree in correctly classifying plants as weeds or non-weeds (AUC = 0.83 ± 0.021 SE) was slightly inferior to that of the current risk assessment system (AUC = 0.89 ± 0.018 SE), although it requires many fewer questions to be answered.
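
    The ROC-based evaluation described above - cross-validated sensitivity, specificity, and area under the curve for a classification tree - looks roughly like the sketch below in scikit-learn. The data are a synthetic stand-in for the weed-risk dataset, and the threshold and tree settings are hypothetical.

        from sklearn.datasets import make_classification
        from sklearn.metrics import confusion_matrix, roc_auc_score
        from sklearn.model_selection import cross_val_predict
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic stand-in for the screening dataset: 1 = weed, 0 = non-weed.
        X, y = make_classification(n_samples=300, n_features=8,
                                   weights=[0.4, 0.6], random_state=0)

        # Cross-validated predicted probabilities from a classification tree.
        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        probs = cross_val_predict(tree, X, y, cv=10,
                                  method="predict_proba")[:, 1]

        # Area under the ROC curve, plus sensitivity and specificity at a
        # 0.5 probability threshold.
        auc = roc_auc_score(y, probs)
        preds = (probs >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(y, preds).ravel()
        print(f"AUC = {auc:.2f}, sensitivity = {tp/(tp+fn):.1%}, "
              f"specificity = {tn/(tn+fp):.1%}")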