10 research outputs found
Discordance between patients' and physicians' beliefs regarding barriers to HIV treatment initiation
info:eu-repo/semantics/nonPublished
Reasons for not starting antiretroviral therapy: A multinational survey among patients and their physicians
info:eu-repo/semantics/nonPublished
Chronic Kidney Disease and Exposure to ART in a large cohort with Long-term Follow-up: The EuroSIDA study
For the EuroSIDA Study Group
info:eu-repo/semantics/nonPublished
Insights into the quantification and reporting of model-related uncertainty across different disciplines
Quantifying uncertainty associated with our models is the only way we can express how much we know about any phenomenon. Incomplete consideration of model-based uncertainties can lead to overstated conclusions with real-world impacts in diverse spheres, including conservation, epidemiology, climate science, and policy. Despite these potentially damaging consequences, we still know little about how different fields quantify and report uncertainty. We introduce the "sources of uncertainty" framework, using it to conduct a systematic audit of model-related uncertainty quantification from seven scientific fields, spanning the biological, physical, and political sciences. Our interdisciplinary audit shows that no field fully considers all possible sources of uncertainty, but each has its own best practices alongside shared outstanding challenges. We make ten easy-to-implement recommendations to improve the consistency, completeness, and clarity of reporting on model-related uncertainty. These recommendations serve as a guide to best practices across scientific fields and expand our toolbox for high-quality research.
How is model-related uncertainty quantified and reported in different disciplines?
How do we know how much we know? Quantifying uncertainty associated with our
modelling work is the only way we can answer how much we know about any
phenomenon. With quantitative science now highly influential in the public
sphere and the results from models translating into action, we must support our
conclusions with sufficient rigour to produce useful, reproducible results.
Incomplete consideration of model-based uncertainties can lead to false
conclusions with real-world impacts. Despite these potentially damaging
consequences, uncertainty consideration is incomplete both within and across
scientific fields. We take a unique interdisciplinary approach and conduct a
systematic audit of model-related uncertainty quantification from seven
scientific fields, spanning the biological, physical, and social sciences. Our
results show no single field is achieving complete consideration of model
uncertainties, but together we can fill the gaps. We propose opportunities to
improve the quantification of uncertainty through use of a source framework for
uncertainty consideration, model type specific guidelines, improved
presentation, and shared best practice. We also identify shared outstanding
challenges (uncertainty in input data, balancing trade-offs, error propagation,
and defining how much uncertainty is required). Finally, we make nine concrete
recommendations for current practice (following good practice guidelines and an
uncertainty checklist, presenting uncertainty numerically, and propagating
model-related uncertainty into conclusions), future research priorities
(uncertainty in input data, quantifying uncertainty in complex models, and the
importance of missing uncertainty in different contexts), and general research
standards across the sciences (transparency about study limitations and
dedicated uncertainty sections of manuscripts).

Comment: 40 Pages (including supporting information), 3 Figures, 2 Boxes, 1 Table
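One of the shared challenges the abstract names is propagating model-related uncertainty into conclusions. A common, simple way to do this is Monte Carlo propagation: sample the uncertain parameters from their assumed distributions, push each draw through the model, and summarise the spread of the resulting predictions. The sketch below is a minimal illustration of that general technique, not the audit's own method; the toy linear model, parameter values, and normality assumptions are all hypothetical.

```python
import random
import statistics

def model(x, slope, intercept):
    """Toy linear model whose parameters carry uncertainty."""
    return slope * x + intercept

def propagate_uncertainty(x, slope_mean, slope_sd,
                          intercept_mean, intercept_sd,
                          n_draws=10_000, seed=42):
    """Monte Carlo propagation: draw parameters from (assumed normal)
    uncertainty distributions, run the model for each draw, and
    summarise the predictive spread."""
    rng = random.Random(seed)
    draws = [
        model(x,
              rng.gauss(slope_mean, slope_sd),
              rng.gauss(intercept_mean, intercept_sd))
        for _ in range(n_draws)
    ]
    mean = statistics.fmean(draws)
    sd = statistics.stdev(draws)
    draws.sort()
    # Empirical 95% interval from the sampled predictions.
    lo = draws[int(0.025 * n_draws)]
    hi = draws[int(0.975 * n_draws)]
    return mean, sd, (lo, hi)

# Hypothetical parameter estimates with their standard errors.
mean, sd, interval = propagate_uncertainty(
    x=2.0, slope_mean=1.5, slope_sd=0.2,
    intercept_mean=0.5, intercept_sd=0.1)
print(f"prediction: {mean:.2f} +/- {sd:.2f}, "
      f"95% interval ~ ({interval[0]:.2f}, {interval[1]:.2f})")
```

Reporting the interval alongside the point prediction, rather than the point prediction alone, is exactly the kind of numerical presentation of uncertainty the recommendations call for; for nonlinear or more complex models the same sampling loop applies, though input-data uncertainty and trade-offs between sources remain the harder open problems the abstract identifies.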