72 research outputs found

    Managing uncertainty in modelling of wicked problems: theory and application to Sustainable Aquifer Yield

    This thesis presents two approaches to help manage uncertainty in modelling for the resolution of wicked problems, which have no clear problem definition, solution or measure of success. It focuses on Sustainable Aquifer Yield (SAY) as an example. SAY is defined as the pumping volume obtained by a management plan that is expected to satisfy objectives under future conditions within a groundwater system. Integrated modelling can help express, systematise and use knowledge of relevant behaviour of the system, while engaging diverse stakeholders and addressing their interests. Uncertainty is, however, a key and multifaceted issue when dealing with wicked problems. While many modelling methods exist to help address this uncertainty, modellers need to be able to integrate these methods purposefully for an applied problem. The research presented involved iteratively proposing two approaches to manage uncertainties in integrated modelling that supports decision making, and exploring the value of each approach by applying it to case studies. For each approach, the applications a) address a technical problem, b) push boundaries on how the problem is viewed by identifying hitherto neglected aspects, and c) address a context where accounting for contested views and surprise is imperative. This research process is described in terms of Critical Systems Practice and resulted in a compilation of linked publications. The first approach proposed is an Uncertainty Management Framework that can be used to help audit the treatment of uncertainty in a step-wise description of an analysis (e.g. evaluating a management plan). The framework provides a formal structure for managing uncertainty by incorporating an uncertainty typology and a set of fundamental uncertainty management actions, but may be too restrictive and demanding for some contexts. To address these limitations, a complementary second approach, designated Iterative Closed Question Modelling, addresses uncertainty by constructing models to test whether each possible answer to a closed question is plausible. The question, the assumptions about plausibility and the process of constructing models are all considered uncertain and are therefore themselves iteratively critiqued. This approach is formalised in terms of Boundary Critique, such that it provides a philosophical foundation justifying the use of a broad range of methods to manage uncertainty in predictive modelling. The thesis concludes that uncertainty needs to be embraced as a natural part of researchers, policy makers and the community coming to grips with an evolving situation, rather than treated as an obstacle to be eliminated. Training of modellers to manage uncertainty needs to specifically address: identification of model scenarios that contradict dominant conclusions; critique of model assumptions and questions from multiple stakeholders’ points of view; and negotiation of the modeller’s role in anticipating surprise (e.g. through understanding consequences of error, design of monitoring, contingency planning and adaptive management). The resulting emphasis on critical thinking about alternative models helps to remind the user that modelling is not a magic trick for seeing the future, but a structured way to reason about both what we do and do not know.
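    The Iterative Closed Question Modelling loop lends itself to a procedural sketch. The following is a minimal illustration, not the thesis's implementation: the function names (build_model, is_plausible, critique) and the stopping rule are assumptions made for the example.

```python
# Hedged sketch of an Iterative Closed Question Modelling loop.
# All names and the stopping rule are hypothetical illustrations.

def iterative_closed_question_modelling(question, candidate_answers,
                                        build_model, is_plausible,
                                        critique, max_iterations=5):
    """Test whether each possible answer to a closed question is
    plausible; critique and repeat while uncertainty persists."""
    for _ in range(max_iterations):
        # Try to construct a model in which each candidate answer holds,
        # and record whether that model is judged plausible.
        results = {answer: is_plausible(build_model(question, answer))
                   for answer in candidate_answers}
        plausible = [a for a, ok in results.items() if ok]
        # A single plausible answer resolves the question (for now).
        if len(plausible) <= 1:
            return plausible, results
        # Otherwise critique the question, the candidate answers and the
        # plausibility assumptions themselves, then iterate.
        question, candidate_answers = critique(question,
                                               candidate_answers, results)
    return plausible, results
```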

    A framework for characterising and evaluating the effectiveness of environmental modelling

    Environmental modelling is transitioning from the traditional paradigm, which focuses on the model and its quantitative performance, to a more holistic paradigm that recognises that successful model-based outcomes are closely tied to undertaking modelling as a social process, not just as a technical procedure. This paper redefines evaluation as a multi-dimensional and multi-perspective concept, and proposes a more complete framework for identifying and measuring the effectiveness of modelling that serves the new paradigm. Under this framework, evaluation considers a broader set of success criteria, and emphasises the importance of contextual factors in determining the relevance and outcome of the criteria. These evaluation criteria are grouped into eight categories: project efficiency, model accessibility, credibility, saliency, legitimacy, satisfaction, application, and impact. Evaluation should be part of an iterative and adaptive process that attempts to improve model-based outcomes and foster pathways to better futures.
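    As a simple illustration of how the eight categories might be carried into practice, the sketch below records scores against each of them. The category names come from the abstract; the 0-1 scoring scheme and the function name are invented for the example, not part of the paper's framework.

```python
# Illustrative only: recording an evaluation against the paper's eight
# categories. The 0-1 scoring scheme is an assumption for this sketch.

EVALUATION_CATEGORIES = [
    "project efficiency", "model accessibility", "credibility",
    "saliency", "legitimacy", "satisfaction", "application", "impact",
]

def evaluate_project(scores):
    """Check every category is scored in [0, 1] and return the record."""
    missing = set(EVALUATION_CATEGORIES) - set(scores)
    if missing:
        raise ValueError(f"Unscored categories: {sorted(missing)}")
    for category, value in scores.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"Score out of range for {category}: {value}")
    return {c: scores[c] for c in EVALUATION_CATEGORIES}
```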

    Effective modeling for integrated water resource management: a guide to contextual practices by phases and steps and future opportunities

    The effectiveness of Integrated Water Resource Management (IWRM) modeling hinges on the quality of practices employed throughout the process, from early problem definition all the way through to using the model in a way that serves its intended purpose. The adoption and implementation of effective modeling practices need to be guided by a practical understanding of the variety of decisions that modelers make, and the information considered in making these choices. There is still limited documented knowledge on the modeling workflow, and on the role of contextual factors in determining this workflow and which practices to employ. This paper attempts to address this knowledge gap by providing systematic guidance on modeling practices through the phases (Planning, Development, Application, and Perpetuation) and steps that comprise the modeling process, positing questions that should be addressed. Practice-focused guidance helps explain the detailed process of conducting IWRM modeling, including the role of contextual factors in shaping practices. We draw on findings from the literature and the authors’ collective experience to articulate which contextual factors play out in employing those practices, and how. To accelerate our learning about how to improve IWRM modeling, the paper concludes with five key areas for future practice-related research: knowledge sharing, overcoming data limitations, informed stakeholder involvement, social equity and uncertainty management.
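    A minimal sketch of the phase-and-step structure described above. The four phase names are taken from the abstract; the guiding questions shown are invented placeholders rather than the paper's own.

```python
# The four phases named in the abstract, each with example guiding
# questions. The questions here are hypothetical placeholders.

IWRM_MODELLING_PHASES = {
    "Planning": [
        "What problem should the model address, and for whom?",
        "Which contextual factors constrain the modelling effort?",
    ],
    "Development": [
        "What data, structure and level of detail fit the context?",
    ],
    "Application": [
        "Does the model answer the question it was built to answer?",
    ],
    "Perpetuation": [
        "How will the model be maintained, documented and reused?",
    ],
}

for phase, questions in IWRM_MODELLING_PHASES.items():
    print(phase)
    for question in questions:
        print(f"  - {question}")
```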

    The Allelic Landscape of Human Blood Cell Trait Variation and Links to Common Complex Disease

    Many common variants have been associated with hematological traits, but identification of causal genes and pathways has proven challenging. We performed a genome-wide association analysis in the UK Biobank and INTERVAL studies, testing 29.5 million genetic variants for association with 36 red cell, white cell, and platelet properties in 173,480 European-ancestry participants. This effort yielded hundreds of low-frequency (<5%) and rare (<1%) variants with a strong impact on blood cell phenotypes. Our data highlight general properties of the allelic architecture of complex traits, including the proportion of the heritable component of each blood trait explained by the polygenic signal across different genome regulatory domains. Finally, through Mendelian randomization, we provide evidence of shared genetic pathways linking blood cell indices with complex pathologies, including autoimmune diseases, schizophrenia, and coronary heart disease, and evidence suggesting that previously reported population associations between blood cell indices and cardiovascular disease may be non-causal.

    We thank members of the Cambridge BioResource Scientific Advisory Board and Management Committee for their support of our study and the National Institute for Health Research Cambridge Biomedical Research Centre for funding. K.D. is funded as an HSST trainee by NHS Health Education England. M.F. is funded from the BLUEPRINT Grant Code HEALTH-F5-2011-282510 and the BHF Cambridge Centre of Excellence [RE/13/6/30180]. J.R.S. is funded by an MRC CASE Industrial studentship, co-funded by Pfizer. J.D. is a British Heart Foundation Professor, European Research Council Senior Investigator, and National Institute for Health Research (NIHR) Senior Investigator. S.M., S.T., M.H., K.M. and L.D. are supported by the NIHR BioResource-Rare Diseases, which is funded by NIHR. Research in the Ouwehand laboratory is supported by program grants from the NIHR to W.H.O., the European Commission (HEALTH-F2-2012-279233), the British Heart Foundation (BHF) to W.J.A. and D.R. under numbers RP-PG-0310-1002 and RG/09/12/28096, and Bristol Myers-Squibb; the laboratory also receives funding from NHSBT. W.H.O. is an NIHR Senior Investigator. The INTERVAL academic coordinating centre receives core support from the UK Medical Research Council (G0800270), the BHF (SP/09/002), the NIHR and the Cambridge Biomedical Research Centre, as well as grants from the European Research Council (268834), the European Commission Framework Programme 7 (HEALTH-F2-2012-279233), Merck and Pfizer. D.J.R. and D.A. were supported by the NIHR Programme ‘Erythropoiesis in Health and Disease’ (Ref. NIHR-RP-PG-0310-1004). N.S. is supported by the Wellcome Trust (Grant Codes WT098051 and WT091310) and the EU FP7 (EPIGENESYS Grant Code 257082 and BLUEPRINT Grant Code HEALTH-F5-2011-282510). The INTERVAL study is funded by NHSBT and has been supported by the NIHR-BTRU in Donor Health and Genomics at the University of Cambridge in partnership with NHSBT. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, the Department of Health of England or NHSBT. D.G. is supported by a “la Caixa”-Severo Ochoa pre-doctoral fellowship.

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for downloading the annotation data and for blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
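    Two of the three baselines named above (Okapi BM25 and TF-IDF) are classical term-weighting methods. As a hedged illustration of the TF-IDF baseline only, and not the consortium's evaluation code, the sketch below ranks toy candidate documents against a seed article using scikit-learn; the document and seed texts are invented.

```python
# A small TF-IDF ranking baseline of the kind benchmarked above.
# The toy documents and seed text are invented for this example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "genome-wide association study of blood cell traits",
    "document similarity detection in biomedical literature",
    "benchmarking literature search with expert annotations",
]
seed = "benchmark for biomedical literature search methods"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)
seed_vector = vectorizer.transform([seed])

# Rank candidate documents by cosine similarity to the seed article.
scores = cosine_similarity(seed_vector, doc_matrix).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```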

    Gridded global datasets for Gross Domestic Product and Human Development Index over 1990-2015

    An increasing amount of high-resolution global spatial data is available and used for various assessments. However, key economic and human development indicators are still mainly provided only at the national level, and are downscaled by users for gridded spatial analyses. Instead, it would be beneficial to adopt data for sub-national administrative units where available, supplemented by national data where necessary. To this end, we present gap-filled multiannual datasets in gridded form for Gross Domestic Product (GDP) and the Human Development Index (HDI). To provide a consistent product over time and space, the sub-national data were used only indirectly, to scale the reported national value, and thus remain representative of the official statistics. This resulted in annual gridded datasets for GDP per capita (PPP), total GDP (PPP), and HDI for the whole world at 5 arc-min resolution for the 25-year period 1990-2015. Additionally, total GDP (PPP) is provided at 30 arc-sec resolution for three time steps (1990, 2000, 2015).
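    The indirect use of sub-national data amounts to redistributing a reported national total according to sub-national shares, so the product stays consistent with official national statistics. A minimal sketch of that idea, with invented numbers and three hypothetical administrative units:

```python
# Sketch of the indirect downscaling idea: sub-national estimates supply
# only relative shares, which rescale the reported national total.
# All numbers are invented for this example.

import numpy as np

national_gdp_ppp = 1000.0  # reported national total (e.g. billion int$)

# Hypothetical sub-national GDP estimates for three admin units; only
# their relative shares are used, not their absolute values.
subnational_estimates = np.array([420.0, 310.0, 240.0])
shares = subnational_estimates / subnational_estimates.sum()

# Scale shares by the national value: unit totals sum exactly to it,
# so the result remains representative of the official statistics.
unit_gdp = shares * national_gdp_ppp
print(unit_gdp, unit_gdp.sum())  # sums to 1000.0
```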

    Communicating uncertainty: design patterns for framing model results in scientific publications

    Uncertainty is a prominent issue in modelling. We learn early in our studies that “all models are wrong, but some are useful.” We also learn accompanying techniques for quantifying performance, and methods for addressing uncertainty within our analyses. When it comes to publishing our results, communicating uncertainty appears to be part of the craft side of modelling, one that we learn best by experience. Sooner or later, we discover that reviewers (and readers) are willing to accept the limitations of our modelling if we use certain key phrases (e.g. “left to future work”) or subtly change our wording (e.g. “seems to indicate” vs. “proves”). Our writing effectively frames the model results, implicitly conveying the author’s judgement about model uncertainty and confidence in the results, and shaping the reader’s expectations of how the model may be wrong and how it is still useful. While it does not appear to have been broached in the literature on uncertainty in modelling, the framing of model results appears to be one of the primary means by which modellers have addressed uncertainty, and specifically the communication of uncertainty, within scientific publications. It is one of the core practices that new modellers need to learn to ensure that their model-based analyses are considered credible and useful. Unfortunately, this practice cannot be easily distilled into an algorithm, method or recipe. As with other aspects of the ‘art’ of modelling, there does, however, appear to be some knowledge that should ideally be transferable.

    The use of food imports to overcome local limits to growth

    There is a fundamental tension between population growth and carrying capacity, i.e., the population that could potentially be supported using the resources and technologies available at a given time. When population growth outpaces improvements in food production locally, food imports can circumvent local limits and allow growth to continue. This import strategy is central to the debate on food security given the continuing rapid growth of the world population, which highlights the importance of a quantitative global understanding of where the strategy is implemented, whether it has been successful, and what drivers are involved. We present an integrated quantitative analysis to answer these questions at sub-national and national scales for 1961–2009, focusing on water as the key limiting resource and accounting for resource and technology impacts on local carrying capacity. According to the sub-national estimates, food imports have been used nearly universally to overcome local limits to growth, affecting 3.0 billion people (81% of the population that is approaching or has already exceeded local carrying capacity). This strategy has been successful in 88% of cases and is highly dependent on economic purchasing power. In the unsuccessful cases, increases in imports and local productivity have not kept pace with population growth, leaving 460 million people with insufficient food. Where the strategy has been successful, the food security of 1.4 billion people has become dependent on imports. Whether or not this dependence on imports is considered desirable, it has policy implications that need to be taken into account.
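    To make the accounting concrete, here is a toy calculation of the import dependence the paper quantifies. All numbers are invented; the actual analysis works at sub-national resolution with a water-limited, technology-adjusted carrying capacity.

```python
# Toy illustration of the paper's accounting: when population exceeds
# local carrying capacity, food imports must cover the gap.
# All numbers below are invented for this example.

population = 12.0        # million people in a hypothetical region
carrying_capacity = 9.0  # million people supportable by local food
                         # production given local water and technology

shortfall = max(0.0, population - carrying_capacity)
share_import_dependent = shortfall / population

print(f"{shortfall:.1f} million people depend on food imports "
      f"({share_import_dependent:.0%} of the population)")
```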

    From ad-hoc modelling to strategic infrastructure: A manifesto for model management

    Models are playing an increasingly prominent role in watershed management, and in environmental management more generally. To successfully utilize model-based tools for governing water resources, modelling timelines must match decision-making timelines, and modelling costs must fall within budget constraints. Clarity on management options for modelling processes, and on effective strategies, is likely to improve outcomes. This paper provides a first conceptualisation of model management and lays out its scope. We define management of numerical models (MNM) as the governance, operational support, and administration of modelling, and argue that it is a universal activity that is crucial but often overlooked in organizations that rely on modelling. Based on a review of model management practices in several fields, the paper lays out the leverage points available to a model manager, and highlights lessons learned and opportunities for further improvement as model management becomes a mainstream concern in both research and practice.