
    A Bayesian assessment of an approximate model for unconfined water flow in sloping layered porous media

    The prediction of water table height in unconfined layered porous media is a difficult modelling problem that typically requires numerical simulation. This paper proposes an analytical model that approximates the exact solution based on a steady-state Dupuit–Forchheimer analysis. The key contribution relative to a similar model in the literature lies in the ability of the proposed model to handle more than two layers with different thicknesses and slopes, so that the existing model becomes a special case of the model proposed herein. In addition, a model assessment methodology based on the Bayesian inverse problem is proposed to efficiently identify the values of the physical parameters for which the proposed model is accurate when compared against a reference model given by MODFLOW-NWT, the open-source finite-difference code from the U.S. Geological Survey. Based on numerical results for a representative case study, the ratio of vertical recharge rate to hydraulic conductivity emerges as a key parameter for model accuracy: when it is appropriately bounded, the proposed model and MODFLOW-NWT yield almost identical results.
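    The single-layer, flat-base special case of the Dupuit–Forchheimer analysis has a classic closed-form solution in which the recharge-to-conductivity ratio appears explicitly. The sketch below shows that textbook solution, not the paper's multi-layer sloping model; the function name and parameter values are illustrative assumptions.

```python
import math

def dupuit_head(x, L, h0, hL, N, K):
    """Steady-state water table height h(x) between two fixed-head
    boundaries h0 (at x = 0) and hL (at x = L), for a single homogeneous
    layer on a flat base with uniform vertical recharge N and hydraulic
    conductivity K (classic Dupuit-Forchheimer solution):
        h(x)^2 = h0^2 + (hL^2 - h0^2) * x/L + (N/K) * x * (L - x)
    """
    h_squared = h0**2 + (hL**2 - h0**2) * x / L + (N / K) * x * (L - x)
    return math.sqrt(h_squared)

# Water table height at mid-domain for an illustrative 1 km aquifer.
mid_height = dupuit_head(500.0, 1000.0, 10.0, 8.0, 0.001, 1.0)
```

    Note that recharge N and conductivity K enter the solution only through the ratio N/K, so bounding that ratio bounds the height of the recharge mound, consistent with the sensitivity reported in the abstract.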

    Statistical modeling of ground motion relations for seismic hazard analysis

    We introduce a new approach to ground motion relations (GMRs) in probabilistic seismic hazard analysis (PSHA), informed by the extreme value theory of mathematical statistics. We treat a GMR as a random function and mathematically derive the principle of area equivalence: two alternative GMRs have an equivalent influence on the hazard if they have equivalent area functions. This includes local biases. Interpreting the difference between an actual GMR and a modeled one as a random component leads to a general overestimation of residual variance and hazard. Beyond this, we discuss important aspects of classical approaches and identify discrepancies with the state of the art in stochastics and statistics (model selection and significance, testing of distributional assumptions, extreme value statistics). In particular, we criticize the assumption that residuals of maxima such as the peak ground acceleration (PGA) are log-normally distributed: the natural distribution of the individual random component (equivalent to exp(epsilon_0) of Joyner and Boore 1993) is the generalized extreme value distribution. We show by numerical studies that the actual distribution can be hidden, and that a wrong distributional assumption can affect the PSHA as negatively as neglecting area equivalence does. Finally, we suggest an estimation concept for GMRs in PSHA with a regression-free variance estimate of the individual random component. We demonstrate the advantages of event-specific GMRs by analyzing data sets from the PEER strong motion database and estimating event-specific GMRs; the majority of the best models are based on an anisotropic point source approach. The residual variance of logarithmized PGA is significantly smaller than in previous models. We validate the estimates for the event with the largest sample by means of empirical area functions.
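    The distributional claim above, that maxima such as PGA follow a generalized extreme value (GEV) law rather than a log-normal one, can be illustrated on synthetic data. A hedged sketch using SciPy follows; the data are simulated block maxima, not the paper's PEER records, and both fits are shown side by side.

```python
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(0)

# Synthetic "event maxima": the maximum of 50 log-normal draws per event.
# Block maxima of light-tailed variables converge to a GEV distribution,
# which is the distributional argument made for PGA maxima.
maxima = np.exp(rng.normal(size=(1000, 50))).max(axis=1)

# Fit a GEV directly to the maxima ...
gev_shape, gev_loc, gev_scale = genextreme.fit(maxima)

# ... versus fitting a normal to the log-maxima (the classical
# log-normal residual assumption criticized in the abstract).
mu, sigma = norm.fit(np.log(maxima))
```

    Comparing the two fitted densities against a histogram of the maxima makes the mismatch of the log-normal assumption visible even when the raw data look plausibly log-normal at first glance.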

    Introduction: strategy in EU foreign policy

    The point of departure for the special collection is the observation that the growing complexity of the crises in the neighbourhood, and of the internal crises faced by the Union, lends a sense of urgency to any strategic thinking the EU might embrace. Against this backdrop, the recent shift towards geopolitics and strategic thinking is contextualized, and an understanding of key ways in which this shift is translated into strategies by EU actors is put forward. The analysis recognizes recent developments within the institutional dimension of the EU’s foreign and security policy, yet it confirms the fundamental importance of the member states’ willingness to invest resources and harmonize their foreign policy strategies at the EU level.

    An Expanded Evaluation of Protein Function Prediction Methods Shows an Improvement In Accuracy

    Background: A major bottleneck in our understanding of the molecular underpinnings of life is the assignment of function to proteins. While molecular experiments provide the most reliable annotation of proteins, their relatively low throughput and restricted purview have led to an increasing role for computational function prediction. However, assessing methods for protein function prediction and tracking progress in the field remain challenging. Results: We conducted the second critical assessment of functional annotation (CAFA), a timed challenge to assess computational methods that automatically assign protein function. We evaluated 126 methods from 56 research groups for their ability to predict biological functions using Gene Ontology and gene-disease associations using Human Phenotype Ontology on a set of 3681 proteins from 18 species. CAFA2 featured expanded analysis compared with CAFA1 with regard to data set size, variety, and assessment metrics. To review progress in the field, the analysis compared the best methods from CAFA1 to those of CAFA2. Conclusions: The top-performing methods in CAFA2 outperformed those from CAFA1. This increased accuracy can be attributed to a combination of the growing number of experimental annotations and improved methods for function prediction. The assessment also revealed that the definition of top-performing algorithms is ontology specific, that different performance metrics can be used to probe the nature of accurate predictions, and that the diversity of predictions differs between the biological process and human phenotype ontologies. While there was methodological improvement between CAFA1 and CAFA2, the interpretation of results and usefulness of individual methods remain context-dependent.
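    CAFA-style assessments rank methods by the protein-centric Fmax: the best harmonic mean of threshold-averaged precision and recall over all score cutoffs. The sketch below is a simplified illustration of that metric, not the official CAFA evaluator; it omits ontology propagation and information-content weighting, and the data structures are illustrative assumptions.

```python
def fmax(pred_scores, true_terms, thresholds):
    """Simplified protein-centric Fmax.

    pred_scores: {protein: {term: confidence score in [0, 1]}}
    true_terms:  {protein: set of experimentally annotated terms}
    thresholds:  iterable of score cutoffs to sweep.

    Following the CAFA convention, precision at a threshold is averaged
    only over proteins with at least one prediction at that threshold,
    while recall is averaged over all annotated proteins.
    """
    best = 0.0
    for t in thresholds:
        precisions, recalls = [], []
        for protein, truth in true_terms.items():
            predicted = {term for term, score
                         in pred_scores.get(protein, {}).items()
                         if score >= t}
            if predicted:
                precisions.append(len(predicted & truth) / len(predicted))
            recalls.append(len(predicted & truth) / len(truth))
        if not precisions:
            continue
        pr = sum(precisions) / len(precisions)
        rc = sum(recalls) / len(recalls)
        if pr + rc > 0:
            best = max(best, 2 * pr * rc / (pr + rc))
    return best
```

    Sweeping the threshold is what makes the metric comparable across methods whose raw confidence scores live on different scales.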
