
    Main outcomes of the Phebus FPT1 uncertainty and sensitivity analysis in the EU-MUSA project

    The Management and Uncertainties of Severe Accidents (MUSA) project was funded under HORIZON 2020 and is coordinated by CIEMAT (Spain). The project aims to consolidate a harmonized approach for the analysis of uncertainties and sensitivities associated with Severe Accident (SA) analyses, focusing on source term figures of merit. The Application of Uncertainty Quantification (UQ) Methods against Integral Experiments (AUQMIE – Work Package 4 (WP4)), led by ENEA (Italy), was devoted to applying and testing UQ methodologies against the internationally recognized PHEBUS FPT1 test. FPT1 was chosen because, even though it is a simplified SA scenario, it is representative of the in-vessel phase of a severe accident initiated by a break in the cold leg of a PWR primary circuit. WP4 served as a platform to identify and discuss the issues encountered in applying UQ methodologies to SA analyses (e.g. discussing the UQ methodology, coupling the SA codes with the UQ tools, defining the results post-processing methods, etc.). The purpose of this paper is to describe the MUSA PHEBUS FPT1 uncertainty application exercise, its specifications, and the methodologies used by the partners to perform the UQ exercise. The main outcomes and lessons learned of the analysis are: scripting was in general needed to couple the SA codes with the uncertainty tools and to gain flexibility; particular attention should be devoted to the proper choice of the uncertain input parameters; outlier values of the figures of merit should be carefully analyzed; the computational time is a key element in performing UQ for SA; the large number of uncertain input parameters may complicate the interpretation of correlation or sensitivity analyses; and a statistically sound handling of failed calculations is needed.
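
    The coupling and failed-calculation handling mentioned in the lessons learned are essentially scripting tasks. The sketch below shows, under purely illustrative assumptions, what such a driver script might look like; the uncertain parameters, their ranges and the run_sa_code() placeholder are hypothetical and do not correspond to any actual severe-accident code interface used in MUSA.

    import numpy as np
    from scipy.stats import qmc

    # Assumptions: parameter names, ranges and run_sa_code() are hypothetical stand-ins
    # for a real severe-accident code coupling.
    rng = np.random.default_rng(0)
    params = {
        "cladding_failure_temp_K": (2100.0, 2500.0),
        "fuel_oxidation_rate_mult": (0.5, 2.0),
        "aerosol_deposition_mult": (0.5, 2.0),
    }
    lows = np.array([lo for lo, _ in params.values()])
    highs = np.array([hi for _, hi in params.values()])

    # Latin hypercube sample of the uncertain inputs, scaled to their ranges.
    n_runs = 200
    samples = qmc.scale(qmc.LatinHypercube(d=len(params), seed=0).random(n_runs), lows, highs)

    def run_sa_code(x):
        """Placeholder for one severe-accident code run; returns a source-term figure
        of merit (e.g. a released iodine fraction) or None when the run fails."""
        if rng.random() < 0.05:          # mimic the occasional failed calculation
            return None
        t_fail, oxid, depo = x
        return 0.3 * oxid / depo * np.exp(-(t_fail - 2100.0) / 2000.0)

    runs = [(x, run_sa_code(x)) for x in samples]
    failed = sum(1 for _, y in runs if y is None)
    print(f"{failed} failed calculations out of {n_runs}; these should be reported and "
          "treated explicitly rather than silently resampled.")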

    First outcomes from the PHEBUS FPT1 uncertainty application done in the EU MUSA project

    The Management and Uncertainties of Severe Accidents (MUSA) project, funded under HORIZON 2020 and coordinated by CIEMAT (Spain), aims to consolidate a harmonized approach for the analysis of uncertainties and sensitivities associated with Severe Accidents (SAs) by focusing on Source Term (ST) Figures of Merit (FOM). In this framework, among the 7 MUSA WPs, the Application of Uncertainty Quantification (UQ) Methods against Integral Experiments (AUQMIE – Work Package 4 (WP4)), led by ENEA (Italy), looked at applying and testing UQ methodologies against the internationally recognized PHEBUS FPT1 test. Considering that FPT1 is a simplified but representative SA scenario, the main target of WP4 is to train project partners to perform UQ for SA analyses. WP4 is also a collaborative platform for highlighting and discussing results and issues arising from the application of UQ methodologies, already used for design basis accidents, to SA analyses within MUSA. The WP4 application thus creates the technical background for the full-plant and spent fuel pool applications planned within the MUSA project, and it also provides a first contribution to the MUSA best practices and lessons learned. 16 partners from different world regions are involved in the WP4 activities. The purpose of this paper is to describe the MUSA PHEBUS FPT1 uncertainty application exercise, the methodologies used by the partners to perform the UQ exercise, and the first insights emerging from the calculation phase.
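
    Once the successful runs are collected, post-processing typically reduces to empirical percentiles of the ST FOM and a per-input rank correlation. A minimal sketch, using synthetic arrays as stand-ins for the sampled inputs and the FOM (none of the numbers relate to actual FPT1 results):

    import numpy as np
    from scipy.stats import spearmanr

    # Assumption: synthetic sampled inputs and a synthetic FOM stand in for the values
    # collected from the successful severe-accident code runs.
    rng = np.random.default_rng(1)
    n = 150
    inputs = {
        "cladding_failure_temp_K": rng.uniform(2100, 2500, n),
        "fuel_oxidation_rate_mult": rng.uniform(0.5, 2.0, n),
        "aerosol_deposition_mult": rng.uniform(0.5, 2.0, n),
    }
    # Synthetic source-term FOM (e.g. released iodine fraction) depending on the inputs.
    fom = (0.3 * inputs["fuel_oxidation_rate_mult"] / inputs["aerosol_deposition_mult"]
           + rng.normal(0, 0.02, n))

    p5, p50, p95 = np.percentile(fom, [5, 50, 95])
    print(f"FOM percentiles: 5% = {p5:.3f}, 50% = {p50:.3f}, 95% = {p95:.3f}")
    for name, values in inputs.items():
        rho, p = spearmanr(values, fom)
        print(f"{name}: Spearman rho = {rho:+.2f} (p = {p:.3g})")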

    Sample Reproducibility of Genetic Association Using Different Multimarker TDTs in Genome-Wide Association Studies: Characterization and a New Approach

    Multimarker Transmission/Disequilibrium Tests (TDTs) are association tests that are very robust to population admixture and structure and may be used to identify susceptibility loci in genome-wide association studies. Multimarker TDTs using several markers may increase power by capturing high-degree associations. However, there is also a risk of spurious associations and of power reduction due to the increase in degrees of freedom. In this study we show that associations found by tests built on simple null hypotheses are highly reproducible in a second independent data set regardless of the number of markers. As a test exhibiting this feature to its maximum, we introduce the multimarker 2-Groups TDT, a test which, under the hypothesis of no linkage, asymptotically follows a χ² distribution with one degree of freedom regardless of the number of markers. The statistic requires the division of parental haplotypes into two groups: a disease-susceptibility and a disease-protective haplotype group. We assessed the test behavior by performing an extensive simulation study as well as a real-data study using several data sets of two complex diseases. We show that the test is highly efficient and achieves the highest power among all the tests used, even when the null hypothesis is tested in a second independent data set. It therefore turns out to be a very promising multimarker TDT for genome-wide searches for disease susceptibility loci, and may be used as a preprocessing step in the construction of more accurate genetic models to predict individual susceptibility to complex diseases.
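
    For illustration, a hedged sketch of the two-group construction: parents carrying one haplotype from each group contribute a transmitted/untransmitted count, and a McNemar-style statistic with one degree of freedom tests over-transmission of the susceptibility group. This only illustrates the general idea; it is not the exact statistic defined in the paper.

    from scipy.stats import chi2

    def two_group_tdt(n_transmit_S, n_transmit_P):
        """Chi-squared statistic (1 df) and p-value for over-transmission of group-S
        (susceptibility) haplotypes from S/P-heterozygous parents to affected offspring.
        Illustrative construction, not the statistic from the paper."""
        total = n_transmit_S + n_transmit_P
        if total == 0:
            return 0.0, 1.0
        stat = (n_transmit_S - n_transmit_P) ** 2 / total
        return stat, chi2.sf(stat, df=1)

    # Example: 80 S-haplotype transmissions versus 50 P-haplotype transmissions.
    stat, p = two_group_tdt(80, 50)
    print(f"chi2 = {stat:.2f}, p = {p:.4f}")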

    Whole genome association mapping by incompatibilities and local perfect phylogenies

    BACKGROUND: With current technology, vast amounts of data can be cheaply and efficiently produced in association studies, and to prevent data analysis from becoming the bottleneck of such studies, fast and efficient analysis methods that scale to such data set sizes must be developed. RESULTS: We present a fast method for accurate localisation of disease-causing variants in high-density case-control association mapping experiments with large numbers of cases and controls. The method searches for significant clustering of case chromosomes in the "perfect" phylogenetic tree defined by the largest region around each marker that is compatible with a single phylogenetic tree. This perfect phylogenetic tree is treated as a decision tree for determining disease status, and scored by its accuracy as a decision tree. The rationale is that the perfect phylogeny near a disease-affecting mutation should provide more information about the affected/unaffected classification than random trees. If regions of compatibility contain few markers, due to e.g. large marker spacing, the algorithm can allow the inclusion of incompatible markers in order to enlarge the regions prior to estimating their phylogeny. Both haplotype data and unphased genotype data can be analysed. The power and efficiency of the method are investigated on 1) simulated genotype data under different models of disease determination, 2) artificial data sets created from the HapMap resource, and 3) data sets used for testing other methods, in order to compare with these. Our method has the same accuracy as single marker association (SMA) in the simplest case of a single disease-causing mutation and a constant recombination rate. However, in more complex scenarios of mutation heterogeneity and more complex haplotype structure, such as found in the HapMap data, our method outperforms SMA as well as other fast, data-mining approaches such as HapMiner and Haplotype Pattern Mining (HPM), despite being significantly faster. For unphased genotype data, an initial step of estimating the phase only slightly decreases the power of the method. The method was also found to accurately localise a known susceptibility variant – the ΔF508 mutation for cystic fibrosis – in an empirical data set, and to find significant signals for association between the CYP2D6 gene and poor drug metabolism, although for this data set the highest association score is about 60 kb from the CYP2D6 gene. CONCLUSION: Our method has been implemented in the Blossoc (BLOck aSSOCiation) software. Using Blossoc, genome-wide chip-based surveys of 3 million SNPs in 1000 cases and 1000 controls can be analysed in less than two CPU hours.
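
    The scoring step can be pictured with a small sketch: take the clades of the local perfect phylogeny as leaves of a decision tree, predict the majority case/control label within each clade, and score the region by the resulting accuracy. The clade assignment below is a toy stand-in; constructing the phylogeny from the compatible-marker region is the part this sketch omits, and in practice the score would be assessed for significance (e.g. by permutation).

    from collections import Counter, defaultdict

    def clade_accuracy(clade_of, is_case):
        """Accuracy of majority-vote case/control prediction within each clade,
        used here as a stand-in for scoring the perfect phylogeny as a decision tree."""
        members = defaultdict(list)
        for chrom, clade in clade_of.items():
            members[clade].append(is_case[chrom])
        correct = 0
        for labels in members.values():
            correct += Counter(labels).most_common(1)[0][1]   # majority votes count as correct
        return correct / len(clade_of)

    # Toy example: 8 chromosomes in 3 clades; cases concentrate in clade 0.
    clade_of = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2, 7: 2}
    is_case = {0: True, 1: True, 2: True, 3: False, 4: True, 5: False, 6: False, 7: False}
    print(f"region score (accuracy) = {clade_accuracy(clade_of, is_case):.2f}")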

    The emergence and current performance of a health research system: lessons from Guinea Bissau

    Background: Little is known about how health research systems (HRS) in low-income countries emerge and evolve over time, and how this process relates to their performance. Understanding how HRSs emerge is important for the development of well-functioning National Health Research Systems (NHRS). The aim of this study was to assess how the HRS in Guinea Bissau has emerged and evolved over time and how the present system functions. Methods: We used a qualitative case-study methodology to explore the emergence and current performance of the HRS, using the NHRS framework. We reviewed documents and carried out 39 in-depth interviews with stakeholders ranging from health research to policy and practice. Using an iterative approach, we undertook a thematic analysis of the data. Results: The research practices in Guinea Bissau led to the emergence of a HRS with both local and international links and strong dependencies on international partners and donors. The post-colonial, volatile and resource-dependent context, changes in donor policies, the training of local researchers and the nature of the research findings influenced how the HRS evolved. Research priorities have mostly been set by 'expatriate' researchers and focused on understanding and reducing child mortality. Research funding is almost exclusively provided by foreign donors and international agencies. The training of Guinean researchers started in the mid-nineties and has since reinforced the links with the health system, broadened the research agenda and enhanced local use of research. While some studies have made an important contribution to global health, the use of research within Guinea Bissau has been constrained by the weak and donor-dependent health system, volatile government, top-down policies of international agencies, and the controversial nature of some of the research findings. Conclusions: In Guinea Bissau a de facto 'system' of research has emerged through research practices and co-evolving national and international research and development dynamics. If the aim of research is to contribute to local decision making, it is essential to modulate the emerged system by setting national research priorities, aligning funding, building national research capacity and linking research to decision-making processes. Donors and international agencies can contribute to this process by coordinating their efforts and aligning to national priorities.

    Probabilistic Explanation Based Learning

    Explanation based learning produces generalized explanations from examples. These explanations are typically built in a deductive manner and they aim to capture the essential characteristics of the examples. Probabilistic explanation based learning extends this idea to probabilistic logic representations, which have recently become popular within the field of statistical relational learning. The task is now to find the most likely explanation why one (or more) example(s) satisfy a given concept. These probabilistic and generalized explanations can then be used to discover similar examples and to reason by analogy. So, whereas traditional explanation based learning is typically used for speed-up learning, probabilistic explanation based learning is used for discovering new knowledge. Probabilistic explanation based learning has been implemented in a recently proposed probabilistic logic called ProbLog, and it has been applied to a challenging application in discovering relationships of interest in large biological networks.
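
    The "most likely explanation" task can be illustrated, in the biological-network setting mentioned above, by searching for the most probable single connection between two entities when edges carry independent probabilities. The sketch below does this with Dijkstra's algorithm on -log probabilities; it conveys the flavour of the task only and is not the ProbLog implementation.

    import heapq
    import math

    def most_likely_path(edges, src, dst):
        """Return (probability, path) of the most probable path from src to dst in an
        undirected graph whose edges carry independent probabilities. A stand-in for
        finding the most likely explanation of a connection; not ProbLog itself."""
        graph = {}
        for (a, b), p in edges.items():
            graph.setdefault(a, []).append((b, p))
            graph.setdefault(b, []).append((a, p))
        best = {src: 0.0}                      # accumulated -log probability
        heap = [(0.0, src, [src])]
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return math.exp(-cost), path
            if cost > best.get(node, math.inf):
                continue                       # stale heap entry
            for nxt, p in graph.get(node, []):
                new_cost = cost - math.log(p)
                if new_cost < best.get(nxt, math.inf):
                    best[nxt] = new_cost
                    heapq.heappush(heap, (new_cost, nxt, path + [nxt]))
        return 0.0, []

    # Toy probabilistic network: gene -- protein -- disease links with edge probabilities.
    edges = {("geneA", "protX"): 0.9, ("protX", "disease"): 0.7, ("geneA", "disease"): 0.4}
    prob, path = most_likely_path(edges, "geneA", "disease")
    print(f"most likely explanation: {' -> '.join(path)} with probability {prob:.2f}")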