7 research outputs found

    Contribution to the evaluation of uncertainties on neutron cross sections for fast-neutron reactors

    The thesis was primarily motivated by the growing wish to master the uncertainties on nuclear data, for nuclear safety reasons. It targets in particular the cross sections required by neutronics calculations of Generation IV sodium-cooled fast reactors (SFR), and the tools needed to evaluate them. The main objective of this work is to provide, and demonstrate the value of, new tools for producing coherent evaluated files with reliable and well-controlled uncertainties. To meet these needs, several methods have been implemented within the CONRAD code, developed at CEA Cadarache in the Département d'Étude des Réacteurs. After a review of the elements required to perform an evaluation, stochastic methods for solving the Bayesian inference are presented. They provide the evaluator with more information about the probability densities than the analytic resolution does, and they can also be used to validate it. The algorithms have been tested successfully on several cases, despite the longer calculation times inherent to Monte Carlo methods. This work then made it possible to take so-called microscopic constraints into account in CONRAD. They are defined as additional information to be handled on top of the traditional evaluation. An algorithm based on the Lagrange multiplier formalism was developed to solve, for example, continuity issues between two energy domains treated by two different theories. Other approaches are also presented, notably a marginalization procedure that can either complete an existing evaluation by adding covariance matrices or account for a systematic uncertainty in an experiment described by two theories. The behaviour of the implemented methods is illustrated with examples, such as the 238U total cross section. Finally, the last parts of the thesis focus on the feedback from integral experiments, using integral data assimilation methods to reduce the uncertainties on cross sections of interest for fast reactors. The document closes with key results on the capture and fission cross sections of 238U and 239Pu, obtained using the PROFIL and PROFIL-2 experiments in Phénix and the Jezebel benchmark.
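The continuity constraint handled by the Lagrange multiplier formalism can be sketched as a small equality-constrained least-squares problem: minimize the prior-weighted distance (x - x0)ᔀ B⁻č (x - x0) subject to C x = d. The two-parameter setup and all numbers below are purely illustrative, not taken from CONRAD:

```python
import numpy as np

# Hypothetical prior estimates for two energy domains: x0[0] is the cross
# section predicted at the boundary by the low-energy model, x0[1] the one
# predicted by the high-energy model (barns).
x0 = np.array([11.2, 10.6])          # prior central values
B = np.diag([0.3**2, 0.5**2])        # prior covariance (uncorrelated here)

# Continuity constraint C x = d: the two predictions must match at the boundary.
C = np.array([[1.0, -1.0]])
d = np.array([0.0])

# Lagrange-multiplier solution of min (x-x0)^T B^-1 (x-x0) s.t. C x = d:
#   x = x0 + B C^T (C B C^T)^-1 (d - C x0)
K = B @ C.T @ np.linalg.inv(C @ B @ C.T)
x = x0 + K @ (d - C @ x0)
B_post = B - K @ C @ B               # constrained (posterior) covariance

print(x)                             # both components equal: continuity enforced
print(B_post)                        # reduced, fully correlated covariance
```

Note that the better-constrained domain (smaller prior variance) moves less, as expected from the weighting by B.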

    Contribution to uncertainties evaluation for fast reactors neutronic cross sections

    No full text
    The thesis was primarily motivated by the growing wish to master the uncertainties on nuclear data, for nuclear safety reasons. It targets in particular the cross sections required by neutronics calculations of Generation IV sodium-cooled fast reactors (SFR), and the tools needed to evaluate them. The main objective of this work is to provide, and demonstrate the value of, new tools for producing coherent evaluated files with reliable and well-controlled uncertainties. To meet these needs, several methods have been implemented within the CONRAD code, developed at CEA Cadarache in the Département d'Étude des Réacteurs. After a review of the elements required to perform an evaluation, stochastic methods for solving the Bayesian inference are presented. They provide the evaluator with more information about the probability densities than the analytic resolution does, and they can also be used to validate it. The algorithms have been tested successfully on several cases, despite the longer calculation times inherent to Monte Carlo methods. This work then made it possible to take so-called microscopic constraints into account in CONRAD. They are defined as additional information to be handled on top of the traditional evaluation. An algorithm based on the Lagrange multiplier formalism was developed to solve, for example, continuity issues between two energy domains treated by two different theories. Other approaches are also presented, notably a marginalization procedure that can either complete an existing evaluation by adding covariance matrices or account for a systematic uncertainty in an experiment described by two theories. The behaviour of the implemented methods is illustrated with examples, such as the 238U total cross section. Finally, the last parts of the thesis focus on the feedback from integral experiments, using integral data assimilation methods to reduce the uncertainties on cross sections of interest for fast reactors. The document closes with key results on the capture and fission cross sections of 238U and 239Pu, obtained using the PROFIL and PROFIL-2 experiments in Phénix and the Jezebel benchmark.
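The integral data assimilation step mentioned above amounts to a GLS (Bayesian) update of cross sections with an integral measurement. The sketch below uses made-up multigroup cross sections, sensitivities and uncertainties, not an actual PROFIL or Jezebel analysis:

```python
import numpy as np

# Toy integral data assimilation: a keff-like benchmark measurement updates
# two multigroup cross sections. All numbers are hypothetical.
sig0 = np.array([1.80, 0.55])                   # prior cross sections (barns)
M = np.diag([(0.05 * 1.80)**2,
             (0.08 * 0.55)**2])                 # prior covariance (5 %, 8 %)

S = np.array([[-0.30, 0.45]])                   # d(keff)/d(sigma) sensitivities
k_calc = 1.0000                                 # calculated benchmark value
k_meas, sig_k = 0.9980, 0.0010                  # measured value and uncertainty
V = np.array([[sig_k**2]])                      # experimental covariance

# GLS / Bayesian update of parameters and covariance:
G = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)    # gain matrix
sig_post = sig0 + (G @ np.array([k_meas - k_calc])).ravel()
M_post = M - G @ S @ M

prior_unc = np.sqrt(np.diag(M))
post_unc = np.sqrt(np.diag(M_post))
print(prior_unc, post_unc)                      # posterior uncertainties shrink
```

The uncertainty reduction comes entirely from the sensitivity of the integral quantity to the cross sections; a reaction the benchmark is insensitive to would keep its prior uncertainty.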

    On the use of the BMC to resolve Bayesian inference with nuisance parameters

    Nuclear data are widely used in many research fields. In particular, neutron-induced reaction cross sections play a major role in safety and criticality assessments of nuclear technology, for existing power reactors as well as for future nuclear systems such as Generation IV. Because both stochastic and deterministic codes are becoming very efficient and accurate, with limited bias, nuclear data remain the main source of uncertainty. A worldwide effort is under way to improve the knowledge of nuclear data through new experiments and new adjustment methods in the evaluation processes. This paper gives an overview of the evaluation processes used for nuclear data at CEA. After presenting Bayesian inference and the associated methods used in the CONRAD code [P. Archier et al., Nucl. Data Sheets 118, 488 (2014)], it focuses on systematic uncertainties. These can be dealt with by marginalization methods during the analysis of differential measurements as well as integral experiments; they have to be taken into account properly in order to give well-estimated uncertainties on adjusted model parameters or multigroup cross sections. To provide a reference method, a new stochastic approach is presented, enabling the marginalization of nuisance parameters (background, normalization, ...). It can be seen as a validation tool, but also as a general framework that can be used with any given distribution. An analytic example based on a fictitious experiment is presented to show the good agreement between the stochastic and deterministic methods. The advantages of such a stochastic method are nevertheless tempered by the computation time required, which limits its application for large evaluation cases. Faster calculations can be foreseen with the nuclear models implemented in the CONRAD code or by using biasing techniques. The paper ends with perspectives on new problems and time optimization.
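The stochastic marginalization of a nuisance parameter can be illustrated on a toy case: a flat cross section measured with statistical noise plus a common (systematic) normalization uncertainty. All values are hypothetical and the sampling loop is only a sketch of the idea, not the CONRAD implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical differential measurement: one model parameter 'a' (a flat
# cross section), measured 50 times with statistical noise, plus a common
# normalization uncertainty treated as a nuisance parameter.
true_a = 5.0
sigma_stat = 0.1
sigma_norm = 0.02                        # 2 % normalization uncertainty
y = true_a + rng.normal(0.0, sigma_stat, size=50)

# Stochastic marginalization: sample the nuisance parameter from its prior,
# redo the (here trivial, analytic) least-squares fit for each sample, and
# look at the spread of the fitted values.
samples = []
for _ in range(20000):
    norm = rng.normal(1.0, sigma_norm)   # sampled normalization factor
    a_fit = np.mean(y / norm)            # least-squares estimate given norm
    samples.append(a_fit)
samples = np.asarray(samples)

stat_only = sigma_stat / np.sqrt(len(y))  # uncertainty ignoring the nuisance
marginal = samples.std(ddof=1)            # spread induced by the nuisance
total = np.hypot(stat_only, marginal)     # combined uncertainty
print(stat_only, marginal, total)
```

The systematic contribution does not average out with more data points, which is exactly why ignoring it gives badly underestimated uncertainties on adjusted parameters.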

    On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    No full text
    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference, by comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood function. A fitting procedure can thus be seen as the estimation of the posterior probability density of a set of parameters x⃗, given prior information on these parameters and a likelihood which gives the probability density of observing a data set knowing x⃗. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation that is solved numerically (minimum of a cost function, or Generalized Least Squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems: they avoid the approximations present in the traditional adjustment procedure based on chi-square minimization, and they allow alternative choices of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance ranges to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte Carlo sampling and Markov chains are presented. The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of the probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluations, as well as multigroup cross section data assimilation, are presented.
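The BMC-versus-GLS validation idea can be reduced to a one-parameter linear-Gaussian toy problem, where the posterior is known analytically and a Metropolis random walk should reproduce it. The numbers below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# One-parameter toy problem with Gaussian prior and likelihood, so the GLS
# (analytic) posterior is known and the Markov chain can be checked against it.
x_prior, sig_prior = 2.0, 0.5        # prior on the model parameter
y_meas, sig_meas = 2.6, 0.3          # measurement (model: y = x)

def log_post(x):
    # log pdf(posterior) = log pdf(prior) + log likelihood, up to a constant
    return -0.5 * ((x - x_prior) / sig_prior) ** 2 \
           - 0.5 * ((y_meas - x) / sig_meas) ** 2

# Metropolis random walk sampling the posterior
chain, x = [], x_prior
for _ in range(50000):
    prop = x + rng.normal(0.0, 0.3)                      # symmetric proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                                         # accept
    chain.append(x)
chain = np.asarray(chain[5000:])     # discard burn-in

# Analytic (GLS) posterior for this linear-Gaussian case
w = 1.0 / sig_prior**2 + 1.0 / sig_meas**2
x_gls = (x_prior / sig_prior**2 + y_meas / sig_meas**2) / w
sig_gls = w ** -0.5
print(chain.mean(), x_gls)           # should agree closely
print(chain.std(ddof=1), sig_gls)
```

In this linear-Gaussian case the two answers coincide; the point of BMC is that the sampling estimate remains valid when the model is nonlinear or the distributions are non-Gaussian, where GLS is only an approximation.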
