Core structure heat-up and material relocation in a BWR short-term station blackout accident
This paper presents an analytical and numerical analysis that evaluates core-structure heat-up and the subsequent relocation of molten core materials during a BWR short-term station blackout accident with ADS. A simplified one-dimensional approach coupled with bounding arguments is first presented to establish an estimate of the temperature differences within a BWR assembly at the point when structural material first begins to melt. This analysis leads to the conclusions that the control blade will be the first structure to melt and that, at this point in time, overall temperature differences across the canister-blade region will not exceed 200 K. Next, a three-dimensional heat-transfer model of the canister-blade region within the core is presented that uses a diffusion approximation for the radiation heat transfer. This model is compared against the one-dimensional analysis to establish its consistency. Finally, the extension of the three-dimensional model to include melt relocation using a porous-media-type approximation is described. The results of this analysis suggest that, under these conditions, significant amounts of material will relocate to the core plate region and refreeze, potentially forming a significant blockage. The results also indicate that a large amount of lateral spreading of the melted blade and canister material into the fuel rod regions will occur during the melt progression process. 22 refs., 18 figs., 1 tab.
Massively Parallel Computing: A Sandia Perspective
The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant breakthroughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large-scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software, and algorithms, and we also offer our view of the forces shaping the parallel computing industry.
The international exascale software project roadmap
Over the last 20 years, the open-source community has provided more and more software on which the world's high-performance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate software elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and key integration of technologies necessary to make them work together smoothly and efficiently, both within individual petascale systems and between different systems. It seems clear that this completely uncoordinated development model will not provide the software needed to support the unprecedented parallelism required for peta-/exascale computation on millions of cores, or the flexibility required to exploit new hardware models and features, such as transactional memory, speculative execution, and graphics processing units. This report describes the work of the community to prepare for the challenges of exascale computing, ultimately combining their efforts in a coordinated International Exascale Software Project.
Biological and Environmental Research Exascale Requirements Review
The article of record as published may be found at http://dx.doi.org/10.2172/1375720

An Office of Science review sponsored jointly by Advanced Scientific Computing Research and Biological and Environmental Research, March 28-31, 2016, Rockville, Maryland.

Understanding the fundamentals of genomic systems and the processes governing impactful weather patterns are examples of the types of simulation and modeling performed on the most advanced computing resources in America. High-performance computing and computational science together provide a necessary platform for the mission science conducted by the Biological and Environmental Research (BER) office at the U.S. Department of Energy (DOE). This report reviews BER's computing needs and their importance for solving some of the toughest problems in BER's portfolio. BER's impact on science has been transformative. Mapping the human genome, including the U.S.-supported international Human Genome Project that DOE began in 1987, initiated the era of modern biotechnology and genomics-based systems biology. And since the 1950s, BER has been a core contributor to atmospheric, environmental, and climate science research, beginning with atmospheric circulation studies that were the forerunners of modern Earth system models (ESMs) and by pioneering the implementation of climate codes on high-performance computers. See http://exascaleage.org/ber/ for more information.

Sponsors: USDOE Office of Science (SC), Advanced Scientific Computing Research (SC-21); USDOE Office of Science (SC), Biological and Environmental Research (BER) (SC-23)