
    An Effective Methodology for Thermal-Hydraulics Analysis of a VHTR Core and Fuel Elements

    The Very High Temperature Reactor (VHTR) is a Generation-IV design in the conceptual pre-licensing phase for potential construction by 2030-2050. It is a graphite-moderated, helium-cooled reactor that operates at an exit temperature of up to 1273 K, making it ideal for generating electricity at a plant thermal efficiency upwards of 48% and for the co-generation of process heat for hydrogen production and other industrial uses. Extensive thermal-hydraulics and safety analyses of VHTRs are being conducted using Computational Fluid Dynamics (CFD) and heat transfer codes, in conjunction with experiments and prototype demonstrations. These analyses are challenging, largely due to the 3-D simulation of the helium flow in the 10 m long coolant channels in the reactor core and the need to examine the effects of helium bypass flow in the interstitial gaps between the core fuel elements. This research, performed at the UNM-ISNPS, developed an effective thermal-hydraulics analysis methodology that markedly reduces the numerical meshing requirements and computational time. It couples the helium's 1-D convective flow and heat transfer in the channels to 3-D heat conduction in the graphite and fuel compacts of the VHTR fuel elements. Besides the local helium bulk temperature, the heat transfer coefficient is calculated using a Nusselt number correlation developed and validated in this work. In addition to omitting the numerical meshing in the coolant channels, the simplified analysis methodology effectively decreases the total computation time by a factor of ~33-40 with little effect on the calculated temperatures (< 5 K), compared to a full 3-D thermal-hydraulics analysis. The developed convective heat transfer correlation accounts for the effect of entrance mixing in the coolant channels, where z/D < 25. The correlation compares favorably, to within +12%, with Taylor's correlation (based on high-temperature hydrogen heat transfer) and to within +2% of the calculated results for full 3-D analyses of a VHTR single-channel module and of multiple channels in the fuel elements. The simplified methodology is used to investigate the effects of the helium bypass flow in interstitial gaps between fuel elements and of the helium bleed flow in control rod channels on the calculated temperatures in the VHTR fuel elements. Thermal-hydraulics analyses of a one-element-high and of a full-height VHTR 1/6 core are also conducted. Results show that the interstitial bypass flow increases the temperatures near the center of the core fuel elements by 10-15 K, while reducing the temperatures along the edges of the elements by ~30 K. Without bypass flow, hotspots may occur at the location of burnable poison rods in the fuel elements, depending on the assumed volumetric heat generation rate in the rods. The helium bleed flow through the control rod channels reduces temperatures near them by 2-5 K and only slightly increases the temperatures within the rest of the core fuel elements. In the VHTR 1/6 core thermal-hydraulics analysis, the helium bypass flow decreases the heat transfer from the core fuel elements to the adjacent radial graphite reflector blocks. Results demonstrate the effectiveness of the developed methodology and its potential use in future thermal-hydraulics design and safety analyses of VHTRs.
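    The coupling step in this methodology amounts to marching a 1-D energy balance along each coolant channel and using a Nusselt-number correlation to turn the local bulk temperature into a wall heat transfer coefficient for the 3-D conduction model. The sketch below illustrates that step; the correlation developed in the work is not reproduced here, so the classic Dittus-Boelter form stands in for it, and the property values, channel dimensions, inlet temperature and heat flux are illustrative assumptions rather than design data.

```python
# Minimal sketch of the 1-D channel energy balance coupled to a Nusselt-number
# correlation for the wall heat transfer coefficient. The correlation developed
# in the work is not reproduced here; the classic Dittus-Boelter form is used as
# a placeholder, and the property values, geometry and heat flux are assumed.
import numpy as np

# Assumed helium properties (roughly at VHTR conditions) and channel geometry
cp, k, mu = 5193.0, 0.35, 4.2e-5      # J/(kg K), W/(m K), Pa s
D, L, mdot = 0.016, 10.0, 0.03        # channel diameter (m), length (m), flow (kg/s)
q_wall = 1.2e5                        # W/m^2, assumed uniform wall heat flux

n = 200
z = np.linspace(0.0, L, n)
T_bulk = np.empty(n)
T_bulk[0] = 873.0                     # K, assumed inlet temperature

Re = 4.0 * mdot / (np.pi * D * mu)
Pr = cp * mu / k

# 1-D energy balance marched along the channel: mdot*cp*dT = q'' * (pi*D) * dz
for i in range(1, n):
    dz = z[i] - z[i - 1]
    T_bulk[i] = T_bulk[i - 1] + q_wall * np.pi * D * dz / (mdot * cp)

# Placeholder fully developed correlation (Dittus-Boelter); the correlation in
# the work additionally accounts for entrance mixing where z/D < 25.
Nu = 0.023 * Re**0.8 * Pr**0.4
h = Nu * k / D
T_wall = T_bulk + q_wall / h          # wall temperature handed to the 3-D solid

print(f"Re = {Re:.0f}, Nu = {Nu:.1f}, h = {h:.0f} W/(m^2 K)")
print(f"Exit bulk temperature ~ {T_bulk[-1]:.0f} K, peak wall ~ {T_wall.max():.0f} K")
```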

    A static heap analysis for shape and connectivity: Unified memory analysis: The base framework

    Modeling the evolution of the state of program memory during program execution is critical to many parallelization techniques. Current memory analysis techniques either provide very accurate information but run prohibitively slowly or produce very conservative results. An approach based on abstract interpretation is presented for analyzing programs at compile time, which can accurately determine many important program properties such as aliasing, logical data structures and shape. These properties are known to be critical for transforming a single-threaded program into a version that can be run on multiple execution units in parallel. The analysis is shown to be of polynomial complexity in the size of the memory heap. Experimental results for benchmarks in the Jolden suite are given. These results show that in practice the analysis method is efficient and is capable of accurately determining shape information in programs that create and manipulate complex data structures.
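    As a rough illustration of the kind of information such a heap analysis computes, the sketch below maintains an abstract heap graph (nodes summarising memory cells, edges labelled with field names) and classifies the region reachable from a variable as a tree, a DAG or cyclic. The node names and the classification rule are illustrative assumptions and do not reproduce the paper's formal abstract-interpretation domain.

```python
# Minimal sketch of the kind of abstract heap graph such an analysis maintains:
# abstract nodes summarise sets of concrete cells, edges carry field names, and
# the region reachable from a variable is classified as a tree, a DAG or cyclic.
# The names and the classification rule are illustrative, not the paper's model.
from collections import defaultdict

class AbstractHeap:
    def __init__(self):
        self.edges = defaultdict(list)      # node -> list of (field, target node)

    def add_edge(self, src, field, dst):
        self.edges[src].append((field, dst))

    def shape_from(self, root):
        """Classify the region reachable from `root`."""
        visiting, done = set(), set()
        shared = False

        def dfs(node):
            nonlocal shared
            if node in visiting:
                return True                 # back edge -> cycle
            if node in done:
                shared = True               # reached along two distinct paths
                return False
            visiting.add(node)
            for _field, succ in self.edges[node]:
                if dfs(succ):
                    return True
            visiting.remove(node)
            done.add(node)
            return False

        if dfs(root):
            return "CYCLE"
        return "DAG" if shared else "TREE"

# Example: a binary-tree-like structure, then the same structure with sharing
h = AbstractHeap()
h.add_edge("a", "left", "b")
h.add_edge("a", "right", "c")
print(h.shape_from("a"))                    # TREE
h.add_edge("b", "next", "c")
print(h.shape_from("a"))                    # DAG
```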

    AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient

    Adversarial training is exploited to develop a Deep Neural Network (DNN) model that is robust against maliciously altered data. These attacks may have catastrophic effects on DNN models yet be indistinguishable to a human being. For example, an external attack can modify an image by adding noise invisible to the human eye, but a DNN model misclassifies the image. A key objective for developing robust DNN models is to use a learning algorithm that is fast but can also give a model that is robust against different types of adversarial attacks. For adversarial training in particular, enormously long training times are needed to obtain high accuracy under many different types of adversarial samples generated using different adversarial attack techniques. This paper aims at accelerating adversarial training to enable fast development of robust DNN models against adversarial attacks. The general method for improving training performance is hyperparameter fine-tuning, where the learning rate is one of the most crucial hyperparameters. By modifying its shape (the value over time) and value during training, we can obtain a model robust to adversarial attacks faster than with standard training. First, we conduct experiments on two different datasets (CIFAR10, CIFAR100), exploring various techniques. Then, this analysis is leveraged to develop a novel fast training methodology, AccelAT, which automatically adjusts the learning rate for different epochs based on the accuracy gradient. The experiments show results comparable with related works, and in several experiments the adversarial training of DNNs using our AccelAT framework is up to 2× faster than existing techniques. Thus, our findings boost the speed of adversarial training in an era in which security and performance are fundamental optimization objectives in DNN-based applications. To facilitate reproducible research, AccelAT is available as an open-source framework: https://github.com/Nikfam/AccelAT.
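    The core idea, adjusting the learning rate from the gradient of the measured accuracy rather than from a fixed schedule, can be sketched in a few lines. The thresholds, decay factor and loop structure below are illustrative assumptions; the actual rule used by AccelAT is described in the paper and the linked repository.

```python
# Minimal sketch of a learning-rate schedule driven by the accuracy gradient:
# keep the learning rate while validation accuracy is still improving steeply,
# decay it once the per-epoch improvement flattens. Thresholds and the decay
# factor are assumed values, not those used by the AccelAT framework.

def accuracy_gradient_lr(prev_acc, curr_acc, lr,
                         flat_threshold=0.002, decay=0.5, min_lr=1e-5):
    """Return the learning rate to use for the next epoch."""
    acc_gradient = curr_acc - prev_acc        # accuracy improvement this epoch
    if acc_gradient < flat_threshold:         # accuracy curve is flattening
        lr = max(lr * decay, min_lr)
    return lr

# Skeleton of the surrounding loop (model, data and adversarial step omitted)
lr, prev_acc = 0.1, 0.0
for epoch in range(30):
    # ... adversarial training pass and validation would run here ...
    curr_acc = min(prev_acc + 0.01, 0.85)     # stand-in for the measured accuracy
    lr = accuracy_gradient_lr(prev_acc, curr_acc, lr)
    prev_acc = curr_acc
print(f"final learning rate: {lr:g}")
```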

    Solid rocket booster internal flow analysis by highly accurate adaptive computational methods

    The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster, along with the new methodologies and features incorporated into this analysis tool.

    Parallel Optimisations of Perceived Quality Simulations

    Processor architectures have changed significantly, with fast single-core processors replaced by a diverse range of multicore processors. New architectures require code to be executed in parallel to realize these performance gains. This is straightforward for small applications, where analysis and refactoring are simple or existing tools can parallelise automatically; however, the organic growth and large, complicated data structures common in mature industrial applications can make parallelisation more difficult. One such application is studied: a mature Windows C++ application used for the visualisation of Perceived Quality (PQ). PQ simulations enable the visualisation of how manufacturing variations affect the look of the final product. The application is commonly used but suffers from performance issues, and previous parallelisation attempts have failed. The issues associated with parallelising a mature industrial application are investigated. A methodology to investigate, analyse and evaluate the available methods and tools is produced. The shortfalls of these methods and tools are identified, and the methods used to overcome them are explained. The parallel version of the software is evaluated for performance. Case studies centring on the significant use cases of the application help to understand the impact on the user. Automated compilers provided no parallelism, while manual parallelisation using OpenMP required significant refactoring. A number of data-dependency issues resulted in some serialised code. Performance scaled with the number of physical cores when applied to certain problems; however, the unresolved bottlenecks resulted in mixed results for users. Use in verification did benefit, but those in early design stages did not. Without tools to aid analysis of complex data structures, parallelism could remain out of reach for industrial applications. Methods used here successfully, such as code isolation and serialisation, could be used effectively by such tools.
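    The central difficulty described above, separating independent work that can run in parallel from dependency-bound code that must stay serial, is sketched below in Python rather than the application's C++/OpenMP for brevity. The function and variable names are hypothetical and not taken from the PQ application.

```python
# Illustrative Python sketch (the application itself is C++/OpenMP) of the
# partitioning issue described above: independent work items can be farmed out
# to workers, while a dependency-bound reduction stays in a serial section.
# Function and variable names are hypothetical, not taken from the PQ code.
from concurrent.futures import ProcessPoolExecutor

def evaluate_patch(patch):
    # Independent per-patch computation -> safe to run in parallel
    return sum(x * x for x in patch)

def accumulate(results):
    # Order-dependent reduction -> kept serial, like the sections that
    # remained serialised after the OpenMP refactoring
    total = 0.0
    for r in results:
        total += r
    return total

if __name__ == "__main__":
    patches = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(evaluate_patch, patches))
    print(accumulate(partials))
```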

    Analysis of hybrid parallelization strategies: simulation of Anderson localization and Kalman Filter for LHCb triggers

    This thesis presents two experiences of hybrid programming applied to condensed matter and high energy physics. The two projects differ in various aspects, but both aim to analyse the benefits of using accelerated hardware to speed up calculations in current scientific research scenarios. The first project enables massive parallelism in a simulation of the Anderson localisation phenomenon in a disordered quantum system. The code represents a Hamiltonian in momentum space, then executes a diagonalization of the corresponding matrix using linear algebra libraries, and finally analyses the energy-level spacing statistics averaged over several realisations of the disorder. The implementation combines different parallelization approaches in a hybrid scheme. The averaging over the ensemble of disorder realisations exploits massive parallelism with a master-slave configuration based on both multi-threading and the Message Passing Interface (MPI). This framework is designed and implemented to easily interface with similar applications commonly adopted in scientific research, for example in Monte Carlo simulations. The diagonalization uses multi-core and GPU hardware, interfacing with the MAGMA, PLASMA or MKL libraries. Access to the libraries is modular to guarantee portability, maintainability and extension in the near future. The second project is the development of a Kalman Filter, including the porting onto GPU architectures and autovectorization, for the online LHCb triggers. The developed codes provide information about the viability and advantages of applying GPU technologies in the first triggering step of the Large Hadron Collider beauty (LHCb) experiment. The optimisation introduced in both codes for CPU and GPU delivered a relevant speedup of the Kalman Filter. The two GPU versions, in CUDA® and OpenCL™, have similar performance and are adequate to be considered in the upgrade and in the corresponding implementations of the Gaudi framework. In both projects we implement optimisation techniques in the CPU code. This report presents extensive benchmark analyses of the correctness and performance of both projects.
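    The first project's per-realisation workflow, building a disordered Hamiltonian, diagonalising it with a dense linear-algebra routine and accumulating level-spacing statistics, can be sketched as follows. NumPy's eigvalsh stands in for the MAGMA/PLASMA/MKL calls, the loop over realisations runs serially instead of under the master-slave MPI scheme, and the system size and disorder strength are illustrative assumptions.

```python
# Minimal sketch of the first project's per-realisation workflow: build a
# disordered Hamiltonian, diagonalise it with a dense linear-algebra routine
# (numpy's eigvalsh stands in for MAGMA/PLASMA/MKL), and accumulate
# level-spacing statistics over disorder realisations. System size, disorder
# strength and the number of realisations are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, W, realisations = 200, 2.0, 50        # matrix size, disorder strength, samples

ratios = []
for _ in range(realisations):
    # 1-D Anderson-type Hamiltonian: unit hopping plus random on-site energies
    H = np.diag(W * (rng.random(N) - 0.5))
    H += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

    E = np.linalg.eigvalsh(H)            # the diagonalization (library) step
    s = np.diff(np.sort(E))              # energy-level spacings
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    ratios.append(r.mean())

# In the thesis this loop over realisations is distributed with a master-slave
# MPI scheme; here it runs serially for brevity.
print(f"mean level-spacing ratio <r> ~ {np.mean(ratios):.3f}")
```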