
    Fission fragment assisted reactor concept for space propulsion: Foil reactor

    The concept is to fabricate a reactor from thin films or foils of uranium or uranium oxide coated onto substrates. These coatings would be made thin enough that the escaping fission fragments can directly heat a hydrogen propellant. Direct gas heating and direct gas pumping were studied previously in a nuclear-pumped laser program, in which fission fragments were used to pump lasers. In this concept, two substrates are placed opposite each other, and their internal faces are coated with a thin foil of uranium oxide. A few of the advantages of this technology are listed. In general, however, solid core nuclear thermal rockets and other nuclear thermal propulsion methods tend to look much the same; this reactor is expected to offer higher potential reliability, with low structural operating temperatures, very short burn times, graceful failure modes, and reduced potential for energetic accidents. Adopting a design like this would take the NTP community part way toward some of the very advanced engine designs, such as the gas core reactor, but with reduced risk because of the much lower temperatures.

    Tree Adventure Passport - A Family Activity Guide


    Calibration of a photomultiplier array spectrometer

    A systematic approach to the calibration of a photomultiplier array spectrometer is presented. Through this approach, the incident light radiance is derived by identifying and tracking the gain characteristics of each photomultiplier tube.
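    As a rough illustration of this kind of per-tube calibration (not the paper's actual procedure), the sketch below converts raw counts from each photomultiplier tube to a radiance estimate using that tube's own tracked gain and dark-count values; the structure TubeCalibration, the function counts_to_radiance and the specific correction formula are assumptions made for illustration only.

        #include <cstddef>
        #include <vector>

        // Hypothetical per-tube calibration record: the dark counts and gain
        // stand in for whatever gain characteristics are traced per tube.
        struct TubeCalibration {
            double dark_counts;  // background counts with no incident light
            double gain;         // counts produced per unit radiance
        };

        // Convert raw counts from each photomultiplier tube to a radiance
        // estimate using that tube's own calibration, rather than a single
        // gain shared across the array.
        std::vector<double> counts_to_radiance(const std::vector<double>& counts,
                                               const std::vector<TubeCalibration>& cal)
        {
            std::vector<double> radiance(counts.size(), 0.0);
            for (std::size_t i = 0; i < counts.size(); ++i) {
                radiance[i] = (counts[i] - cal[i].dark_counts) / cal[i].gain;
            }
            return radiance;
        }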

    Towards a portable and future-proof particle-in-cell plasma physics code

    We present the first reported OpenCL implementation of EPOCH3D, an extensible particle-in-cell plasma physics code developed at the University of Warwick. We document the challenges and successes of this porting effort, and compare the performance of our implementation executing on a wide variety of hardware from multiple vendors. The focus of our work is on understanding the suitability of existing algorithms for future accelerator-based architectures, and identifying the changes necessary to achieve performance portability for particle-in-cell plasma physics codes. We achieve good levels of performance with limited changes to the algorithmic behaviour of the code. However, our results suggest that a fundamental change to EPOCH3D’s current accumulation step (and its dependency on atomic operations) is necessary in order to fully utilise the massive levels of parallelism supported by emerging parallel architectures.
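    The accumulation step mentioned above is the scatter of particle contributions onto a shared grid, which is where the dependency on atomic operations arises. The sketch below is a minimal, generic illustration of that pattern in 1D, contrasting an atomics-based deposit with a per-thread private grid that is summed afterwards; the function names, the OpenMP formulation and the 1D geometry are assumptions, not EPOCH3D's actual kernels.

        #include <cstddef>
        #include <omp.h>
        #include <vector>

        // Illustrative 1D charge deposition: each particle scatters charge q
        // onto the grid cell it occupies. Many particles map to the same cell,
        // so a naive parallel loop needs an atomic update, which serialises
        // contending threads.
        void deposit_atomic(const std::vector<int>& cell, double q,
                            std::vector<double>& rho)
        {
            #pragma omp parallel for
            for (long i = 0; i < (long)cell.size(); ++i) {
                #pragma omp atomic
                rho[cell[i]] += q;  // contended atomic add
            }
        }

        // Alternative without atomics: each thread accumulates into a private
        // copy of the grid, and the copies are summed at the end. This trades
        // extra memory for reduced contention, one way of relaxing the
        // dependency on atomic operations.
        void deposit_private(const std::vector<int>& cell, double q,
                             std::vector<double>& rho)
        {
            #pragma omp parallel
            {
                std::vector<double> local(rho.size(), 0.0);
                #pragma omp for nowait
                for (long i = 0; i < (long)cell.size(); ++i)
                    local[cell[i]] += q;
                #pragma omp critical
                for (std::size_t c = 0; c < rho.size(); ++c)
                    rho[c] += local[c];
            }
        }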

    On the acceleration of wavefront applications using distributed many-core architectures

    In this paper we investigate the use of distributed graphics processing unit (GPU)-based architectures to accelerate pipelined wavefront applications, a ubiquitous class of parallel algorithms used for the solution of a number of scientific and engineering applications. Specifically, we employ a recently developed port of the LU solver (from the NAS Parallel Benchmark suite) to investigate the performance of these algorithms on high-performance computing solutions from NVIDIA (Tesla C1060 and C2050) as well as on traditional clusters (AMD/InfiniBand and IBM BlueGene/P). Benchmark results are presented for problem classes A to C, and a recently developed performance model is used to provide projections for problem classes D and E, the latter of which represents a billion-cell problem. Our results demonstrate that while the theoretical performance of GPU solutions far exceeds that of many traditional technologies, the sustained application performance is currently comparable for scientific wavefront applications. Finally, a breakdown of the GPU solution is conducted, exposing PCIe overheads and decomposition constraints. A new k-blocking strategy is proposed to improve the future performance of this class of algorithm on GPU-based architectures.
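    For context, a pipelined wavefront sweep updates each cell of a 3D grid from its already-computed neighbours in the i, j and k directions, so work must proceed through the grid as a moving front. One plausible reading of the k-blocking idea is to process several k-planes per step so that communication (or PCIe transfer) is issued once per block rather than once per plane; the sketch below is a schematic serial version written under that assumption, with the function name, the stencil coefficients and the commented halo-exchange hook all chosen for illustration rather than taken from the LU benchmark.

        #include <cstddef>
        #include <vector>

        // Schematic wavefront sweep over an nx*ny*nz grid in which each cell
        // depends on its i-1, j-1 and k-1 neighbours, as in LU-style solvers.
        // The k loop is tiled into blocks of size kb: in a distributed GPU
        // port, boundary data would be exchanged once per block instead of
        // once per k-plane.
        void wavefront_sweep(std::vector<double>& u,
                             std::size_t nx, std::size_t ny, std::size_t nz,
                             std::size_t kb)
        {
            auto idx = [=](std::size_t i, std::size_t j, std::size_t k) {
                return (k * ny + j) * nx + i;
            };
            for (std::size_t k0 = 1; k0 < nz; k0 += kb) {
                std::size_t k1 = (k0 + kb < nz) ? k0 + kb : nz;
                for (std::size_t k = k0; k < k1; ++k)
                    for (std::size_t j = 1; j < ny; ++j)
                        for (std::size_t i = 1; i < nx; ++i)
                            u[idx(i, j, k)] = 0.25 * (u[idx(i - 1, j, k)] +
                                                      u[idx(i, j - 1, k)] +
                                                      u[idx(i, j, k - 1)] +
                                                      u[idx(i, j, k)]);
                // exchange_halo(u, k0, k1);  // hypothetical per-block exchange
            }
        }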

    An investigation of the performance portability of OpenCL

    This paper reports on the development of an MPI/OpenCL implementation of LU, an application-level benchmark from the NAS Parallel Benchmark Suite. An account of the design decisions addressed during the development of this code is presented, demonstrating the importance of memory arrangement and work-item/work-group distribution strategies when applications are deployed on different device types. The resulting platform-agnostic, single-source application is benchmarked on a number of different architectures, and is shown to be 1.3–1.5× slower than native FORTRAN 77 or CUDA implementations on a single node and 1.3–3.1× slower on multiple nodes. We also explore the potential performance gains of OpenCL’s device fissioning capability, demonstrating up to a 3× speed-up over our original OpenCL implementation.
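    Device fission lets a single OpenCL device be partitioned into sub-devices that can be driven independently, each with its own context and command queue. The fragment below sketches an equal partition of a device into four sub-devices via clCreateSubDevices (core API in OpenCL 1.2; older drivers expose the same idea through the cl_ext_device_fission extension). The partition count, the function name and the omission of error handling are illustrative choices, not the configuration evaluated in the paper.

        #define CL_TARGET_OPENCL_VERSION 120
        #include <CL/cl.h>

        // Partition a parent device into four equally sized sub-devices.
        // Each returned sub-device can then be given its own context and
        // queue, so that work can be placed onto specific compute units.
        cl_uint fission_into_four(cl_device_id parent, cl_device_id out[4])
        {
            const cl_device_partition_property props[] = {
                CL_DEVICE_PARTITION_EQUALLY, 4, 0
            };
            cl_uint created = 0;
            clCreateSubDevices(parent, props, 4, out, &created);
            return created;  // number of sub-devices actually created
        }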

    Performance of a second order electrostatic particle-in-cell algorithm on modern many-core architectures

    In this paper we present the outline of a novel electrostatic, second order Particle-in-Cell (PIC) algorithm that makes use of 'ghost particles' located around true particle positions in order to represent a charge distribution. We implement our algorithm within EMPIRE-PIC, a PIC code developed at Sandia National Laboratories. We test the performance of our algorithm on a variety of many-core architectures including NVIDIA GPUs, conventional CPUs, and Intel's Knights Landing. Our preliminary results show the viability of second order methods for PIC applications on these architectures when compared to previous generations of many-core hardware. Specifically, we see an order of magnitude improvement in performance for second order methods between the Tesla K20 and Tesla P100 GPU devices, despite only a 4× improvement in the theoretical peak performance between the devices. Although these initial results show a large increase in runtime over first order methods, we hope to be able to show improved scaling behaviour and increased simulation accuracy in the future.
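    The abstract does not spell out the ghost-particle construction, but a standard way to obtain second-order charge deposition is a quadratic B-spline shape function that spreads each particle's charge over the three nearest cells; the 1D sketch below shows that weighting as a stand-in for the idea of representing a particle by a small charge distribution rather than a point. The function name and the assumption of an interior cell are illustrative, and the actual ghost-particle scheme in the paper may differ.

        #include <cmath>
        #include <cstddef>
        #include <vector>

        // Second-order (quadratic B-spline) charge deposition in 1D: each
        // particle contributes to the three cells nearest its position, with
        // weights that sum to one. Assumes the particle sits in an interior
        // cell so that ic-1 and ic+1 are valid indices.
        void deposit_quadratic(double x, double q, double dx,
                               std::vector<double>& rho)
        {
            double xg = x / dx;                     // position in grid units
            int    ic = (int)std::floor(xg + 0.5);  // nearest cell index
            double d  = xg - ic;                    // offset in [-0.5, 0.5)
            double w[3] = { 0.5 * (0.5 - d) * (0.5 - d),
                            0.75 - d * d,
                            0.5 * (0.5 + d) * (0.5 + d) };
            for (int s = -1; s <= 1; ++s)
                rho[(std::size_t)(ic + s)] += q * w[s + 1];
        }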

    Understanding communication patterns in HPCG

    Conjugate Gradient (CG) algorithms form a large part of many HPC applications; examples include bioinformatics and weather applications. These algorithms provide numerical solutions to complex linear systems. Understanding how distributed implementations of these algorithms use a network interconnect allows system designers to gain deeper insight into the exacting requirements of existing and future applications. This short paper documents our initial investigation into the communication patterns present in the High Performance Conjugate Gradient (HPCG) benchmark. Through our analysis, we identify patterns and features which may warrant further investigation to improve the performance of CG algorithms and of applications which make extensive use of them. In this paper, we capture communication traces from runs of the HPCG benchmark at a variety of different processor counts and then examine this data to identify potential performance bottlenecks. Initial results show that network throughput falls as more processes communicate with one another, due to network contention.
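    The dominant communication pattern in a distributed CG solver such as HPCG is a nearest-neighbour halo exchange before each sparse matrix-vector product, plus global reductions for the dot products that determine the CG coefficients. The MPI sketch below shows that skeleton; it is a generic illustration of the pattern being traced, not code from the HPCG benchmark, and the neighbour lists, buffer layout and function name are placeholders.

        #include <cstddef>
        #include <mpi.h>
        #include <vector>

        // Skeleton of one iteration's communication: exchange boundary (halo)
        // values with each neighbouring rank, then reduce a local dot product
        // to a global one. HPCG derives its actual neighbour lists and buffer
        // sizes from its 3D domain decomposition.
        double cg_comm_step(const std::vector<int>& neighbours,
                            std::vector<std::vector<double>>& send,
                            std::vector<std::vector<double>>& recv,
                            double local_dot)
        {
            std::vector<MPI_Request> reqs;
            for (std::size_t n = 0; n < neighbours.size(); ++n) {
                MPI_Request r;
                MPI_Irecv(recv[n].data(), (int)recv[n].size(), MPI_DOUBLE,
                          neighbours[n], 0, MPI_COMM_WORLD, &r);
                reqs.push_back(r);
                MPI_Isend(send[n].data(), (int)send[n].size(), MPI_DOUBLE,
                          neighbours[n], 0, MPI_COMM_WORLD, &r);
                reqs.push_back(r);
            }
            MPI_Waitall((int)reqs.size(), reqs.data(), MPI_STATUSES_IGNORE);

            // Global reduction for the dot product needed by CG's alpha/beta.
            double global_dot = 0.0;
            MPI_Allreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM,
                          MPI_COMM_WORLD);
            return global_dot;
        }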

    Iowa Food Security, Insecurity and Hunger—Emergency Food Resources: Meeting Food Needs of Iowa Households

    Report of an ISU Extension study of people who used food pantries in Polk, Scott, Decatur, and Monroe counties in 2002.

    A model of meta-population dynamics for North Sea and West of Scotland cod - the dynamic consequences of natal fidelity

    It is clear from a variety of data that cod (Gadus morhua) in the North Sea do not constitute a homogeneous population that will rapidly redistribute in response to local variability in exploitation. Hence, local exploitation has the potential to deplete local populations, perhaps to the extent that depensation occurs and recovery is impossible without recolonisation from other areas, with consequent loss of genetic diversity. The oceanographic, biological and behavioural processes which maintain the spatial population structures are only partly understood, and one of the key unknown factors is the extent to which cod exhibit homing migrations to natal spawning areas. Here, we describe a model comprising 10 interlinked demes of cod in European waters, each representing groups of fish with a common natal origin. The spawning locations of fish in each deme are governed by a variety of rules concerning oceanographic dispersal, migration behaviour and straying. We describe numerical experiments with the model and comparisons with observations, which lead us to conclude that active homing is probably not necessary to explain some of the population structures of European cod. Separation of some sub-populations is possible through distance and oceanographic processes affecting the dispersal of eggs and larvae. However, other evidence suggests that homing may be a necessary behaviour to explain the structure of other sub-populations. The consequences for fisheries management of taking into account spatial population structuring are complicated. For example, recovery or recolonisation strategies require consideration not only of mortality rates in the target area for restoration, but also in the source areas for the recruits, which may be far removed depending on the oceanography. The model has an inbuilt capability to address issues concerning the effects of climate change, including temperature change, on spatial patterns of recruitment, development and population structure in cod.
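    As a purely schematic illustration of this kind of structure (not the authors' model), a system of 10 demes can be advanced one year at a time using a dispersal/straying matrix that redistributes recruits among natal areas; the Ricker-type growth term, the matrix convention and all parameter names below are assumptions made only to show the shape of such an update.

        #include <array>
        #include <cmath>

        constexpr int kDemes = 10;  // one deme per natal spawning area

        // One annual update of a toy meta-population: local Ricker-type
        // recruitment within each deme, followed by redistribution of the
        // recruits according to a straying/dispersal matrix D, where D[i][j]
        // is the fraction of deme j's recruits that settle in deme i.
        std::array<double, kDemes>
        step(const std::array<double, kDemes>& n,
             const std::array<std::array<double, kDemes>, kDemes>& D,
             double r, double K)
        {
            std::array<double, kDemes> recruits{}, next{};
            for (int j = 0; j < kDemes; ++j)
                recruits[j] = n[j] * std::exp(r * (1.0 - n[j] / K));
            for (int i = 0; i < kDemes; ++i)
                for (int j = 0; j < kDemes; ++j)
                    next[i] += D[i][j] * recruits[j];
            return next;
        }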