    Trajectory Mapping and Applications to Data from the Upper Atmosphere Research Satellite

    The problem of creating synoptic maps from asynoptically gathered trace gas data has prompted the development of a number of schemes. Most notable among these schemes are the Kalman filter, the Salby-Fourier technique, and constituent reconstruction. This paper explores a new technique called trajectory mapping. Trajectory mapping creates synoptic maps from asynoptically gathered data by advecting measurements backward or forward in time using analyzed wind fields. A significant portion of this work is devoted to an analysis of errors in synoptic trajectory maps associated with the calculation of individual parcel trajectories. In particular, we have considered (1) calculational errors, (2) uncertainties in the values and locations of constituent measurements, (3) errors incurred by neglecting diabatic effects, and (4) sensitivity to differences in wind field analyses. These studies reveal that the global fields derived from the advection of large numbers of measurements are relatively insensitive to the errors in the individual trajectories. The trajectory mapping technique has been successfully applied to a variety of problems. In this paper, two applications demonstrate the usefulness of the technique: an analysis of dynamical wave-breaking events and an examination of Upper Atmosphere Research Satellite data accuracy.
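
    To make the advection step concrete, here is a minimal sketch of trajectory mapping: each asynoptic measurement is carried backward or forward to a common synoptic time by integrating its parcel position through an analyzed wind field. The `wind` function below is a hypothetical stand-in for gridded wind analyses, and the fourth-order Runge-Kutta integrator and step size are illustrative choices, not the paper's exact scheme.

```python
# Minimal trajectory-mapping sketch: advect each measurement to a common
# synoptic time through an analyzed wind field.
import numpy as np

def wind(t, lon, lat):
    """Hypothetical analyzed wind field: a zonal jet plus a stationary
    wave, returning (u, v) in degrees per hour for simplicity."""
    u = 1.5 + 0.5 * np.cos(np.radians(lat))   # eastward component
    v = 0.3 * np.sin(np.radians(3.0 * lon))   # northward component
    return u, v

def advect(lon, lat, t_obs, t_synoptic, dt=0.25):
    """Advect a parcel from its observation time to the synoptic time with
    fourth-order Runge-Kutta steps; dt is in hours, and the step is signed
    so trajectories run backward or forward as needed."""
    n_steps = int(abs(t_synoptic - t_obs) / dt)
    h = np.sign(t_synoptic - t_obs) * dt
    t = t_obs
    for _ in range(n_steps):
        k1u, k1v = wind(t, lon, lat)
        k2u, k2v = wind(t + h / 2, lon + h / 2 * k1u, lat + h / 2 * k1v)
        k3u, k3v = wind(t + h / 2, lon + h / 2 * k2u, lat + h / 2 * k2v)
        k4u, k4v = wind(t + h, lon + h * k3u, lat + h * k3v)
        lon += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        lat += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return lon % 360.0, np.clip(lat, -90.0, 90.0)

# Usage: map three asynoptic measurements onto a single synoptic map at t=12h.
measurements = [  # (lon, lat, observation time in hours, mixing ratio)
    (10.0, 45.0, 0.0, 3.1),
    (120.0, 50.0, 6.0, 2.8),
    (250.0, 40.0, 18.0, 3.4),
]
for lon, lat, t_obs, value in measurements:
    lon_s, lat_s = advect(lon, lat, t_obs, t_synoptic=12.0)
    print(f"value {value} mapped to ({lon_s:.1f}E, {lat_s:.1f}N) at t=12h")
```

    Because every measurement is advected independently, errors in any single trajectory perturb only one point of the resulting map, which is consistent with the paper's finding that the global fields are relatively insensitive to individual trajectory errors.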

    Dose calculations in aircrafts after Fukushima nuclear power plant accident – Preliminary study for aviation operations

    Little information is available to support air traffic management decisions in the event of a nuclear release into the atmosphere. In this paper, the dose to passengers and crew from both external exposure (i.e., cloud immersion and deposition inside and outside the aircraft) and internal exposure (i.e., inhalation of radionuclides inside the aircraft) is calculated for a worst-case emergency scenario. The doses are calculated for different radionuclides and activities. The calculations mainly follow International Commission on Radiological Protection (ICRP) recommendations and use Monte Carlo simulations. In addition, we discuss potential detectors installed inside the aircraft for monitoring the aerosol concentration and the ambient dose equivalent rate, H*(10), for in-flight monitoring and early warning, together with an evaluation of the response of a generic detector. The results show that the probability that a catastrophic nuclear accident would produce significant radiological doses to the passengers and crew of an aircraft is very low. In the worst-case scenarios studied, the maximum estimated effective dose was about 1 mSv, incurred during take-off or landing operations, which is the recommended yearly dose limit for the public. However, in order to follow the ALARA (As Low As Reasonably Achievable) criterion and to avoid aircraft contamination, the installation of radiological detectors is considered. This would, on the one hand, help the pilot or the corresponding decision maker decide whether to change the route and, on the other, allow the gathering of 4D data for future studies.
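
    As a rough illustration of how the two exposure pathways combine, the sketch below adds an immersion term to an inhalation term. All coefficients (`inhalation_coeff_sv_bq`, `immersion_coeff_sv_h`, the breathing rate, and the cabin infiltration factor) are order-of-magnitude placeholders, not authoritative ICRP values; the paper itself relies on ICRP dose coefficients and Monte Carlo transport.

```python
# Back-of-the-envelope model of the two pathways described above: external
# exposure from immersion in the plume and internal exposure from inhaling
# cabin air. Coefficients are illustrative placeholders, NOT ICRP values.

def effective_dose_sv(air_conc_bq_m3, exposure_h,
                      inhalation_coeff_sv_bq,   # Sv per Bq inhaled (assumed)
                      immersion_coeff_sv_h,     # Sv/h per Bq/m^3 (assumed)
                      breathing_rate_m3_h=1.2,  # adult, light activity (assumed)
                      cabin_factor=0.5):        # fraction of outside activity
                                                # reaching the cabin (assumed)
    cabin_conc = air_conc_bq_m3 * cabin_factor
    inhalation = (cabin_conc * breathing_rate_m3_h
                  * exposure_h * inhalation_coeff_sv_bq)
    immersion = air_conc_bq_m3 * immersion_coeff_sv_h * exposure_h
    return inhalation + immersion

# Hypothetical worst-case plume crossing: 30 minutes in 1e5 Bq/m^3 of a
# single nuclide, with placeholder dose coefficients.
dose = effective_dose_sv(air_conc_bq_m3=1e5, exposure_h=0.5,
                         inhalation_coeff_sv_bq=5e-9,
                         immersion_coeff_sv_h=1e-10)
print(f"estimated effective dose: {dose * 1e3:.3f} mSv")  # sub-mSv scale
```

    Even with these deliberately pessimistic placeholder inputs, the result lands below 1 mSv, in line with the abstract's conclusion that significant doses to passengers and crew are very unlikely.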

    A Reconfigurable Vector Instruction Processor for Accelerating a Convection Parametrization Model on FPGAs

    High Performance Computing (HPC) platforms allow scientists to model computationally intensive algorithms. HPC clusters increasingly use General-Purpose Graphics Processing Units (GPGPUs) as accelerators; FPGAs provide an attractive alternative to GPGPUs for use as co-processors, but they are still far from mainstream due to a number of challenges faced when using FPGA-based platforms. Our research aims to make FPGA-based high performance computing more accessible to the scientific community. In this work we present the results of investigating the acceleration of a particular atmospheric model, Flexpart, on FPGAs, focusing on the most computationally intensive kernel from this model. The key contribution of our work is the architectural exploration we undertook to arrive at a solution that best exploits the parallelism available in the legacy code and is also convenient to program, so that the compilation of high-level legacy code to our architecture can eventually be fully automated. We present three different architectures, comparing their resource utilization and performance, and propose that an architecture with a number of computational cores, each built along the lines of a vector instruction processor, works best in this particular scenario and is a promising candidate for a generic FPGA-based platform for scientific computation. We also present the results of experiments with various configuration parameters of the proposed architecture, to show its utility in adapting to a range of scientific applications. Comment: This is an extended pre-print version of work presented at the International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies (HEART2014), Sendai, Japan, June 9–11, 2014.
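
    The proposed design is easiest to picture as several cores, each executing vector instructions over fixed-length registers. The Python model below is a didactic sketch of that idea only, not the authors' FPGA implementation: the four-instruction ISA (VLOAD/VADD/VMUL/VSTORE), the register count, and the vector length are all invented for illustration.

```python
# Conceptual software model of a multi-core vector instruction processor.
# Each core runs the same short vector program over its own strip of data.
import numpy as np

VLEN = 8  # hardware vector length (assumed)

class VectorCore:
    def __init__(self, memory):
        self.mem = memory                # shared "DRAM"
        self.vreg = np.zeros((4, VLEN))  # 4 vector registers (assumed)

    def run(self, program, base):
        """Execute a vector program on one VLEN-wide strip at `base`."""
        for op, *args in program:
            if op == "VLOAD":            # vreg[d] <- mem[addr : addr+VLEN]
                d, addr = args
                self.vreg[d] = self.mem[base + addr : base + addr + VLEN]
            elif op == "VADD":           # vreg[d] <- vreg[a] + vreg[b]
                d, a, b = args
                self.vreg[d] = self.vreg[a] + self.vreg[b]
            elif op == "VMUL":           # vreg[d] <- vreg[a] * vreg[b]
                d, a, b = args
                self.vreg[d] = self.vreg[a] * self.vreg[b]
            elif op == "VSTORE":         # mem[addr : addr+VLEN] <- vreg[d]
                d, addr = args
                self.mem[base + addr : base + addr + VLEN] = self.vreg[d]

# Kernel y[i] += x[i] * x[i] expressed as vector instructions, with x and
# y laid out back to back in memory (x = mem[0:32], y = mem[32:64]).
program = [
    ("VLOAD", 0, 0),    # v0 <- x strip
    ("VLOAD", 1, 32),   # v1 <- y strip
    ("VMUL", 2, 0, 0),  # v2 <- x * x
    ("VADD", 1, 1, 2),  # v1 <- y + v2
    ("VSTORE", 1, 32),  # write y strip back
]

memory = np.arange(64, dtype=float)
cores = [VectorCore(memory) for _ in range(4)]  # 4 cores, one strip each
for i, core in enumerate(cores):
    core.run(program, base=i * VLEN)
print(memory[32:40])  # first strip of the updated y
```

    Striping the data across cores, as in the final loop, mirrors how independent vector cores on an FPGA can each process a disjoint slice of a data-parallel kernel, which is the parallelism pattern the paper exploits in the Flexpart kernel.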