Benchmark Chemical Systems and Simulation Parameters
Performance results were measured for two chemical systems that are commonly used for MD code benchmarking, and for one additional system that is characteristic of free energy perturbation (FEP) simulations.
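A common way to report such measurements is simulated nanoseconds per day of wall-clock time. The sketch below shows that conversion; the function name and all numbers are illustrative, not taken from the benchmark itself.

```python
# Hypothetical helper: convert raw MD timings into the ns/day figure
# commonly reported in MD benchmarks (names and numbers illustrative).

def ns_per_day(n_steps: int, dt_fs: float, wall_seconds: float) -> float:
    """Simulated nanoseconds per day of wall-clock time."""
    simulated_ns = n_steps * dt_fs * 1e-6        # fs -> ns
    return simulated_ns / wall_seconds * 86400.0  # seconds per day

# Example: 10,000 steps at a 2 fs timestep finishing in 120 s of wall
# time covers 0.02 ns of trajectory, i.e. 14.4 ns/day.
print(ns_per_day(10_000, 2.0, 120.0))  # 14.4
```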
Simulation of lean premixed turbulent combustion
There is considerable technological interest in developing new fuel-flexible combustion systems that can burn fuels such as hydrogen or syngas. Lean premixed systems have the potential to burn these types of fuels with high efficiency and low NOx emissions due to reduced burnt gas temperatures. Although traditional scientific approaches based on theory and laboratory experiment have played essential roles in developing our current understanding of premixed combustion, they are unable to meet the challenges of designing fuel-flexible lean premixed combustion devices. Computation, with its ability to deal with complexity and its unlimited access to data, has the potential for addressing these challenges. Realizing this potential requires the ability to perform high-fidelity simulations of turbulent lean premixed flames under realistic conditions. In this paper, we examine the specialized mathematical structure of these combustion problems and discuss simulation approaches that exploit this structure. Using these ideas we can dramatically reduce computational cost, making it possible to perform high-fidelity simulations of realistic flames. We illustrate this methodology by considering ultra-lean hydrogen flames and discuss how this type of simulation is changing the way researchers study combustion.
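The abstract leaves the "specialized mathematical structure" unstated. One standard structure exploited for low-speed flames, offered here as an illustrative assumption rather than this paper's exact formulation, is the low Mach number pressure decomposition:

```latex
% Low Mach number decomposition (a standard formulation for this class
% of problem; an illustration, not necessarily the paper's exact model):
p(\mathbf{x},t) = p_0(t) + \pi(\mathbf{x},t),
\qquad \pi / p_0 = O(M^2).
% Discarding the O(M^2) terms filters acoustic waves, so the stable
% time step is set by the fluid speed U rather than the sound speed c:
\Delta t \lesssim \frac{\Delta x}{|U|}
\quad\text{rather than}\quad
\Delta t \lesssim \frac{\Delta x}{|U| + c},
% roughly a factor 1/M fewer steps for a flame at Mach number M.
```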
A scalable parallel framework for analyzing terascale molecular dynamics simulation trajectories
As parallel algorithms and architectures drive the longest molecular dynamics (MD) simulations towards the millisecond scale, traditional sequential post-simulation data analysis methods are becoming increasingly untenable. Inspired by the programming interface of Google’s MapReduce, we have built a new parallel analysis framework called HiMach, which allows users to write trajectory analysis programs sequentially and carries out the parallel execution of the programs automatically. We introduce (1) a new MD trajectory data analysis model that is amenable to parallel processing, (2) a new interface for defining trajectories to be analyzed, (3) a novel method to make use of an existing sequential analysis tool called VMD, and (4) an extension to the original MapReduce model to support multiple rounds of analysis. Performance evaluations on up to 512 cores demonstrate the efficiency and scalability of the HiMach framework on a Linux cluster.
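To make the map/reduce division of labor concrete, here is a minimal sketch of per-frame trajectory analysis in that style. The frame layout, the toy observable, and the multiprocessing driver are hypothetical stand-ins, not HiMach's actual API.

```python
# MapReduce-style trajectory analysis: the user writes sequential
# per-frame (map) and aggregation (reduce) steps; a driver parallelizes
# the embarrassingly parallel map phase across frames.
from multiprocessing import Pool

def map_frame(frame):
    """Per-frame analysis: here, the mean x-coordinate of all atoms."""
    xs = [x for (x, y, z) in frame["coords"]]
    return frame["index"], sum(xs) / len(xs)

def reduce_results(pairs):
    """Order per-frame values by frame index into a time series."""
    return [value for _, value in sorted(pairs)]

if __name__ == "__main__":
    # Two tiny synthetic "frames"; a real trajectory would stream
    # millions of frames from disk across many nodes.
    frames = [
        {"index": 0, "coords": [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]},
        {"index": 1, "coords": [(1.0, 0.0, 0.0), (3.0, 0.0, 0.0)]},
    ]
    with Pool(2) as pool:
        print(reduce_results(pool.map(map_frame, frames)))  # [1.0, 2.0]
```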
Price Discovery in the Foreign Currency Futures and Spot Market
In this paper, we compare price discovery in the foreign exchange futures and spot markets during a period in which the spot market was less transparent but had higher volume than the futures market. We develop a foreign exchange futures order flow measure that is a proxy for the order flow observed by Chicago Mercantile Exchange pit traders. We find that both foreign currency futures and spot order flow contain unique information relevant to exchange rate determination. When we measure contributions to price discovery using the methods of Hasbrouck (1995) and Gonzalo and Granger (1995), we obtain results consistent with our order flow findings. Taken together, our evidence suggests that the amount of information contained in currency futures prices in 1996 is much greater than one would expect based on relative market size. Using data from 2006, we obtain quite different results, perhaps because of an increase in spot market transparency. In particular, we find in our more recent sample that the spot market has the dominant information share.
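For readers unfamiliar with the Hasbrouck (1995) measure mentioned above, its standard form is compact; the notation below is generic and not reproduced from this paper.

```latex
% Hasbrouck (1995) information share in its standard form. Let \psi be
% the common row of the long-run impact matrix from the VECM of the
% cointegrated price series, and \Omega the covariance matrix of the
% price innovations. With \Omega diagonal, market j's share of the
% variance of the common (random-walk) component is
\mathrm{IS}_j \;=\; \frac{\psi_j^{2}\,\Omega_{jj}}{\psi\,\Omega\,\psi^{\top}} .
% When innovations are correlated, a Cholesky factorization of \Omega
% yields upper and lower bounds that depend on the ordering of markets.
```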
The economy-wide impact of the U.S. sugar program
This paper uses a general equilibrium approach to show how the gains and losses from a change in the U.S. sugar program are distributed in the short run. The purpose for undertaking this study was to provide a systematic, economy-wide analysis of the Sugar Program with enough detail to make it useful as a basis for policy discussion. Relaxing the sugar quota does not have uniform effects among economic actors in the sugar sector. Cane and beet production both suffer a decline in price and a corresponding drop in output. Land used in cane and beet production sees an even more dramatic price drop. Cane milling and beet processing production likewise decline. Sugar refining increases its output and its profits; it is estimated that without a quota, refining will reach its short-run productive capacity and begin earning economic rents. The wet corn milling industry would be a loser from the removal of the quota: going to free trade in sugar causes its output to fall about 2%. These results may be more or less severe depending on whether the elasticity of substitution between sweeteners is assumed to be smaller or larger. Cutting off sugar imports, on the other hand, is predicted to increase short-run wet milling output and HFCS prices. Using 1982 data, output increases 9% and the sweetener price 31%; byproduct prices fall 31%. Incorporating 1988 assumptions causes output at autarky to rise 5% and the HFCS price 15%; the byproduct price falls 20%. Thus it seems that the HFCS industry can gain a great deal from a tighter quota, but would not be greatly hurt by its removal. The quota's impact on grain producers does not match that of the wet corn milling industry: no amount of sugar quota intervention is expected to significantly change the price of feed grains (including corn) in the short run. Domestic consumers would be better off without the quota. In addition to not having to pay $1.59 billion in total
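Because the reported magnitudes hinge on the elasticity of substitution between sweeteners, it may help to see where that parameter enters; the block below is a generic CES aggregator, an illustrative sketch rather than the paper's actual functional form.

```latex
% Generic CES aggregator over the two sweeteners (illustrative only):
S \;=\; \Bigl( \alpha\, Q_{\text{sugar}}^{\rho}
        \;+\; (1-\alpha)\, Q_{\text{HFCS}}^{\rho} \Bigr)^{1/\rho},
\qquad \sigma \;=\; \frac{1}{1-\rho}.
% A higher elasticity \sigma means buyers switch between sugar and HFCS
% more readily, which amplifies the quota's cross-market price effects.
```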
Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms
Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform-grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures has become available that promises to achieve extremely high sustained performance for a wide range of applications, and these architectures are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.
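The coarse-solve-then-refine strategy described above can be sketched in a few lines; the 1-D interval grid and the error indicator below are simplified stand-ins, not HyperCLaw's data structures.

```python
# Schematic AMR: start from a coarse grid and refine only those cells
# whose estimated error exceeds a threshold, building a grid hierarchy.

def split(cell):
    """Split a 1-D cell (lo, hi) into two halves."""
    lo, hi = cell
    mid = 0.5 * (lo + hi)
    return [(lo, mid), (mid, hi)]

def refine(grid, indicator, threshold, max_levels):
    """Return a list of (level, cells) pairs forming a grid hierarchy."""
    hierarchy = [(0, grid)]
    for level in range(1, max_levels):
        # Flag cells whose estimated error exceeds the threshold.
        flagged = [c for c in grid if indicator(c) > threshold]
        if not flagged:
            break
        grid = [half for c in flagged for half in split(c)]
        hierarchy.append((level, grid))
    return hierarchy

# Example: refine near x = 0.5, where this toy indicator peaks.
coarse = [(i / 4, (i + 1) / 4) for i in range(4)]
peak = lambda c: 1.0 / (abs(0.5 * sum(c) - 0.5) + 0.05)
print(refine(coarse, peak, threshold=5.0, max_levels=3))
```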