
    Enhancing Energy Production with Exascale HPC Methods

    High Performance Computing (HPC) resources have become the key enabler for tackling ever more ambitious challenges in many disciplines. In this next step, an explosion of available parallelism and the use of special-purpose processors are crucial. With this goal, the HPC4E project applies new exascale HPC techniques to energy industry simulations, customizing them where necessary and going beyond the state of the art in the HPC exascale simulations required for different energy sources. In this paper, a general overview of these methods is presented, as well as some specific preliminary results. The research leading to these results has received funding from the European Union's Horizon 2020 Programme (2014-2020) under the HPC4E Project (www.hpc4e.eu), grant agreement n° 689772, the Spanish Ministry of Economy and Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster was provided by Intel Corporation, which enabled us to obtain the presented experimental results in uncertainty quantification in seismic imaging. Postprint (author's final draft).

    High-Dimensional Dependency Structure Learning for Physical Processes

    In this paper, we consider the use of structure learning methods for probabilistic graphical models to identify statistical dependencies in high-dimensional physical processes. Such processes are often synthetically characterized using partial differential equations (PDEs) and are observed in a variety of natural phenomena, including geoscience data capturing atmospheric and hydrological phenomena. Classical structure learning approaches such as the PC algorithm and its variants are challenging to apply due to their high computational and sample requirements. Modern approaches, often based on sparse regression and variants, do come with finite-sample guarantees, but are usually highly sensitive to the choice of hyper-parameters, e.g., the parameter λ for the sparsity-inducing constraint or regularization. In this paper, we present ACLIME-ADMM, an efficient two-step algorithm for adaptive structure learning, which estimates an edge-specific parameter λ_ij in the first step and uses these parameters to learn the structure in the second step. Both steps of our algorithm use (inexact) ADMM to solve suitable linear programs, and all iterations can be done in closed form in an efficient block-parallel manner. We compare ACLIME-ADMM with baselines on both synthetic data simulated by PDEs that model advection-diffusion processes, and real data (50 years) of daily global geopotential heights to study information flow in the atmosphere. ACLIME-ADMM is shown to be efficient, stable, and competitive, usually better than the baselines, especially on difficult problems. On real data, ACLIME-ADMM recovers the underlying structure of global atmospheric circulation, including switches in wind directions at the equator and in the tropics, entirely from the data. Comment: 21 pages, 8 figures, International Conference on Data Mining 201
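
    The two-step idea above (estimate an edge-specific penalty λ_ij first, then use it to decide which dependencies to keep) can be illustrated with a minimal sketch. This is not the paper's ACLIME-ADMM linear-program/ADMM solver: the plug-in rule for λ_ij, the thresholding step, and the constant c are simplifying assumptions made only for illustration.

    # Hypothetical sketch of adaptive, edge-specific regularization for
    # dependency-structure recovery; NOT the ACLIME-ADMM algorithm itself.
    import numpy as np

    def adaptive_structure(X, c=2.0):
        """X: (n_samples, p) data matrix. Returns a boolean adjacency matrix."""
        n, p = X.shape
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc / n                          # empirical covariance

        # Step 1: edge-specific penalty lambda_ij from a plug-in estimate of
        # the standard error of each empirical covariance entry.
        prod = Xc[:, :, None] * Xc[:, None, :]     # per-sample products x_i * x_j
        theta = (prod ** 2).mean(axis=0) - S ** 2  # Var(x_i * x_j) estimate
        lam = c * np.sqrt(np.maximum(theta, 0.0) * np.log(p) / n)

        # Step 2: keep edges whose empirical covariance exceeds its own penalty.
        A = np.abs(S) > lam
        np.fill_diagonal(A, False)
        return A

    # Usage on, e.g., synthetic advection-diffusion data of shape (n, p):
    # A_hat = adaptive_structure(X_simulated)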

    NASA Thesaurus supplement: A four part cumulative supplement to the 1988 edition of the NASA Thesaurus (supplement 3)

    The four-part cumulative supplement to the 1988 edition of the NASA Thesaurus includes the Hierarchical Listing (Part 1), Access Vocabulary (Part 2), Definitions (Part 3), and Changes (Part 4). The semiannual supplement gives complete hierarchies and accepted upper/lowercase forms for new terms.

    Methodology for tidal turbine representation in ocean circulation model

    The present method proposes the use and adaptation of ocean circulation models as an assessment framework for tidal current turbine (TCT) array layout optimization. By adapting both the momentum and turbulence transport equations of an existing model, the present TCT representation method extends the actuator disc concept to 3-D large-scale ocean circulation models. Through the reproduction of experimental flume tests and grid dependency tests, this method has shown its numerical coherence as well as its ability to accurately simulate the turbine-induced perturbations of both momentum and turbulence in the near and far wakes within a relatively short computation time. Consequently, the present TCT representation method is a very promising basis for the development of a TCT array layout optimization tool.
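
    A common way to inject a turbine into a circulation model's momentum equation is an actuator-disc momentum sink; the sketch below illustrates that general idea only. The exact source terms used in the paper (including the matching turbulence terms) may differ, and the function name, arguments, and numbers here are illustrative assumptions.

    # Minimal sketch of an actuator-disc momentum sink for one grid cell
    # crossed by the turbine disc. Illustrative only; not the paper's exact
    # formulation, which also modifies the turbulence transport equations.
    def momentum_sink(u, rho, c_t, a_disc_cell, v_cell):
        """Volumetric momentum sink (N/m^3) opposing the local flow.

        u            -- streamwise velocity in the cell (m/s)
        rho          -- water density (kg/m^3)
        c_t          -- turbine thrust coefficient (dimensionless)
        a_disc_cell  -- disc area intersecting this cell (m^2)
        v_cell       -- cell volume (m^3)
        """
        # Disc thrust 0.5 * rho * C_T * A * u|u|, spread over the cell volume.
        return -0.5 * rho * c_t * a_disc_cell * u * abs(u) / v_cell

    # Example: seawater at 1025 kg/m^3, C_T = 0.8, 2 m/s flow, 10 m^2 of disc
    # area in a 1000 m^3 cell gives a sink of about -16.4 N/m^3.
    # print(momentum_sink(2.0, 1025.0, 0.8, 10.0, 1000.0))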

    A Performance Evaluation Method for Climate Coupled Models

    Get PDF
    In the High Performance Computing context, the performance evaluation of a parallel algorithm is carried out mainly by considering the elapsed time of the parallel application for different numbers of cores and different problem sizes (for scaled speedup). Typically, parallel applications embed mechanisms to use the allocated resources efficiently, guaranteeing for example a good load balance and reducing the parallel overhead. Unfortunately, this assumption does not hold for coupled models. These models were born from the coupling of stand-alone climate models. The component models are developed independently of each other and follow different development roadmaps. Moreover, they are characterized by different levels of parallelization as well as different workload requirements, and each has its own scalability curve. Considering a coupled model as a single parallel application, we note the lack of a policy for balancing the computational load across the available resources. This work addresses the issues related to the performance evaluation of a coupled model and answers the following questions: once a given number of processors has been allocated to the whole coupled model, how should the run be configured to balance the workload? How many processors must be assigned to each of the component models? The methodology described here has been applied to evaluate the scalability of the CMCC-MED coupled model designed by the ANS Division of the CMCC. The evaluation has been carried out on two different computational architectures: a scalar cluster based on IBM Power6 processors, and a vector cluster based on NEC-SX9 processors.
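
    The load-balancing question posed above can be made concrete with a small sketch: given measured scalability curves (elapsed time versus core count) for two component models, choose the split of a fixed core budget that minimizes the slower component's runtime, since the coupled run advances at the pace of its slowest member. This is an assumed formulation for illustration; the timing tables below are placeholders, not CMCC-MED measurements.

    # Sketch: balance a fixed core budget between two coupled components
    # using their (here invented) scalability curves.
    import numpy as np

    def best_split(n_total, cores_a, time_a, cores_b, time_b):
        """Return (cores for model A, cores for model B, coupled step time)."""
        def t_a(p):  # interpolate model A's scalability curve
            return np.interp(p, cores_a, time_a)

        def t_b(p):  # interpolate model B's scalability curve
            return np.interp(p, cores_b, time_b)

        # The coupled step advances at the speed of the slower component.
        candidates = [(max(t_a(p), t_b(n_total - p)), p) for p in range(1, n_total)]
        t_best, p_best = min(candidates)
        return p_best, n_total - p_best, t_best

    # Illustrative (invented) curves; real input would come from benchmark runs.
    cores = [8, 16, 32, 64, 128]
    time_ocean = [400, 210, 120, 75, 55]      # placeholder seconds per step
    time_atmos = [900, 480, 260, 150, 100]    # placeholder seconds per step
    print(best_split(128, cores, time_ocean, cores, time_atmos))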