    Total Exchange Performance Prediction on Grid Environments: modeling and algorithmic issues

    One of the most important collective communication patterns used in scientific applications is the complete exchange, also called All-to-All. Although efficient algorithms have been studied for specific networks, general solutions like those available in well-known MPI distributions (e.g. the MPI_Alltoall operation) are strongly affected by congestion of network resources. In this paper we address the problem of modeling the performance of Total Exchange communication operations in grid environments. Because traditional performance models are unable to predict the real completion time of an All-to-All operation, we try to cope with this problem by identifying the factors that can interfere with both local and distant transmissions. We observe that the traditional MPI_Alltoall implementation is not suited to grid environments, as it is both inefficient and hard to model. We therefore focus on an alternative algorithm for the total exchange redistribution problem. In our approach we perform communications in two distinct phases, aiming to minimize the number of communication steps through the wide-area network. This reduction has a direct impact on the performance modeling of the MPI_Alltoall operation, as we minimize the factors that interfere with wide-area communications. Hence, we are able to define an accurate performance model of a total exchange between two clusters.
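
    As a concrete illustration of the two-phase idea described above, the following is a minimal sketch in Python with mpi4py, not the authors' implementation: each rank first aggregates its outgoing messages at a per-cluster leader, the leaders then perform a single wide-area exchange, and finally each leader redistributes the incoming messages locally. The function name two_phase_alltoall, the cluster_id argument and the use of object-based (pickle) collectives are illustrative assumptions.

    from mpi4py import MPI

    def two_phase_alltoall(data, cluster_id, comm=MPI.COMM_WORLD):
        """data[j] is the message this rank wants to deliver to global rank j."""
        rank = comm.Get_rank()

        # Intra-cluster communicator (one per cluster_id) and a leaders-only
        # communicator containing local rank 0 of every cluster, ordered by cluster_id.
        local = comm.Split(color=cluster_id, key=rank)
        is_leader = local.Get_rank() == 0
        leaders = comm.Split(color=0 if is_leader else MPI.UNDEFINED, key=cluster_id)

        cluster_of = comm.allgather(cluster_id)      # cluster of every global rank
        clusters = sorted(set(cluster_of))

        # Phase 1 (local): aggregate all outgoing messages at the cluster leader.
        gathered = local.gather([(rank, dest, msg) for dest, msg in enumerate(data)], root=0)

        received = None
        if is_leader:
            # Repackage by destination cluster, then perform ONE wide-area
            # exchange step between the cluster leaders (phase 2).
            buckets = {c: [] for c in clusters}
            for per_rank in gathered:
                for src, dest, msg in per_rank:
                    buckets[cluster_of[dest]].append((src, dest, msg))
            incoming = leaders.alltoall([buckets[c] for c in clusters])
            received = [t for part in incoming for t in part]

        # Phase 3 (local): deliver incoming messages to their final destination ranks.
        local_ranks = local.allgather(rank)          # global rank of each local rank
        per_dest = ([[(s, m) for s, d, m in received if d == g] for g in local_ranks]
                    if is_leader else None)
        mine = local.scatter(per_dest, root=0)
        return [m for _, m in sorted(mine)]          # one message per source rank, in rank order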

    Closing the loop between neural network simulators and the OpenAI Gym

    Since the enormous breakthroughs in machine learning over the last decade, functional neural network models are of growing interest for many researchers in the field of computational neuroscience. One major branch of research is concerned with biologically plausible implementations of reinforcement learning, with a variety of different models developed in recent years. However, most studies in this area are conducted with custom simulation scripts and manually implemented tasks. This makes it hard for other researchers to reproduce and build upon previous work and nearly impossible to compare the performance of different learning architectures. In this work, we present a novel approach to solve this problem, connecting benchmark tools from the field of machine learning and state-of-the-art neural network simulators from computational neuroscience. This toolchain enables researchers in both fields to make use of well-tested high-performance simulation software supporting biologically plausible neuron, synapse and network models and allows them to evaluate and compare their approaches on the basis of standardized environments of varying complexity. We demonstrate the functionality of the toolchain by implementing a neuronal actor-critic architecture for reinforcement learning in the NEST simulator and successfully training it on two different environments from the OpenAI Gym.
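
    The closed loop itself can be pictured with a short, purely illustrative Python fragment: a Gym environment supplies observations and rewards, while the agent side would be a spiking network simulated in NEST. The NestActorCritic class and its methods below are placeholders (here they merely sample random actions so the loop runs), not the toolchain's actual interface, and the snippet assumes the pre-0.26 Gym reset/step API.

    import gym

    class NestActorCritic:
        """Placeholder for a spiking actor-critic network built with the `nest`
        Python module; real code would create neurons, synapses and recorders."""
        def __init__(self, action_space):
            self.action_space = action_space

        def act(self, observation):
            # Real code: encode the observation as input spike rates, run the NEST
            # simulation for one decision interval, decode the chosen action.
            return self.action_space.sample()

        def learn(self, observation, action, reward, done):
            # Real code: apply reward-modulated plasticity to the NEST synapses.
            pass

    env = gym.make("CartPole-v1")
    agent = NestActorCritic(env.action_space)

    for episode in range(10):
        observation, done = env.reset(), False
        while not done:
            action = agent.act(observation)
            observation, reward, done, info = env.step(action)
            agent.learn(observation, action, reward, done)
    env.close()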

    Managing Uncertainty: A Case for Probabilistic Grid Scheduling

    Grid technology is evolving into a global, service-oriented architecture: a universal platform for delivering future high-demand computational services. Strong adoption of the Grid and the utility computing concept is leading to an increasing number of Grid installations running a wide range of applications of different size and complexity. In this paper we address the problem of delivering deadline/economy-based scheduling in a heterogeneous application environment, using statistical properties of historical job executions and their associated meta-data. This approach is motivated by a study of six months of computational load generated by Grid applications in a multi-purpose Grid cluster serving a community of twenty e-Science projects. The observed job statistics, resource utilisation and user behaviour are discussed in the context of the management approaches and models most suitable for supporting a probabilistic and autonomous scheduling architecture.
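
    A minimal sketch of the probabilistic idea, under assumptions not taken from the paper: the probability that a job meets its deadline on a given resource is estimated from the empirical distribution of historical runtimes for that job class, and the scheduler then picks the cheapest resource whose estimate clears a confidence threshold. The attribute names (history, expected_wait, cost) are hypothetical.

    from bisect import bisect_right

    def p_meets_deadline(history_runtimes, queue_wait, deadline):
        """Empirical probability of finishing before the deadline.
        history_runtimes: past runtimes (seconds) for this job class, sorted."""
        budget = deadline - queue_wait
        if budget <= 0 or not history_runtimes:
            return 0.0
        return bisect_right(history_runtimes, budget) / len(history_runtimes)

    def choose_resource(job, resources, min_confidence=0.9):
        """Cheapest resource whose estimated success probability is acceptable."""
        ok = [r for r in resources
              if p_meets_deadline(sorted(r.history[job.app]), r.expected_wait, job.deadline)
              >= min_confidence]
        return min(ok, key=lambda r: r.cost) if ok else None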

    Indoor mould growth prediction using coupled computational fluid dynamics and mould growth model

    This study investigates, using in-situ measurements and numerical simulation, the airflow and hygrothermal distribution in a mechanically ventilated academic research facility with known cases of microbial proliferation. Microclimate parameters obtained from the in-situ experiments were used as boundary conditions and for validation of the numerical experiments, carried out with a commercial computational fluid dynamics (CFD) analysis tool using the standard k–ε model. Good agreement was obtained, with less than 10% deviation between the measured and simulated results. Following successful validation, the model was used to investigate the hygrothermal and airflow profile within the shelves holding stored components in the facility. The predicted in-shelf hygrothermal profile was superimposed on a mould growth limiting curve previously documented in the literature. The results revealed growth of xerophilic species in most parts of the shelves. The mould growth prediction was found to correlate with the microbial investigation of the case-studied room reported by the authors elsewhere. The satisfactory prediction of mould growth in the room demonstrates that CFD simulation can be used to investigate the conditions that lead to microbial growth in the indoor environment.
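
    The superposition step can be sketched in Python as follows; the limiting-curve formula is purely illustrative and would have to be replaced by the published curve the study actually uses, and the sample temperatures and relative humidities are not taken from the paper.

    import numpy as np

    def critical_rh(temperature_c):
        """Illustrative lowest relative humidity (%) permitting mould growth at a
        given surface temperature; replace with the published limiting curve."""
        t = np.asarray(temperature_c, dtype=float)
        return np.clip(80.0 + 0.16 * np.maximum(20.0 - t, 0.0) ** 1.5, 75.0, 100.0)

    def growth_risk(temperature_c, rh_percent):
        """True wherever the CFD-predicted state lies above the limiting curve."""
        return np.asarray(rh_percent, dtype=float) >= critical_rh(temperature_c)

    # Example: hygrothermal values sampled from the CFD solution inside a shelf.
    temps = np.array([16.0, 18.5, 21.0])
    rhs = np.array([82.0, 76.0, 70.0])
    print(growth_risk(temps, rhs))   # -> [ True False False ]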

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services; hence the load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, offering significant performance gains in response time and cost savings under dynamic workload scenarios.
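
    The paper's evaluation uses the Java-based CloudSim toolkit; the fragment below is only a simplified sketch (written in Python for brevity) of the kind of load-coordination policy such a federation could apply, placing a new service request on the cheapest data center whose predicted utilisation stays below a threshold and deferring to the rest of the federation when the local cloud is saturated. All names and numbers are illustrative.

    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        capacity_vms: int
        running_vms: int
        cost_per_vm_hour: float

        def utilisation_after(self, extra_vms):
            return (self.running_vms + extra_vms) / self.capacity_vms

    def place_request(datacenters, vms_needed, max_utilisation=0.8):
        """Cheapest data center that can absorb the request without saturating."""
        candidates = [dc for dc in datacenters
                      if dc.utilisation_after(vms_needed) <= max_utilisation]
        if not candidates:
            return None                  # trigger federation-wide scale-out instead
        return min(candidates, key=lambda dc: dc.cost_per_vm_hour)

    federation = [
        DataCenter("local-eu", capacity_vms=100, running_vms=85, cost_per_vm_hour=0.09),
        DataCenter("partner-us", capacity_vms=200, running_vms=60, cost_per_vm_hour=0.11),
    ]
    print(place_request(federation, vms_needed=20).name)   # -> "partner-us"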

    Enhancing Energy Production with Exascale HPC Methods

    High Performance Computing (HPC) resources have become the key enabler for tackling more ambitious challenges in many disciplines. In this step beyond, an explosion in the available parallelism and the use of special-purpose processors are crucial. With this goal, the HPC4E project applies new exascale HPC techniques to energy industry simulations, customizing them where necessary and going beyond the state of the art in the HPC exascale simulations required for different energy sources. In this paper, a general overview of these methods is presented, as well as some specific preliminary results. The research leading to these results has received funding from the European Union's Horizon 2020 Programme (2014-2020) under the HPC4E Project (www.hpc4e.eu), grant agreement n° 689772, the Spanish Ministry of Economy and Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster was provided by the Intel Corporation, which enabled us to obtain the presented experimental results in uncertainty quantification in seismic imaging.

    Machine Learning for Observables: Reactant to Product State Distributions for Atom-Diatom Collisions

    Machine learning-based models to predict product state distributions from a distribution of reactant conditions for atom-diatom collisions are presented and quantitatively tested. The models are based on function-, kernel- and grid-based representations of the reactant and product state distributions. While all three methods predict final state distributions from explicit quasi-classical trajectory simulations with R^2 > 0.998, the grid-based approach performs best. Although the function-based approach is found to be more than two times faster computationally, the kernel- and grid-based approaches are preferred in terms of prediction accuracy, practicability and generality. The function-based approach also suffers from the lack of a general set of model functions. Applications of the grid-based approach to nonequilibrium, multi-temperature initial state distributions are presented, a situation common to energy distributions in hypersonic flows. The role of such models in Direct Simulation Monte Carlo and computational fluid dynamics simulations is also discussed.
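
    A minimal sketch of the grid-based representation, using synthetic stand-in data rather than the paper's quasi-classical trajectory results: each product state distribution is stored as a fixed-length vector on a grid of final states, and a kernel ridge regressor (an assumption, not necessarily the authors' model) learns the map from reactant-condition features to that vector.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)
    n_samples, n_grid = 200, 64
    grid = np.linspace(0.0, 1.0, n_grid)                 # grid over final states

    # Synthetic stand-in data: reactant-condition features (e.g. Etrans, v, j) and a
    # smooth product state distribution whose peak position depends on those features.
    X = rng.uniform(size=(n_samples, 3))
    centers = X @ np.array([0.5, 0.3, 0.2])
    Y = np.exp(-((grid[None, :] - centers[:, None]) / 0.1) ** 2)
    Y /= Y.sum(axis=1, keepdims=True)

    model = KernelRidge(kernel="rbf", alpha=1e-4, gamma=2.0)
    model.fit(X[:150], Y[:150])

    pred = np.clip(model.predict(X[150:]), 0.0, None)    # enforce non-negativity
    pred /= pred.sum(axis=1, keepdims=True)              # re-normalise each prediction

    ss_res = ((Y[150:] - pred) ** 2).sum()
    ss_tot = ((Y[150:] - Y[150:].mean(axis=0)) ** 2).sum()
    print("held-out R^2:", 1.0 - ss_res / ss_tot)        # close to 1 on this toy data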