
    Enabling EASEY deployment of containerized applications for future HPC systems

    The upcoming exascale era will push the change in computing architecture from classical CPU-based systems to hybrid, GPU-heavy systems with much higher levels of complexity. While such clusters are expected to improve the performance of certain optimized HPC applications, they will also increase the difficulty for those users who have yet to adapt their codes or are starting from scratch with new programming paradigms. Since there are still no comprehensive automatic assistance mechanisms to enhance application performance on such systems, we propose a support framework for future HPC architectures, called EASEY (Enable exAScale for EverYone). The solution builds on a layered software architecture, which offers different mechanisms on each layer for different tuning tasks. This enables users to adjust the parameters on each of the layers, thereby enhancing specific characteristics of their codes. We introduce the framework with a Charliecloud-based solution, showcasing the LULESH benchmark on the upper layers of our framework. Our approach can automatically deploy optimized container computations with negligible overhead and at the same time reduce the time a scientist needs to spend on manual job submission configurations. (Comment: International Conference on Computational Science ICCS 2020, 13 pages)
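    As a rough illustration of the kind of automation EASEY targets, the sketch below submits a containerized LULESH run through Slurm using Charliecloud's ch-run. This is a minimal sketch, not the framework itself: the image directory, job parameters, and LULESH flags are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch: deploying a containerized LULESH run via Charliecloud + Slurm.
# IMAGE_DIR, job sizes, and the LULESH invocation are illustrative assumptions.
import subprocess
import textwrap

IMAGE_DIR = "/var/tmp/lulesh"   # unpacked Charliecloud image directory (assumption)
PROBLEM_SIZE = 30               # LULESH -s: elements per cube edge (assumption)

def submit_containerized_job(nodes: int = 1, tasks: int = 8) -> str:
    """Build a Slurm batch script that runs LULESH inside a Charliecloud
    container, submit it via sbatch, and return sbatch's stdout."""
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #SBATCH --nodes={nodes}
        #SBATCH --ntasks={tasks}
        # ch-run executes a command inside the unpacked image directory.
        srun ch-run {IMAGE_DIR} -- ./lulesh2.0 -s {PROBLEM_SIZE}
        """)
    result = subprocess.run(["sbatch"], input=script, text=True,
                            capture_output=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit_containerized_job())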

    Synchronization Landscapes in Small-World-Connected Computer Networks

    Motivated by a synchronization problem in distributed computing, we studied a simple growth model on regular and small-world networks, embedded in one and two dimensions. We find that the synchronization landscape (corresponding to the progress of the individual processors) exhibits Kardar-Parisi-Zhang-like kinetic roughening on regular networks with short-range communication links. Although the processors, on average, progress at a nonzero rate, their spread (the width of the synchronization landscape) diverges with the number of nodes (desynchronized state), hindering efficient data management. When random communication links are added on top of the one- and two-dimensional regular networks (resulting in a small-world network), large fluctuations in the synchronization landscape are suppressed and the width approaches a finite value in the large system-size limit (synchronized state). In the resulting synchronization scheme, the processors make close-to-uniform progress at a nonzero rate without global intervention. We obtain our results by "simulating the simulations", based on the exact algorithmic rules, supported by coarse-grained arguments. (Comment: 20 pages, 22 figures)
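    The growth model lends itself to a compact simulation. Below is a minimal sketch of the conservative update rule described above: a processor advances its local virtual time by an exponential increment only if it does not lead its communication neighbours, and the small-world variant adds one fixed random link per node on top of the ring. The parameters and the exact update schedule are illustrative assumptions.

```python
# Minimal sketch of the virtual-time-horizon growth model on a ring,
# with optional small-world shortcut links. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_nodes: int = 200, sweeps: int = 1000, small_world: bool = False) -> float:
    """Return the final width (std. dev.) of the virtual-time landscape."""
    tau = np.zeros(n_nodes)               # local virtual times of the processors
    partner = rng.permutation(n_nodes)    # one fixed random partner per node
    for _ in range(sweeps):
        for i in rng.integers(0, n_nodes, size=n_nodes):
            # conservative rule: advance only if not ahead of ring neighbours
            ok = tau[i] <= tau[(i - 1) % n_nodes] and tau[i] <= tau[(i + 1) % n_nodes]
            if small_world:
                # shortcut link adds one more neighbour to wait for
                ok = ok and tau[i] <= tau[partner[i]]
            if ok:
                tau[i] += rng.exponential(1.0)
    return float(tau.std())

# On the ring the width grows with system size; with shortcut links it saturates.
print("ring:       ", simulate(small_world=False))
print("small-world:", simulate(small_world=True))
```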

    A Semantic-Based Approach to Attain Reproducibility of Computational Environments in Scientific Workflows: A Case Study

    Reproducible research in scientific workflows is often addressed by tracking the provenance of the produced results. While this approach allows inspecting intermediate and final results, improves understanding, and permits replaying a workflow execution, it does not ensure that the computational environment is available for subsequent executions to reproduce the experiment. In this work, we propose describing the resources involved in the execution of an experiment using a set of semantic vocabularies, so as to conserve the computational environment. We define a process for documenting the workflow application, management system, and their dependencies based on 4 domain ontologies. We then conduct an experimental evaluation using a real workflow application on an academic and a public Cloud platform. Results show that our approach can reproduce an equivalent execution environment of a predefined virtual machine image on both computing platforms.
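    As a rough sketch of what such a semantic environment description might look like, the snippet below builds a small RDF graph with rdflib. The ex: vocabulary is a made-up placeholder, not one of the four domain ontologies used in the paper.

```python
# Hypothetical sketch: describing an execution environment as RDF triples.
# The ex: namespace and all class/property names are illustrative placeholders.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/env#")
g = Graph()
g.bind("ex", EX)

# Describe the virtual machine image the experiment ran on.
vm = EX["vm-image-1"]
g.add((vm, RDF.type, EX.VirtualMachineImage))
g.add((vm, EX.operatingSystem, Literal("Ubuntu 20.04")))

# Describe the workflow application and its dependencies.
app = EX["workflow-app"]
g.add((app, RDF.type, EX.WorkflowApplication))
g.add((app, EX.dependsOn, EX["mpi-library"]))
g.add((app, EX.deployedOn, vm))

# Serializing the graph yields a machine-readable environment description
# that a deployment service could use to recreate an equivalent VM.
print(g.serialize(format="turtle"))
```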

    The protective effect of 1-methyltryptophan isomers in renal ischemia-reperfusion injury is not exclusively dependent on indoleamine 2,3-dioxygenase inhibition

    BACKGROUND AND PURPOSE: Indoleamine 2,3-dioxygenase (IDO), an enzyme that catalyses the metabolism of tryptophan, may play a detrimental role in ischemia-reperfusion injury (IRI). IDO can be inhibited by 1-methyl-tryptophan, which exists as a D (D-MT) or L (L-MT) isomer. These forms show different pharmacological effects besides IDO inhibition. We therefore investigated whether these isomers play a protective role in renal IRI, either IDO-dependent or IDO-independent. EXPERIMENTAL APPROACH: We studied the effect of both isomers in a rat renal IRI model, with a focus on IDO-dependent and IDO-independent effects. KEY RESULTS: Both MT isomers reduced creatinine and BUN levels, with D-MT having a faster onset of action but shorter duration and L-MT a slower onset but longer duration (24 h and 48 h vs 48 h and 96 h reperfusion time). Interestingly, this effect was not exclusively dependent on IDO inhibition but rather stemmed from decreased TLR4 signalling, mirroring the changes in renal function. Additionally, L-MT increased the overall survival of rats. Moreover, both MT isomers interfered with TGF-β signalling and epithelial-mesenchymal transition. To study the effect of the isomers on all mechanisms involved in IRI, a series of in vitro experiments was performed. The isomers affected signalling pathways in NK cells and tubular epithelial cells, as well as in dendritic cells and T cells. CONCLUSION AND IMPLICATIONS: This study shows that both MT isomers have a renoprotective effect after ischemia-reperfusion injury, mostly independent of IDO inhibition and involving mutually different mechanisms. We present novel findings on the pharmacological properties and mechanisms of action of the MT isomers, which could become a novel therapeutic option for renal IRI.

    High Speed Simulation Analytics

    Simulation, especially Discrete-Event Simulation (DES) and Agent-Based Simulation (ABS), is widely used in industry to support decision making. It is used to create predictive models or Digital Twins of systems, to analyse what-if scenarios, to perform sensitivity analytics on data and decisions, and even to optimise the impact of decisions. Simulation-based Analytics, or just Simulation Analytics, therefore has a major role to play in Industry 4.0. However, a major issue in Simulation Analytics is speed. The extensive, continuous experimentation demanded by Industry 4.0 can take a significant time, especially if many replications are required. This is compounded by detailed models, as these can take a long time to simulate. Distributed Simulation (DS) techniques use multiple computers either to speed up the simulation of a single model by splitting it across the computers and/or to speed up experimentation by running experiments across multiple computers in parallel. This chapter discusses how DS and Simulation Analytics, as well as concepts from contemporary e-Science, can be combined to address the speed problem through a new approach called High Speed Simulation Analytics. We present a vision of High Speed Simulation Analytics to show how it might be integrated with the future of Industry 4.0.
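    As an illustration of the experimentation side of DS, the sketch below runs independent replications of a toy M/M/1 queueing model in parallel with a process pool; a distributed executor could spread the same calls across machines. The model and its parameters are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch: parallel replications of a toy M/M/1 queue simulation.
# Arrival/service rates and replication counts are illustrative assumptions.
import random
from concurrent.futures import ProcessPoolExecutor
from statistics import mean

def replicate(seed: int, n_customers: int = 10_000) -> float:
    """One replication of an M/M/1 queue; returns the mean waiting time."""
    rng = random.Random(seed)
    clock = wait_sum = busy_until = 0.0
    for _ in range(n_customers):
        clock += rng.expovariate(1.0)               # next arrival (rate 1.0)
        start = max(clock, busy_until)              # service starts when free
        wait_sum += start - clock
        busy_until = start + rng.expovariate(1.25)  # service time (rate 1.25)
    return wait_sum / n_customers

if __name__ == "__main__":
    # Replications are independent, so they scale across cores (or, with a
    # distributed executor, across machines) with no synchronization cost.
    with ProcessPoolExecutor() as pool:
        waits = list(pool.map(replicate, range(20)))
    print(f"mean waiting time over {len(waits)} replications: {mean(waits):.3f}")
```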