A Cost-Benefit Study of Doing Astrophysics On The Cloud: Production of Image Mosaics
Utility grids such as the Amazon EC2 and Amazon S3 clouds offer computational and storage resources that can be used on demand, for a fee, by compute- and data-intensive applications. The cost of running an application on such a cloud depends on the compute, storage, and communication resources it will provision and consume. Different execution plans of the same application may result in significantly different costs. We studied via simulation the cost-performance trade-offs of different execution and resource provisioning plans by creating, under the Amazon cloud fee structure, mosaics with the Montage image mosaic engine, a widely used data- and compute-intensive application. Specifically, we studied the cost of building mosaics of 2MASS data that have sizes of 1, 2 and 4 square degrees, and a 2MASS all-sky mosaic. These are examples of mosaics commonly generated by astronomers. We also studied these trade-offs in the context of the storage and communication fees of Amazon S3 when used for long-term application data archiving. Our results show that by provisioning the right amount of storage and compute resources, cost can be significantly reduced with no significant impact on application performance.
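The trade-off the abstract describes, that the same application run under different provisioning plans incurs different compute, storage, and transfer charges, can be sketched with a toy cost model. The rates and plan numbers below are hypothetical placeholders, not actual Amazon fees or Montage measurements:

```python
# Toy cost model for one cloud execution/provisioning plan.
# All rates are illustrative assumptions, not real Amazon prices.
CPU_HOUR_RATE = 0.10      # $/CPU-hour (assumed)
STORAGE_GB_MONTH = 0.15   # $/GB-month (assumed)
TRANSFER_GB_RATE = 0.10   # $/GB transferred out (assumed)

def plan_cost(cpu_hours, storage_gb, months, transfer_gb):
    """Total monetary cost of one execution plan: compute + storage + transfer."""
    compute = cpu_hours * CPU_HOUR_RATE
    storage = storage_gb * STORAGE_GB_MONTH * months
    transfer = transfer_gb * TRANSFER_GB_RATE
    return compute + storage + transfer

# Two hypothetical plans for the same mosaic: one provisions generous
# scratch storage, the other trades a little extra compute for less storage.
fast_plan = plan_cost(cpu_hours=20, storage_gb=50, months=1, transfer_gb=10)
lean_plan = plan_cost(cpu_hours=25, storage_gb=20, months=1, transfer_gb=10)
print(f"fast: ${fast_plan:.2f}, lean: ${lean_plan:.2f}")
```

Even this crude model shows the paper's point: right-sizing the storage provisioned for a run can dominate the cost difference between otherwise equivalent plans.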
FABRIC: A National-Scale Programmable Experimental Network Infrastructure
FABRIC is a unique national research infrastructure to enable cutting-edge and exploratory research at scale in networking, cybersecurity, distributed computing and storage systems, machine learning, and science applications. It is an everywhere-programmable nationwide instrument composed of novel extensible network elements equipped with large amounts of compute and storage, interconnected by high-speed, dedicated optical links. It will connect a number of specialized testbeds for cloud research (the NSF Cloud testbeds CloudLab and Chameleon) and for research beyond 5G technologies (Platforms for Advanced Wireless Research, or PAWR), as well as production high-performance computing facilities and science instruments, to create a rich fabric for a wide variety of experimental activities.
Enabling EASEY deployment of containerized applications for future HPC systems
The upcoming exascale era will push computing architectures from classical CPU-based systems toward hybrid, GPU-heavy systems with much higher levels of complexity. While such clusters are expected to improve the performance of certain optimized HPC applications, they will also increase the difficulties for those users who have yet to adapt their codes or are starting from scratch with new programming paradigms. Since there are still no comprehensive automatic assistance mechanisms to enhance application performance on such systems, we propose a support framework for future HPC architectures, called EASEY (Enable exAScale for EverYone). The solution builds on a layered software architecture, which offers different mechanisms on each layer for different tuning tasks. This enables users to adjust the parameters on each of the layers, thereby enhancing specific characteristics of their codes. We introduce the framework with a Charliecloud-based solution, showcasing the LULESH benchmark on the upper layers of our framework. Our approach can automatically deploy optimized container computations with negligible overhead and at the same time reduce the time a scientist needs to spend on manual job submission configurations.
Comment: International Conference on Computational Science ICCS2020, 13 pages
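The layered idea, where each layer contributes its own tuning parameters and the framework assembles them into a single job configuration, can be sketched as below. The layer names, keys, and values are illustrative assumptions, not EASEY's actual API:

```python
# Hypothetical sketch of a layered tuning configuration: each layer supplies
# parameters, and higher layers override lower ones. Names are made up for
# illustration; they are not the real EASEY interfaces.

def merge_layers(*layers):
    """Merge per-layer parameter dicts; later (higher) layers win on conflicts."""
    config = {}
    for layer in layers:
        config.update(layer)
    return config

container_layer = {"image": "lulesh:latest", "runtime": "charliecloud"}
scheduler_layer = {"nodes": 4, "tasks_per_node": 8}
user_layer      = {"tasks_per_node": 16}   # user-level override on the top layer

job = merge_layers(container_layer, scheduler_layer, user_layer)
print(job)
```

The point of the design is that a user only touches the layer relevant to their concern (here, the per-node task count) while the framework fills in the container and scheduler details automatically.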
Synchronization Landscapes in Small-World-Connected Computer Networks
Motivated by a synchronization problem in distributed computing, we studied a simple growth model on regular and small-world networks, embedded in one and two dimensions. We find that the synchronization landscape (corresponding to the progress of the individual processors) exhibits Kardar-Parisi-Zhang-like kinetic roughening on regular networks with short-range communication links. Although the processors, on average, progress at a nonzero rate, their spread (the width of the synchronization landscape) diverges with the number of nodes (desynchronized state), hindering efficient data management. When random communication links are added on top of the one- and two-dimensional regular networks (resulting in a small-world network), large fluctuations in the synchronization landscape are suppressed and the width approaches a finite value in the large system-size limit (synchronized state). In the resulting synchronization scheme, the processors make close-to-uniform progress at a nonzero rate without global intervention. We obtain our results by "simulating the simulations", based on the exact algorithmic rules, supported by coarse-grained arguments.
Comment: 20 pages, 22 figures
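The growth model above can be sketched with a minimal simulation: each processor holds a local "height" (its simulated time) and may advance only when it is a local minimum among its neighbors; adding one random shortcut per node turns the ring into a small-world network. This is a simplified stand-in for the paper's exact algorithmic rules, with made-up parameters:

```python
import random

def simulate(n, sweeps, p_shortcut=0.0, seed=0):
    """Local-minimum growth rule on a ring of n nodes. With probability
    p_shortcut a node also gets one fixed random extra neighbor (small-world).
    Returns the final width (RMS spread) of the synchronization landscape."""
    rng = random.Random(seed)
    h = [0.0] * n
    extra = [rng.randrange(n) if rng.random() < p_shortcut else None
             for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            neigh = [h[(i - 1) % n], h[(i + 1) % n]]
            if extra[i] is not None:
                neigh.append(h[extra[i]])
            if h[i] <= min(neigh):        # local minimum: safe to advance
                h[i] += rng.expovariate(1.0)
    mean = sum(h) / n
    return (sum((x - mean) ** 2 for x in h) / n) ** 0.5

w_regular = simulate(200, 4000)                    # short-range links only
w_small_world = simulate(200, 4000, p_shortcut=1.0)
print(w_regular, w_small_world)
```

With short-range links only, the spread of progress grows with system size (KPZ-like roughening); the random links suppress the large fluctuations, so the small-world width stays much smaller, in line with the abstract's synchronized state.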
A Semantic-Based Approach to Attain Reproducibility of Computational Environments in Scientific Workflows: A Case Study
Reproducible research in scientific workflows is often addressed by tracking the provenance of the produced results. While this approach allows inspecting intermediate and final results, improves understanding, and permits replaying a workflow execution, it does not ensure that the computational environment is available for subsequent executions to reproduce the experiment. In this work, we propose describing the resources involved in the execution of an experiment using a set of semantic vocabularies, so as to conserve the computational environment. We define a process for documenting the workflow application, management system, and their dependencies based on 4 domain ontologies. We then conduct an experimental evaluation using a real workflow application on an academic and a public Cloud platform. Results show that our approach can reproduce an equivalent execution environment of a predefined virtual machine image on both computing platforms.
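The core idea, describing the workflow application, the management system, and their dependencies in structured metadata so the environment can be rebuilt elsewhere, can be sketched with plain dictionaries. The field names and values below are illustrative stand-ins, not terms from the paper's actual ontologies:

```python
# Hypothetical environment description for a workflow run. The vocabulary
# (keys, names, versions) is invented for illustration; a real system would
# use the domain ontologies referenced in the paper.
environment = {
    "workflow_application": {"name": "example-workflow", "version": "1.0"},
    "workflow_management_system": {"name": "example-wms", "version": "4.2"},
    "software_dependencies": [
        {"name": "python", "version": "3.10"},
        {"name": "numpy", "version": "1.26"},
    ],
    "hardware_requirements": {"cpu_cores": 4, "memory_gb": 8},
}

def missing_dependencies(required, installed):
    """Dependency names the description demands but the target platform lacks."""
    have = {d["name"] for d in installed}
    return [d["name"] for d in required if d["name"] not in have]

installed = [{"name": "python", "version": "3.10"}]
print(missing_dependencies(environment["software_dependencies"], installed))
```

A reproduction tool could consume such a description on a new Cloud platform, compute what is missing, and provision it before re-running the workflow, which is the gap pure provenance tracking leaves open.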
The protective effect of 1-methyltryptophan isomers in renal ischemia-reperfusion injury is not exclusively dependent on indolamine 2,3-dioxygenase inhibition
BACKGROUND AND PURPOSE: Indolamine 2,3-dioxygenase (IDO), an enzyme that catalyses the metabolism of tryptophan, may play a detrimental role in ischemia-reperfusion injury (IRI). IDO can be inhibited by 1-methyl-tryptophan, which exists as a D (D-MT) or L (L-MT) isomer. These forms show different pharmacological effects besides IDO inhibition. Therefore, we sought to investigate whether these isomers can play a protective role in renal IRI, either IDO-dependent or IDO-independent. EXPERIMENTAL APPROACH: We studied the effect of both isomers in a rat renal IRI model with a focus on IDO-dependent and IDO-independent effects. KEY RESULTS: Both MT isomers reduced creatinine and BUN levels, with D-MT having a faster onset of action but shorter duration and L-MT a slower onset but longer duration (24 h and 48 h vs 48 h and 96 h reperfusion time). Interestingly, this effect was not exclusively dependent on IDO inhibition, but rather resulted from decreased TLR4 signalling, which mirrored the changes in renal function. Additionally, L-MT increased the overall survival of rats. Moreover, both MT isomers interfered with TGF-β signalling and epithelial-mesenchymal transition. In order to study the effect of the isomers on all mechanisms involved in IRI, a series of in vitro experiments was performed. The isomers affected signalling pathways in NK cells and tubular epithelial cells, as well as in dendritic cells and T cells. CONCLUSION AND IMPLICATIONS: This study shows that both MT isomers have a renoprotective effect after ischemia-reperfusion injury, mostly independent of IDO inhibition and involving mutually different mechanisms. We present novel findings on the pharmacological properties and mechanism of action of MT isomers, which could become a novel therapeutic target of renal IRI.