An SOA-based model for the integrated provisioning of cloud and grid resources
In recent years, the availability and usage models of networked computing resources within reach of e-Science have been changing rapidly, with several disparate paradigms coexisting: high-performance computing, grid, and, more recently, cloud. Unfortunately, none of these paradigms is recognized as the ultimate solution, so a convergence of them all should be pursued. At the same time, recent works have proposed a number of models and tools to address the growing needs and expectations in the field of e-Science. In particular, they have shown the advantages and the feasibility of modeling e-Science environments and infrastructures according to the service-oriented architecture (SOA). In this paper, we propose a model that promotes the convergence and integration of the different computing paradigms and infrastructures for the dynamic, on-demand provisioning of resources from multiple providers as a cohesive aggregate, leveraging SOA. In addition, we propose a design aimed at supporting a flexible, modular, workflow-based computing model for e-Science.
The model is supplemented by a working prototype implementation, together with a case study in the application domain of bioinformatics, which is used to validate the presented approach and to carry out performance and scalability measurements.
A complete and efficient CUDA-sharing solution for HPC clusters
In this paper we detail the key features, architectural design, and implementation of rCUDA,
an advanced framework to enable remote and transparent GPGPU acceleration in HPC
clusters. rCUDA allows decoupling GPUs from nodes, forming pools of shared accelerators,
which brings enhanced flexibility to cluster configurations. This opens the door to configurations
with fewer accelerators than nodes and permits a single node to exploit the
whole set of GPUs installed in the cluster. In our proposal, CUDA applications can seamlessly
interact with any GPU in the cluster, independently of its physical location. Thus,
GPUs can be either distributed among compute nodes or concentrated in dedicated GPGPU
servers, depending on the cluster administrator’s policy. This proposal leads to savings not
only in space but also in energy, acquisition, and maintenance costs. The performance evaluation
in this paper with a series of benchmarks and a production application clearly demonstrates
the viability of this proposal. Concretely, experiments with the matrix–matrix
product reveal excellent performance compared with regular executions on the local
GPU; on a much more complex application, the GPU-accelerated LAMMPS, we attain up
to 11x speedup employing 8 remote accelerators from a single node with respect to a
12-core CPU-only execution. GPGPU service interaction in compute nodes, remote acceleration
in dedicated GPGPU servers, and data transfer performance of similar GPU virtualization
frameworks are also evaluated.
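The transparency claimed by the abstract means that ordinary CUDA code needs no source changes to use a remote GPU: rCUDA provides a drop-in replacement for the CUDA runtime library that forwards each API call over the network to a GPGPU server. As a hedged illustration, the following is a standard, self-contained CUDA vector addition; nothing in it is rCUDA-specific, which is precisely the point.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Plain CUDA vector addition. Under rCUDA the same binary runs unmodified:
// the rCUDA runtime library intercepts cudaMalloc/cudaMemcpy/kernel launches
// and executes them on a GPU that may sit in a different node of the cluster.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[100] = %f\n", hc[100]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Whether the GPU executing this code is local or remote is decided entirely outside the program, by linking against rCUDA's runtime and pointing its configuration (environment variables naming the remote GPGPU server) at the desired accelerator; the exact variable names depend on the rCUDA version, so consult its user guide.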
© 2014 Elsevier B.V. All rights reserved.

This work was supported by the Spanish Ministerio de Economía y Competitividad (MINECO) and by FEDER funds under Grant TIN2012-38341-004-01. It was also supported by MINECO and FEDER funds under Grant TIN2011-23283, and by the Fundación Caixa-Castelló Bancaixa under Grant P11B2013-21. This work was also supported in part by the U.S. Department of Energy, Office of Science, under contract DE-AC02-06CH11357. The authors are grateful for the generous support provided by Mellanox Technologies to the rCUDA project, and thank Adrián Castelló, member of the rCUDA Development Team, for his hard work on rCUDA.

Peña Monferrer, A. J.; Reaño González, C.; Silla Jiménez, F.; Mayo Gual, R.; Quintana-Ortí, E. S.; Duato Marín, J. F. (2014). A complete and efficient CUDA-sharing solution for HPC clusters. Parallel Computing, 40(10):574-588. https://doi.org/10.1016/j.parco.2014.09.011
A Clouded Future: Analysis of Microsoft Windows Azure As a Platform for Hosting E-Science Applications
Microsoft Windows Azure is Microsoft's cloud-based platform for hosting .NET applications. Azure provides a simple, cost-effective method for outsourcing application hosting. Windows Azure has caught the eye of researchers in e-science who require parallel computing infrastructures to process mountains of data. Windows Azure offers the same benefits to e-science as it does to other industries. This paper examines the technology behind Azure and analyzes two case studies of e-science projects built on the Windows Azure platform.
Computing Without Borders: The Way Towards Liquid Computing
Despite the de facto technological uniformity fostered by the cloud and edge computing paradigms, resource fragmentation across isolated clusters hinders dynamism in application placement, leading to suboptimal performance and operational complexity. Building upon and extending these paradigms, we propose a novel approach, called liquid computing, that envisions a transparent continuum of resources and services on top of the underlying fragmented infrastructure. Fully decentralized, multi-ownership-oriented, and intent-driven, it enables an overarching abstraction for improved application execution, while at the same time opening up new scenarios, including resource sharing and brokering. Following this vision, we present liqo, an open-source project that materializes the approach through the creation of dynamic and seamless Kubernetes multi-cluster topologies. Extensive experimental evaluations show its effectiveness in different contexts, both in terms of Kubernetes overhead and in comparison with other open-source alternatives.