Virtualizing the Stampede2 Supercomputer with Applications to HPC in the Cloud
Methods developed at the Texas Advanced Computing Center (TACC) are described
and demonstrated for automating the construction of an elastic, virtual cluster
emulating the Stampede2 high performance computing (HPC) system. The cluster
can be built and/or scaled in a matter of minutes on the Jetstream self-service
cloud system and shares many properties of the original Stampede2, including:
i) common identity management, ii) access to the same file systems, iii)
equivalent software application stack and module system, and iv) a similar job
scheduling interface via Slurm.
We measure time-to-solution for a number of common scientific applications on
our virtual cluster against equivalent runs on Stampede2 and develop an
application profile where performance is similar or otherwise acceptable. For
such applications, the virtual cluster provides an effective form of "cloud
bursting" with the potential to significantly improve overall turnaround time,
particularly when Stampede2 is experiencing long queue wait times. In addition,
the virtual cluster can be used for test and debug without directly impacting
Stampede2. We conclude with a discussion of how science gateways can leverage
the TACC Jobs API web service to incorporate this cloud bursting technique
transparently to the end user.
Comment: 6 pages, 0 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22--26, 2018, Pittsburgh, PA, US
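The cloud-bursting routing decision the abstract describes can be sketched in a few lines. This is an illustrative assumption, not the actual TACC Jobs API: the function names, the 60-minute threshold, and the cluster labels are all hypothetical.

```python
# Hypothetical sketch of the cloud-bursting decision described above.
# choose_target, the 60-minute threshold, and the cluster names are
# illustrative assumptions, not the TACC Jobs API.

def choose_target(est_wait_minutes: float, threshold_minutes: float = 60.0) -> str:
    """Route a job to the virtual cluster when the primary queue is slow."""
    return "virtual-cluster" if est_wait_minutes > threshold_minutes else "stampede2"

def submission_command(script: str, est_wait_minutes: float) -> str:
    target = choose_target(est_wait_minutes)
    # In practice a gateway would invoke Slurm (e.g. sbatch) on the chosen
    # system; here we only build the command string.
    return f"sbatch --clusters={target} {script}"

print(submission_command("job.slurm", 120.0))  # routes to the virtual cluster
```

A real gateway would obtain the wait estimate from the scheduler (e.g. `squeue`/`sprio` data) rather than take it as a parameter, but the routing logic is the same.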
BEACON: A Cloud Network Federation Framework
This paper presents the BEACON Framework, which will enable the provision and management of cross-site virtual networks for federated cloud infrastructures in order to support the automated deployment of applications and services across different clouds and datacenters. The proposed framework will support different federation architectures, going from tightly coupled (datacenter federation) to loosely coupled (cloud federation and multi-cloud orchestration) architectures, and will enable the creation of Layer 2 and Layer 3 overlay networks to interconnect remote resources located at different cloud sites. A high-level description of the main components of the BEACON framework is also introduced.
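A Layer 2 overlay of the kind BEACON federates is commonly built with VXLAN. The sketch below only generates the Linux `ip` commands such a setup would use; the VNI, addresses, and device naming are illustrative assumptions (4789 is the IANA-assigned VXLAN port), not BEACON's actual implementation.

```python
# Illustrative sketch: the `ip` commands that would create a Layer 2
# VXLAN tunnel between two cloud sites, the kind of cross-site link a
# federation framework manages. VNI and addresses are assumptions.

def vxlan_overlay_cmds(vni: int, local_ip: str, remote_ip: str) -> list:
    dev = f"vxlan{vni}"
    return [
        f"ip link add {dev} type vxlan id {vni} "
        f"local {local_ip} remote {remote_ip} dstport 4789",
        f"ip link set {dev} up",
    ]

for cmd in vxlan_overlay_cmds(42, "198.51.100.10", "203.0.113.20"):
    print(cmd)
```

Running these commands on both sites (with local/remote swapped) would give VMs at each site a shared broadcast domain; a Layer 3 overlay would instead route between per-site subnets.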
Developing sustainability pathways for social simulation tools and services
The use of cloud technologies to teach agent-based modelling and simulation (ABMS) is an interesting application of a nascent technological paradigm that has received very little attention in the literature. This report fills that gap and aims to help instructors, teachers and demonstrators to understand why and how cloud services are appropriate solutions to common problems they face delivering their study programmes, as well as outlining the many cloud options available. The report first introduces social simulation and considers how social simulation is taught. Following this, factors affecting the implementation of agent-based models are explored, with attention focused primarily on the modelling and execution platforms currently available, the challenges associated with implementing agent-based models, and the technical architectures that can be used to support the modelling, simulation and teaching process. This sets the context for an extended discussion on cloud computing including service and deployment models, accessing cloud resources, the financial implications of adopting the cloud, and an introduction to the evaluation of cloud services within the context of developing, executing and teaching agent-based models.
HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges
High Performance Computing (HPC) clouds are becoming an alternative to
on-premise clusters for executing scientific applications and business
analytics services. Most research efforts in HPC cloud aim to understand the
cost-benefit of moving resource-intensive applications from on-premise
environments to public cloud platforms. Industry trends show hybrid
environments are the natural path to get the best of the on-premise and cloud
resources---steady (and sensitive) workloads can run on on-premise resources
and peak demand can leverage remote resources in a pay-as-you-go manner.
Nevertheless, there are plenty of questions to be answered in HPC cloud, which
range from how to extract the best performance of an unknown underlying
platform to what services are essential to make its usage easier. Moreover, the
discussion on the right pricing and contractual models to fit small and large
users is relevant for the sustainability of HPC clouds. This paper brings a
survey and taxonomy of efforts in HPC cloud and a vision on what we believe is
ahead of us, including a set of research challenges that, once tackled, can
help advance businesses and scientific discoveries. This becomes particularly
relevant due to the fast increasing wave of new HPC applications coming from
big data and artificial intelligence.
Comment: 29 pages, 5 figures. Published in ACM Computing Surveys (CSUR)
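The hybrid economics sketched in the abstract (steady workloads on-premise, peaks bursting to pay-as-you-go cloud) can be made concrete with a back-of-the-envelope comparison. All prices below are illustrative assumptions, not figures from the survey.

```python
# Back-of-the-envelope sketch of the hybrid model above: steady load
# runs on-premise, peak demand bursts to pay-as-you-go cloud.
# All prices are illustrative assumptions, not survey data.

def on_premise_monthly(nodes: int, amortized_cost_per_node: float = 300.0) -> float:
    # Owned nodes cost the same whether busy or idle.
    return nodes * amortized_cost_per_node

def cloud_burst_monthly(node_hours: float, hourly_rate: float = 1.0) -> float:
    # Cloud nodes are billed only while running.
    return node_hours * hourly_rate

# A 20-node peak lasting 50 hours per month: bursting beats buying.
peak_nodes, peak_hours = 20, 50
burst = cloud_burst_monthly(peak_nodes * peak_hours)
own = on_premise_monthly(peak_nodes)
print(f"burst: ${burst:.0f}/mo vs owning the peak capacity: ${own:.0f}/mo")
```

The break-even point shifts with utilization: if the "peak" nodes were busy most of the month, owning them would win, which is exactly why steady workloads stay on-premise in the hybrid model.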
A look at cloud architecture interoperability through standards
Enabling cloud infrastructures to evolve into a transparent platform while preserving integrity raises interoperability issues. How components are connected needs to be addressed. Interoperability requires standard data models and communication encoding technologies compatible with the existing Internet infrastructure. To reduce vendor lock-in situations, cloud computing must implement universal strategies regarding standards, interoperability and portability. Open standards are of critical importance and need to be embedded into interoperability solutions. Interoperability is determined at the data level as well as the service level. Corresponding modelling standards and integration solutions shall be analysed.
Challenges for the comprehensive management of cloud services in a PaaS framework
The 4CaaSt project aims at developing a PaaS framework that enables flexible definition, marketing, deployment and management of Cloud-based services and applications. The major innovations proposed by 4CaaSt are the blueprint and its lifecycle management, a one-stop shop for Cloud services, and a PaaS-level resource management featuring elasticity. 4CaaSt also provides a portfolio of ready-to-use Cloud native services and Cloud-aware immigrant technologies.
AN EFFICIENT APPROACH TO IMPLEMENT FEDERATED CLOUDS
Cloud computing is one of the trending technologies that provides boundless virtualized resources to internet users as important services over the internet, while preserving privacy and security. By using these cloud services, internet users obtain many parallel computing resources at low cost. It was predicted that by 2016, online business management would spend $4 billion on data storage. Because the cloud is an open platform, it is more exposed to malicious attacks. Privacy, confidentiality, and security of stored data are the primary security challenges in cloud computing. In cloud computing, 'virtualization' is one of the techniques for dividing memory into different blocks. In most existing systems there is only a single authority that provides the encryption keys. To address these security issues, this paper proposes a novel authenticated trust security model for a secure virtualization system that encrypts files. The proposed security model achieves the following functions: 1) allotting a VSM (VM Security Monitor) to each virtual machine; 2) providing secret keys to encrypt and decrypt information using symmetric encryption. The contribution is a proposed architecture that provides workable security that a cloud service provider can offer to its consumers. A detailed analysis and architecture design are presented to elaborate the security model.
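The second function above, per-VM secret keys with symmetric encrypt/decrypt, can be illustrated with a toy stream cipher. This HMAC-based XOR keystream is an educational sketch only: it is not the paper's scheme and not secure for production use, where an authenticated mode such as AES-GCM would be expected.

```python
import hashlib
import hmac

# Toy illustration of symmetric encryption with a per-VM secret key.
# HMAC-SHA256 in a counter construction generates a keystream that is
# XORed with the data; this is a teaching sketch, NOT production crypto
# and NOT the paper's actual scheme.

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    stream = _keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, stream))

# XOR stream ciphers are symmetric: decryption is the same operation.
decrypt = encrypt

key = b"per-vm-secret-key-32-bytes-long!"   # hypothetical per-VM key
nonce = b"vm-001"                           # must never repeat per key
ct = encrypt(key, nonce, b"guest VM disk block")
assert decrypt(key, nonce, ct) == b"guest VM disk block"
```

In the paper's architecture, each VM's VSM would hold such a key, so compromising one VM's key would not expose another VM's data.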