Towards a Swiss National Research Infrastructure
In this position paper we describe the current status and plans for a Swiss
National Research Infrastructure. Swiss academic and research institutions are
very autonomous. They are loosely coupled and do not rely on any
centralized management entity. Therefore, a coordinated national research
infrastructure can only be established by federating the various resources
available locally at the individual institutions. The Swiss Multi-Science
Computing Grid and the Swiss Academic Compute Cloud projects already serve a
large number of diverse user communities. These projects also allow us to test
the operational setup of such a heterogeneous federated infrastructure.
Automated cloud bursting on a hybrid cloud platform
Hybrid cloud technology is becoming increasingly popular because it merges private and public clouds to combine the best of both worlds. However, due to the heterogeneity of cloud installations, setting up a hybrid cloud is not simple. In this thesis, Apache Mesos is used to abstract resources in an attempt to build a hybrid cloud spanning multiple cloud platforms, private and public. Viable setups for increasing the availability of the hybrid cloud are evaluated, as is the feasibility and suitability of data segmentation. Additionally, an automated cloud bursting solution is outlined and implemented to dynamically scale the hybrid cloud, temporarily expanding the available resource pool using spot-price instances to maximize economic efficiency. The thesis presents functional and viable solutions with respect to availability, segmentation, and automated cloud bursting for a hybrid cloud platform. However, work remains to improve and confirm the outlined solutions, in particular a performance analysis of the proposed approaches.
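The bursting decision described in this abstract can be sketched as a simple policy: request spot instances only when the private pool cannot absorb the pending workload and the current spot price is below a bid ceiling. The function, parameter names, and thresholds below are illustrative assumptions, not the thesis's actual implementation.

```python
def plan_burst(pending_tasks: int, private_free_slots: int,
               spot_price: float, bid_ceiling: float,
               slots_per_instance: int = 4) -> int:
    """Return how many spot instances to request (0 = no burst).

    Illustrative policy (assumed, not from the thesis): burst only
    when the private cloud cannot absorb the queue AND the spot
    market price is below our bid ceiling.
    """
    overflow = pending_tasks - private_free_slots
    if overflow <= 0 or spot_price >= bid_ceiling:
        return 0  # private cloud suffices, or spot is too expensive
    # Round up: request enough instances to cover the overflow.
    return -(-overflow // slots_per_instance)
```

In a real deployment this policy would be driven by the scheduler's queue depth and the provider's spot price feed, and instances would be released again once the queue drains.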
Self-managed Cost-efficient Virtual Elastic Clusters on Hybrid Cloud Infrastructures
In this study, we describe the further development of Elastic Cloud Computing Cluster (EC3), a tool for
creating self-managed cost-efficient virtual hybrid elastic clusters on top of Infrastructure as a Service
(IaaS) clouds. By using spot instances and checkpointing techniques, EC3 can significantly reduce the total
execution cost as well as facilitate automatic fault tolerance. Moreover, EC3 can deploy and manage
hybrid clusters across on-premises and public cloud resources, thereby introducing cloud bursting
capabilities. We present the results of a case study that we conducted to assess the effectiveness of the
tool based on the structural dynamic analysis of buildings. In addition, we evaluated the checkpointing
algorithms in a real cloud environment with existing workloads to study their effectiveness. The results
demonstrate the feasibility and benefits of this type of cluster for computationally intensive applications.
© 2016 Elsevier B.V. All rights reserved. This study was supported by the program "Ayudas para la contratación de personal investigador en formación de carácter predoctoral, programa VALi+d" under grant number ACIF/2013/003 from the Conselleria d'Educació of the Generalitat Valenciana. We are also grateful for financial support received from the Spanish Ministry of Economy and Competitiveness to develop the project "CLUVIEM" under grant reference TIN2013-44390-R. Finally, we express our gratitude to D. David Ruzafa for support with the arduous task of analyzing the execution data. Calatrava Arroyo, A.; Romero Alcalde, E.; Moltó Martínez, G.; Caballer Fernández, M.; Alonso Ábalos, J.M. (2016). Self-managed Cost-efficient Virtual Elastic Clusters on Hybrid Cloud Infrastructures. Future Generation Computer Systems. 61:13-25. https://doi.org/10.1016/j.future.2016.01.018
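The combination of spot instances and checkpointing that the abstract credits for cost reduction and fault tolerance follows a well-known pattern: run the job in small steps, persist state periodically, and resume from the last checkpoint if the spot instance is reclaimed. The sketch below is a generic illustration of that pattern under assumed names; it is not EC3's code.

```python
import json
import os

def run_with_checkpoints(total_steps, step_fn, ckpt_path, interval=10):
    """Resume from the last checkpoint if one exists, then run the
    remaining steps, saving state every `interval` steps.

    A spot-instance termination therefore loses at most `interval`
    steps of work: the next (cheaper) instance picks up from the
    last saved state instead of restarting from scratch.
    """
    state = {"step": 0, "acc": 0}
    if os.path.exists(ckpt_path):          # resume after preemption
        with open(ckpt_path) as f:
            state = json.load(f)
    while state["step"] < total_steps:
        state["acc"] = step_fn(state["step"], state["acc"])
        state["step"] += 1
        if state["step"] % interval == 0:  # periodic checkpoint
            with open(ckpt_path, "w") as f:
                json.dump(state, f)
    return state["acc"]
```

Calling the function a second time with the same checkpoint path skips the already-completed steps, which is the property that makes volatile spot capacity usable for long-running cluster jobs.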
B3: Fuzzy-Based Data Center Load Optimization in Cloud Computing
Cloud computing started a new era in which clients access a variety of information pools over the internet from any connected device, paying only for the services they use. A data center is a sophisticated high-capacity server facility that runs applications virtually in cloud computing; applications, services, and data are moved to large data centers, which provide service levels covering the maximum number of users. Determining overall load efficiency from data center utilization is therefore an essential task. Hence, we propose a novel method to find the efficiency of the data center in cloud computing. The goal is to optimize data center utilization in terms of three major factors: bandwidth, memory, and Central Processing Unit (CPU) cycles. We constructed a fuzzy expert system model to obtain maximum Data Center Load Efficiency (DCLE) in cloud computing environments. The advantage of the proposed system lies in computing the DCLE: while computing, it allows regular evaluation of services for any number of clients. This approach indicates that current clouds need an order-of-magnitude improvement in data center management to be used in next-generation computing.
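A fuzzy expert system of the kind described fuzzifies each crisp input (bandwidth, memory, CPU utilization) with membership functions, fires rules, and defuzzifies to a single efficiency score. The toy sketch below shows that pipeline with triangular memberships and an invented rule base (moderate utilization is treated as most efficient); the membership breakpoints and rule weights are assumptions for illustration, not the paper's actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dcle(bandwidth, memory, cpu):
    """Toy fuzzy estimate of Data Center Load Efficiency.

    Inputs are utilizations in [0, 1]. Each is fuzzified into
    low/medium/high; an illustrative rule base maps balanced
    (medium) utilization to high efficiency and the extremes to
    lower efficiency, then defuzzifies by weighted mean.
    """
    def fuzzify(u):
        return {"low":    tri(u, -0.5, 0.0, 0.5),
                "medium": tri(u,  0.2, 0.5, 0.8),
                "high":   tri(u,  0.5, 1.0, 1.5)}
    # Assumed rule consequents: efficiency score per fuzzy label.
    efficiency_of = {"low": 0.3, "medium": 0.9, "high": 0.5}
    num = den = 0.0
    for u in (bandwidth, memory, cpu):
        for label, weight in fuzzify(u).items():
            num += weight * efficiency_of[label]
            den += weight
    return num / den if den else 0.0
```

With all three resources at 50% utilization the score is high (0.9), while fully saturated resources score lower, mirroring the intuition that an overloaded data center serves users poorly.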
On Evaluating Commercial Cloud Services: A Systematic Review
Background: Cloud Computing is booming in industry, with many
competing providers and services. Accordingly, evaluation of commercial Cloud
services is necessary. However, the existing evaluation studies are relatively
chaotic, and there is considerable confusion and a gap between practice and
theory in Cloud services evaluation. Aim: To help relieve this chaos, this
work aims to synthesize the existing evaluation implementations to outline the
state of the practice and identify research opportunities in Cloud services
evaluation. Method: Based on a conceptual
evaluation model comprising six steps, the Systematic Literature Review (SLR)
method was employed to collect relevant evidence to investigate the Cloud
services evaluation step by step. Results: This SLR identified 82 relevant
evaluation studies. The overall data collected from these studies essentially
represent the current practical landscape of implementing Cloud services
evaluation, and in turn can be reused to facilitate future evaluation work.
Conclusions: Evaluation of commercial Cloud services has become a world-wide
research topic. Some of the findings of this SLR identify several research gaps
in the area of Cloud services evaluation (e.g., the Elasticity and Security
evaluation of commercial Cloud services could be a long-term challenge), while
some other findings suggest the trend of applying commercial Cloud services
(e.g., compared with PaaS, IaaS seems more suitable for customers and is
particularly important in industry). This SLR study itself also confirms some
previous experiences and reveals new Evidence-Based Software Engineering (EBSE)
lessons.
Allocation of Virtual Machines in Cloud Data Centers - A Survey of Problem Models and Optimization Algorithms
Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines
(VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical
resources incurs significant monetary costs and also environmental impact. Therefore, cloud providers must
optimize the usage of physical resources by a careful allocation of VMs to hosts, continuously balancing between
the conflicting requirements of performance and operational costs. In recent years, several algorithms have been
proposed for this important optimization problem. Unfortunately, the proposed approaches are hardly comparable
because of subtle differences in the underlying problem models. This paper surveys the problem formulations and
optimization algorithms in use, highlighting their strengths and limitations, and points out areas that need
further research.
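A common baseline formulation this survey covers treats VM placement as bin packing: pack VM resource demands onto as few active hosts as possible, since idle hosts can be powered down. The sketch below shows the classic First-Fit Decreasing heuristic for a one-dimensional (CPU-only) version of the problem; real models in the surveyed literature are typically multi-dimensional and also weigh migration and SLA costs.

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Place VMs (given as CPU demand units) onto hosts using the
    First-Fit Decreasing heuristic for bin packing.

    Sort VMs by descending demand, then put each one on the first
    host with enough spare capacity, opening a new host only when
    none fits. Fewer active hosts means lower energy cost.
    Returns a list of hosts, each a list of the demands placed on it.
    """
    hosts = []
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # no existing host fits: open one
    return hosts
```

For demands [5, 4, 3, 2, 2] on hosts of capacity 8, the heuristic packs everything onto two hosts; an unsorted first-fit pass can need more, which is one reason the survey's point about subtly different problem models matters when comparing reported results.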
Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures
One of the significant shifts of the next-generation computing technologies will certainly be in
the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD
landmark, evolved as a widely deployed BD operating system. Its new features include
federation structure and many associated frameworks, which provide Hadoop 3.x with the
maturity to serve different markets. This dissertation addresses two leading issues involved in
exploiting the BD and large-scale data analytics realm using the Hadoop platform:
(i) scalability, which directly affects system performance and overall throughput,
addressed using portable Docker containers; and (ii) security, which spreads the
adoption of data protection practices among practitioners, addressed using access
controls. An Enhanced MapReduce Environment (EME), an OPportunistic and Elastic
Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a
Secure Intelligent Transportation System (SITS) with a multi-tier architecture for
data streaming to the cloud are the main contributions of this thesis.
Hybrid High Performance Computing (HPC) + Cloud for Scientific Computing
The HPC+Cloud framework has been built to enable on-premise HPC jobs to use resources from cloud computing nodes. As part of designing the software framework, public cloud providers, namely Amazon AWS, Microsoft Azure, and NeCTAR, were benchmarked against one another, and Microsoft Azure was determined to be the most suitable cloud component for the proposed HPC+Cloud software framework. Finally, an HPC+Cloud cluster was built using the HPC+Cloud software framework and validated by conducting HPC processing benchmarks.