HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges
High Performance Computing (HPC) clouds are becoming an alternative to
on-premise clusters for executing scientific applications and business
analytics services. Most research efforts in HPC cloud aim to understand the
cost-benefit of moving resource-intensive applications from on-premise
environments to public cloud platforms. Industry trends show hybrid
environments are the natural path to get the best of the on-premise and cloud
resources---steady (and sensitive) workloads can run on on-premise resources
and peak demand can leverage remote resources in a pay-as-you-go manner.
Nevertheless, there are plenty of questions to be answered in HPC cloud, which
range from how to extract the best performance from an unknown underlying
platform to which services are essential to make its usage easier. Moreover, the
discussion on the right pricing and contractual models to fit small and large
users is relevant for the sustainability of HPC clouds. This paper brings a
survey and taxonomy of efforts in HPC cloud and a vision on what we believe is
ahead of us, including a set of research challenges that, once tackled, can
help advance businesses and scientific discoveries. This becomes particularly
relevant given the fast-growing wave of new HPC applications coming from
big data and artificial intelligence.
Comment: 29 pages, 5 figures, published in ACM Computing Surveys (CSUR).
Autonomous management of cost, performance, and resource uncertainty for migration of applications to infrastructure-as-a-service (IaaS) clouds
Infrastructure-as-a-Service (IaaS) clouds abstract physical hardware to provide computing resources on demand as a software service. This abstraction leads to the simplistic view that computing resources are homogeneous and that infinite scaling potential exists to easily resolve all performance challenges. In practice, however, adoption of cloud computing presents many resource management challenges, forcing practitioners to balance cost and performance tradeoffs to successfully migrate applications. These challenges can be broken down into three primary concerns: determining what, where, and when infrastructure should be provisioned. In this dissertation we address these challenges, including: (1) performance variance from resource heterogeneity, virtualization overhead, and the plethora of vaguely defined resource types; (2) virtual machine (VM) placement, component composition, service isolation, provisioning variation, and resource contention under multitenancy; and (3) dynamic scaling and resource elasticity to alleviate performance bottlenecks. These resource management challenges are addressed through the development and evaluation of autonomous algorithms and methodologies that result in demonstrably better performance and lower monetary costs for application deployments to both public and private IaaS clouds. This dissertation makes three primary contributions to advance cloud infrastructure management for application hosting. First, it includes the design of resource utilization models, based on step-wise multiple linear regression and artificial neural networks, that support prediction of better-performing component compositions. The total number of possible compositions is governed by Bell's number, which results in a combinatorially explosive search space. Second, it includes algorithms that improve VM placements to mitigate resource heterogeneity and contention using a load-aware VM placement scheduler, and that autonomously detect under-performing VMs to spur replacement. Third, it describes a workload cost prediction methodology that harnesses regression models and heuristics to support the determination of infrastructure alternatives that reduce hosting costs. Our methodology achieves infrastructure predictions with an average mean absolute error of only 0.3125 VMs across multiple workloads.
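To give a feel for the combinatorial explosion mentioned above: the Bell number B(n) counts the ways to partition n application components into non-empty groups, i.e. the candidate component compositions. Below is a minimal sketch using the standard Bell-triangle recurrence; the function is illustrative and not taken from the dissertation.

```python
def bell_numbers(n: int) -> list[int]:
    """Return [B(0), B(1), ..., B(n)] via the Bell triangle."""
    row = [1]                  # triangle row for B(0)
    bells = [1]
    for _ in range(n):
        new_row = [row[-1]]    # each row starts with the previous row's last entry
        for value in row:
            new_row.append(new_row[-1] + value)  # add the neighbour above
        row = new_row
        bells.append(row[0])
    return bells

# B(10) = 115975: even ten components admit over 10^5 candidate compositions.
print(bell_numbers(10))
```

Even at this small scale exhaustive evaluation is impractical, which motivates the predictive models over brute-force search.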
Elasticity Measurement in CaaS Environments - Extending the Existing BUNGEE Elasticity Benchmark to AWS's Elastic Container Service
Rapid elasticity and automatic scaling are core concepts of most current cloud computing systems. Elasticity describes how well and how fast cloud systems adapt to increases and decreases in workload. In parallel, software architectures are moving towards containerised microservices running on systems managed by container orchestration platforms. Cloud users who employ such container-based systems may want to compare the elasticity of different systems or system settings to ensure rapid elasticity and maintain service level objectives while avoiding over-provisioning. Previous research has established a variety of metrics to measure elasticity. Some existing benchmark tools are designed to measure elasticity in “Infrastructure as a Service” (IaaS) systems, but no research exists to date on measuring elasticity in systems based on containers and container orchestration. In this dissertation, an existing benchmark designed for IaaS systems, the BUNGEE benchmark developed at the University of Würzburg, was extended to be applicable to Amazon’s Elastic Container Service, a container-based cloud system. An experiment was conducted to test whether the extension of the BUNGEE benchmark described in this dissertation delivers reproducible results and is therefore valid. For validation, the crucial phase of the benchmark, the system analysis phase, was run 32 times, and statistical tests were used to establish whether the results vary by more than an acceptable level. Results indicate that there is some variability, but it does not exceed the acceptable level and is consistent with the performance variability encountered by other researchers in Amazon’s cloud systems. It is therefore concluded that the BUNGEE benchmark is likely applicable to container-based cloud systems. However, some parameters and configuration settings specific to container orchestration systems were identified that could impede the reproducibility of results and should be considered in future experiments.
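Elasticity metrics of the kind BUNGEE builds on typically compare the demanded resource curve against the supplied one over time. The following is a hedged sketch of two common measures, average under- and over-provisioning per time step; the function name and the toy trace are illustrative assumptions, not BUNGEE's actual implementation.

```python
from typing import Sequence

def provisioning_accuracy(demand: Sequence[int],
                          supply: Sequence[int]) -> tuple[float, float]:
    """Average under- and over-provisioned resource units per time step.

    demand[t] is the number of resource units (e.g. containers) the
    workload needs at step t; supply[t] is what the platform provided.
    Lower values indicate better elasticity.
    """
    assert len(demand) == len(supply)
    under = sum(max(d - s, 0) for d, s in zip(demand, supply))
    over = sum(max(s - d, 0) for d, s in zip(demand, supply))
    return under / len(demand), over / len(demand)

# Toy trace: the platform lags one step behind the demand curve.
demand = [1, 2, 3, 4, 4, 3, 2, 1]
supply = [1, 1, 2, 3, 4, 4, 3, 2]
print(provisioning_accuracy(demand, supply))  # (0.375, 0.375)
```

Under-provisioning is usually treated as the more harmful of the two, since it directly threatens service level objectives, while over-provisioning only wastes money.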
Allocation of Virtual Machines in Cloud Data Centers - A Survey of Problem Models and Optimization Algorithms
Data centers in public, private, and hybrid cloud settings make it possible to provision virtual machines
(VMs) with unprecedented flexibility. However, purchasing, operating, and maintaining the underlying physical
resources incurs significant monetary costs and also environmental impact. Therefore, cloud providers must
optimize the usage of physical resources by a careful allocation of VMs to hosts, continuously balancing between
the conflicting requirements on performance and operational costs. In recent years, several algorithms have been
proposed for this important optimization problem. Unfortunately, the proposed approaches are hardly comparable
because of subtle differences in the problem models used. This paper surveys the problem formulations and
optimization algorithms in use, highlighting their strengths and limitations, and points out areas that
need further research.
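To make the underlying optimization concrete: in its simplest single-dimensional form, VM-to-host allocation is a bin-packing problem. The sketch below shows first-fit decreasing, a common baseline heuristic rather than any specific algorithm from the surveyed papers.

```python
def first_fit_decreasing(vm_demands: list[int],
                         host_capacity: int) -> list[list[int]]:
    """Pack VM resource demands onto identical hosts, largest VMs first."""
    free: list[int] = []              # remaining capacity per opened host
    placement: list[list[int]] = []   # demands assigned to each host
    for demand in sorted(vm_demands, reverse=True):
        for i, capacity in enumerate(free):
            if demand <= capacity:    # first host with enough room wins
                free[i] -= demand
                placement[i].append(demand)
                break
        else:                         # no host fits: open a new one
            free.append(host_capacity - demand)
            placement.append([demand])
    return placement

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], host_capacity=10))
# [[8, 2], [4, 4, 1, 1]]: six VMs packed onto two hosts
```

The approaches the survey compares extend this picture along several axes, such as multiple resource dimensions, migration costs, and the conflicting performance and energy objectives mentioned above; subtle differences in exactly these modelling choices are what make them hard to compare.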
Notes on Cloud computing principles
This letter reviews fundamental distributed systems and economic principles
underlying Cloud computing. These principles are frequently applied in their
respective fields, but their inter-dependencies are often neglected. Given that
Cloud computing is first and foremost a new business model, a new model for
selling computational resources, these concepts are best understood when
treated in unison. Here, we review some of the most important concepts and how
they relate to each other.
Resource management in a containerized cloud: status and challenges
Cloud computing relies heavily on virtualization: virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is also no longer limited to centrally hosted data center infrastructure. New deployment models have matured, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art in resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing model. Furthermore, we identify several challenges and possible opportunities for future research.
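As one illustration of adapting placement logic to containers on heterogeneous, resource-constrained nodes (as in the fog and edge settings discussed above), here is a hedged sketch of a load-aware placement heuristic; the node names, fields, and scoring rule are assumptions made for illustration, not a strategy taken from the survey.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_free: float   # unreserved CPU cores
    mem_free: float   # unreserved memory, GiB

def place_container(nodes: list[Node], cpu: float, mem: float) -> Node | None:
    """Pick the feasible node that keeps its scarcer resource least constrained."""
    feasible = [n for n in nodes if n.cpu_free >= cpu and n.mem_free >= mem]
    if not feasible:
        return None   # a real orchestrator would queue the request or scale out
    best = max(feasible, key=lambda n: min(n.cpu_free - cpu, n.mem_free - mem))
    best.cpu_free -= cpu
    best.mem_free -= mem
    return best

nodes = [Node("edge-1", cpu_free=1.0, mem_free=2.0),
         Node("core-1", cpu_free=8.0, mem_free=16.0)]
print(place_container(nodes, cpu=0.5, mem=1.0).name)  # core-1
```

VM schedulers embody the same idea; the open question the survey raises is how such strategies should change when placements are far cheaper to create and destroy and nodes are far more constrained.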
Quantifying cloud performance and dependability: Taxonomy, metric design, and emerging challenges
In only a decade, cloud computing has grown from a pursuit of service-driven information and communication technology (ICT) into a significant fraction of the ICT market. Responding to the growth of the market, many alternative cloud services and their underlying systems are currently vying for the attention of cloud users and providers. To make informed choices between competing cloud service providers, permit cost-benefit analysis of cloud-based systems, and enable system DevOps to evaluate and tune the performance of these complex ecosystems, appropriate performance metrics, benchmarks, tools, and methodologies are necessary. This requires re-examining old system properties and considering new ones, possibly leading to the redesign of classic benchmarking metrics such as expressing performance as throughput and latency (response time). In this work, we address these requirements by focusing on four system properties: (i) elasticity of the cloud service, to accommodate large variations in the amount of service requested; (ii) performance isolation between the tenants of shared cloud systems, and the resulting performance variability; (iii) availability of cloud services and systems; and (iv) the operational risk of running a production system in a cloud environment. Focusing on key metrics for each of these properties, we review the state of the art, then select or propose new metrics together with measurement approaches. We see the presented metrics as a foundation for upcoming, future industry-standard cloud benchmarks.
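As a small example of the metric design the paper examines, availability is commonly summarized from observed uptime, and performance variability from the dispersion of repeated measurements. The sketch below uses those common textbook definitions; it is not the paper's exact metric set.

```python
import statistics

def availability(uptime_s: float, downtime_s: float) -> float:
    """Fraction of time the service was usable (often quoted as 'nines')."""
    return uptime_s / (uptime_s + downtime_s)

def variability(latencies_ms: list[float]) -> float:
    """Coefficient of variation: standard deviation relative to the mean.

    A unitless way to compare performance isolation across tenants
    or providers.
    """
    return statistics.stdev(latencies_ms) / statistics.mean(latencies_ms)

print(f"{availability(uptime_s=86100, downtime_s=300):.4f}")    # 0.9965
print(f"{variability([102.0, 98.0, 250.0, 101.0, 99.0]):.2f}")  # 0.52
```

Metrics for elasticity and operational risk need richer inputs, such as demand traces and cost models, which is part of why the paper argues for purpose-built cloud benchmarks rather than repurposed throughput and latency measures.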