123 research outputs found
HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges
High Performance Computing (HPC) clouds are becoming an alternative to
on-premise clusters for executing scientific applications and business
analytics services. Most research efforts in HPC cloud aim to understand the
cost-benefit of moving resource-intensive applications from on-premise
environments to public cloud platforms. Industry trends show hybrid
environments are the natural path to get the best of the on-premise and cloud
resources---steady (and sensitive) workloads can run on on-premise resources
and peak demand can leverage remote resources in a pay-as-you-go manner.
Nevertheless, there are plenty of questions to be answered in HPC cloud, which
range from how to extract the best performance of an unknown underlying
platform to what services are essential to make its usage easier. Moreover, the
discussion on the right pricing and contractual models to fit small and large
users is relevant for the sustainability of HPC clouds. This paper brings a
survey and taxonomy of efforts in HPC cloud and a vision on what we believe is
ahead of us, including a set of research challenges that, once tackled, can
help advance businesses and scientific discoveries. This becomes particularly
relevant due to the fast increasing wave of new HPC applications coming from
big data and artificial intelligence.
Comment: 29 pages, 5 figures. Published in ACM Computing Surveys (CSUR).
Climbing Up Cloud Nine: Performance Enhancement Techniques for Cloud Computing Environments
With the transformation of cloud computing technologies from an attractive trend to a business reality, the need is more pressing than ever for efficient cloud service management tools and techniques. As cloud technologies continue to mature, the service models, resource allocation methodologies, energy efficiency models and general service management schemes are not yet saturated. The burden of making this all work smoothly falls on cloud providers. Certainly, economy-of-scale revenues and the ability to leverage existing infrastructure and a large workforce are positives, but operating is far from straightforward from that point. Performance and service delivery will still depend on the providers' algorithms and policies, which affect all operational areas.
With that in mind, this thesis tackles a set of the more critical challenges faced by cloud providers with the purpose of enhancing cloud service performance and saving on providers’ cost. This is done by exploring innovative resource allocation techniques and developing novel tools and methodologies in the context of cloud resource management, power efficiency, high availability and solution evaluation.
Optimal and suboptimal solutions to the resource allocation problem in cloud data centers from both the computational and the network sides are proposed. Next, a deep dive into the energy efficiency challenge in cloud data centers is presented. Consolidation-based and non-consolidation-based solutions containing a novel dynamic virtual machine idleness prediction technique are proposed and evaluated. An investigation of the problem of simulating cloud environments follows. Available simulation solutions are comprehensively evaluated and a novel design framework for cloud simulators covering multiple variations of the problem is presented. Moreover, the challenge of evaluating cloud resource management solutions performance in terms of high availability is addressed. An extensive framework is introduced to design high availability-aware cloud simulators and a prominent cloud simulator (GreenCloud) is extended to implement it. Finally, real cloud application scenarios evaluation is demonstrated using the new tool.
The primary argument made in this thesis is that the proposed resource allocation and simulation techniques can serve as a basis for effective solutions that mitigate performance and cost challenges faced by cloud providers pertaining to resource utilization, energy efficiency, and client satisfaction.
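The dynamic virtual machine idleness prediction mentioned above can be illustrated with a minimal sketch. This is not the thesis's actual technique; the function names, the EWMA forecast, and the 10% threshold are all illustrative assumptions. The idea is to flag a VM as a consolidation candidate when its forecast utilisation stays low:

```python
# Hypothetical sketch of a VM idleness predictor for consolidation decisions.
# A VM whose forecast near-future CPU utilisation stays below a threshold
# is a candidate for consolidation onto a shared host.

def ewma_forecast(samples, alpha=0.5):
    """Exponentially weighted moving average of recent utilisation samples."""
    forecast = samples[0]
    for s in samples[1:]:
        forecast = alpha * s + (1 - alpha) * forecast
    return forecast

def is_consolidation_candidate(cpu_history, idle_threshold=0.10):
    """Flag a VM as idle (consolidatable) if forecast utilisation is low."""
    return ewma_forecast(cpu_history) < idle_threshold

busy_vm = [0.70, 0.65, 0.80, 0.75]
idle_vm = [0.05, 0.02, 0.04, 0.03]
print(is_consolidation_candidate(busy_vm))  # False
print(is_consolidation_candidate(idle_vm))  # True
```

A real consolidation policy would weigh the migration cost and the risk of the VM waking up, but the sketch captures the prediction step that gates the decision.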
Big Data Application and System Co-optimization in Cloud and HPC Environment
The emergence of big data requires powerful computational resources and memory subsystems that can be scaled efficiently to accommodate its demands. Cloud is a new well-established computing paradigm that can offer customized computing and memory resources to meet the scalable demands of big data applications. In addition, the flexible pay-as-you-go pricing model offers opportunities for using large scale of resources with low cost and no infrastructure maintenance burdens. High performance computing (HPC) on the other hand also has powerful infrastructure that has potential to support big data applications. In this dissertation, we explore the application and system co-optimization opportunities to support big data in both cloud and HPC environments.
Specifically, we explore the unique features of both application and system to seek overlooked optimization opportunities, or to tackle challenges that are difficult to address by looking at the application or system individually. Based on the characteristics of the workloads and their underlying systems, from which we derive optimized deployment and runtime schemes, we divide the workflow into four categories: 1) memory intensive applications; 2) compute intensive applications; 3) both memory and compute intensive applications; 4) I/O intensive applications. When deploying memory intensive big data applications to the public clouds, one important yet challenging problem is selecting a specific instance type whose memory capacity is large enough to prevent out-of-memory errors while the cost is minimized without violating performance requirements. In this dissertation, we propose two techniques for efficient deployment of big data applications with dynamic and intensive memory footprints in the cloud. The first approach builds a performance-cost model that can accurately predict how, and by how much, virtual memory size would slow down the application and, consequently, impact the overall monetary cost. The second approach employs a lightweight memory usage prediction methodology based on dynamic meta-models adjusted by the application's own traits. The key idea is to eliminate periodical checkpointing and migrate the application only when the predicted memory usage exceeds the physical allocation. When deploying compute intensive applications to the clouds, it is critical to make the applications scalable so that they can benefit from the massive cloud resources. In this dissertation, we first use the Kirchhoff law, one of the most widely used physical laws in many engineering principles, as an example workload for our study.
The key challenge of applying the Kirchhoff law to real-world applications at scale lies in the high, if not prohibitive, computational cost of solving a large number of nonlinear equations. In this dissertation, we propose a high-performance deep-learning-based approach for Kirchhoff analysis, namely HDK. HDK employs two techniques to improve performance: (i) early pruning of unqualified input candidates, which simplifies the equations and selects a meaningful input data range; (ii) parallelization of forward labelling, which executes steps of the problem in parallel. When it comes to both memory and compute intensive applications in clouds, we use a blockchain system as a benchmark. Existing blockchain frameworks present a technical barrier for many users who wish to modify or test out new research ideas in blockchains. To make it worse, many advantages of blockchain systems can be demonstrated only at large scales, which are not always available to researchers. In this dissertation, we develop an accurate and efficient emulation system to replay the execution of large-scale blockchain systems on tens of thousands of nodes in the cloud. For I/O intensive applications, we observe one important yet often neglected side effect of lossy scientific data compression. Lossy compression techniques have demonstrated promising results in significantly reducing scientific data size while guaranteeing compression error bounds, but the compressed data size is often highly skewed, which impacts the performance of parallel I/O. Therefore, we believe it is critical to pay more attention to the unbalanced parallel I/O caused by lossy scientific data compression.
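The second deployment approach above, migrating only when predicted memory usage would exceed the physical allocation, can be sketched in miniature. The linear-trend forecast and both function names here are illustrative stand-ins, not the dissertation's actual meta-models:

```python
# Illustrative sketch: migrate an application only when its forecast memory
# usage would exceed the physical allocation, instead of checkpointing
# periodically.

def linear_trend_forecast(usage_mb, horizon=1):
    """Least-squares linear fit over recent samples, extrapolated `horizon` steps ahead."""
    n = len(usage_mb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_mb) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_mb))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    return mean_y + slope * (n - 1 + horizon - mean_x)

def should_migrate(usage_mb, allocation_mb):
    """Trigger migration only when the forecast exceeds the physical allocation."""
    return linear_trend_forecast(usage_mb) > allocation_mb

# A rapidly growing footprint triggers migration; a flat one does not.
print(should_migrate([1000, 1200, 1400, 1600], allocation_mb=1700))  # True
print(should_migrate([1000, 1010, 1005, 1008], allocation_mb=1700))  # False
```

The dissertation's approach adapts the prediction model to the application's own traits; this fixed linear fit merely shows where such a predictor plugs into the migration decision.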
Ad hoc cloud computing
Commercial and private cloud providers offer virtualized resources via a set of co-located
and dedicated hosts that are exclusively reserved for the purpose of offering
a cloud service. While both cloud models appeal to the mass market, there are many
cases where outsourcing to a remote platform or procuring an in-house infrastructure
may not be ideal or even possible.
To offer an attractive alternative, we introduce and develop an ad hoc cloud computing
platform to transform spare resource capacity from an infrastructure owner’s
locally available, but non-exclusive and unreliable infrastructure, into an overlay cloud
platform. The foundation of the ad hoc cloud relies on transferring and instantiating lightweight virtual machines on demand onto near-optimal hosts, while virtual machine checkpoints are distributed in a P2P fashion to other members of the ad hoc cloud. Virtual machines found to be non-operational are restored elsewhere, ensuring the continuity of cloud jobs.
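The restore step described above can be sketched as follows. The host attributes and selection rule are hypothetical, the thesis's actual scheduler would use a richer notion of "near-optimal", but the shape of the decision is the same: when a VM's host fails, pick the best surviving host and resume from a peer-held checkpoint.

```python
# Hypothetical sketch of the restore step in an ad hoc cloud: when a VM's
# host becomes non-operational, pick the most suitable surviving host to
# resume the VM from its P2P-replicated checkpoint.

def pick_restore_host(hosts, vm_mem_mb):
    """Choose the reachable host with the most free memory that fits the VM."""
    candidates = [h for h in hosts if h["reachable"] and h["free_mb"] >= vm_mem_mb]
    if not candidates:
        return None  # no viable host: the cloud job must wait
    return max(candidates, key=lambda h: h["free_mb"])

hosts = [
    {"name": "desk-a", "reachable": True,  "free_mb": 1024},
    {"name": "desk-b", "reachable": False, "free_mb": 8192},  # owner rebooted it
    {"name": "desk-c", "reachable": True,  "free_mb": 4096},
]
chosen = pick_restore_host(hosts, vm_mem_mb=2048)
print(chosen["name"])  # desk-c
```

Because member hosts are non-exclusive and unreliable, the unreachable host with the most memory is skipped; this is exactly the volatility that distinguishes an ad hoc cloud from a dedicated one.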
In this thesis we investigate the feasibility, reliability and performance of ad hoc
cloud computing infrastructures. We firstly show that the combination of both volunteer
computing and virtualization is the backbone of the ad hoc cloud. We outline the
process of virtualizing the volunteer system BOINC to create V-BOINC. V-BOINC distributes virtual machines to volunteer hosts, allowing volunteer applications to be executed in a sandboxed environment, which addresses many of the shortcomings of BOINC; this also provides the basis for an ad hoc cloud computing platform to be developed.
We detail the challenges of transforming V-BOINC into an ad hoc cloud and outline
the transformational process and integrated extensions. These include a BOINC job
submission system, cloud job and virtual machine restoration schedulers and a periodic
P2P checkpoint distribution component. Furthermore, as current monitoring tools are
unable to cope with the dynamic nature of ad hoc clouds, a dynamic infrastructure
monitoring and management tool called the Cloudlet Control Monitoring System is
developed and presented.
We evaluate each of our individual contributions as well as the reliability, performance
and overheads associated with an ad hoc cloud deployed on a realistically
simulated unreliable infrastructure. We conclude that the ad hoc cloud is not only a
feasible concept but also a viable computational alternative that offers high levels of
reliability and can at least offer reasonable performance, which at times may exceed
the performance of a commercial cloud infrastructure.
Performance modelling and optimization for video-analytic algorithms in a cloud-like environment using machine learning
CCTV cameras produce a large amount of video surveillance data per day, and analysing it requires significant computing resources that often need to be scalable. The emergence of the Hadoop distributed processing framework has had a significant impact on various data intensive applications, as distributed processing increases the processing capability of the applications it serves. Hadoop is an open source implementation of the MapReduce programming model. It automates the creation of tasks for each function, distributes data, parallelizes execution and handles machine failures, relieving users of the complexity of managing the underlying processing so that they can focus solely on building their application. It is noted that in a practical deployment the challenge of a Hadoop based architecture is that it requires several scalable machines for effective processing, which in turn adds hardware investment cost to the infrastructure. Although using a cloud infrastructure offers scalable and elastic utilization of resources, where users can scale the number of Virtual Machines (VMs) up or down as required, a user such as a CCTV system operator intending to use a public cloud would aspire to know what cloud resources (i.e. number of VMs) need to be deployed
so that the processing can be done in the fastest (or within a known time
constraint) and the most cost effective manner. Often such resources will also
have to satisfy practical, procedural and legal requirements. The capability to
model a distributed processing architecture where the resource requirements can
be effectively and optimally predicted will thus be a useful tool, if available. In
literature there is no clear and comprehensive modelling framework that provides
proactive resource allocation mechanisms to satisfy a user's target requirements,
especially for a processing intensive application such as video analytics.
In this thesis, with the hope of closing the above research gap, novel research
is first initiated by understanding the current legal practices and requirements of
implementing video surveillance system within a distributed processing and data
storage environment, since the legal validity of data gathered or processed within
such a system is vital for a distributed system's applicability in such domains.
Subsequently the thesis presents a comprehensive framework for the performance
modelling and optimization of resource allocation in deploying a scalable distributed
video analytic application in a Hadoop based framework, running on virtualized
cluster of machines.
The proposed modelling framework investigates the use of several machine learning algorithms, such as decision trees (M5P, RepTree), Linear Regression, Multi-Layer Perceptron (MLP) and the Ensemble Classifier Bagging model, to model and predict the execution time of video analytic jobs, based on infrastructure level as well as job level parameters. Further, in order to allocate resources under constraints so as to obtain optimal performance in terms of job execution time, we propose a Genetic Algorithm (GA) based optimization technique.
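The GA-based search can be illustrated with a toy sketch. The cost function below is a made-up stand-in: the real framework would plug in the machine-learned execution-time model (decision trees, MLP, etc.) in its place, and the decision variables and GA parameters here are purely illustrative.

```python
import random

# Toy sketch of GA-based optimization: minimise a *stand-in* predicted job
# execution time over (number of VMs, map slots per VM). The thesis's real
# framework substitutes the learned execution-time model for predicted_time.

def predicted_time(vms, slots):
    # Hypothetical cost surface: parallelism helps, but per-VM and per-slot
    # coordination overheads grow with cluster size.
    return 3600.0 / (vms * slots) + 2.0 * vms + 0.5 * slots

def ga_minimise(generations=60, pop_size=20, max_vms=16, max_slots=8, seed=1):
    rng = random.Random(seed)
    # Each genome is a (vms, slots) pair within the resource constraints.
    pop = [(rng.randint(1, max_vms), rng.randint(1, max_slots))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: predicted_time(*g))
        survivors = pop[: pop_size // 2]            # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a[0], b[1])                    # one-point crossover
            if rng.random() < 0.3:                  # mutation, clamped to constraints
                child = (min(max_vms, max(1, child[0] + rng.choice([-1, 1]))),
                         min(max_slots, max(1, child[1] + rng.choice([-1, 1]))))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda g: predicted_time(*g))

best = ga_minimise()
print(best, round(predicted_time(*best), 1))
```

Under this toy surface the optimum sits near (15, 8) VMs and slots; the point of the sketch is only that the GA searches the constrained configuration space using the predictor as its fitness function.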
Experimental results are provided to demonstrate the proposed framework's capability to successfully predict the job execution time of a given video analytic task based on infrastructure and input data related parameters, and its ability to determine the minimum job execution time given constraints on these parameters.
Given the above, the thesis contributes to the state of the art in distributed video analytics: design, implementation, performance analysis and optimisation.
Partitioning workflow applications over federated clouds to meet non-functional requirements
PhD Thesis
With cloud computing, users can acquire computer resources when they need them on a pay-as-you-go business model. Because of this, many applications are now being deployed in the cloud, and there are many different cloud providers worldwide. Importantly, all these various infrastructure providers offer services with different levels of quality. For example, cloud data centres are governed by the privacy and security policies of the country where the centre is located, while many organisations have created their own internal "private cloud" to meet security needs.
With all these varieties and uncertainties, application developers who decide to host their system in the cloud face the issue of which cloud to choose to get the best operational conditions in terms of price, reliability and security. And the decision becomes even more complicated if their application consists of a number of distributed components, each with slightly different requirements.
Rather than trying to identify the single best cloud for an application, this thesis considers an alternative approach, that is, combining different clouds to meet users' non-functional requirements. Cloud federation offers the ability to distribute a single application across two or more clouds, so that the application can benefit from the advantages of each one of them. The key challenge for this approach is how to find the distribution (or deployment) of application components which can yield the greatest benefits. In this thesis, we tackle this problem and propose a set of algorithms, and a framework, to partition a workflow-based application over federated clouds in order to exploit the strengths of each cloud. The specific goal is to split a distributed application structured as a workflow such that the security and reliability requirements of each component are met, whilst the overall cost of execution is minimised.
To achieve this, we propose and evaluate a cloud broker for partitioning a workflow application over federated clouds. The broker integrates with the e-Science Central cloud platform to automatically deploy a workflow over public and private clouds. We developed a deployment planning algorithm to partition a large workflow application across federated clouds so as to meet security requirements and minimise the monetary cost.
A more generic framework is then proposed to model, quantify and guide the partitioning and deployment of workflows over federated clouds. This framework considers the situation where changes in cloud availability (including cloud failure) arise during workflow execution.
SpotLight: An Information Service for the Cloud
Infrastructure-as-a-Service cloud platforms are incredibly complex: they rent hundreds of different types of servers across multiple geographical regions under a wide range of contract types that offer varying tradeoffs between risk and cost. Unfortunately, the internal dynamics of cloud platforms are opaque in several dimensions. For example, while the risk of servers not being available when requested is critical in optimizing these risk-cost tradeoffs, it is not typically made visible to users. Thus, inspired by prior work on Internet bandwidth probing, we propose actively probing cloud platforms to explicitly learn such information, where each "probe" is a request for a particular type of server. We model the relationships between different contract types to develop a market-based probing policy, which leverages the insight that real-time prices in cloud spot markets loosely correlate with the supply (and availability) of fixed-price on-demand servers. That is, the higher the spot price for a server, the more likely the corresponding fixed-price on-demand server is not available. We incorporate market-based probing into SpotLight, an information service that enables cloud applications to query this and other data, and use it to monitor the availability of more than 4500 distinct server types across 9 geographical regions in Amazon's Elastic Compute Cloud over a 3 month period. We analyze this data to reveal interesting observations about the platform's internal dynamics. We then show how SpotLight enables two recently proposed derivative cloud services to select a better mix of servers to host applications, which improves their availability from 70-90% to near 100% in practice.
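The market-based probing insight, that elevated spot prices loosely signal scarce on-demand capacity, can be sketched as a simple prioritisation rule. This is not SpotLight's actual policy; the elevation metric, the instance-type names used as data, and the budget mechanism are all illustrative assumptions.

```python
# Illustrative sketch of a market-based probing policy: spend a limited
# probe budget on the server types whose current spot price is most
# elevated relative to recent history, since elevated prices loosely
# signal that on-demand capacity may be scarce.

def price_elevation(history):
    """Ratio of the latest spot price to the trailing average of earlier prices."""
    baseline = sum(history[:-1]) / len(history[:-1])
    return history[-1] / baseline

def probe_schedule(spot_histories, budget):
    """Rank server types by price elevation and probe the top `budget` of them."""
    ranked = sorted(spot_histories,
                    key=lambda t: price_elevation(spot_histories[t]),
                    reverse=True)
    return ranked[:budget]

spot_histories = {
    "m4.large":   [0.030, 0.031, 0.030, 0.095],  # price spike: probe this one first
    "c4.xlarge":  [0.060, 0.061, 0.060, 0.062],
    "r3.2xlarge": [0.120, 0.118, 0.121, 0.119],
}
print(probe_schedule(spot_histories, budget=1))  # ['m4.large']
```

Ranking by relative elevation rather than absolute price matters: the most expensive type here (r3.2xlarge) is stable and therefore the least urgent to probe.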