A study on performance measures for auto-scaling CPU-intensive containerized applications
Autoscaling of containers can leverage performance measures from the different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure for triggering auto-scaling actions that aim to guarantee QoS constraints. First, the correlation between absolute and relative usage measures, and how resource allocation decisions can be influenced by them, is analyzed under different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share each container receives of the resources in use. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
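The proportional rule behind Kubernetes' Horizontal Pod Autoscaler, on which the paper's variant builds, can be sketched as follows; the function name and the example utilization figures are illustrative, not taken from the paper:

```python
import math

def desired_replicas(current_replicas: int, current_usage: float,
                     target_usage: float) -> int:
    """Kubernetes-style proportional scaling rule:
    desired = ceil(current_replicas * current_usage / target_usage)."""
    return math.ceil(current_replicas * (current_usage / target_usage))

# Hypothetical scenario: 4 replicas, target utilization 60%.
# A relative measure reports the containers 90% loaded -> scale out to 6.
print(desired_replicas(4, 0.90, 0.60))  # -> 6
# An absolute measure of the same host shows only 30% CPU used -> scale in to 2.
print(desired_replicas(4, 0.30, 0.60))  # -> 2
```

The two calls show how feeding the same rule an absolute rather than a relative measure can flip the scaling decision, which is precisely the selection problem the abstract describes.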
GMonE: a complete approach to cloud monitoring
The inherent complexity of modern cloud infrastructures has created the need for innovative monitoring approaches, as state-of-the-art solutions designed for other large-scale environments do not address cloud-specific features. Although cloud monitoring is nowadays an active research field, a comprehensive study covering all of its aspects has not yet been presented. This paper provides a deep insight into cloud monitoring. It proposes a unified cloud monitoring taxonomy and, based on it, defines a layered cloud monitoring architecture. To illustrate the architecture, we have implemented GMonE, a general-purpose cloud monitoring tool that covers all aspects of cloud monitoring by specifically addressing the needs of modern cloud infrastructures. Furthermore, we have evaluated the performance, scalability, and overhead of GMonE with the Yahoo Cloud Serving Benchmark (YCSB), using the OpenNebula cloud middleware on the Grid'5000 experimental testbed. The results of this evaluation demonstrate the benefits of our approach, which surpasses the monitoring performance and capabilities of state-of-the-art alternatives such as those found in Amazon EC2 and OpenNebula.
Design and Implementation of Fragmented Clouds for Evaluation of Distributed Databases
In this paper, we present a Fragmented Hybrid Cloud (FHC) that provides a unified view of multiple geographically distributed private cloud datacenters. FHC leverages a fragmented usage model in which outsourcing is bi-directional across private clouds that can be hosted by static and mobile entities. The mobility of private cloud nodes has an important impact on FHC performance: latency and network throughput are inversely proportional to the time-varying distances among nodes. Mobility also results in intermittent interruptions of the computing nodes and network links of the FHC infrastructure. To fully account for mobility and its consequences, we implemented a layered FHC that leverages Linux utilities and bash-shell programming. We also evaluated the impact of node mobility on the performance of distributed databases, considering time-varying latency and bandwidth, downsizing and upsizing of cluster nodes, and network accessibility. The findings from our extensive experiments provide deep insights into the performance of well-known big data databases, such as Cassandra, MongoDB, Redis, and MySQL, when deployed on an FHC.
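Time-varying latency and bandwidth of the kind described above are commonly emulated with the Linux traffic-control utilities the authors mention; a minimal configuration sketch (the interface name and the delay/rate values are illustrative, and the commands require root):

```shell
# Emulate a nearby node: add 100 ms delay with 20 ms jitter on eth0.
tc qdisc add dev eth0 root netem delay 100ms 20ms

# Emulate the node moving farther away: raise the delay.
tc qdisc change dev eth0 root netem delay 250ms 50ms

# Additionally constrain bandwidth with netem's rate option.
tc qdisc change dev eth0 root netem delay 250ms 50ms rate 1mbit

# Remove the emulation when done.
tc qdisc del dev eth0 root
```

Replaying a schedule of such `tc qdisc change` commands from a bash script is one plausible way to reproduce the time-varying distances among mobile nodes; whether the paper uses exactly this mechanism is an assumption here.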
Big Data in the Cloud: A Survey
Big Data has become a hot topic across several business areas, requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities, and it enables corporations to consume resources in a pay-as-you-go model, making clouds an optimal environment for storing and processing huge quantities of data. By using virtualized resources, a cloud can scale easily, be highly available, and provide massive storage capacity and processing power. This paper surveys existing database models for storing and processing Big Data within a cloud environment. In particular, we detail the following NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments Apache Spark, HaLoop, and Twister, as well as alternatives such as Apache Giraph, GraphLab, Pregel, and MapD (a novel platform that uses GPU processing to accelerate Big Data processing), are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within cloud environments, and we discuss the challenges that must be addressed in the future.
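The MapReduce programming model the survey analyzes can be illustrated with the canonical word-count example. This single-process sketch only mimics the map and reduce phases; a real framework would shuffle the intermediate pairs to distributed reducers by key:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word occurrence."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: sum the values of all pairs that share a key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

counts = reduce_phase(map_phase(["big data", "Big Data in the cloud"]))
print(counts)  # {'big': 2, 'data': 2, 'in': 1, 'the': 1, 'cloud': 1}
```

Systems such as Spark and HaLoop generalize this two-phase pattern, notably by keeping intermediate results in memory across iterations instead of rereading them from disk between jobs.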
Data modeling with NoSQL: how, when and why
Integrated master's thesis. Engenharia Informática e Computação. Faculdade de Engenharia, Universidade do Porto. 201