Design and Implementation of Fragmented Clouds for Evaluation of Distributed Databases
In this paper, we present a Fragmented Hybrid Cloud (FHC) that provides a
unified view of multiple geographically distributed private cloud datacenters.
FHC leverages a fragmented usage model in which outsourcing is bi-directional
across private clouds that can be hosted by static and mobile entities. The
mobility aspect of private cloud nodes has an important impact on FHC
performance in terms of latency and network throughput, which are inversely
proportional to the time-varying distances among nodes. Mobility also
results in intermittent interruptions of the computing nodes and network links
of the FHC infrastructure. To fully consider mobility and its consequences, we
implemented a layered FHC that leverages Linux utilities and bash-shell
programming. We also evaluated the impact of node mobility on the performance
of distributed databases as a result of time-varying latency and bandwidth,
downsizing and upsizing of cluster nodes, and network accessibility.
The findings from our extensive experiments provide deep insights into the
performance of well-known big data databases, such as Cassandra, MongoDB,
Redis, and MySQL, when deployed on an FHC.
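Since the FHC layer is built from standard Linux utilities, one way such time-varying latency and bandwidth could be reproduced on a node is with the traffic-control tool tc; the sketch below is only an illustration under an assumed interface name and an assumed delay schedule, not the authors' implementation.

```python
# Illustrative sketch: emulate time-varying latency/bandwidth between FHC nodes
# with the Linux traffic-control utility (tc netem). The interface name and the
# delay schedule are assumptions for demonstration; requires root privileges.
import subprocess
import time

IFACE = "eth0"  # assumed network interface of a private-cloud node

def set_link(delay_ms: int, rate_mbit: int) -> None:
    """Apply an artificial delay and bandwidth cap on outgoing traffic."""
    subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
                    "delay", f"{delay_ms}ms", "rate", f"{rate_mbit}mbit"],
                   check=True)

def clear_link() -> None:
    """Remove the emulated impairments."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=False)

# Example schedule: latency grows and bandwidth shrinks as mobile nodes move apart.
schedule = [(10, 100), (50, 50), (120, 20), (250, 5)]
for delay, rate in schedule:
    set_link(delay, rate)
    time.sleep(60)  # hold each condition while the database benchmark runs
clear_link()
```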
Performance evaluation of various deployment scenarios of the 3-replicated Cassandra NoSQL cluster on AWS
The concept of distributed replicated NoSQL data storages, such as Cassandra, HBase, and MongoDB, has been proposed to effectively manage Big Data sets whose volume, velocity, and variability are difficult to handle with traditional Relational Database Management Systems. Trade-offs between consistency, availability, partition tolerance, and latency are intrinsic to such systems. Although relations between these properties have been identified by the well-known CAP and PACELC theorems in qualitative terms, it is still necessary to quantify how different consistency settings, deployment patterns, and other properties affect system performance. This experience report analyses the performance of a Cassandra NoSQL database cluster and studies the trade-off between data consistency guarantees and performance in distributed data storages. The primary focus is on investigating the quantitative interplay between Cassandra response time, throughput, and its consistency settings, considering different single- and multi-region deployment scenarios. The study uses the YCSB benchmarking framework and reports the results of read and write performance tests of a three-replicated Cassandra cluster deployed on Amazon AWS. In this paper, we also put forward a notation which can be used to formally describe the distributed deployment of a Cassandra cluster and its nodes relative to each other and to a client application. We present quantitative results showing how different consistency settings and deployment patterns affect Cassandra performance under different workloads. In particular, our experiments show that strong consistency costs up to 22% of performance in the case of a centralized Cassandra cluster deployment and can cause a 600% increase in the latency of read/write requests if Cassandra replicas and their clients are globally distributed across different AWS Regions.
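To make the consistency settings varied in these experiments concrete, the following sketch issues writes and reads at different consistency levels with the DataStax Python driver; the contact points, keyspace, and table are hypothetical, and this is not the benchmarking code used in the study (the study used YCSB).

```python
# Illustrative sketch (not the study's benchmark code): issuing reads and writes
# against a replicated Cassandra cluster with different consistency levels.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Hypothetical contact points, e.g. nodes in two different AWS Regions.
cluster = Cluster(["10.0.0.1", "10.0.1.1"])
session = cluster.connect("demo_keyspace")  # hypothetical keyspace and table

# Weak (fast) write: acknowledged by a single replica.
write_one = SimpleStatement(
    "INSERT INTO kv (k, v) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.ONE)
session.execute(write_one, ("user42", "payload"))

# Strong read: a majority of replicas must answer (QUORUM of RF=3 is 2 nodes);
# this is the kind of setting whose latency cost the experiments quantify.
read_quorum = SimpleStatement(
    "SELECT v FROM kv WHERE k=%s",
    consistency_level=ConsistencyLevel.QUORUM)
row = session.execute(read_quorum, ("user42",)).one()
cluster.shutdown()
```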
Exploring Timeout As A Performance And Availability Factor Of Distributed Replicated Database Systems
The concept of distributed replicated data storages such as Cassandra, HBase, and MongoDB has been proposed to effectively manage Big Data sets whose volume, velocity, and variability are difficult to handle with traditional Relational Database Management Systems. Trade-offs between consistency, availability, partition tolerance, and latency are intrinsic to such systems. Although relations between these properties have been identified by the well-known CAP theorem in qualitative terms, it is still necessary to quantify how different consistency and timeout settings affect system latency. The paper reports the results of Cassandra's performance evaluation using the YCSB benchmark and experimentally demonstrates how read latency depends on the consistency settings and the current database workload. These results clearly show that stronger data consistency increases system latency, which is in line with the qualitative implication of the CAP theorem. Moreover, Cassandra latency and its variation depend considerably on the system workload. The distributed nature of such a system does not always guarantee that the client receives a response from the database within a finite time. When this happens, it causes so-called timing failures, where the response is received too late or not at all. In the paper, we also consider the role of the application timeout, which is a fundamental part of all distributed fault-tolerance mechanisms working over the Internet and is used here as the main error-detection mechanism. The role of the application timeout as the main determinant of the interplay between system availability and responsiveness is also examined. We show quantitatively how different timeout settings affect system availability and the average servicing and waiting time. Although many modern distributed systems, including Cassandra, use static timeouts, the most promising approach is to set timeouts dynamically at run time in order to balance performance and availability and to improve the efficiency of fault-tolerance mechanisms.
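The dynamic-timeout direction suggested at the end of the abstract could take many forms; below is a minimal sketch that adapts the client-side request timeout to a high percentile of recently observed latencies. The percentile rule and safety margin are assumptions for illustration, not the paper's mechanism.

```python
# Illustrative sketch of a dynamic client-side timeout: the timeout tracks a
# high percentile of recently observed latencies instead of being fixed.
# The p99 rule and the 2x safety margin are assumptions, not the paper's method.
import time
from collections import deque
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("demo_keyspace")  # hypothetical keyspace

recent = deque(maxlen=200)   # sliding window of observed latencies (seconds)
DEFAULT_TIMEOUT = 2.0        # fallback until enough samples are collected

def adaptive_timeout() -> float:
    if len(recent) < 20:
        return DEFAULT_TIMEOUT
    ordered = sorted(recent)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]
    return max(0.05, 2.0 * p99)   # 2x p99 as a safety margin (assumption)

for key in range(1000):
    t0 = time.monotonic()
    try:
        session.execute("SELECT v FROM kv WHERE k=%s", (str(key),),
                        timeout=adaptive_timeout())
        recent.append(time.monotonic() - t0)
    except Exception:
        pass  # a timing failure: the response arrived too late or not at all
cluster.shutdown()
```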
Evaluating Riak Key Value Cluster for Big Data
NoSQL databases have become an important alternative to traditional relational databases. These databases are designed to manage large, continuously and variably changing data sets. They are widely used in cloud databases and distributed systems. With NoSQL databases, static schemas and many other restrictions are avoided. In the era of big data, such databases provide scalable, highly available solutions. Their key-value model allows fast retrieval of data and the ability to store large volumes of it. There are many kinds of NoSQL databases with varying performance, so comparing these different types of databases in terms of performance, and verifying the relationship between performance and database type, has become very important. In this paper, we test and evaluate the Riak key-value database for big data clusters using benchmark tools, where huge amounts of data are stored and retrieved in different sizes in a distributed database environment. Execution times of the NoSQL database over different types of workloads and different sizes of data are compared. The results show that the Riak key-value store is stable in execution time for both small and large amounts of data, and that throughput increases as the number of threads increases.
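As a rough illustration of this kind of workload, the sketch below times simple put/get round-trips against a Riak cluster with the official Python client; the host, bucket name, and data-set sizes are assumptions, and the paper's actual measurements were produced with dedicated benchmark tools.

```python
# Illustrative sketch (not the study's benchmark harness): timing put/get
# round-trips against a Riak key-value cluster with the official Python client.
# Host, port, bucket name, and payload sizes are assumptions.
import time
import riak

client = riak.RiakClient(protocol="pbc", host="127.0.0.1", pb_port=8087)
bucket = client.bucket("bench")  # hypothetical bucket

def run(num_keys: int, value_bytes: int) -> float:
    """Store and read back num_keys objects; return elapsed seconds."""
    payload = "x" * value_bytes
    start = time.monotonic()
    for i in range(num_keys):
        bucket.new(f"key-{i}", data=payload).store()
    for i in range(num_keys):
        bucket.get(f"key-{i}")
    return time.monotonic() - start

# Compare execution time for a small and a larger data set, in the spirit of
# the paper's observation that Riak's execution time stays stable with size.
for n, size in [(1_000, 100), (10_000, 1_000)]:
    print(n, size, run(n, size))
```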
A Queueing Network Model for Performance Prediction of Apache Cassandra
NoSQL databases such as Apache Cassandra have attracted large interest in recent years thanks to their high availability, scalability, flexibility, and low latency. Still, there is limited research on performance engineering methods for NoSQL databases, which are nonetheless needed since these systems are highly distributed and can thus incur significant cost/performance trade-offs. To address this need, we propose a novel queueing network model for the Cassandra NoSQL database aimed at supporting resource provisioning. The model explicitly defines key configuration parameters of Cassandra, such as consistency levels and replication factor, allowing engineers to compare alternative system setups. Experimental results based on the YCSB benchmark indicate that, with a small amount of training for the estimation of its input parameters, the proposed model achieves good predictive accuracy across different loads and consistency levels. The average performance errors of the model compared to the real results are between 6% and 10%. We also demonstrate the applicability of our model to other NoSQL databases and discuss other possible uses of it.
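To give a flavour of the modelling approach, the sketch below evaluates the mean response time of a single M/M/1 queue, the basic building block of queueing network models; the paper's model is a full network that additionally encodes replication factor and consistency levels, so this is only an illustrative simplification.

```python
# Illustrative simplification only: the paper proposes a full queueing network
# for Cassandra; shown here is the elementary building block, an M/M/1 queue,
# whose mean response time is R = 1 / (mu - lambda) for arrival rate lambda < mu.
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Example: a node able to serve 500 ops/s receiving 300 ops/s.
print(mm1_response_time(300.0, 500.0))  # 0.005 s, i.e. 5 ms mean response time
```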
Interplaying Cassandra NoSQL Consistency and Performance: A Benchmarking Approach
This experience report analyses the performance of the Cassandra NoSQL database and studies the fundamental trade-off between data consistency and delays in distributed data storages. The primary focus is on investigating the interplay between Cassandra performance (response time) and its consistency settings. The paper reports the results of read and write performance benchmarking for a replicated Cassandra cluster deployed in the Amazon EC2 Cloud. We present quantitative results showing how different consistency settings affect Cassandra performance under different workloads. One of our main findings is that it is possible to minimize Cassandra delays and still guarantee strong data consistency by optimally coordinating the consistency settings of both read and write requests. Our experiments show that (i) strong consistency costs up to 25% of performance and (ii) the best setting for strong consistency depends on the ratio of read and write operations. Finally, we generalize our experience by proposing a benchmarking-based methodology for run-time optimization of consistency settings, which achieves maximum Cassandra performance while still guaranteeing strong data consistency under mixed workloads.
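The finding that read and write settings can be coordinated to keep strong consistency rests on the standard Cassandra quorum condition R + W > RF; the small check below illustrates that condition (standard Cassandra behaviour, not code from the paper).

```python
# The quorum rule behind "coordinated" consistency settings: a read touching R
# replicas and a write touching W replicas overlap on at least one replica, and
# thus remain strongly consistent, whenever R + W > RF (the replication factor).
def is_strongly_consistent(read_cl: int, write_cl: int, rf: int) -> bool:
    return read_cl + write_cl > rf

RF = 3
# For RF=3, combinations such as (ONE, ALL), (QUORUM, QUORUM) and (ALL, ONE)
# all satisfy the condition, which is why the best choice depends on the
# read/write ratio of the workload.
for r in range(1, RF + 1):
    for w in range(1, RF + 1):
        print(f"R={r}, W={w}: strong={is_strongly_consistent(r, w, RF)}")
```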
Dependability Evaluation of Middleware Technology for Large-scale Distributed Caching
Distributed caching systems (e.g., Memcached) are widely used by service
providers to satisfy accesses by millions of concurrent clients. Given their
large scale, modern distributed systems rely on a middleware layer to manage
caching nodes, to make applications easier to develop, and to apply load
balancing and replication strategies. In this work, we performed a
dependability evaluation of three popular middleware platforms, namely
Twemproxy by Twitter, Mcrouter by Facebook, and Dynomite by Netflix, to assess
availability and performance under faults, including failures of Memcached
nodes and congestion due to unbalanced workloads and network link bandwidth
bottlenecks. We point out the different availability and performance trade-offs
achieved by the three platforms, and scenarios in which a few faulty components
cause cascading failures of the whole distributed system.
Comment: 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE 2020).
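An experiment of this kind ultimately measures client-perceived availability and throughput while faults are injected; the sketch below, which assumes a pymemcache client pointed at a proxy endpoint, is purely illustrative of such a measurement loop and is not the authors' tooling.

```python
# Purely illustrative: measure client-perceived availability through a caching
# middleware endpoint (e.g. a Twemproxy, Mcrouter, or Dynomite proxy) while
# faults such as Memcached node crashes or link throttling are injected
# externally. The endpoint address, window length, and workload are assumptions.
import time
from pymemcache.client.base import Client

client = Client(("proxy.local", 11211), connect_timeout=1, timeout=1)

window_start, ok, failed = time.monotonic(), 0, 0
while True:
    try:
        client.set("probe", "x")
        client.get("probe")
        ok += 1
    except Exception:
        failed += 1  # counts towards unavailability in the current window
    if time.monotonic() - window_start >= 1.0:
        total = ok + failed
        print(f"availability={ok / total:.3f}  throughput={total} req/s")
        window_start, ok, failed = time.monotonic(), 0, 0
```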