
    Combining Application-Level and Database-Level Monitoring to Analyze the Performance Impact of Database Lock Contention

    Database lock contention can severely impact application performance and limit scalability. This is of particular importance when major modifications are made to transactional software, such as large refactorings or modernization projects. To assess the criticality of such modifications, it is necessary to measure the current degree of database lock contention and attribute the effects to the appropriate sections of the application. However, current monitoring tools do not provide both application-level and database-level monitoring data with sufficient detail at the same time. In this paper, we present an approach that combines application-level and database-level monitoring to measure lock contention on a per-section basis, and we report initial experimental results from a prototype implementation for PostgreSQL.
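    The paper's combined instrumentation is not reproduced in the abstract, but the database-level half can be approximated with PostgreSQL's own catalog views. Below is a minimal polling sketch, assuming the psycopg2 driver and assuming that the application tags each code section by setting application_name on its connections; both are illustrative choices, not details taken from the paper.

```python
# Minimal polling sketch: attribute ungranted lock waits to application
# sections, assuming each section tags its connection via application_name.
import time
from collections import Counter

import psycopg2  # assumed driver; any DB-API PostgreSQL driver would do

LOCK_WAITS_SQL = """
    SELECT a.application_name
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.pid = l.pid
    WHERE NOT l.granted
"""

def sample_lock_waits(dsn: str, interval: float = 0.5, samples: int = 120):
    """Poll pg_locks/pg_stat_activity and count lock waits per section."""
    waits = Counter()
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            for _ in range(samples):
                cur.execute(LOCK_WAITS_SQL)
                for (section,) in cur.fetchall():
                    waits[section or "<untagged>"] += 1
                time.sleep(interval)
    return waits

if __name__ == "__main__":
    # "dbname=shop" is a hypothetical connection string.
    for section, count in sample_lock_waits("dbname=shop").most_common():
        print(f"{section}: observed waiting {count} times")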

    Quantifying the Impact of Replication on the Quality-of-Service in Cloud Databases

    Cloud databases achieve high availability by automatically replicating data on multiple nodes. However, the overhead caused by the replication process can increase the mean and variance of transaction response times, causing unforeseen impacts on the offered quality-of-service (QoS). In this paper, we propose a measurement-driven methodology to predict the impact of replication on Database-as-a-Service (DBaaS) environments. Our methodology uses operational data to parameterize a closed queueing network model of the database cluster together with a Markov model that abstracts the dynamic replication process. Experiments on Amazon RDS show that our methodology predicts the mean and percentiles of response times with errors of just 1% and 15%, respectively, even under operational conditions significantly different from those used for model parameterization. We show that our modeling approach surpasses standard modeling methods and illustrate the applicability of our methodology for automated DBaaS provisioning.
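    The abstract does not give the parameterized model itself. As a sketch of the closed queueing network ingredient only, here is textbook exact Mean Value Analysis (MVA) in Python; the station demands, client population, and think time are illustrative inputs one would estimate from operational data, and the Markov replication model is omitted entirely.

```python
def mva(service_demands, n_clients, think_time=0.0):
    """Exact Mean Value Analysis for a single-class closed queueing
    network of FCFS single-server stations.

    service_demands -- per-station service demand of one transaction (s)
    n_clients       -- closed population (concurrent clients)
    think_time      -- client think time between requests (s)
    Returns (response_time, throughput) at population n_clients.
    """
    queue_len = [0.0] * len(service_demands)
    resp, tput = 0.0, 0.0
    for n in range(1, n_clients + 1):
        # Arrival theorem: a job arriving at a station sees the mean
        # queue length left by a population of n - 1 jobs.
        resid = [d * (1.0 + q) for d, q in zip(service_demands, queue_len)]
        resp = sum(resid)                      # total response time
        tput = n / (resp + think_time)         # Little's law on the cycle
        queue_len = [tput * r for r in resid]  # updated mean queue lengths
    return resp, tput

# Example: CPU (5 ms) and disk (12 ms) stations, 50 clients, 100 ms think time.
print(mva([0.005, 0.012], 50, think_time=0.1))
```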

    Optimistic Causal Consistency for Geo-Replicated Key-Value Stores

    Causal consistency is an attractive consistency model for geo-replicated data stores because it hits a sweet spot in the ease-of-programmability vs. performance trade-off. In this paper we propose a new approach to causal consistency, which we call Optimistic Causal Consistency (OCC). The optimism of our approach lies in the fact that updates from a remote data center are immediately made visible to clients in the local data center. A client hence always reads the freshest version of an item, whose dependencies, however, might not have been installed in the local data center yet. When serving a read request, a server can detect whether it has not yet received such dependencies. This is achieved without inter-server synchronization, thanks to cheap dependency metadata supplied by the client. Upon detecting a missing dependency, the server waits to receive it. This approach contrasts with the design of existing systems, which are prone to exposing stale versions of data items in order to ensure that clients only see versions whose dependencies have already been replicated in the local data center. OCC explores a novel trade-off in the landscape of consistency models. Because network partitions are rare in practice, OCC partially trades availability to improve other performance metrics. On the one hand, OCC maximizes the freshness of data returned to clients and reduces communication overhead. On the other hand, a server might need to wait before serving a client's request, leaving the system unavailable in case of a network partition. To overcome this limitation, we propose a recovery mechanism that allows an OCC system to fall back to a pessimistic protocol to restore availability. We implement OCC in a new system, which we call POCC. We compare POCC against a recent (pessimistic) approach to causal consistency using heterogeneous workloads on an Amazon AWS deployment encompassing up to 96 nodes scattered over 3 data centers. We show that POCC maximizes the freshness of data returned to clients while providing comparable or better performance than its pessimistic counterpart across a wide range of production-like workloads.
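    As a rough illustration of the read path described above (client-supplied dependency metadata, server-side waiting, immediate visibility of remote updates), here is a hedged Python sketch. The per-data-center timestamp vector and all names are assumptions made for exposition, not POCC's actual protocol or wire format.

```python
import threading

class Partition:
    """Sketch of one data partition in the local data center."""

    def __init__(self, num_datacenters: int):
        self.applied = [0] * num_datacenters  # highest applied ts per origin DC
        self.store = {}                       # key -> freshest value
        self.cond = threading.Condition()

    def apply_remote_update(self, origin_dc: int, ts: int, key, value):
        """Replication path: install an update as soon as it arrives
        (optimistic visibility) and wake any blocked readers."""
        with self.cond:
            self.store[key] = value
            self.applied[origin_dc] = max(self.applied[origin_dc], ts)
            self.cond.notify_all()

    def read(self, key, client_deps):
        """Read path: the client ships its dependency vector; the server
        waits until every reported dependency has been applied locally,
        requiring no inter-server synchronization."""
        with self.cond:
            self.cond.wait_for(
                lambda: all(a >= d for a, d in zip(self.applied, client_deps))
            )
            return self.store.get(key)
```

    The fallback the paper proposes would replace the unbounded wait_for with a timeout that switches the partition to a pessimistic protocol; that mechanism is not sketched here.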

    Predicting Replicated Database Scalability from Standalone Database Profiling

    This paper develops analytical models to predict the throughput and response time of a replicated database using measurements of the workload on a standalone database. These models allow workload scalability to be estimated before the replicated system is deployed, making the technique useful for capacity planning and dynamic service provisioning. The models capture the scalability limits stemming from update propagation and aborts for both multi-master and single-master replicated databases that support snapshot isolation. We validate the models by comparing their throughput and response time predictions against experimental measurements on two prototype replicated database systems running the TPC-W and RUBiS workloads. We show that the model predictions match the experimental results for both the multi-master and single-master designs and for the various workload mixes of TPC-W and RUBiS.
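    The abstract does not reproduce the models. The following is a deliberately simplified back-of-envelope capacity sketch in Python for the multi-master case, capturing only the two effects the paper names (update propagation and aborts). The formula and all parameter names are assumptions for illustration, not the paper's actual model.

```python
def multimaster_throughput(n_replicas, d_txn, d_ws, write_frac, p_abort=0.0):
    """Peak committed-transaction throughput of an n-replica multi-master
    cluster under snapshot isolation, from standalone profiling:

    d_txn      -- CPU demand of executing one transaction locally (s)
    d_ws       -- CPU demand of applying one remote writeset (s)
    write_frac -- fraction of transactions that write
    p_abort    -- probability a transaction aborts (e.g. snapshot-isolation
                  write-write conflicts); aborted work burns capacity
    """
    # Each commit costs 1/(1 - p_abort) execution attempts locally, plus
    # writeset application on the other n - 1 replicas.
    demand = d_txn / (1.0 - p_abort) + (n_replicas - 1) * write_frac * d_ws
    return n_replicas / demand

# Example: scaling 1..8 replicas, 20% writes, 5% aborts (hypothetical values).
for n in range(1, 9):
    print(n, round(multimaster_throughput(n, 0.010, 0.002, 0.20, 0.05), 1))
```

    Even this toy version shows the qualitative limit the paper studies: the per-commit writeset term grows with the number of replicas, so throughput gains flatten as the cluster scales.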

    Quantifying Performance Costs of Database Fine-Grained Access Control

    Fine-grained access control is a conceptual approach to addressing database security requirements. In relational database management systems, fine-grained access control refers to access restrictions enforced at the row, column, or cell level. While a number of commercial implementations of database fine-grained access control are available, there are presently no generalized approaches to implementing fine-grained access control for relational database management systems. Fine-grained access control is potentially a good solution for database professionals and system architects charged with designing database applications that implement granular security or privacy protection features. However, in the oral tradition of the database community, fine-grained access control is spoken of as imposing significant performance penalties and is therefore best avoided. Regardless, there are current and emerging social, legal, and economic forces that mandate the need for efficient fine-grained access control in relational database management systems. In this study, the author quantifies the performance costs associated with four common implementations of fine-grained access control for relational database management systems. Security benchmarking was employed as the methodology to quantify performance costs. Synthetic data from the TPC-W benchmark as well as representative data from a real-world application were used in the benchmarking process. A simple graph-based performance model for Fine-grained Access Control Evaluation (FACE) was developed from benchmark data collected during the study. The FACE model is intended for use in predicting throughput and response times for relational database management systems that implement fine-grained access control using one of four common mechanisms: authorization views, the Hippocratic Database, label-based access control, and transparent query rewrite. The author also addresses the scalability of the fine-grained access control mechanisms evaluated in the study.
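    Of the four mechanisms, transparent query rewrite is the simplest to illustrate. The toy Python sketch below appends a row-level predicate to incoming queries; it is purely for exposition. Real engines rewrite the parsed query tree rather than SQL strings, and the table and column names here are hypothetical.

```python
def rewrite_query(sql: str, table: str, user_id: int) -> str:
    """Append a row-level security predicate to a query. Exposition only:
    production systems rewrite the parsed query tree, and the user value
    must be bound as a parameter, never string-interpolated."""
    predicate = f"{table}.owner_id = {user_id}"  # hypothetical schema
    if " where " in sql.lower():
        return f"{sql} AND {predicate}"
    return f"{sql} WHERE {predicate}"

print(rewrite_query("SELECT * FROM orders", "orders", 42))
# -> SELECT * FROM orders WHERE orders.owner_id = 42
```

    The performance cost such a study measures comes from the rewritten query: the injected predicate changes plan selection and adds per-row filtering work relative to the unrestricted baseline.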