49 research outputs found

    A categorization scheme for concurrency control protocols in distributed databases

    The problem of concurrency control in distributed databases is very complex. As a result, a great number of control algorithms have been proposed in recent years. This research is aimed at the development of a viable categorization scheme for these various algorithms. The scheme is based on the theoretical concept of serializability, but is qualitative in nature. An important class of serializable execution sequences, conflict-preserving serializable schedules, leads to the identification of fundamental attributes common to all algorithms included in this study. These attributes serve as the underlying philosophy for the categorization scheme. Combined with the two logical approaches of prevention and correction of nonserializability, the result is a flexible and extensive categorization scheme which accounts for all algorithms studied and suggests the possibility of new algorithms.
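
    Since the categorization rests on conflict-preserving serializability, the standard textbook test for it is worth recalling: build a precedence (conflict) graph over transactions and check it for cycles. The sketch below is that generic test only, not an algorithm from the paper; the schedule format, a list of (transaction, operation, item) tuples, is an assumption made for the example.

```python
from collections import defaultdict

def is_conflict_serializable(schedule):
    """Check conflict serializability of a schedule given as
    (transaction_id, op, item) tuples, where op is 'r' or 'w'.
    Two operations conflict if they touch the same item, come from
    different transactions, and at least one of them is a write."""
    edges = defaultdict(set)  # precedence graph: ti -> {tj, ...}
    for i, (ti, op_i, x_i) in enumerate(schedule):
        for tj, op_j, x_j in schedule[i + 1:]:
            if ti != tj and x_i == x_j and ('w' in (op_i, op_j)):
                edges[ti].add(tj)  # ti precedes tj on a conflicting pair

    # The schedule is conflict serializable iff the graph is acyclic.
    visiting, done = set(), set()
    def has_cycle(node):
        visiting.add(node)
        for nxt in edges[node]:
            if nxt in visiting or (nxt not in done and has_cycle(nxt)):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return not any(has_cycle(t) for t in list(edges) if t not in done)

# Example: T1 reads x, T2 writes x, T1 writes x -> cycle T1 <-> T2, not serializable
print(is_conflict_serializable([("T1", "r", "x"), ("T2", "w", "x"), ("T1", "w", "x")]))
```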

    A Survey of Traditional and Practical Concurrency Control in Relational Database Management Systems

    Traditionally, database theory has focused on concepts such as atomicity and serializability, asserting that concurrent transaction management must ensure correctness above all else. Textbooks and academic journals detail a vision of unbounded rationality, in which throughput lost to concurrency protocols is not of great concern. This thesis surveys the traditional basis for concurrency in relational database management systems and contrasts it with actual practice. SQL-92, the current standard for concurrency in relational database management systems, defines isolation levels, that is, allowable degrees of concurrency, and these are examined. Some of the ways in which DB2, a popular database, interprets these levels and finesses extra concurrency through performance enhancements are detailed. SQL-92 standardizes de facto relational database management system features. Given this, and a superabundance of articles in professional journals detailing steps for fine-tuning transaction concurrency, the expansion of performance tuning seems bright, even at the expense of serializability. Are the practical changes wrought by non-academic professionals killing traditional database concurrency ideals? Not really. Reasoned changes for performance gains advocate compromise: using complex concurrency controls when necessary for the job at hand and relaxing standards otherwise. The idea of relational database management systems is only twenty years old, and standards are still evolving. Is there still an interplay between tradition and practice? Of course. Current practice uses tradition pragmatically, not idealistically. Academic ideas help drive the systems available for use, and perhaps current practice will now help academic ideas define concurrency control concepts for relational database management systems.
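
    For reference, SQL-92 defines its four isolation levels in terms of the read anomalies each may permit. The small Python table below summarizes those standard definitions purely for illustration; it is not taken from the thesis.

```python
# SQL-92 isolation levels and the anomalies each level may permit.
# True means the anomaly is allowed at that level, False means it is forbidden.
ISOLATION_LEVELS = {
    "READ UNCOMMITTED": {"dirty_read": True,  "non_repeatable_read": True,  "phantom": True},
    "READ COMMITTED":   {"dirty_read": False, "non_repeatable_read": True,  "phantom": True},
    "REPEATABLE READ":  {"dirty_read": False, "non_repeatable_read": False, "phantom": True},
    "SERIALIZABLE":     {"dirty_read": False, "non_repeatable_read": False, "phantom": False},
}

def weakest_level_forbidding(anomaly):
    """Return the weakest SQL-92 level at which the given anomaly is forbidden."""
    for level, allowed in ISOLATION_LEVELS.items():  # ordered weakest to strongest
        if not allowed[anomaly]:
            return level
    return None

print(weakest_level_forbidding("phantom"))  # SERIALIZABLE
```

    At the SQL level, the corresponding SQL-92 statement is SET TRANSACTION ISOLATION LEVEL SERIALIZABLE (or one of the other three level names).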

    Cost- and workload-driven data management in the cloud

    This thesis deals with the challenge of finding the right balance between consistency, availability, latency, and costs, captured by the CAP/PACELC trade-offs, in the context of distributed data management in the Cloud. At the core of this work, cost- and workload-driven data management protocols, called CCQ protocols, are developed. First, this includes the development of C3, an adaptive consistency protocol that is able to adjust consistency at runtime by considering consistency and inconsistency costs. Second, the development of Cumulus, an adaptive data partitioning protocol that can adapt partitions by considering the application workload so that expensive distributed transactions are minimized or avoided. Third, the development of QuAD, a quorum-based replication protocol that constructs quorums in such a way that, given a set of constraints, the best possible performance is achieved. The behavior of each CCQ protocol is steered by a cost model, which aims at reducing the costs and overhead of providing the desired data management guarantees. The CCQ protocols are able to continuously assess their behavior and, if necessary, to adapt it at runtime based on the application workload and the cost model. This property is crucial for applications deployed in the Cloud, as they are characterized by highly dynamic workloads and high scalability and availability demands. The dynamic adaptation of behavior at runtime does not come for free and may generate considerable overhead that might outweigh the gain of adaptation. The CCQ cost models therefore incorporate a control mechanism that aims at avoiding expensive and unnecessary adaptations which provide no benefit to applications. Adaptation is a distributed activity that requires coordination between the sites of a distributed database system. The CCQ protocols implement safe online adaptation approaches, which exploit the properties of 2PC and 2PL to ensure that all sites behave in accordance with the cost model, even in the presence of arbitrary failures. It is crucial to guarantee a globally consistent view of the behavior, as otherwise the effects of the cost models are nullified. The presented protocols are implemented as part of a prototypical database system. Their modular architecture allows for a seamless extension of the optimization capabilities at any level of their implementation. Finally, the protocols are quantitatively evaluated in a series of experiments executed in a real Cloud environment. The results show their feasibility, their ability to reduce application costs, and their ability to dynamically adjust behavior at runtime without violating correctness.
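
    As a rough illustration of the general idea behind a cost-driven adaptation decision (not the actual C3 cost model, whose details are not given in the abstract), the sketch below compares an estimated cost of running with strong consistency against an estimated cost of the inconsistencies expected under weak consistency, and only switches when the projected saving exceeds an adaptation overhead, mirroring the control mechanism mentioned above. All names, constants, and formulas here are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class WorkloadStats:
    ops_per_sec: float    # observed operation rate
    write_ratio: float    # fraction of operations that are writes
    conflict_rate: float  # estimated fraction of writes that conflict

# Hypothetical per-unit costs; in a real system these would be calibrated.
COORD_COST_PER_WRITE = 0.5    # extra cost of synchronous coordination per write
INCONSISTENCY_PENALTY = 4.0   # cost of compensating one inconsistent update
ADAPTATION_OVERHEAD = 50.0    # one-off cost of switching the consistency level

def choose_consistency(stats: WorkloadStats, current: str, window_s: float = 60.0) -> str:
    """Pick 'strong' or 'weak' consistency for the next window if the projected
    saving exceeds the cost of switching; otherwise keep the current level."""
    writes = stats.ops_per_sec * stats.write_ratio * window_s
    strong_cost = writes * COORD_COST_PER_WRITE
    weak_cost = writes * stats.conflict_rate * INCONSISTENCY_PENALTY
    best = "strong" if strong_cost <= weak_cost else "weak"
    if best != current and abs(strong_cost - weak_cost) > ADAPTATION_OVERHEAD:
        return best  # switching pays off despite the adaptation overhead
    return current   # control mechanism: avoid adaptations that do not pay off

print(choose_consistency(WorkloadStats(200, 0.3, 0.01), current="strong"))  # weak
```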

    New Concepts for Virtual Testbeds: Data Mining Algorithms for Blackbox Optimization based on Wait-Free Concurrency and Generative Simulation

    Virtual testbeds have emerged as a key technology for improving and streamlining complex engineering processes by delivering long-term simulation and assessment of complex designs in virtual environments. In contrast to existing simulation technology, virtual testbeds focus on long-term physically-based simulation of the overall design in its (virtual) environment instead of focussing only on isolated, specific parts for short periods of time. This technology has the major advantage that costly testing, prototyping, and assessment in real-life environments are replaced by cost-efficient simulation in virtual worlds for comprehensive and long-term analysis of designs. For this purpose, engineering models and their requirements are abstracted into software simulation models and objectives which are executed in virtual assessments. Simulation models are used to predict complex real systems, which may further be subject to random influences. These predictions are used to examine the effects of individual configuration alternatives without actually realizing them and causing possible negative effects on the real system. Virtual testbeds further offer engineers the opportunity to immersively and naturally interact with their simulation model in these virtual assessments. This gives engineers a greater and more comprehensive understanding of possible design flaws early in the design process, because they can directly assess their design in the virtual environment based on the simulation objectives. The fact that virtual testbeds enable these realtime interactive virtual assessments makes their underlying software infrastructure very complex. One major challenge is to minimize the development time of virtual testbeds in order to efficiently integrate them into the overall engineering process. Usually, this can be achieved by minimizing the underlying concurrency of the testbed and by simplifying its software architecture. However, this may degrade the highly concurrent and asynchronous behavior that is usually required for immersive and natural virtual interaction. A major goal of virtual testbeds in the engineering process is to find a set of optimal configurations of the simulation model which maximizes all simulation objectives for the specified virtual assessments. Once such a set has been computed, engineers can interactively explore it in the virtual environment. The main challenge is that sophisticated simulation models and their configuration are subject to a multiobjective optimization problem, which usually cannot be solved manually by engineers or simulation analysts in feasible time. This is further aggravated because the relationships between simulation model configurations and simulation objectives are mostly unknown, leading to what is known as blackbox simulations. In this thesis, I propose novel data mining algorithms for computing Pareto optimal simulation model configurations, based on an approximation of the feasible design space, for deterministic and stochastic blackbox simulations in virtual testbeds, in order to achieve the goal stated above. These novel data mining algorithms lead to an automatic knowledge discovery process that needs no supervision of its data analysis and assessment for multiobjective optimization problems over simulation model configurations. This achieves the previously stated goal of computing optimal configurations of simulation models for long-term simulations and assessments.
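
    The core data-analysis step described here, computing Pareto optimal configurations under several objectives, can be illustrated with the standard non-dominated filtering rule: a configuration is kept if no other configuration is at least as good in every objective and strictly better in at least one. The sketch below is that generic rule only, assuming all objectives are to be maximized; it is not the thesis's data mining algorithm, and the example objectives are invented.

```python
def pareto_front(configs):
    """Return the non-dominated configurations.
    Each config is (name, objectives), with objectives a tuple to be maximized."""
    def dominates(a, b):
        # a dominates b if a is >= b in every objective and > b in at least one
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    front = []
    for name, obj in configs:
        if not any(dominates(other, obj) for _, other in configs if other != obj):
            front.append((name, obj))
    return front

# Hypothetical simulation objectives: (accuracy, throughput)
candidates = [("cfg_a", (0.90, 120.0)), ("cfg_b", (0.95, 80.0)), ("cfg_c", (0.85, 70.0))]
print(pareto_front(candidates))  # cfg_a and cfg_b remain; cfg_c is dominated
```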
Furthermore, I propose two complementary solutions for efficiently integrating massively-parallel virtual testbeds into engineering processes. First, I propose a novel multiversion wait-free data and concurrency management based on hash maps. These wait-free hash maps do not require any standard locking mechanisms and enable low-latency data generation, management, and distribution for massively-parallel applications. Second, I propose novel concepts for efficiently generating code for the above wait-free data and concurrency management in arbitrary massively-parallel simulation applications of virtual testbeds. My generative simulation concept combines a state-of-the-art realtime interactive system design pattern for high maintainability with template code generation based on domain-specific modelling. This concept is able to generate massively-parallel simulations and, at the same time, model-check their internal dataflow for possible interface errors. This generative concept overcomes the challenge of efficiently integrating virtual testbeds into engineering processes. These contributions not only enable, for the first time, a powerful collaboration between simulation, optimization, visualization, and data analysis for novel virtual testbed applications, but also address the presented challenges and goals.
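
    The multiversion idea behind such hash maps can be pictured as follows: writers never modify published entries but append new immutable versions, and readers pick the latest version at or below their snapshot, so reads never wait for writers. The Python sketch below only illustrates this versioning scheme at a single-process level; genuine wait-freedom requires atomic hardware primitives and is not claimed for this code, and all names here are invented for the example.

```python
from collections import defaultdict

class MultiVersionMap:
    """Append-only multiversion map: writes publish new immutable versions,
    reads select the newest version visible at a given snapshot."""
    def __init__(self):
        self._latest = 0                   # highest version published so far
        self._versions = defaultdict(list) # key -> [(version, value), ...]

    def write(self, key, value):
        self._latest += 1                  # writers never overwrite old versions
        self._versions[key].append((self._latest, value))
        return self._latest

    def snapshot(self):
        """A snapshot is simply the highest version published so far."""
        return self._latest

    def read(self, key, snapshot):
        """Return the newest value for key with version <= snapshot."""
        for version, value in reversed(self._versions[key]):
            if version <= snapshot:
                return value
        return None

m = MultiVersionMap()
m.write("pose", (0.0, 0.0))
snap = m.snapshot()
m.write("pose", (1.0, 2.0))   # later write is invisible to the old snapshot
print(m.read("pose", snap))   # (0.0, 0.0)
```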

    Transaction processing in real-time database systems

    Scheduling transactions in a real-time database requires an integrated approach in which the schedule not only guarantees execution before the deadline but also maintains data consistency. The problem has been studied under a common framework which considers both concurrency control issues and real-time constraints in centralized and distributed transaction processing. A real-time transaction processing model has been defined for a centralized system. The proposed protocols use a unified approach to maximize concurrency while meeting real-time constraints at the same time. In order to test the behavior of the model and the proposed protocols, a real-time transaction processing testbed has been developed using discrete event simulation techniques. The results indicate that different protocols work better under different load scenarios and that the overall performance can be significantly enhanced by modifying the underlying system configuration. Among other system and transaction parameters, the effects of data partitioning, buffer management, preemption, disk contention, locking mode, and multiprocessing have been studied. For the distributed environment, new concepts of real-time nested transactions and priority propagation have been proposed. Real-time nested transactions incorporate the deadline requirements in the hierarchical structure of nested transactions. Priority propagation addresses the issues related to transaction aborts in real-time nested transaction processing. The notion of priority ceiling has been used to avoid the priority inversion problem. The proposed protocols exhibit freedom from deadlock and have a tightly bounded waiting period. Both of these properties make them very suitable for a distributed real-time transaction processing environment.
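
    The use of priority ceilings to avoid priority inversion follows a well-known admission rule: a transaction may acquire a lock only if its priority is strictly higher than the ceilings of all locks currently held by other transactions. The sketch below illustrates that generic test; the transaction and lock structures are assumptions made for the example, not the protocols proposed in the dissertation.

```python
from dataclasses import dataclass, field

@dataclass
class Lock:
    name: str
    ceiling: int                 # highest priority of any transaction that may use it
    holder: str | None = None

@dataclass
class Transaction:
    tid: str
    priority: int                # e.g. derived from the deadline (earlier => higher)
    held: list = field(default_factory=list)

def may_acquire(txn: Transaction, lock: Lock, all_locks: list[Lock]) -> bool:
    """Priority-ceiling admission test: grant the lock only if txn's priority is
    strictly higher than the ceilings of all locks held by other transactions."""
    if lock.holder is not None:
        return False  # the requested lock itself is busy
    others = [l for l in all_locks if l.holder not in (None, txn.tid)]
    return all(txn.priority > l.ceiling for l in others)

# Example: T_low holds a lock with ceiling 5, so the medium-priority transaction
# is blocked now instead of later delaying the high-priority user of that lock.
a = Lock("x", ceiling=5, holder="T_low")
b = Lock("y", ceiling=2)
print(may_acquire(Transaction("T_mid", priority=3), b, [a, b]))   # False
print(may_acquire(Transaction("T_high", priority=9), b, [a, b]))  # True
```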

    Efficient Partitioning and Allocation of Data for Workflow Compositions

    Our aim is to provide efficient partitioning and allocation of data for web service compositions. Web service compositions are represented as partial-order database transactions. We accommodate a variety of transaction types, such as read-only and write-oriented transactions, to support workloads in cloud environments. We introduce an approach that partitions and allocates small units of data, called micropartitions, to multiple database nodes. Each database node stores only the data needed to support a specific workload. Transactions are routed directly to the appropriate data nodes. Our approach guarantees serializability and efficient execution. In Phase 1, we cluster transactions based on their data requirements. We associate each cluster with an abstract query definition. An abstract query represents the minimal data requirement that would satisfy all the queries that belong to a given cluster. A micropartition is generated by executing the abstract query on the original database. We show that our abstract query definition is complete and minimal. Intuitively, completeness means that all queries of the corresponding cluster can be correctly answered using the micropartition generated from the abstract query. The minimality property means that no smaller partition of the data can satisfy all of the queries in the cluster. We also aim to support efficient web service execution. Our approach reduces the number of accesses to distributed data. We also aim to limit the number of replica updates. Our empirical results show that the partitioning approach improves data access efficiency over standard partitioning of data. In Phase 2, we investigate the performance improvement via parallel execution. Based on the data allocation achieved in Phase 1, we develop a scheduling approach. Our approach guarantees serializability while efficiently exploiting parallel execution of web services. We achieve conflict serializability by scheduling conflicting operations in a predefined order. This order is based on the calculation of a minimal delay requirement. We use this delay to schedule services to preserve serializability without the traditional locking mechanisms.
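
    Phase 1's central steps, clustering transactions by their data requirements, deriving an abstract query as the union of those requirements, and routing a transaction to the node that stores the resulting micropartition, can be pictured with the simplified sketch below. Representing a query's data requirement as a set of (table, column) pairs and clustering by the tables touched are assumptions made for this illustration, not the dissertation's actual definitions.

```python
from collections import defaultdict

# Hypothetical workload: each query is described by the (table, column) pairs it reads.
queries = {
    "q1": {("orders", "id"), ("orders", "total")},
    "q2": {("orders", "total"), ("orders", "customer")},
    "q3": {("items", "sku"), ("items", "price")},
}

def cluster_by_table(queries):
    """Group queries that touch the same tables; the cluster's abstract
    requirement is the union of its members' requirements (completeness)
    and contains nothing else (a crude stand-in for minimality)."""
    clusters = defaultdict(set)
    for name, req in queries.items():
        tables = frozenset(t for t, _ in req)
        clusters[tables] |= req
    return clusters

def route(query_req, clusters):
    """Send a query to the micropartition whose abstract requirement covers it."""
    for tables, abstract_req in clusters.items():
        if query_req <= abstract_req:
            return tables  # identifier of the node holding this micropartition
    return None

clusters = cluster_by_table(queries)
print(route(queries["q2"], clusters))  # frozenset({'orders'})
```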

    Partial replication in the database state machine

    Doctoral thesis in Informatics, Knowledge Area of Programming Technologies. Enterprise information systems are nowadays commonly structured as multi-tier architectures and invariably built on top of database management systems responsible for the storage and provision of the entire business data. Database management systems therefore play a vital role in today's organizations: the overall system dependability directly depends on their reliability and availability. Replication is a well-known technique to improve dependability. By maintaining consistent replicas of a database, one can increase its fault tolerance and simultaneously improve the system's performance by splitting the workload among the replicas. In this thesis we address these issues by exploiting the partial replication of databases. We target large-scale systems where replicas are distributed across wide area networks, aiming at both fault tolerance and fast local access to data. In particular, we envision information systems of multinational organizations presenting strong access locality, in which fully replicated data should be kept to a minimum and a judicious placement of replicas should allow the full recovery of any site in case of failure. Our research starts from work on database replication algorithms based on group communication protocols, in particular multi-master certification-based protocols. At the core of these protocols resides a total order multicast primitive responsible for establishing a total order of transaction execution. A well-known performance optimization in local area networks exploits the fact that the definitive total order of messages often closely follows the spontaneous network order, thus making it possible to optimistically proceed in parallel with the ordering protocol. Unfortunately, this optimization is invalidated in wide area networks, precisely when the increased latency would make it most useful. To overcome this, we present a novel total order protocol with optimistic delivery for wide area networks. Our protocol uses local statistical estimates to independently order messages in an order closely matching the definitive one, thus allowing optimistic execution in real wide area networks. Handling partial replication within a certification-based protocol is also particularly challenging, as it directly impacts the certification procedure itself. Depending on the approach, the added complexity may actually defeat the purpose of partial replication. We devise, implement, and evaluate two variations of the Database State Machine protocol, discussing their benefits and adequacy under the workload of the standard TPC-C benchmark. Funding: Fundação para a Ciência e a Tecnologia (FCT) - ESCADA (POSI/CHS/33792/2000).
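
    The optimistic-delivery idea, using locally measured delays to guess the definitive total order before the ordering protocol finishes, can be illustrated very roughly as follows: each receiver adjusts every message's send timestamp by an estimated one-way delay for its sender and optimistically delivers in the adjusted order, confirming or reordering once the definitive order arrives. The sketch below captures only that rough idea with invented message and estimate structures; it is not the protocol developed in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    seq: int
    send_ts: float   # sender-side timestamp (seconds)

# Hypothetical per-sender one-way delay estimates maintained from past traffic.
delay_estimate = {"siteA": 0.080, "siteB": 0.020, "siteC": 0.150}

def optimistic_order(pending):
    """Guess the definitive total order by sorting on the estimated arrival
    time at a common reference point (send time + estimated one-way delay)."""
    return sorted(pending, key=lambda m: (m.send_ts + delay_estimate.get(m.sender, 0.1),
                                          m.sender, m.seq))

pending = [Message("siteB", 7, send_ts=10.000),
           Message("siteA", 3, send_ts=9.950)]
for m in optimistic_order(pending):
    print(m.sender, m.seq)   # siteB before siteA: 10.020 < 10.030
```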