
    Creation and management of versions in multiversion data warehouse

    A data warehouse (DW) provides information for analytical processing, decision making, and data mining tools. On the one hand, the structure and content of a data warehouse reflect the real world, i.e. the data stored in a DW come from real production systems. On the other hand, a DW and its tools may be used for predicting trends and simulating virtual business scenarios; this activity is often called what-if analysis. Traditional DW systems have a static structure of their schemas and of the relationships between data, and therefore they are not able to support any dynamics in their structure and content. For these purposes, multiversion data warehouses seem very promising. In this paper we present a concept and an ongoing implementation of a multiversion data warehouse that is capable of handling changes in the structure of its schema as well as simulating alternative business scenarios.
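
    For readers unfamiliar with the idea, a minimal sketch of what version branching in a multiversion warehouse could look like is given below. It is an illustration only, not the implementation described in the paper, and all class, table and column names are invented.

```python
# Minimal sketch of a multiversion data warehouse catalogue (illustrative only).
# Each version carries its own schema and data; a "what-if" version branches
# from an existing version and can evolve its schema independently.
import copy
from dataclasses import dataclass, field

@dataclass
class DWVersion:
    name: str
    parent: str | None
    schema: dict[str, list[str]] = field(default_factory=dict)   # table -> columns
    facts: dict[str, list[tuple]] = field(default_factory=dict)  # table -> rows

class MultiversionWarehouse:
    def __init__(self):
        root = DWVersion("real", parent=None,
                         schema={"sales": ["date", "product", "amount"]},
                         facts={"sales": []})
        self.versions = {"real": root}

    def derive(self, parent: str, name: str) -> DWVersion:
        """Create an alternative (what-if) version by branching from `parent`."""
        branch = copy.deepcopy(self.versions[parent])
        branch.name, branch.parent = name, parent
        self.versions[name] = branch
        return branch

    def alter_schema(self, version: str, table: str, add_column: str) -> None:
        """Apply a schema change to one version only; other versions are untouched."""
        v = self.versions[version]
        v.schema[table].append(add_column)
        v.facts[table] = [row + (None,) for row in v.facts[table]]

mdw = MultiversionWarehouse()
mdw.versions["real"].facts["sales"].append(("2024-01-01", "widget", 100.0))
what_if = mdw.derive("real", "price-increase-scenario")
mdw.alter_schema("price-increase-scenario", "sales", "discount")
print(mdw.versions["real"].schema["sales"])   # unchanged: no 'discount' column
print(what_if.schema["sales"])                # branched version carries 'discount'
```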

    A Strategy for Reducing I/O and Improving Query Processing Time in an Oracle Data Warehouse Environment

    In the current information age, as the saying goes, time is money. For the modern information worker, decisions must often be made quickly. Every extra minute spent waiting for critical data could mean the difference between financial gain and financial ruin. Despite the importance of timely data retrieval, many organizations lack even a basic strategy for improving the performance of their data-warehouse-based reporting systems. This project explores the idea that a strategy making use of three database performance improvement techniques can reduce I/O (input/output operations) and improve query processing time in an information system designed for reporting. To demonstrate that these performance improvement goals can be achieved, queries were run on ordinary tables and then on tables utilizing the performance improvement techniques. The I/O statistics and processing times for the queries were compared to measure the amount of performance improvement. The measurements were also used to explain how these techniques may be more or less effective under certain circumstances, such as when a particular type of query is run. The collected I/O and time-based measurements showed a varying degree of improvement for each technique depending on the query used. Matching the types of queries commonly run on the system to the performance improvement technique being implemented was found to be an important consideration. The results indicated that, in a reporting environment, these performance improvement techniques have the potential to reduce I/O and improve query performance.
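
    The evaluation method described above (run the same queries against ordinary tables and against tables that apply a performance-improvement technique, then compare the measurements) can be imitated with a small harness. The sketch below is only an illustration of that before/after comparison: it uses SQLite and a plain index as a stand-in, not the Oracle environment or the specific techniques studied in the project, and all table and column names are made up.

```python
# Illustrative benchmark harness: the same query on an ordinary table and on a
# copy with an index, comparing elapsed time. SQLite stands in for the Oracle
# environment purely to show the measurement approach.
import sqlite3, time, random

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales_plain (region TEXT, amount REAL)")
cur.execute("CREATE TABLE sales_tuned (region TEXT, amount REAL)")
rows = [(random.choice("ABCDEFGH"), random.random() * 100) for _ in range(200_000)]
cur.executemany("INSERT INTO sales_plain VALUES (?, ?)", rows)
cur.executemany("INSERT INTO sales_tuned VALUES (?, ?)", rows)
cur.execute("CREATE INDEX idx_region ON sales_tuned(region)")  # the 'technique'
conn.commit()

def timed(query: str) -> float:
    """Return the elapsed time to execute and fetch the query results."""
    start = time.perf_counter()
    cur.execute(query)
    cur.fetchall()
    return time.perf_counter() - start

q_plain = "SELECT SUM(amount) FROM sales_plain WHERE region = 'C'"
q_tuned = "SELECT SUM(amount) FROM sales_tuned WHERE region = 'C'"
print(f"plain table: {timed(q_plain):.4f}s, tuned table: {timed(q_tuned):.4f}s")
```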

    To Develop a Database Management Tool for Multi-Agent Simulation Platform

    Recently, there has been a shift from a model-driven approach to a data-driven approach in Agent-Based Modeling and Simulation (ABMS). This trend towards the use of data-driven approaches in simulation aims at feeding simulation models with more and more of the data available from observation systems (Edmonds and Moss, 2005; Hassan, 2009). In a data-driven approach, the empirical data collected from the target system are used not only for the design of the simulation models but also for the initialization, calibration and evaluation of their outputs, as in, e.g., the water resource management and assessment system of the French Adour-Garonne Basin (Gaudou et al., 2013) and the invasion of the Brown Plant Hopper on the rice fields of the Mekong River Delta in Vietnam (Nguyen et al., 2012d). This raises the question of how to manage empirical data and simulation data in such agent-based simulation platforms. The basic observation we can make is that, while the design and simulation of models have benefited from advances in computer science through the popularized use of simulation platforms like Netlogo (Wilensky, 1999) or GAMA (Taillandier et al., 2012), this is not yet the case for the management of data, which is still often handled in an ad hoc manner. Data management is thus one of the current limitations of agent-based simulation platforms; in other words, a data management tool is required when building agent-based simulation systems, and the management of the corresponding databases is an important issue for these systems. In this thesis, I first propose a logical framework for data management in multi-agent based simulation platforms. The proposed framework, called CFBM (Combination Framework of Business intelligence and Multi-agent based platform), combines a Business Intelligence solution with a multi-agent based platform and serves several purposes: (1) model and execute multi-agent simulations, (2) manage the input and output data of simulations, (3) integrate data from different sources, and (4) analyze high volumes of data. Secondly, I fulfill the need for data management in ABMS through an implementation of CFBM in the GAMA platform. This implementation also demonstrates a software architecture for combining Data Warehouse (DWH) and Online Analytical Processing (OLAP) technologies with a multi-agent based simulation system. Finally, I evaluate CFBM for data management in the GAMA platform through the development of Brown Plant Hopper Surveillance Models (BSMs), in which CFBM is used not only to manage and integrate the empirical data collected from the target system and the data produced by the simulation models, but also to calibrate and validate the models. The contribution of CFBM consists not only in remedying the limitations of agent-based modeling and simulation with regard to data management, but also in supporting the development of complex simulation systems with large amounts of input and output data, following a data-driven approach.
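
    As a rough illustration of the kind of coupling CFBM provides between a simulation and a warehouse/OLAP layer, the sketch below stores the output of a stand-in simulation run in a small fact table and answers an OLAP-style aggregate over it. It is not the CFBM or GAMA API; the table, column and function names are hypothetical.

```python
# Illustrative only: persist agent-simulation outputs into a fact table and run
# an OLAP-style aggregation over them, mimicking the warehouse/OLAP side that a
# framework like CFBM couples to a simulation platform. Not the GAMA/CFBM API.
import sqlite3, random

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sim_output (
    run_id INTEGER, step INTEGER, region TEXT, hopper_density REAL)""")

def fake_simulation(run_id: int, steps: int = 50):
    """Stand-in for a simulation run; yields one output record per step/region."""
    for step in range(steps):
        for region in ("An Giang", "Dong Thap", "Can Tho"):
            yield (run_id, step, region, random.uniform(0, 500))

for run in range(3):  # three replicated simulation runs
    conn.executemany("INSERT INTO sim_output VALUES (?, ?, ?, ?)",
                     fake_simulation(run))
conn.commit()

# OLAP-style query: average density per region per run, e.g. for calibration.
for row in conn.execute("""SELECT run_id, region, AVG(hopper_density)
                           FROM sim_output GROUP BY run_id, region"""):
    print(row)
```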

    Partial replication in the database state machine

    Doctoral thesis in Informatics, in the knowledge branch of Programming Technologies. Enterprise information systems are nowadays commonly structured as multi-tier architectures and invariably built on top of database management systems responsible for the storage and provision of the entire business data. Database management systems therefore play a vital role in today's organizations: the overall system dependability depends directly on their reliability and availability. Replication is a well-known technique to improve dependability. By maintaining consistent replicas of a database one can increase its fault tolerance and simultaneously improve the system's performance by splitting the workload among the replicas. In this thesis we address these issues by exploiting the partial replication of databases. We target large-scale systems where replicas are distributed across wide area networks, aiming at both fault tolerance and fast local access to data. In particular, we envision information systems of multinational organizations presenting strong access locality, in which fully replicated data should be kept to a minimum and a judicious placement of replicas should allow the full recovery of any site in case of failure. Our research departs from work on database replication algorithms based on group communication protocols, in particular multi-master certification-based protocols. At the core of these protocols resides a total order multicast primitive responsible for establishing a total order of transaction execution. A well-known performance optimization in local area networks exploits the fact that the definitive total order of messages often closely follows the spontaneous network order, making it possible to optimistically proceed in parallel with the ordering protocol. Unfortunately, this optimization is invalidated in wide area networks, precisely where the increased latency would make it more useful. To overcome this we present a novel total order protocol with optimistic delivery for wide area networks. Our protocol uses local statistical estimates to independently order messages in a way that closely matches the definitive order, thus allowing optimistic execution in real wide area networks. Handling partial replication within a certification-based protocol is also particularly challenging, as it directly impacts the certification procedure itself. Depending on the approach, the added complexity may actually defeat the purpose of partial replication. We devise, implement and evaluate two variations of the Database State Machine protocol, discussing their benefits and their adequacy to the workload of the standard TPC-C benchmark. Fundação para a Ciência e a Tecnologia (FCT) - ESCADA (POSI/CHS/33792/2000)
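
    To make the certification step referred to above concrete, the sketch below shows the usual certification test of certification-based replication protocols: a transaction delivered in total order commits only if no transaction committed after its snapshot wrote an item it read. This is a generic illustration under simplified assumptions, not the Database State Machine code, and the class and field names are invented.

```python
# Minimal sketch of a certification test (illustrative, generic): executed in
# total-order delivery order at every replica, so all replicas reach the same
# commit/abort decision without extra coordination.
from dataclasses import dataclass, field

@dataclass
class Txn:
    tid: int
    snapshot: int          # commit counter observed when the transaction started
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)

class Certifier:
    def __init__(self):
        self.commit_log: list[tuple[int, set]] = []   # (commit_no, write_set)
        self.commit_no = 0

    def certify(self, txn: Txn) -> bool:
        """Commit iff no later-committed transaction wrote what this one read."""
        for commit_no, wset in self.commit_log:
            if commit_no > txn.snapshot and wset & txn.read_set:
                return False                          # conflict -> abort
        self.commit_no += 1
        self.commit_log.append((self.commit_no, txn.write_set))
        return True

c = Certifier()
t1 = Txn(1, snapshot=0, read_set={"x"}, write_set={"x"})
t2 = Txn(2, snapshot=0, read_set={"x"}, write_set={"y"})
print(c.certify(t1))   # True: commits and records its write of x
print(c.certify(t2))   # False: read x, which t1 wrote after t2's snapshot
```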

    Design of efficient and elastic storage in the cloud

    Ph.D. (Doctor of Philosophy)

    Formal design of data warehouse and OLAP systems : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand

    A data warehouse is a single data store in which data from multiple data sources is integrated for the online analytical processing (OLAP) of an entire organisation. The rationale for being single and integrated is to ensure a consistent view of organisational business performance, independent of the different angles of business perspectives. Due to its wide coverage of subjects, data warehouse design is a highly complex, lengthy and error-prone process. Furthermore, the business analytical tasks change over time, which results in changes in the requirements for the OLAP systems. Thus, data warehouse and OLAP systems are rather dynamic and the design process is continuous. In this thesis, we propose a method that is integrated, formal and application-tailored, to overcome the complexity problem, deal with the system dynamics, and improve the quality of the system and its chance of success. Our method comprises three important parts: the general ASM method with types, the application-tailored design framework for data warehouse and OLAP, and the schema integration method with a set of provably correct refinement rules. By using the ASM method, we are able to model both data and operations in a uniform conceptual framework, which enables us to design an integrated approach for data warehouse and OLAP design. The freedom given by the ASM method allows us to model the system at an abstract level that is easy to understand for both users and designers. More specifically, the language allows us to use terms from the user domain, not biased by the terms used in computer systems. The pseudo-code-like transition rules, which give the simplest form of operational semantics in ASMs, are close enough to programming languages for designers to understand. Furthermore, these rules are rooted in mathematics, which helps to improve the quality of the system design. By extending the ASMs with types, the modelling language is tailored for data warehousing with terms that are well developed for data-intensive applications, which makes it easy to model schema evolution as refinements in dynamic data warehouse design. By providing the application-tailored design framework, we break down the design complexity by business processes (also called subjects in data warehousing) and by design concerns. In designing the data warehouse by subjects, our method resembles Kimball's "bottom-up" approach; however, with the schema integration method, our method resolves the stovepipe issue of that approach. By building up a data warehouse iteratively in an integrated framework, our method not only results in an integrated data warehouse but also resolves the issues of complexity and delayed ROI (Return On Investment) in Inmon's "top-down" approach. By dealing with user change requests in the same way as new subjects, and by modelling data and operations explicitly in a three-tier architecture, namely the data sources, the data warehouse and the OLAP (Online Analytical Processing) layer, our method facilitates dynamic design with system integrity. By introducing a notion of refinement specific to schema evolution, namely schema refinement, which captures the notion of schema dominance in schema integration, we are able to build a set of correctness-proven refinement rules. By providing this set of refinement rules, we simplify the designers' work in verifying the correctness of a design. Nevertheless, we do not aim for a complete set of rules, since there are many different ways to perform schema integration, nor do we prescribe a single way of integrating, so as to allow designer-favoured designs. Furthermore, given its flexibility, our method can easily be extended to new emerging design issues.
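
    As a toy illustration of one intuition behind schema dominance (a refined schema should still let every original table be recovered), a possible containment check is sketched below. It is not the thesis' formal refinement rules; the schemas and the renaming map are hypothetical.

```python
# Toy illustration of a schema-dominance intuition (not the thesis' formal
# refinement rules): a refined schema dominates the original if every original
# table's attributes are still available, possibly under a table renaming.
def dominates(refined: dict[str, set], original: dict[str, set],
              renaming: dict[str, str] | None = None) -> bool:
    renaming = renaming or {}
    for table, attrs in original.items():
        target = renaming.get(table, table)
        if target not in refined or not attrs <= refined[target]:
            return False   # some original information is no longer recoverable
    return True

original = {"sales": {"date", "product", "amount"}}
refined  = {"fact_sales": {"date", "product", "amount", "store"}}
print(dominates(refined, original, renaming={"sales": "fact_sales"}))  # True
```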

    Analysis of costs and delivery intervals for multiple-release software

    Project managers of large software projects, and particularly those associated with Internet Business-to-Business (B2B) or Business-to-Customer (B2C) applications, are under pressure to capture market share by delivering reliable software under cost and timing constraints. An earlier delivery time may help E-commerce software capture a larger market share; however, early delivery sometimes means lower quality. In addition, the scale of the software is usually so large that incremental, multiple releases are warranted. A Multiple-Release methodology has been developed to determine the most efficient and effective delivery intervals for the various releases of software products, taking into consideration software costs and reliability. The Multiple-Release methodology extends existing software cost and reliability models, meets the needs of large software development firms, and gives a navigation guide to software industry managers. The main decision factors for the multiple releases include the delivery interval of each release, the market value of the features in the release, and the software costs associated with testing and error penalties. The input of these factors was assessed using Design of Experiments (DOE). The costs included in the research are based on a salary survey of software staff at companies in the New Jersey area and on the budgets of software development teams. The Activity-Based Costing (ABC) method was used to determine costs on the basis of the job functions associated with the development of the software. It is assumed that the error data behavior follows a Non-Homogeneous Poisson Process (NHPP).
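
    The trade-off described above can be sketched with a small calculation: under an NHPP with the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)), testing longer before a release leaves fewer expected field defects but adds testing cost and delays market entry. The sketch below is only an illustration of that balance; all parameter values are hypothetical, not the calibrated figures from the study.

```python
# Hedged sketch of a single-release delivery-interval trade-off under an NHPP
# (Goel-Okumoto mean value function). Parameter values are hypothetical.
import math

a, b = 120.0, 0.15          # expected total defects, detection rate (per week)
test_cost_per_week = 8_000.0
penalty_per_field_defect = 5_000.0
market_value_loss_per_week = 6_000.0

def expected_defects_found(t: float) -> float:
    """Expected number of defects detected after t weeks of testing."""
    return a * (1 - math.exp(-b * t))

def release_cost(t: float) -> float:
    """Testing cost + penalty for remaining defects + lost market value."""
    remaining = a - expected_defects_found(t)
    return (test_cost_per_week * t
            + penalty_per_field_defect * remaining
            + market_value_loss_per_week * t)

best_t = min(range(1, 53), key=release_cost)   # search weekly delivery intervals
print(f"lowest-cost delivery interval: week {best_t}, "
      f"cost = {release_cost(best_t):,.0f}")
```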

    Distributed transaction processing in the Escada protocol

    Database replication is an invaluable technique to implement fault-tolerant databases, and it is also frequently used to improve database performance. Unfortunately, when strong consistency among the replicas and the ability to update the database at any of the replicas are required, the replication protocols currently available in commercial database management systems do not scale up. The problem is related to the number of interactions among the replicas needed to guarantee consistency, and to the termination protocols used to ensure that all the replicas agree on a transaction's result. Roughly, the number of aborts, deadlocks and messages exchanged among the replicas grows drastically as the number of replicas increases. Related work has shown that database replication in such a scenario is impractical. In order to overcome these problems, several studies have been developed. Initially, most of them relaxed the strong consistency or the update-anywhere requirement to achieve feasible solutions. Recently, replication protocols based on group communication have been proposed, in which the strong consistency and update-anywhere requirements are preserved and these problems are circumvented. This is the context of the Escada project. Briefly, it aims to study, design and implement transaction replication mechanisms suited to large-scale distributed systems. In particular, the project exploits partial replication techniques to provide strong consistency criteria without introducing significant synchronization and performance overheads. In this thesis, we augment Escada with a distributed query processing model and mechanism, which is an inevitable requirement in a partially replicated environment. Moreover, exploiting characteristics of its protocols, we propose a semantic cache to reduce the overhead generated when accessing remote replicas. We also improve the certification process, attempting to reduce aborts by using the semantic information available in the transactions. Finally, to evaluate the Escada protocols, the semantic cache and the certification process, we use a simulation model that combines simulated and real code, which allows us to evaluate our proposals under distinct scenarios and configurations. Furthermore, instead of using unrealistic workloads, we test our proposals using workloads based on the TPC-W and TPC-C benchmarks. Fundação para a Ciência e a Tecnologia - POSI/CHS/41285/2001
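
    As an illustration of the semantic-cache idea mentioned above (answer a query locally when its predicate is contained in the predicate of a previously cached query), a very simplified range-only version is sketched below. It is not the Escada implementation, and all names are invented.

```python
# Simplistic semantic-cache sketch (illustrative only): a range query over one
# column is answered locally when it is contained in a cached range; otherwise
# it falls back to a stand-in remote fetch.
class SemanticRangeCache:
    def __init__(self):
        self.entries = []   # list of ((low, high), rows) for one indexed column

    def lookup(self, low: float, high: float):
        for (c_low, c_high), rows in self.entries:
            if c_low <= low and high <= c_high:          # query contained in cache
                return [r for r in rows if low <= r <= high]
        return None                                      # miss -> go remote

    def store(self, low: float, high: float, rows):
        self.entries.append(((low, high), rows))

def fetch_remote(low, high):
    """Stand-in for a query sent to a remote replica."""
    return [v for v in range(0, 1000) if low <= v <= high]

cache = SemanticRangeCache()
q1 = cache.lookup(100, 400) or fetch_remote(100, 400)
cache.store(100, 400, q1)
q2 = cache.lookup(150, 300)          # answered locally: contained in [100, 400]
print(q2 is not None, len(q2))
```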