
    Compensation methods to support cooperative applications: A case study in automated verification of schema requirements for an advanced transaction model

    Compensation plays an important role in advanced transaction models, cooperative work, and workflow systems. A schema designer is typically required to supply, for each transaction, another transaction that semantically undoes its effects. Little attention has been paid to verifying the desirable properties of such operations, however. This paper demonstrates the use of a higher-order logic theorem prover to verify that compensating transactions return a database to its original state. It is shown how an OODB schema is translated into the language of the theorem prover so that proofs can be performed on the compensating transactions.
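    To make the verified property concrete: the proof obligation is that composing a transaction with its compensating transaction is the identity on database states. Below is a minimal sketch of that obligation in Lean 4, using a hypothetical one-attribute schema rather than the paper's actual OODB translation:

```lean
-- Toy model, not the paper's encoding: the "database" is a single integer
-- balance; `deposit` is a transaction and `withdraw` its compensation.
def deposit (amt : Int) (db : Int) : Int := db + amt
def withdraw (amt : Int) (db : Int) : Int := db - amt

-- Verification obligation: a transaction followed by its compensating
-- transaction returns the database to its original state.
theorem withdraw_compensates (amt db : Int) :
    withdraw amt (deposit amt db) = db := by
  unfold deposit withdraw
  omega
```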

    Issues about the Adoption of Formal Methods for Dependable Composition of Web Services

    Web Services provide interoperable mechanisms for describing, locating, and invoking services over the Internet; composition further enables building complex services out of simpler ones for complex B2B applications. While current studies on these topics focus, from the technical viewpoint, mostly on standards and protocols, this paper investigates the adoption of formal methods, especially for composition. We classify and analyze three different but interconnected kinds of issues towards this goal, namely foundations, verification, and extensions. The aim of this work is to identify the proper questions on the adoption of formal methods for dependable composition of Web Services, not necessarily to find the optimal answers. Nevertheless, we still propose some tentative answers based on our proposal for a composition calculus, which we hope can stimulate a proper discussion.
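    The composition calculus itself is not reproduced in the abstract; as a purely illustrative sketch of what composition means operationally, the Python fragment below builds a composite service from hypothetical sequential and parallel combinators over services modelled as async functions (all names are invented):

```python
# Illustrative only: services as async functions, composed with
# hypothetical seq/par combinators. Not the paper's calculus.
import asyncio

async def check_stock(order):    # stand-in for a remote Web Service call
    return {**order, "in_stock": True}

async def bill_customer(order):  # stand-in for a second remote service
    return {**order, "billed": True}

def seq(*services):              # run services one after another
    async def composite(msg):
        for s in services:
            msg = await s(msg)
        return msg
    return composite

def par(*services):              # invoke services concurrently, merge results
    async def composite(msg):
        results = await asyncio.gather(*(s(dict(msg)) for s in services))
        merged = dict(msg)
        for r in results:
            merged.update(r)
        return merged
    return composite

order_service = seq(par(check_stock, bill_customer))
print(asyncio.run(order_service({"id": 42})))
```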

    A Semantic Consistency Model to Reduce Coordination in Replicated Systems

    Large-scale distributed applications need to be available and responsive to satisfy millions of users, which can be achieved by geo-replicating data across multiple replicas. However, a partitioned system cannot fully sustain both availability and consistency. Weak consistency models may lead to data integrity violations triggered by problematic concurrent updates, such as selling the last ticket for a flight twice. To avoid such conflicts, programmers may opt for strong consistency, which guarantees a total order over operations and thereby preserves data integrity; however, maintaining the illusion of a non-replicated system hurts availability. In contrast, weaker notions such as eventual consistency boost responsiveness, since operations execute directly at the source replica and their effects are propagated to remote replicas in the background, but this approach can put data integrity at risk. Current protocols that preserve invariants rely on at least causal consistency, a consistency model that maintains causal dependencies between operations. In this dissertation, we propose a protocol built around a semantic consistency model that stands between eventual consistency and causal consistency: it performs better than causal consistency while still ensuring data integrity. Through semantic analysis, relying on the static analysis tool CISE3, we limit the maximum number of dependencies each operation carries. To support the protocol, we developed a communication algorithm for a cluster of replicas. Additionally, we present an architecture that uses Akka, an actor-based middleware in which actors communicate by exchanging messages; this architecture adopts the publish/subscribe pattern and includes data persistence. We also account for the stability of operations and for a dynamic cluster environment, ensuring convergence of the replicated state. Finally, an experimental evaluation of the algorithm on standard case studies confirms that, by relying on semantic analysis, the system requires less coordination between replicas than causal consistency while ensuring data integrity.
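    As a hedged illustration of the core idea (not the dissertation's actual protocol), the sketch below makes each new operation depend only on prior operations that a CISE3-style analysis flags as non-commuting, instead of on the full causal history; the conflict table and operation names are hypothetical:

```python
# Illustrative sketch: dependencies limited by semantic (commutativity)
# analysis. Causal consistency would depend on the entire prior history.
from dataclasses import dataclass, field

# Hypothetical output of a CISE3-style analysis: pairs of operation types
# whose effects do not commute and must therefore be ordered.
NON_COMMUTING = {("debit", "debit"), ("debit", "credit")}

@dataclass
class Op:
    op_id: int
    kind: str
    deps: set = field(default_factory=set)

class Replica:
    def __init__(self):
        self.log = []
        self.next_id = 0

    def submit(self, kind):
        # Depend only on prior ops that conflict semantically with this one.
        deps = {o.op_id for o in self.log
                if (o.kind, kind) in NON_COMMUTING
                or (kind, o.kind) in NON_COMMUTING}
        op = Op(self.next_id, kind, deps)
        self.next_id += 1
        self.log.append(op)
        return op

r = Replica()
r.submit("credit")             # no conflicting predecessors: no dependencies
print(r.submit("debit").deps)  # must be ordered after the credit: {0}
print(r.submit("read").deps)   # commutes with everything here: set()
```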

    Flashix: modular verification of a concurrent and crash-safe flash file system

    The Flashix project has developed the first realistic verified file system for flash memory. This paper gives an overview of the project and the theory used. The specification is based on modular components and subcomponents, which may have concurrent implementations connected via refinement. Functional correctness and crash-safety are verified separately for each component. We highlight some components that were recently added to improve efficiency, such as file caches and concurrent garbage collection. The project generates 18K lines of C code that runs under Linux. We evaluate how efficiency has improved and compare with UBIFS, the most recent flash file system implementation available for the Linux kernel.
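    As a loose illustration of the refinement idea (hypothetical names, far simpler than Flashix's actual specifications, and with crash behaviour not modelled), the Python sketch below checks a write-back-cached store against an abstract specification on a random trace of operations:

```python
# Illustrative sketch: an abstract spec and a cached "implementation"
# compared on random traces; a refinement proof does this for all traces.
import random

class AbstractStore:                      # abstract specification
    def __init__(self): self.data = {}
    def write(self, path, content): self.data[path] = content
    def read(self, path): return self.data.get(path)

class CachedStore:                        # implementation with a file cache
    def __init__(self):
        self.disk, self.cache = {}, {}
    def write(self, path, content): self.cache[path] = content
    def read(self, path):
        return self.cache.get(path, self.disk.get(path))
    def flush(self):                      # write-back on sync
        self.disk.update(self.cache)
        self.cache.clear()

spec, impl = AbstractStore(), CachedStore()
for i in range(1000):
    path = random.choice("abc")
    if random.random() < 0.5:
        content = str(i)
        spec.write(path, content)
        impl.write(path, content)
    else:
        # Observable behaviour of the implementation must match the spec.
        assert spec.read(path) == impl.read(path)
    if random.random() < 0.1:
        impl.flush()                      # flushing must not change reads
print("cached implementation matched the spec on this trace")
```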

    An integrated concurrency control in object-oriented database systems.

    Object-oriented databases (OODBs) have been adopted for non-standard applications that require advanced modeling power to handle complex data and the relationships among such data. An important characteristic of database systems is the manipulation of shared data: database systems, including OODBs, allow shared data to be accessed by multiple users at the same time. Concurrency control is the mechanism that coordinates access to a multi-user database so that its consistency is maintained. To provide good performance, it is very important that concurrency control schemes incur low overhead and increase concurrency among users. This dissertation presents a concurrency control scheme for OODBs that meets these requirements.

    First, the dissertation discusses three important issues of concurrency control in OODBs: conflicts among methods, class hierarchy locking, and nested method invocations. Previous work on each issue is presented, and its advantages and disadvantages are discussed. Then, an integrated concurrency control scheme addressing all three issues is proposed. For conflicts among methods, a finer locking granularity, such as an attribute or an individual class object, is adopted for instance access and class-definition access so that higher concurrency is achieved; for instance access in particular, higher concurrency is obtained using run-time information. Moreover, locks are acquired for instance method invocations rather than for atomic operation invocations, reducing locking overhead. For class hierarchy locking, overhead is reduced using special classes selected on the basis of class access-frequency information. Finally, for nested method invocations, semantic information is used to provide higher concurrency among methods, and parent/child parallelism is adopted for better performance.

    Second, an analytical model is constructed to measure the performance of concurrency control in an OODB system. Using this model, the proposed technique is compared with two existing techniques, Orion and Malta. The analytical results show that the proposed scheme gives the best transaction response time, Malta the second best, and Orion the worst.

    Finally, a performance study is conducted by means of simulation using the OO7 benchmark. The simulation results show that, in terms of transaction response time and lock waiting time, the proposed scheme performs best, Malta second best, and Orion worst.
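    As a small, hypothetical illustration of one ingredient of such a scheme (attribute-granularity conflict detection among methods, not the dissertation's exact algorithm), two method invocations conflict only if one writes an attribute the other reads or writes:

```python
# Illustrative sketch: method conflicts derived from per-method
# attribute read/write sets. All method and attribute names invented.
READ, WRITE = "r", "w"

ACCESS = {  # which attributes each method reads/writes
    "get_balance": {("balance", READ)},
    "deposit":     {("balance", READ), ("balance", WRITE)},
    "set_owner":   {("owner", WRITE)},
}

def conflicts(m1, m2):
    for attr1, mode1 in ACCESS[m1]:
        for attr2, mode2 in ACCESS[m2]:
            if attr1 == attr2 and WRITE in (mode1, mode2):
                return True
    return False

print(conflicts("deposit", "get_balance"))  # True: write/read on balance
print(conflicts("deposit", "set_owner"))    # False: disjoint attributes
```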

    Self-adjusting multi-granularity locking protocol for object-oriented databases

    Object-oriented databases have the potential to be used for data-intensive, multi-user applications that are not well served by traditional database systems. Although extensive research on concurrency control has been done for relational databases, many of the approaches are not suitable for the complex data model of object-oriented databases. This thesis presents a self-adjusting multi-granularity locking protocol (SAML) which chooses an appropriate locking granule according to the requirements of the transactions, incurs less overhead, and provides better concurrency than some of the existing protocols. Although another adaptive multi-granularity protocol, AMGL [1], provides the same degree of concurrency as SAML, SAML has been shown to significantly reduce the number of locks, and hence the locking overhead, compared to AMGL. Experimental results show that SAML performs best when the system workload is high and transactions are long-lived.
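    For context, here is a minimal sketch of the classic multi-granularity locking machinery that protocols like SAML and AMGL build on; SAML's self-adjusting policy itself is not reproduced, and the example is generic:

```python
# Illustrative sketch of multi-granularity locking: intention locks
# (IS, IX) are taken on coarse granules (e.g. a class) before S/X locks
# on finer ones (e.g. an instance). SIX is omitted for brevity.
COMPATIBLE = {  # requested mode -> modes it can coexist with
    "IS": {"IS", "IX", "S"},
    "IX": {"IS", "IX"},
    "S":  {"IS", "S"},
    "X":  set(),
}

def can_grant(requested, held):
    # Grant only if every currently held mode is compatible.
    return set(held) <= COMPATIBLE[requested]

# Reading one instance: IS on the class, then S on the instance.
# Changing the class definition: X on the class itself.
print(can_grant("IS", {"IX"}))  # True: readers coexist with intent-writers
print(can_grant("X",  {"IS"}))  # False: definition change waits for readers
```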