
    Partial replication in the database state machine

    Doctoral thesis in Informatics (Knowledge Branch: Programming Technologies). Enterprise information systems are nowadays commonly structured as multi-tier architectures and are invariably built on top of database management systems responsible for storing and providing all business data. Database management systems therefore play a vital role in today's organizations: overall system dependability depends directly on their reliability and availability. Replication is a well-known technique to improve dependability. By maintaining consistent replicas of a database, one can increase its fault tolerance and simultaneously improve the system's performance by splitting the workload among the replicas. In this thesis we address these issues by exploiting the partial replication of databases. We target large-scale systems where replicas are distributed across wide-area networks, aiming at both fault tolerance and fast local access to data. In particular, we envision the information systems of multinational organizations with strong access locality, in which fully replicated data should be kept to a minimum and a judicious placement of replicas should allow the full recovery of any site in case of failure. Our research starts from work on database replication algorithms based on group communication protocols, in particular multi-master certification-based protocols. At the core of these protocols resides a total order multicast primitive responsible for establishing a total order of transaction execution. A well-known performance optimization in local-area networks exploits the fact that the definitive total order of messages often closely follows the spontaneous network order, making it possible to proceed optimistically in parallel with the ordering protocol. Unfortunately, this optimization is invalidated in wide-area networks, precisely where the increased latency would make it most useful. To overcome this, we present a novel total order protocol with optimistic delivery for wide-area networks. Our protocol uses local statistical estimates to independently produce a tentative order that closely matches the definitive one, thus allowing optimistic execution in real wide-area networks. Handling partial replication within a certification-based protocol is also particularly challenging, as it directly impacts the certification procedure itself. Depending on the approach, the added complexity may actually defeat the purpose of partial replication. We devise, implement and evaluate two variations of the Database State Machine protocol, discussing their benefits and adequacy under the workload of the standard TPC-C benchmark. This work was supported by Fundação para a Ciência e a Tecnologia (FCT) under project ESCADA (POSI/CHS/33792/2000).
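
    To make the certification step concrete, the sketch below (in Java, with hypothetical class names such as CommittedTx and PartialCertifier; it is not the thesis' code) shows the kind of read/write-set intersection test a certification-based replica performs after the total order is delivered, restricted here to the items the replica actually stores, which is the part partial replication complicates.

        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        final class CommittedTx {
            final long commitVersion;      // position in the total order at which it committed
            final Set<String> writeSet;    // identifiers of the items it wrote
            CommittedTx(long commitVersion, Set<String> writeSet) {
                this.commitVersion = commitVersion;
                this.writeSet = writeSet;
            }
        }

        final class PartialCertifier {
            private final List<CommittedTx> log;   // committed transactions, in total order
            private final Set<String> localItems;  // items replicated at this site

            PartialCertifier(List<CommittedTx> log, Set<String> localItems) {
                this.log = log;
                this.localItems = localItems;
            }

            /** A transaction commits only if no transaction that committed after its
             *  snapshot wrote an item it read. Under partial replication only locally
             *  stored items can be checked here; conflicts on remote items have to be
             *  resolved by the protocol elsewhere. */
            boolean certify(Set<String> readSet, long startVersion) {
                Set<String> reads = new HashSet<>(readSet);
                reads.retainAll(localItems);                       // ignore items this replica does not hold
                for (CommittedTx c : log) {
                    if (c.commitVersion <= startVersion) continue; // already visible to the snapshot
                    for (String item : c.writeSet) {
                        if (reads.contains(item)) return false;    // write-read conflict: abort
                    }
                }
                return true;                                       // no conflict observed locally
            }
        }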

    Towards a generic group communication service

    View synchronous group communication is a mature technology that greatly eases the development of reliable distributed applications by enforcing precise message delivery semantics, especially in the face of faults. It is therefore found at the core of multiple widely deployed and used middleware products. Although implementing a group communication system is a complex task, application developers may benefit from the fact that multiple group communication toolkits are currently available and supported. Unfortunately, each toolkit has a different interface, differing from every other in subtle syntactic and semantic aspects. This hinders the design, implementation and maintenance of applications using group communication and forces developers to commit beforehand to a single toolkit, imposing a significant hurdle to portability. In this paper we propose jGCS, a generic group communication service for Java, which specifies an interface as well as minimum semantics that allow application portability. This interface accommodates existing group communication services, enabling implementation independence. Furthermore, it provides support for the latest state-of-the-art mechanisms proposed to improve the performance of group-based applications. To support our claims, we present and experimentally evaluate implementations of jGCS for several major group communication systems, namely Appia, Spread/FlushSpread and JGroups, and describe the port of a large middleware product to jGCS. This work was partially supported by the IST project GORDA (FP6-IST2-004758).
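
    As an illustration of the kind of provider-neutral surface such a service exposes, here is a minimal Java sketch; the GroupChannel and GroupListener interfaces below are simplified stand-ins, not the actual jGCS API, and the multicast is assumed here to be totally ordered within a view.

        import java.io.Serializable;

        /** Delivered messages and membership changes are pushed to the application. */
        interface GroupListener {
            void onMessage(String sender, Serializable payload);
            void onViewChange(java.util.List<String> members);  // view-synchronous membership
        }

        /** The only surface the application codes against; each toolkit (Appia,
         *  Spread, JGroups, ...) would be wrapped behind its own implementation. */
        interface GroupChannel {
            void join(String groupName, GroupListener listener) throws Exception;
            void multicast(Serializable payload) throws Exception;  // assumed totally ordered in the view
            void leave() throws Exception;
        }

        /** Example application: it never names a concrete toolkit, so switching
         *  providers becomes a configuration change rather than a code change. */
        final class ReplicatedCounter implements GroupListener {
            private long value;
            private final GroupChannel channel;

            ReplicatedCounter(GroupChannel channel) { this.channel = channel; }

            void start() throws Exception { channel.join("counter-group", this); }
            void increment() throws Exception { channel.multicast(1L); }

            @Override public void onMessage(String sender, Serializable payload) {
                value += (Long) payload;  // same delivery order at every member => same value
            }
            @Override public void onViewChange(java.util.List<String> members) {
                // state transfer to newly joined members would go here
            }
        }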

    On Statistically Estimated Optimistic Delivery in Wide-Area Total Order Protocols

    Total order broadcast protocols have been successfully applied as the basis for the construction of many fault-tolerant distributed systems. Unfortunately, the implementation of such a primitive can be expensive both in terms of communication steps and of the number of messages exchanged. To alleviate this problem, optimistic total order protocols have been proposed. This paper addresses the problem of offering optimistic total order in geographically wide-area systems. We present a protocol that outperforms previous work by minimizing the average latency of the optimistic notification.
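
    The following toy Java sketch illustrates the general idea of statistics-driven optimistic delivery (it is not the protocol in the paper): per-sender delay estimates determine how long to hold a message back before tentatively delivering it in send-timestamp order, while the definitive total order is still being agreed upon. Clock skew and message loss are deliberately ignored.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.PriorityQueue;

        final class OptimisticDeliverer {
            private static final double ALPHA = 0.1;                     // EWMA weight for new samples
            private final Map<String, Double> delayMs = new HashMap<>(); // estimated one-way delay per sender
            private final PriorityQueue<long[]> pending =                // {sendTs, msgId}, ordered by sendTs
                    new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));

            /** Record a message; sendTs is stamped by the sender, recvTs locally. */
            void receive(String sender, long msgId, long sendTsMs, long recvTsMs) {
                double sample = recvTsMs - sendTsMs;                     // toy version: ignores clock skew
                delayMs.merge(sender, sample, (old, s) -> (1 - ALPHA) * old + ALPHA * s);
                pending.add(new long[] { sendTsMs, msgId });
            }

            /** Tentatively deliver everything estimated to be stable by 'nowMs': a message
             *  older than the largest estimated sender delay is unlikely to be overtaken
             *  by one still in transit, so its tentative position should match the final order. */
            List<Long> optimisticDeliver(long nowMs) {
                double maxDelay = delayMs.values().stream()
                        .mapToDouble(Double::doubleValue).max().orElse(0);
                List<Long> delivered = new ArrayList<>();
                while (!pending.isEmpty() && pending.peek()[0] + maxDelay <= nowMs) {
                    delivered.add(pending.poll()[1]);
                }
                return delivered;                                        // still tentative: the agreed order prevails
            }
        }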

    Annotated Bibliography: Understanding Ambulatory Care Practices in the Context of Patient Safety and Quality Improvement.

    The ambulatory care setting is an increasingly important component of the patient safety conversation. Inpatient safety is the primary focus of the vast majority of safety research and interventions, but the ambulatory setting is actually where most medical care is administered. Recent attention has shifted toward examining ambulatory care in order to implement better health care quality and safety practices. This annotated bibliography was created to analyze and augment the current literature on ambulatory care practices with regard to patient safety and quality improvement. By providing a thorough examination of current practices, potential improvement strategies in ambulatory health care settings can be suggested. A better understanding of the myriad factors that influence the delivery of patient care will catalyze future health care system development and implementation in the ambulatory setting.

    Silver Dreams Fund Learning and Evaluation Contract: Final report June 2014

    This is a summary of the Final Report, which presents the findings of the evaluation of the Big Lottery Fund's Silver Dreams Fund conducted by Ecorys. The Silver Dreams Fund was a £10 million programme which sought to address the gaps in provision by challenging organisations to come up with an innovative idea for a project that would "pioneer ways to help vulnerable older people deal more effectively with life-changing events". Our approach involved both formative and summative elements and was based upon a robust and evidence-based outcome evaluation framework. In addition, we have also undertaken an evaluation of the new programme management processes employed by the Big Lottery Fund, which has been reported separately. In summary, the evaluation involved:
    - development of an evaluation framework and common indicators to measure outcomes;
    - provision of a package of self-evaluation support to projects;
    - programme-level work to provide independent primary qualitative research and to validate findings from self-evaluations;
    - a range of learning activities; and
    - analysis and reporting.

    The Arecibo Legacy Fast ALFA Survey: VI. Second HI Source Catalog of the Virgo Cluster Region

    We present the third installment of HI sources extracted from the Arecibo Legacy Fast ALFA extragalactic survey. This dataset continues the work of the Virgo ALFALFA catalog. The catalogs and spectra published here consist of data obtained during the 2005 and 2006 observing sessions of the survey. The catalog consists of 578 HI detections within the range 11h 36m < R.A.(J2000) < 13h 52m and +08 deg < Dec.(J2000) < +12 deg, and cz_sun < 18000 km/s. The catalog entries are identified with optical counterparts where possible through the examination of digitized optical images. The catalog detections can be classified into three categories: (a) detections of high reliability with S/N > 6.5; (b) high velocity clouds in the Milky Way or its periphery; and (c) signals of lower S/N which coincide spatially with an optical object of known redshift. 75% of the sources are newly published HI detections. Of particular note is a complex of HI clouds projected between M87 and M49 that do not coincide with any optical counterparts. Candidate objects without optical counterparts are few. The median redshift for this sample is 6500 km/s, and the cz distribution exhibits the local large scale structure consisting of Virgo, the background void, and the A1367-Coma supercluster regime at cz_sun ~ 7000 km/s. Position corrections for telescope pointing errors are applied to the dataset by comparing ALFALFA continuum centroids with those cataloged in the NRAO VLA Sky Survey. The uncorrected positional accuracy averages 27 arcsec (21 arcsec median) for all sources with S/N > 6.5 and is of order 21 arcsec (16 arcsec median) for signals with S/N > 12. Uncertainties in distances toward the Virgo cluster can affect the calculated HI mass distribution. Comment: 25 pages, 1 table, 8 figures; accepted by the Astronomical Journal.
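
    Read as a selection rule, the three categories could be expressed roughly as in the Java sketch below; the field names and the 300 km/s high-velocity-cloud cut are illustrative assumptions, not values taken from the catalog paper.

        enum HiCategory { HIGH_RELIABILITY, HIGH_VELOCITY_CLOUD, PRIOR_CONFIRMED, UNCLASSIFIED }

        final class HiDetection {
            final double signalToNoise;              // integrated S/N of the HI line
            final double czSunKmS;                   // heliocentric recession velocity
            final boolean hasOpticalCounterpart;     // spatial coincidence with an optical object
            final boolean counterpartHasKnownRedshift;

            HiDetection(double snr, double cz, boolean optical, boolean knownZ) {
                this.signalToNoise = snr;
                this.czSunKmS = cz;
                this.hasOpticalCounterpart = optical;
                this.counterpartHasKnownRedshift = knownZ;
            }

            /** (a) high reliability: S/N > 6.5; (b) Galactic high velocity clouds, approximated
             *  here by a low |cz| cut (the 300 km/s threshold is an assumption); (c) lower S/N
             *  but spatially coincident with an optical object of known redshift. */
            HiCategory classify() {
                if (Math.abs(czSunKmS) < 300) return HiCategory.HIGH_VELOCITY_CLOUD;
                if (signalToNoise > 6.5) return HiCategory.HIGH_RELIABILITY;
                if (hasOpticalCounterpart && counterpartHasKnownRedshift) return HiCategory.PRIOR_CONFIRMED;
                return HiCategory.UNCLASSIFIED;
            }
        }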

    Autonomic State Management for Optimistic Simulation Platforms

    We present the design and implementation of an autonomic state manager (ASM) tailored for integration within optimistic parallel discrete event simulation (PDES) environments based on the C programming language and the Executable and Linkable Format (ELF), and developed for execution on x86-64 architectures. With ASM, the state of any logical process (LP), namely the individual (concurrent) simulation unit that is part of the simulation model, is allowed to be scattered over dynamically allocated memory chunks managed via the standard API (e.g., malloc/free). Also, the application programmer is not required to provide any serialization/deserialization module in order to take a checkpoint of the LP state or to restore it in case a causality error occurs during the optimistic run, nor to provide indications on which portions of the state are updated by event processing so as to allow incremental checkpointing. All these tasks are handled by ASM in a fully transparent manner via (A) runtime identification (with chunk-level granularity) of the memory map associated with the LP state, and (B) runtime tracking of the memory updates occurring within chunks belonging to the dynamic memory map. The co-existence of the incremental and non-incremental log/restore modes is achieved via dual versions of the same application code, transparently generated by ASM via compile/link-time facilities. Also, the dynamic selection of the best-suited log/restore mode is actuated by ASM on the basis of an innovative modeling/optimization approach which takes into account the stability of each operating mode with respect to variations of the model/environmental execution parameters.
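
    The kind of decision ASM automates can be caricatured with the cost comparison below, a deliberately simplified Java sketch; the actual model in the paper is more elaborate and also accounts for the stability of each operating mode.

        final class CheckpointModeSelector {
            enum Mode { FULL, INCREMENTAL }

            /**
             * @param stateBytes          total size of the LP state (all tracked chunks)
             * @param dirtyBytesPerEvent  average bytes modified by one event
             * @param eventsPerCheckpoint events processed between two checkpoints
             * @param rollbackFrequency   fraction of checkpoint intervals that end in a restore
             * @param trackingOverhead    per-event cost (in byte-equivalents) of update tracking
             */
            static Mode select(long stateBytes, double dirtyBytesPerEvent, int eventsPerCheckpoint,
                               double rollbackFrequency, double trackingOverhead) {
                // Full mode: copy the whole state once per checkpoint; a restore copies it back once.
                double fullCost = stateBytes * (1 + rollbackFrequency);

                // Incremental mode: log only dirty chunks and pay tracking on every event, but a
                // restore may have to walk back through earlier incremental logs (assume 2 here).
                double incrementalCost =
                        dirtyBytesPerEvent * eventsPerCheckpoint
                        + trackingOverhead * eventsPerCheckpoint
                        + rollbackFrequency * 2 * dirtyBytesPerEvent * eventsPerCheckpoint;

                return incrementalCost < fullCost ? Mode.INCREMENTAL : Mode.FULL;
            }
        }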

    Scalable service-oriented replication with flexible consistency guarantee in the cloud

    Replication techniques are widely applied in and for the cloud to improve scalability and availability. In this context, a well-understood problem is how to guarantee consistency amongst different replicas and how to govern the trade-off between consistency and scalability requirements. Such requirements are often tied to specific services and can vary considerably in the cloud. However, a major drawback of existing service-oriented replication approaches is that they allow either only restricted consistency or none at all. Consequently, service-oriented systems based on such replication techniques may violate consistency requirements or fail to scale well. In this paper, we present Scalable Service-Oriented Replication (SSOR), a middleware capable of satisfying applications' consistency requirements when replicating cloud-based services. We introduce a new formalism for describing services in service-oriented replication. We propose the notion of consistency regions and the associated service-oriented requirements policies, by which the trade-off between consistency and scalability requirements can be handled within regions. We solve the associated sub-problem of atomic broadcast by introducing the Multi-fixed Sequencers Protocol (MSP), a requirements-aware variation of the traditional fixed-sequencer approach. We also present a Region-based Election Protocol (REP) that elastically balances the workload amongst sequencers. Finally, we experimentally evaluate our approach under different loads, showing that it achieves better scalability with more flexible consistency constraints than state-of-the-art replication techniques.
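
    The consistency-region idea can be sketched as follows (illustrative Java, not SSOR's API): updates are sequenced only within the region their service is mapped to, each region standing in for its own fixed sequencer, so services with unrelated consistency requirements never serialize against one another.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicLong;

        final class RegionRegistry {
            /** One monotonically increasing sequence per region, standing in for that
             *  region's fixed sequencer. */
            private final Map<String, AtomicLong> regionSequencers = new ConcurrentHashMap<>();
            /** Service name -> region name, derived from per-service consistency requirements. */
            private final Map<String, String> serviceToRegion = new ConcurrentHashMap<>();

            void assignService(String service, String region) {
                serviceToRegion.put(service, region);
                regionSequencers.computeIfAbsent(region, r -> new AtomicLong());
            }

            /** Order an update: it receives a sequence number that is total within its
             *  region but independent of every other region, which is where the
             *  scalability over a single global sequencer comes from. */
            long order(String service) {
                String region = serviceToRegion.getOrDefault(service, "default");
                return regionSequencers
                        .computeIfAbsent(region, r -> new AtomicLong())
                        .incrementAndGet();
            }
        }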