Optimizing recovery protocols for replicated database systems
Nowadays, information technology and computing systems have a great relevance
on our lives. Among current computer systems, distributed systems stand out
because of their scalability, fault tolerance, performance improvements and
high availability.
Replicated systems are a special case of distributed systems. This Ph.D. thesis
focuses on the field of replicated databases due to their widespread use, requiring
among other properties: low response times, high throughput, load balancing
among replicas, data consistency, data integrity and fault tolerance.
In this scope, the development of applications that use replicated databases
raises some problems that can be reduced using other fault-tolerant building
blocks, such as group communication and membership services. Thus, the usage
of the services provided by group communication systems (GCS) hides several
communication details, simplifying the design of replication and recovery protocols.
This Ph.D. thesis surveys the alternatives and strategies being used in the replication
and recovery protocols for database replication systems. It also summarizes
different concepts about group communication systems and virtual synchrony.
As a result, the thesis provides a classification of database replication
protocols according to their support to (and interaction with) recovery protocols,
always assuming that both kinds of protocol rely on a GCS.
Since current commercial DBMSs allow programmers and database administrators
to sacrifice consistency with the aim of improving performance, it is
important to select the appropriate level of consistency. Regarding (replicated)
databases, consistency is strongly related to the isolation levels assigned
to transactions.
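The choice matters in practice: snapshot isolation certifies transactions by comparing writesets only, so anomalies that touch disjoint writesets, such as write skew, can slip through. A minimal Python sketch of this (the first-committer-wins `certify` rule below is standard textbook material, not code from the thesis):

```python
# Two transactions start from the same snapshot {x: 1, y: 1} under the
# invariant x + y >= 1, and each one writes a *different* item.
snapshot = {"x": 1, "y": 1}

t1_writeset = {"x": 0}  # T1 read x and y, then set x = 0
t2_writeset = {"y": 0}  # T2 read x and y, then set y = 0

def certify(writeset, committed_writesets):
    """First-committer-wins: abort only on a write-write conflict."""
    return all(writeset.keys().isdisjoint(ws) for ws in committed_writesets)

committed = []
db = dict(snapshot)
for ws in (t1_writeset, t2_writeset):
    if certify(ws, committed):    # note: readsets are never inspected
        committed.append(ws)
        db.update(ws)

print(db)                      # {'x': 0, 'y': 0}
print(db["x"] + db["y"] >= 1)  # False: write skew broke the invariant
```

Both transactions certify because their writesets are disjoint, yet the resulting state violates an invariant that a serializable execution would have preserved.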
One of the main proposals of this thesis is a recovery protocol for a replication
protocol based on certification. Certification-based database replication protocols
provide a good basis for the development of their recovery strategies when
a snapshot isolation level is assumed. Under that level, readsets are not needed
in the validation step. As a result, they do not need to be transmitted to other
replicas. Additionally, these protocols hold a writeset list that is used in the
certification/validation step. That list maintains the set of writesets needed
by the recovery protocol. This thesis evaluates the performance of a recovery
protocol based on the writeset list transfer (the basic protocol) and of an optimized
version that compacts the information to be transferred.
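A rough sketch of how one log can serve both certification and recovery (hypothetical names and structure; the thesis protocol also deals with concurrency control and garbage collection of the log):

```python
class Replica:
    """Toy certification-based replica: the same writeset log serves
    both transaction certification and recovery state transfer."""

    def __init__(self):
        self.db = {}
        self.log = []   # [(seq, writeset)] kept for certification
        self.seq = 0

    def certify_and_apply(self, writeset, start_seq):
        # Abort on a write-write conflict with any transaction that
        # committed after this one took its snapshot.
        for seq, ws in self.log:
            if seq > start_seq and not ws.keys().isdisjoint(writeset):
                return False               # certification failed
        self.seq += 1
        self.log.append((self.seq, dict(writeset)))
        self.db.update(writeset)
        return True

    def missed_since(self, last_applied_seq):
        """Recovery transfer: the certification log already holds
        everything a rejoining replica missed."""
        return [(s, ws) for s, ws in self.log if s > last_applied_seq]

primary = Replica()
primary.certify_and_apply({"x": 1}, start_seq=0)   # commits as seq 1
primary.certify_and_apply({"y": 2}, start_seq=0)   # disjoint -> seq 2
# A recovering replica that last applied seq 1 asks for what it missed:
print(primary.missed_since(1))   # [(2, {'y': 2})]
```

No extra bookkeeping is needed for recovery: the `missed_since` query reuses exactly the list the certification step maintains anyway.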
The second proposal applies the compaction principle to a recovery protocol
designed for weak-voting replication protocols. Its aim is to minimize the time
needed for transferring and applying the writesets lost by the recovering replica,
obtaining in this way an efficient recovery. The performance of this recovery
algorithm has been checked by implementing a simulator in the Omnet++
simulation framework. The simulation results confirm
that this recovery protocol performs well in multiple scenarios.
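The compaction principle itself is easy to sketch: successive writesets that touch the same item can be merged so that only the final value of each item is transferred and applied (illustrative Python, not the thesis implementation):

```python
def compact(missed_writesets):
    """Merge an ordered list of missed writesets into a single one,
    keeping only the last value written to each item."""
    merged = {}
    for ws in missed_writesets:   # oldest first
        merged.update(ws)         # later writes win
    return merged

missed = [{"a": 1, "b": 2}, {"a": 5}, {"b": 7, "c": 3}]
print(compact(missed))   # {'a': 5, 'b': 7, 'c': 3}
# Five individual writes collapse to three: less to transfer, less to apply.
```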
Finally, the correctness of both recovery protocols is also justified and presented
in Chapter 5.
García Muñoz, LH. (2013). Optimizing recovery protocols for replicated database systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31632
Assessment scales in stroke: clinimetric and clinical considerations
As stroke care has developed, there has been a need to robustly assess the efficacy of interventions both at the level of the individual stroke survivor and in the context of clinical trials. To describe stroke-survivor recovery meaningfully, more sophisticated measures are required than simple dichotomous end points, such as mortality or stroke recurrence. As stroke is an exemplar disabling long-term condition, measures of function are well suited to outcome assessment. In this review, we will describe functional assessment scales in stroke, concentrating on three of the more commonly used tools: the National Institutes of Health Stroke Scale, the modified Rankin Scale, and the Barthel Index. We will discuss the strengths, limitations, and application of these scales and use the scales to highlight important properties that are relevant to all assessment tools. We will frame much of this discussion in the context of "clinimetric" analysis. As they are increasingly used to inform stroke-survivor assessments, we will also discuss some of the commonly used quality-of-life measures. A recurring theme when considering functional assessment is that no tool suits all situations. Clinicians and researchers should choose their assessment tool based on the question of interest and the evidence base around its clinimetric properties.
Outcome evaluation of research for development work conducted in Ghana and Sri Lanka under the Resource Recovery and Reuse (RRR) subprogram of the CGIAR Research Program on Water, Land and Ecosystems (WLE).
This is the main report of an external evaluation of the Resource Recovery and Reuse Flagship of the Water Land and Ecosystems (WLE) CGIAR Research Program. WLE commissioned the study. The Evaluators interviewed researchers and partners in two countries, Ghana and Sri Lanka, and in Ghana visited two sites. They also interviewed key international partners and analyzed a wide range of documents, reports and publications. The evaluation was focused on understanding how and in what ways the research and other activities carried out by IWMI and supported by WLE contributed to the outcomes. In essence, the purpose was to understand the specific impact pathways from research to outputs and outcomes
ARMD Workshop on Materials and Methods for Rapid Manufacturing for Commercial and Urban Aviation
This report documents the goals, organization and outcomes of the NASA Aeronautics Research Mission Directorate's (ARMD) Materials and Methods for Rapid Manufacturing for Commercial and Urban Aviation Workshop. The workshop began with a series of plenary presentations by leaders in the field of structures and materials, followed by concurrent symposia focused on forecasting the future of various technologies related to rapid manufacturing of metallic materials and polymeric matrix composites, referred to herein as composites. Shortly after the workshop, questionnaires were sent to key workshop participants from the aerospace industry with requests to rank the importance of a series of potential investment areas identified during the workshop. Outcomes from the workshop and subsequent questionnaires are being used as guidance for NASA investments in this important technology area.
Optimizing Anti-Phishing Solutions Based on User Awareness, Education and the Use of the Latest Web Security Solutions
Phishing has grown significantly in volume over time, becoming the most common web threat today. The present economic crisis is an added argument for the great increase in the number of attempts to cheat internet users, both businesses and private individuals. The present research is aimed at helping the IT environment get a more precise view of phishing attacks in Romania; in order to achieve this goal we have designed an application able to retrieve and interpret phishing-related data from five other trusted web sources and compile them into a meaningful and more targeted report. As a conclusion, besides making regular reports available, we underline the need for a higher degree of awareness of this issue.
Keywords: Security, Phishing, Ev-SSL, Security Solutions
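The aggregation step such an application performs can be sketched as follows (feed names and contents below are invented; the real tool pulled live data from five trusted web sources):

```python
def compile_report(feeds):
    """Merge phishing-URL feeds from several sources into one report,
    ranked by how many independent sources confirm each URL."""
    seen = {}
    for source, urls in feeds.items():
        for url in urls:
            seen.setdefault(url, set()).add(source)
    return sorted(seen.items(), key=lambda kv: -len(kv[1]))

feeds = {
    "source_a": ["http://bad.example/login", "http://fake.example"],
    "source_b": ["http://bad.example/login"],
}
report = compile_report(feeds)
print(report[0][0])   # http://bad.example/login (confirmed by 2 sources)
```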
Middleware-based Database Replication: The Gaps between Theory and Practice
The need for high availability and performance in data management systems has
been fueling a long running interest in database replication from both academia
and industry. However, academic groups often attack replication problems in
isolation, overlooking the need for completeness in their solutions, while
commercial teams take a holistic approach that often misses opportunities for
fundamental innovation. This has created over time a gap between academic
research and industrial practice.
This paper aims to characterize the gap along three axes: performance,
availability, and administration. We build on our own experience developing and
deploying replication systems in commercial and academic settings, as well as
on a large body of prior related work. We sift through representative examples
from the last decade of open-source, academic, and commercial database
replication systems and combine this material with case studies from real
systems deployed at Fortune 500 customers. We propose two agendas, one for
academic research and one for industrial R&D, which we believe can bridge the
gap within 5-10 years. This way, we hope to both motivate and help researchers
in making the theory and practice of middleware-based database replication more
relevant to each other.
Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on
Management of Data, Vancouver, Canada, June 200
On the cost of database clusters reconfiguration
Database clusters based on shared-nothing replication techniques are currently widely accepted as a practical solution to the scalability and availability of the data tier. A key issue when planning such systems is the ability to meet service-level agreements when load spikes occur or cluster nodes fail. This translates into the ability to provision and deploy additional nodes. Many current research efforts focus on designing autonomic controllers to perform such reconfiguration, tuned to react quickly to system changes and spawn new replicas based on resource usage and performance measurements. In contrast, we are concerned with the inherent impact of deploying an additional node to an online cluster, considering both the time required to finish such an action as well as the impact on resource usage and performance of the cluster as a whole. If noticeable, such impact hinders the practicability of self-management techniques, since it adds an additional dimension that has to be accounted for. Our approach is to systematically benchmark a number of different reconfiguration scenarios to assess the cost of bringing a new replica online. We consider factors such as workload characteristics, incremental and parallel recovery, flow control and outdatedness of the recovering replica. As a result, we show that research should be refocused from optimizing the capture and transmission of changes to applying them, which in a realistic setting dominates the cost of the recovery operation.
Work supported by the Spanish Government under research grant TIN2006-14738-C02-02
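That conclusion can be illustrated with a back-of-the-envelope cost model (the per-writeset figures below are invented for illustration, not measurements from the paper):

```python
# Invented per-writeset costs: shipping a writeset over the network vs.
# applying it at the replica (locks, index maintenance, fsync).
transfer_ms, apply_ms = 0.1, 2.0

def recovery_breakdown(n_missed):
    """Split the total recovery time of a rejoining replica into its
    transfer and apply components."""
    transfer = n_missed * transfer_ms
    apply = n_missed * apply_ms
    return transfer, apply

transfer, apply = recovery_breakdown(100_000)
print(round(apply / (transfer + apply), 3))   # 0.952 -> applying dominates
```

Under any such ratio, shaving the apply phase (e.g. via compaction or parallel recovery) pays off far more than a faster change-capture pipeline.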
COPD: Optimizing Treatment
A guideline update and an expanded armamentarium have many physicians wondering how best to treat patients with COPD. Chronic obstructive pulmonary disease (COPD) carries a high disease burden. In 2012, it was the 4th leading cause of death worldwide.1,2 In 2015, the World Health Organization updated its Global Initiative for Chronic Obstructive Lung Disease (GOLD) guidelines, classifying patients with COPD based on disease burden as determined by symptoms, airflow obstruction, and exacerbation history.3 These revisions, coupled with expanded therapeutic options within established classes of medications and new combination drugs to treat COPD (TABLE 1),3-6 have led to questions about interclass differences and the best treatment regimen for particular patients
Fault diversity among off-the-shelf SQL database servers
Fault tolerance is often the only viable way of obtaining the required system dependability from systems built out of "off-the-shelf" (OTS) products. We have studied a sample of bug reports from four off-the-shelf SQL servers so as to estimate the possible advantages of software fault tolerance - in the form of modular redundancy with diversity - in complex off-the-shelf software. We checked whether these bugs would cause coincident failures in more than one of the servers. We found that very few bugs affected two of the four servers, and none caused failures in more than two. We also found that only four of these bugs would cause identical, undetectable failures in two servers. Therefore, a fault-tolerant server, built with diverse off-the-shelf servers, seems to have a good chance of delivering improvements in availability and failure rates compared with the individual off-the-shelf servers or their replicated, nondiverse configurations
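The idea of modular redundancy with diversity can be sketched as an adjudicator that runs the same query on several diverse servers and accepts an answer only when they agree (the `FakeServer` objects below are stand-ins for connections to distinct off-the-shelf SQL engines):

```python
def adjudicate(servers, sql):
    """Diverse execution: run the same query on every server and
    accept a result only if all replies agree."""
    results = [srv.query(sql) for srv in servers]
    if all(r == results[0] for r in results[1:]):
        return results[0]
    raise RuntimeError(f"diverse servers disagree: {results}")

class FakeServer:
    """Stand-in for a connection to one off-the-shelf SQL engine."""
    def __init__(self, answers):
        self.answers = answers
    def query(self, sql):
        return self.answers[sql]

ok = adjudicate([FakeServer({"Q1": 42}), FakeServer({"Q1": 42})], "Q1")
print(ok)   # 42 -- both engines agree
# A bug affecting only one engine is detected rather than silently served:
# adjudicate([FakeServer({"Q1": 42}), FakeServer({"Q1": 41})], "Q1")
# -> RuntimeError
```

With this comparison scheme, the few coincident identical failures found in the study are exactly the cases such an adjudicator cannot detect.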