    Distributed eventual leader election in the crash-recovery and general omission failure models.

    Distributed applications are present in many aspects of everyday life. Banking, healthcare and transportation are examples of such applications. These applications are built on top of distributed systems. Roughly speaking, a distributed system is composed of a set of processes that collaborate with one another to achieve a common goal. When building such systems, designers have to cope with several issues, such as different synchrony assumptions and the occurrence of failures. Distributed systems must ensure that the delivered service is trustworthy. Agreement problems form a fundamental class of problems in distributed systems. All agreement problems follow the same pattern: all processes must agree on some common decision. Most agreement problems can be considered as particular instances of the Consensus problem; hence, they can be solved by reduction to consensus. However, a fundamental impossibility result, known as FLP, states that in an asynchronous distributed system it is impossible to achieve consensus deterministically when even one process may fail. A way to circumvent this obstacle is to use unreliable failure detectors. A failure detector encapsulates the synchrony assumptions of the system, providing (possibly incorrect) information about process failures. A particular failure detector, called Omega, has been shown to be the weakest failure detector for solving consensus with a majority of correct processes. Informally, Omega relies on providing an eventual leader election mechanism.
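    To make the eventual leader election idea concrete, below is a minimal sketch of the classic heartbeat-based construction of Omega in a partially synchronous crash model; it is not the crash-recovery or omission-model algorithms developed in the thesis, and the class name, method names and timeout value are illustrative assumptions.

```python
import time

class OmegaDetector:
    """Heartbeat-based sketch of Omega: trust the smallest-id process
    heard from recently. Once timing stabilizes and heartbeats from the
    smallest-id correct process arrive on time, all processes converge
    on the same leader, which is all Omega's eventual guarantee asks."""

    def __init__(self, my_id, all_ids, timeout=2.0):
        self.my_id = my_id
        self.timeout = timeout
        # last time a heartbeat was received from each process
        self.last_heard = {p: time.monotonic() for p in all_ids}

    def on_heartbeat(self, sender_id):
        # called by the network layer (assumed) for each heartbeat
        self.last_heard[sender_id] = time.monotonic()

    def leader(self):
        now = time.monotonic()
        trusted = [p for p, t in self.last_heard.items()
                   if p == self.my_id or now - t < self.timeout]
        return min(trusted)  # deterministic rule: smallest trusted id
```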

    Supporting multiple isolation levels in replicated environments

    Database replication provides reliability and scalability, although doing so transparently is not a simple task. A replicated database is transparent if it can replace a traditional centralized database without requiring the adaptation of the remaining components of the system. Transparency in replicated databases can be achieved provided that (a) replication management is kept fully hidden from those components and (b) the same functionality as in a traditional database is offered. To improve overall system performance, current centralized database management systems allow transactions to run concurrently under different isolation levels. For example, the TPC-C benchmark specification allows some transactions to be executed under weak isolation levels. However, this support is not yet available in replication protocols. In this thesis we show how these protocols can be extended to allow transactions to be executed under different isolation levels. Bernabe Gisbert, JM. (2014). Supporting multiple isolation levels in replicated environments [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/36535
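    As a hedged sketch of the kind of extension described above, the code below dispatches each transaction by its isolation level: weak levels commit locally, stronger ones go through certification-based replication. The middleware, database and group-communication interfaces (commit_local, certify, to_broadcast) are hypothetical, not the thesis's actual protocols.

```python
from enum import Enum

class Isolation(Enum):
    READ_COMMITTED = 1
    SNAPSHOT = 2
    SERIALIZABLE = 3

class ReplicationMiddleware:
    """Sketch: weak-isolation transactions commit locally, while
    stronger levels are certified against concurrent transactions."""

    def __init__(self, local_db, group_channel):
        self.db = local_db          # local replica (hypothetical API)
        self.group = group_channel  # total-order broadcast (hypothetical)

    def commit(self, txn):
        if txn.isolation is Isolation.READ_COMMITTED:
            # weak isolation: no global certification is needed
            return self.db.commit_local(txn)
        # stronger isolation: broadcast the writeset in total order,
        # then certify it against concurrently delivered transactions
        self.group.to_broadcast(txn.writeset)
        if self.db.certify(txn):
            return self.db.commit_local(txn)
        return self.db.abort(txn)
```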

    Protocol composition frameworks and modular group communication: models, algorithms and architectures

    It is noticeable that our society is increasingly relying on computer systems. Nowadays, computer networks can be found in places where they would have been unthinkable a few decades ago, in some cases supporting critical applications on which human lives may depend. Although this growing reliance on networked systems is generally perceived as technological progress, one should bear in mind that such systems are constantly growing in size and complexity, to such an extent that assuring their correct operation is sometimes a challenging task. Hence, the dependability of distributed systems has become a crucial issue and has driven an important body of research over the last years. No matter how much effort we put into ensuring our distributed system's correctness, we will be unable to prevent crashes. Therefore, designing distributed systems to tolerate rather than prevent such crashes is a reasonable approach. This is the purpose of fault tolerance. Among all techniques that provide fault tolerance, replication is the only one that allows the system to mask process crashes. The intuition behind replication is simple: instead of having one instance of a service, we run several of them. If one of the replicas crashes, the rest can take over so that the crash does not prevent the system from delivering the expected service. A replicated service needs to keep all its replicas consistent, and group communication protocols provide abstractions to preserve such consistency.

    Group communication toolkits have been present since the late 80s. At the beginning they were monolithic; later on they became modular. Modular group communication toolkits are composed of a set of off-the-shelf protocol modules that can be tailored to the application's needs. Composing protocols requires setting up basic rules that define how modules are composed and how they interact. Sometimes these rules are devised exclusively for a particular protocol suite, but it is more sensible to agree on a carefully chosen set of rules and reuse them: this is the essence of protocol composition frameworks. There is a great diversity of protocol composition frameworks at present, and none is commonly considered the best. Furthermore, any attempt to defend a framework as being the best finds strong opposition, with plenty of arguments pointing out its drawbacks. Given the complexity of current group communication toolkits and their configurability requirements, we believe that research on modular group communication and protocol composition frameworks must go hand in hand. The main goal of this thesis is to advance the state of the art in these two fields jointly and to demonstrate how protocols can benefit from frameworks, as well as how frameworks can benefit from protocols. The thesis is structured in three parts. Part I focuses on issues related to protocol composition frameworks. Part II is devoted to modular group communication. Finally, Part III presents our modular group communication prototype, Fortika, combining the results of the two previous parts and thereby acting as the convergence point.

    At the beginning of Part I, we propose four perspectives to describe and compare frameworks, on which we base our research on protocol frameworks. These perspectives are: composition model (what the composition looks like), interaction model (how the components interact), concurrency model (how concurrency is managed within the framework), and interaction with the environment (how the framework communicates with the outside world). We compare Appia and Cactus, two relevant protocol composition frameworks with very different designs. Overall, we cannot tell which framework is better; a thorough comparison using the four perspectives mentioned above showed that Appia is better in certain aspects, while Cactus is better in others. Concurrency control to avoid race conditions and deadlocks should be ensured by the protocol framework, but this is not always the case. We survey the concurrency models of eight protocol composition frameworks and propose new features to improve concurrency management. Events are the basic mechanism that protocol modules use to communicate with each other, and most protocol composition frameworks include events at the core of their interaction model. However, events are not as good as one may expect. We point out the drawbacks of events and propose an alternative interaction scheme that uses message headers instead of events: the header-driven model.

    Part II starts by discussing common features of traditional group communication toolkits and the problems they entail. Then, a new modular group communication architecture is presented. It is less complex, more powerful, and more responsive to failures than traditional architectures. Crash-recovery is a model where crashed processes can be restarted and continue where they were executing just before they crashed; this requires logging the state to disk periodically. We argue that current specifications of atomic broadcast (an important group communication primitive) are not satisfactory, and we propose a novel specification that intends to overcome the problems we spotted in existing specifications. Additionally, we present two implementations of our atomic broadcast specification and compare their performance.

    Fortika is the main prototype of the thesis, and the subject of Part III. Fortika is a group communication toolkit written in Java that can use third-party frameworks like Cactus or Appia for composition. Fortika was the testbed for the architectures, models and algorithms proposed in the thesis. Finally, we performed software-based fault injection on Fortika to assess its fault tolerance. The results were valuable to improve Fortika's design.
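    As a rough illustration of the header-driven model mentioned above, the sketch below has a protocol module stamp a header onto each outgoing message and consume it on the way back up, instead of exchanging framework events. The module and field names are invented for the example and are not taken from Fortika, Appia or Cactus.

```python
class Message:
    """Header-driven sketch: a message carries a stack of per-protocol
    headers instead of the modules raising framework events."""
    def __init__(self, payload):
        self.payload = payload
        self.headers = []  # stack: last header pushed is first popped

class SequencerModule:
    """Illustrative module: stamps a sequence number on the way down
    and reads it back on the way up."""
    def __init__(self, lower):
        self.lower = lower  # next module in the composition (assumed)
        self.next_seq = 0

    def send(self, msg):
        msg.headers.append(("seq", self.next_seq))
        self.next_seq += 1
        self.lower.send(msg)

    def receive(self, msg):
        kind, seq = msg.headers.pop()
        assert kind == "seq"
        # a real ordering module would buffer out-of-order messages here
        return msg
```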

    Mobile Autonomous Sensing Unit (MASU): a framework that supports distributed pervasive data sensing

    Pervasive data sensing is a major issue that cuts across various research areas and application domains. It allows identifying people's behaviour and patterns without overwhelming the monitored persons. Although there are many pervasive data sensing applications, they are typically focused on addressing specific problems in a single application domain, making them difficult to generalize or reuse. On the other hand, the platforms for supporting pervasive data sensing impose restrictions on the devices and operational environments that make them unsuitable for monitoring loosely-coupled or fully distributed work. To help address this challenge, this paper presents a framework that supports distributed pervasive data sensing in a generic way. Developers can use this framework to facilitate the implementation of their applications, thus reducing the complexity and effort of such an activity. The framework was evaluated through simulations and an empirical test, and the obtained results indicate that it is useful for supporting such a sensing activity in loosely-coupled or fully distributed work scenarios.

    Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing

    Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations imposed by the SOA environment. One of the most important problems in ScST is realistic test data generation. Realistic test data is often generated manually or drawn from an existing source, so its generation is hard to automate and laborious. Another limitation that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to these problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and the dependence on existing data sources by automatically generating service compositions that can generate the required test data. In experimental analysis, our approach achieved between 93% and 100% success rates in generating realistic data, whereas state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates source selection as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto-optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions of between 69% and 98.6% in the monetary cost of service invocations during testing.
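    A minimal sketch of the cost-aware Pareto idea behind such a minimisation: among candidate minimised test suites scored on coverage, invocation cost and reliability, keep only the non-dominated ones. The scoring fields and numbers below are made up for illustration; the thesis's exact objective formulation may differ.

```python
def pareto_front(suites):
    """Return the non-dominated suites: higher coverage and
    reliability are better, lower invocation cost is better."""
    def dominates(a, b):
        no_worse = (a["coverage"] >= b["coverage"] and
                    a["cost"] <= b["cost"] and
                    a["reliability"] >= b["reliability"])
        better = (a["coverage"] > b["coverage"] or
                  a["cost"] < b["cost"] or
                  a["reliability"] > b["reliability"])
        return no_worse and better

    return [s for s in suites
            if not any(dominates(t, s) for t in suites if t is not s)]

# Illustrative candidates: the second is dominated by the first
# (same coverage and reliability, higher invocation cost).
suites = [
    {"coverage": 0.95, "cost": 12.0, "reliability": 0.90},
    {"coverage": 0.95, "cost": 20.0, "reliability": 0.90},
    {"coverage": 0.80, "cost": 5.0,  "reliability": 0.99},
]
print(pareto_front(suites))  # first and third suites survive
```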

    Distributed Detection and Estimation in Wireless Sensor Networks

    Wireless sensor networks (WSNs) are typically formed by a large number of densely deployed, spatially distributed sensors with limited sensing, computing, and communication capabilities that cooperate with each other to achieve a common goal. In this dissertation, we investigate the problem of distributed detection, classification, estimation, and localization in WSNs. In this context, the sensors observe the conditions of their surrounding environment, locally process their noisy observations, and send the processed data to a central entity, known as the fusion center (FC), through parallel communication channels corrupted by fading and additive noise. The FC will then combine the received information from the sensors to make a global inference about the underlying phenomenon, which can be either the detection or classification of a discrete variable or the estimation of a continuous one. In the domain of distributed detection and classification, we propose a novel scheme that enables the FC to make a multi-hypothesis classification of an underlying hypothesis using only binary detections of spatially distributed sensors. This goal is achieved by exploiting the relationship between the influence fields characterizing different hypotheses and the accumulated noisy versions of local binary decisions as received by the FC, where the influence field of a hypothesis is defined as the spatial region in its surroundings in which it can be sensed using some sensing modality. In the realm of distributed estimation and localization, we make four main contributions: (a) We first formulate a general framework that estimates a vector of parameters associated with a deterministic function using spatially distributed noisy samples of the function, for both analog and digital local processing schemes. (b) We consider the estimation of a scalar, random signal at the FC and derive an optimal power-allocation scheme that assigns the optimal local amplification gains to the sensors performing analog local processing. The objective of this optimized power allocation is to minimize the L2-norm of the vector of local transmission powers, given a maximum estimation distortion at the FC. We also propose a variant of this scheme that uses a limited-feedback strategy to eliminate the requirement of perfect feedback of the instantaneous channel fading coefficients from the FC to local sensors through infinite-rate, error-free links. (c) We propose a linear spatial collaboration scheme in which sensors collaborate with each other by sharing their local noisy observations. We derive the optimal set of coefficients used to form linear combinations of the shared noisy observations at local sensors to minimize the total estimation distortion at the FC, given a constraint on the maximum average cumulative transmission power in the entire network. (d) Using a novel performance measure called the estimation outage, we analyze the effects of the spatial randomness of the sensor locations on the quality and performance of localization algorithms by considering an energy-based source-localization scheme under the assumption that the sensors are positioned according to a uniform clustering process.
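    To illustrate the kind of fusion performed at the FC, here is a generic linear MMSE estimator of a scalar random signal observed by several sensors through independent noisy channels. This is a textbook construction under Gaussian assumptions, not the dissertation's specific power-allocation or spatial-collaboration schemes; the function name and example values are invented.

```python
import numpy as np

def lmmse_fuse(y, h, var_theta, var_noise):
    """Fusion-center LMMSE estimate of a zero-mean scalar signal theta
    (prior variance var_theta) from readings y[k] = h[k]*theta + v[k],
    where v[k] is independent zero-mean noise with variance var_noise[k]."""
    y, h, var_noise = map(np.asarray, (y, h, var_noise))
    # For the Gaussian linear model, the posterior mean is
    #   theta_hat = (sum_k h_k y_k / var_k)
    #             / (1/var_theta + sum_k h_k^2 / var_k)
    precision = 1.0 / var_theta + np.sum(h**2 / var_noise)
    return float(np.sum(h * y / var_noise) / precision)

# Example: three sensors with different gains and noise levels.
print(lmmse_fuse(y=[1.1, 0.9, 1.3], h=[1.0, 1.0, 0.8],
                 var_theta=1.0, var_noise=[0.1, 0.2, 0.4]))
```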