
    More Bang For Your Buck: Quorum-Sensing Capabilities Improve the Efficacy of Suicidal Altruism

    Within the context of evolution, an altruistic act that benefits the receiving individual at the expense of the acting individual is a puzzling phenomenon. An extreme form of altruism can be found in colicinogenic E. coli. These suicidal altruists explode, releasing colicins that kill unrelated individuals that are not colicin resistant. By committing suicide, the altruist makes it more likely that its kin will have less competition. The benefits of this strategy rely on the number of competitors and kin nearby. If the organism explodes at an inopportune time, the suicidal act may not harm any competitors. Communication could enable organisms to act altruistically when environmental conditions suggest that the strategy would be most beneficial. Quorum sensing is a form of communication in which bacteria produce a protein and gauge the amount of that protein around them. Quorum sensing is one means by which bacteria sense the biotic factors around them and determine when to produce products, such as antibiotics, that influence competition. Suicidal altruists could use quorum sensing to determine when exploding is most beneficial, but it is challenging to study the selective forces at work in microbes. To address these challenges, we use digital evolution (a form of experimental evolution that uses self-replicating computer programs as organisms) to investigate the effects of enabling altruistic organisms to communicate via quorum sensing. We found that quorum-sensing altruists killed a greater number of competitors per explosion and won competitions against non-communicative altruists. These findings indicate that quorum sensing could increase the beneficial effect of altruism and broaden the suite of conditions under which it will evolve. Comment: 8 pages, 8 figures, ALIFE '14 conference
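
    The mechanism described above (produce a signal, sense its local concentration, and explode only past a quorum threshold) can be illustrated with a toy simulation. The sketch below is a hypothetical simplification in Python, not the digital-evolution platform used in the paper; the grid size, threshold, and kin model are invented for illustration.

```python
# Toy model of quorum-sensing suicidal altruism on a toroidal grid.
# Hypothetical sketch; the paper uses a digital-evolution platform,
# not this simplified simulation.
import random

GRID = 20          # world is GRID x GRID, wrapping at the edges
THRESHOLD = 3      # quorum: explode only if >= THRESHOLD kin signals nearby
RADIUS = 1         # neighborhood radius for sensing and for the blast

class Org:
    def __init__(self, kin_id):
        self.kin_id = kin_id       # organisms sharing kin_id are relatives

def neighbors(x, y):
    for dx in range(-RADIUS, RADIUS + 1):
        for dy in range(-RADIUS, RADIUS + 1):
            if dx or dy:
                yield (x + dx) % GRID, (y + dy) % GRID

def quorum_explode(world, x, y):
    """Explode only when the sensed kin signal passes the quorum threshold."""
    me = world.get((x, y))
    if me is None:
        return 0
    # Each kin neighbor contributes one unit of quorum-sensing signal.
    signal = sum(1 for pos in neighbors(x, y)
                 if (o := world.get(pos)) and o.kin_id == me.kin_id)
    if signal < THRESHOLD:
        return 0                   # too few kin around: suicide would be wasted
    killed = 0
    for pos in neighbors(x, y):
        victim = world.get(pos)
        if victim and victim.kin_id != me.kin_id:
            del world[pos]         # colicin kills unrelated competitors
            killed += 1
    del world[(x, y)]              # the altruist dies in the explosion
    return killed

# Populate a world with two competing lineages and trigger one explosion.
world = {(random.randrange(GRID), random.randrange(GRID)): Org(random.choice([0, 1]))
         for _ in range(200)}
x, y = next(iter(world))
print("competitors killed:", quorum_explode(world, x, y))
```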

    Data Replication with 2D Mesh Protocol for Data Grid

    Data replication is one of the most widely used approaches to achieve high data availability and fault tolerance in a system. In a large-scale, distributed, and dynamic network such as a grid, data replication affects the efficiency of data access and data consistency. Therefore, a mechanism that can maintain the consistency of the data and provide high data availability is needed. This thesis discusses protocols and strategies for replicating data in distributed database and grid environments where the network and users are dynamic. Several protocols that have been implemented in distributed databases and grid computing are discussed, such as Read One-Write All (ROWA), Voting (VT), Tree Quorum (TQ), Grid Configuration (GC), Three Dimensional Grid Structure (TDGS), Diagonal Replication in Grid (DRG) and Neighbor Replication in Grid (NRG). In this thesis, we introduce an enhanced replica control protocol, named Enhanced Diagonal Replication 2D Mesh (EDR2M), for grid environments, and compare its availability and communication cost with the more recent protocols TDGS (2001) and NRG (2007). EDR2M ensures data consistency by fulfilling the quorum intersection properties. The suitability and applicability of the EDR2M protocol are evaluated via analytical models and simulations. A simulation of the EDR2M protocol is developed, and the performance metrics evaluated are data availability and communication cost. Choosing a sufficient number of quorums and nodes in each quorum, and selecting the middle node of the diagonal sites to hold the copy of the data file, improve the availability and communication cost of read and write operations compared to TDGS (2001) and NRG (2007). Thus, the experiments show that EDR2M is an adequate protocol for achieving high data availability at a low communication cost by providing a replica control protocol for a dynamic network such as a grid environment
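
    The quorum intersection properties that EDR2M relies on for consistency can be checked mechanically. The Python sketch below is illustrative only: it places replicas on the diagonal of an n x n mesh and verifies that every read quorum overlaps every write quorum (r + w > n) and that any two write quorums overlap. The quorum sizes shown are a generic majority scheme, not the exact EDR2M construction from the thesis.

```python
# Minimal sketch of the quorum-intersection check behind diagonal
# replication on an n x n mesh; illustrative, not the exact EDR2M rules.
from itertools import combinations

def diagonal_sites(n):
    """Replicas are placed on the main diagonal of the n x n mesh."""
    return [(i, i) for i in range(n)]

def quorums(sites, size):
    """All subsets of the replica set with the given quorum size."""
    return [set(q) for q in combinations(sites, size)]

def intersection_holds(read_qs, write_qs):
    """Quorum intersection: every read quorum meets every write quorum,
    and any two write quorums overlap (so writes serialize)."""
    rw = all(r & w for r in read_qs for w in write_qs)
    ww = all(a & b for a, b in combinations(write_qs, 2))
    return rw and ww

n = 5
sites = diagonal_sites(n)        # 5 replicas on the diagonal
w = len(sites) // 2 + 1          # majority write quorum: 3 of 5
r = len(sites) - w + 1           # read quorum chosen so r + w > n: 3
print(intersection_holds(quorums(sites, r), quorums(sites, w)))  # True
```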

    An assessment of blockchain consensus protocols for the Internet of Things

    In a few short years the Internet of Things has become an intrinsic part of everyday life, with connected devices included in products created for homes, cars and even medical equipment. But its rapid growth has created several security problems with respect to the transmission and storage of vast amounts of customer data across an insecure, heterogeneous collection of networks. The Internet of Things is therefore creating a unique set of risks and problems that will affect most households. From breaches in confidentiality, which could allow users to be snooped on, through to failures in integrity, which could lead to consumer data being compromised, devices are presenting many security challenges against which consumers are ill equipped to protect themselves. Moreover, when this is coupled with the heterogeneous nature of the industry and its interoperability and scalability problems, it becomes apparent that the Internet of Things has created an increased attack surface from which security vulnerabilities may be easily exploited. However, it has been conjectured that blockchain may provide a solution to the Internet of Things' security and scalability problems. Because of blockchain's immutability, integrity and scalability, it is possible that its architecture could be used for the storage and transfer of Internet of Things data. Within this paper a cross section of blockchain consensus protocols is assessed against a requirement framework, to establish each consensus protocol's strengths and weaknesses with respect to its potential implementation in an Internet of Things blockchain environment
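
    As a rough illustration of assessing consensus protocols against a requirement framework, the sketch below computes a weighted score per protocol. The requirements, weights, and scores are invented placeholders, not the paper's actual criteria or findings.

```python
# Hypothetical sketch of scoring consensus protocols against a
# requirement framework; all numbers below are illustrative placeholders.
REQUIREMENTS = {            # weight of each IoT-relevant requirement
    "throughput": 0.3,
    "latency": 0.25,
    "energy_efficiency": 0.25,
    "scalability": 0.2,
}

# 0-5 score for how well each protocol meets each requirement (made up).
PROTOCOLS = {
    "Proof of Work":  {"throughput": 1, "latency": 1, "energy_efficiency": 0, "scalability": 2},
    "Proof of Stake": {"throughput": 3, "latency": 3, "energy_efficiency": 4, "scalability": 3},
    "PBFT":           {"throughput": 4, "latency": 4, "energy_efficiency": 4, "scalability": 1},
}

def weighted_score(scores):
    return sum(REQUIREMENTS[req] * scores[req] for req in REQUIREMENTS)

# Rank the protocols by their weighted fit to the framework.
for name, scores in sorted(PROTOCOLS.items(), key=lambda p: -weighted_score(p[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```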

    Survey on replication techniques for distributed system

    Distributed systems mainly provide access to a large amount of data and computational resources through a wide range of interfaces. Beyond their dynamic nature, which means that resources may enter and leave the environment at any time, many distributed applications run in environments where faults are more likely to occur because of their ever-increasing scale and complexity. Given these diverse fault and failure conditions, fault tolerance has become a critical element of distributed computing, allowing a system to perform its function correctly even in the presence of faults. Replication techniques primarily concentrate on two fault-tolerance approaches: masking failures and reconfiguring the system in response. This paper presents a brief survey of different replication techniques such as Read One Write All (ROWA), Quorum Consensus (QC), Tree Quorum (TQ) Protocol, Grid Configuration (GC) Protocol, Two-Replica Distribution Techniques (TRDT), Neighbour Replica Triangular Grid (NRTG) and Neighbour Replication Distributed Techniques (NRDT). Each technique has its own redeeming features and shortcomings, which form the subject matter of this survey
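
    For the quorum-based techniques among those surveyed, correctness hinges on two simple size conditions. The sketch below is a generic illustration, not tied to any one surveyed protocol: it enumerates the safe read/write quorum sizes for n replicas, with ROWA falling out as the special case r = 1, w = n.

```python
# Sketch of the Quorum Consensus (QC) correctness conditions: read and
# write quorums must be large enough to overlap.
def valid_quorum_sizes(n, r, w):
    """For n replicas: r + w > n prevents stale reads (every read quorum
    overlaps every write quorum), and 2w > n serializes writes."""
    return r + w > n and 2 * w > n

n = 5
for r in range(1, n + 1):
    for w in range(1, n + 1):
        if valid_quorum_sizes(n, r, w):
            print(f"n={n}: read quorum {r}, write quorum {w} is safe")
# ROWA is the special case r = 1, w = n: reads are cheap, writes touch
# every replica.
```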

    A Novel Data Replication and Management Protocol for Mobile Computing Systems

    Novelty circular neighboring technique using reactive fault tolerance method

    The availability of data in a distributed system can be increased by implementing a fault tolerance mechanism in the system. The reactive method in fault tolerance deals with restarting failed services, placing redundant copies of data on multiple nodes across the network (in other words, data replication), and migrating the data for recovery. Even though the idea of data replication is solid, the challenge is to choose the right replication technique, one able to provide better data availability as well as consistency for the read and write operations on the redundant copies. The Circular Neighboring Replication (CNR) technique exploits a neighboring policy in replicating data items in the system, and it performs well with regard to the low number of copies needed to keep system availability at its highest. In a performance analysis against existing techniques, results show that CNR improves system availability by an average of 37%, with only two replicas needed to maintain data availability and consistency. The study demonstrates the feasibility of the proposed technique and its potential for deployment in larger and more complex environments
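
    The claim that two replicas suffice can be sanity-checked with a back-of-envelope availability model. The sketch below assumes independent node failures and an invented per-node availability of 0.9; it is not the CNR evaluation itself.

```python
# Back-of-envelope availability model for replication; the per-node
# availability is illustrative, not taken from the CNR evaluation.
def availability(p_node, replicas):
    """Probability that at least one of `replicas` copies is reachable,
    assuming independent node failures with per-node availability p_node."""
    return 1 - (1 - p_node) ** replicas

p = 0.9
for k in (1, 2, 3):
    print(f"{k} replica(s): availability = {availability(p, k):.4f}")
# With p = 0.9, two replicas already lift availability from 0.90 to 0.99,
# which suggests why a technique like CNR can do well with only two copies.
```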

    Supporting disconnected operations in mobile computing

    Mobile computing has enabled users to seamlessly access databases even when they are on the move. However, in the absence of readily available high-quality communication, users are often forced to operate disconnected from the network. As a result, software applications have to be redesigned to take advantage of this environment while accommodating the new challenges posed by mobility. In particular, there is a need for replication and synchronization services in order to guarantee availability of data and functionality (including updates) in disconnected mode. To this end we propose a scalable and highly available data replication and management service. The proposed replication technique is compared with a baseline replication technique and shown to exhibit high availability, fault tolerance and minimal access times for the data and services, which are very important in an environment with low-quality communication links.
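
    Synchronizing replicas after disconnected operation typically requires detecting concurrent updates. The version-vector sketch below is a hypothetical illustration of that reconciliation step, not the replication service proposed in the paper.

```python
# Minimal version-vector sketch for reconciling replicas after
# disconnected operation; hypothetical, not the paper's protocol.
def compare(vv_a, vv_b):
    """Return 'a_newer', 'b_newer', 'equal', or 'conflict' for two
    version vectors mapping replica id -> update counter."""
    keys = set(vv_a) | set(vv_b)
    a_ahead = any(vv_a.get(k, 0) > vv_b.get(k, 0) for k in keys)
    b_ahead = any(vv_b.get(k, 0) > vv_a.get(k, 0) for k in keys)
    if a_ahead and b_ahead:
        return "conflict"          # concurrent disconnected updates
    if a_ahead:
        return "a_newer"
    if b_ahead:
        return "b_newer"
    return "equal"

# A mobile client updates offline while the server also changes: conflict.
server = {"server": 3, "mobile": 1}
mobile = {"server": 2, "mobile": 2}
print(compare(server, mobile))     # -> conflict, needs reconciliation
```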

    Server Placement with Shared Backups for Disaster-Resilient Clouds

    A key strategy for building disaster-resilient clouds is to employ backups of virtual machines in a geo-distributed infrastructure. Today, the continuous and acknowledged replication of virtual machines to different servers is a service provided by several hypervisors. This strategy guarantees that the virtual machines will lose no disk or memory content if a disaster occurs, at the cost of strict bandwidth and latency requirements. Considering this kind of service, in this work we propose an optimization problem to place servers in a wide area network. The goal is to guarantee that backup machines do not fail at the same time as their primary counterparts. In addition, by using virtualization, we also aim to reduce the number of backup servers required. The optimal results, achieved on real topologies, reduce the number of backup servers by at least 40%. Moreover, this work highlights several characteristics of the backup service according to the employed network, such as the fulfillment of latency requirements. Comment: Computer Networks 201
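
    The sharing idea can be sketched with a simple greedy heuristic: a backup server may protect several primaries as long as it has spare capacity and sits in a different disaster zone from each of them. The Python sketch below is illustrative only; the paper formulates an exact optimization problem, and the zones and capacity here are invented.

```python
# Greedy sketch of shared-backup placement: reuse a backup server for a
# new primary only if it has room and cannot fail together with it.
# Illustrative only; the paper solves this as an exact optimization.
def place_backups(primaries, zone_of, capacity):
    """primaries: list of primary server ids.
    zone_of: maps server id -> disaster (failure) zone.
    capacity: how many primaries one backup server can absorb."""
    backups = []                   # each entry: (zone, [protected primaries])
    for p in primaries:
        for zone, protected in backups:
            # Reuse an existing backup if it has spare capacity and is in
            # a different disaster zone than this primary.
            if len(protected) < capacity and zone != zone_of[p]:
                protected.append(p)
                break
        else:
            # Otherwise open a new backup in some zone other than the
            # primary's own, so they cannot fail simultaneously.
            other = next(z for z in set(zone_of.values()) if z != zone_of[p])
            backups.append((other, [p]))
    return backups

zone_of = {"s1": "east", "s2": "east", "s3": "west", "s4": "west"}
placed = place_backups(list(zone_of), zone_of, capacity=2)
print(f"{len(placed)} backup server(s) for {len(zone_of)} primaries")  # 2 for 4
```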