    Permission-based fault tolerant mutual exclusion algorithm for mobile Ad Hoc networks

    This study focuses on resolving the problem of mutual exclusion in mobile ad hoc networks. A Mobile Ad Hoc Network (MANET) is a wireless network without fixed infrastructure; nodes are mobile, and the topology of a MANET changes frequently and unpredictably. Due to these limitations, conventional mutual exclusion algorithms designed for distributed systems (DS) are not applicable to MANETs unless they are coupled with a mechanism for handling dynamic topology changes. Mutual exclusion algorithms for DS fall into two main classes: token-based and permission-based. Token-based algorithms depend on the circulation of a specific message known as a token, and the owner of the token has priority for entering the critical section. The token may be lost during communication because of link failure or failure of the token host, and the processes of token-loss detection and token regeneration are complicated and time-consuming. Token-based algorithms are generally not fault-tolerant (although some mechanisms are used to increase their level of fault tolerance) because the single token is a single point of failure. Permission-based algorithms, in contrast, use the permission of multiple nodes to guarantee mutual exclusion, which leads to high traffic when the number of nodes is large. Moreover, the number of message transmissions and the energy consumption in a MANET increase with the number of mobile nodes involved in every decision-making cycle. The purpose of this study is to introduce a method of managing the critical section, named Ancestral, that has higher fault tolerance than token-based algorithms and fewer message transmissions and less traffic than permission-based algorithms. The method is a trade-off between the two classes: it does not use any token, similar to permission-based algorithms, while the latest node that held the critical section influences the entrance of the next node, similar to token-based algorithms. The algorithm based on Ancestral, named DAD, increases the availability of a fully connected network by between 2.86% and 59.83% and decreases the number of message transmissions from 4j-2 to 3j messages (where j is the number of nodes in a partition). This method is then used as the basis of a dynamic ancestral mutual exclusion algorithm for MANETs, named MDA, which is presented and evaluated for different scenarios of node mobility, failure, load and number of nodes. The results of the study show that the MDA algorithm guarantees mutual exclusion, deadlock freedom and starvation freedom. It improves the availability of the critical section (CS) by at least 154.94% and 113.36% for low and high loads of CS requests, respectively, compared to other permission-based algorithms. Furthermore, it improves response time by up to 90.69% for high load and 75.21% for low load of CS requests, and it reduces the number of messages from n to 2 in the best case and from 3n/2 to n in the worst case. The MDA algorithm is resilient to transient partitioning of the network, which normally occurs due to failure of nodes or links.
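    To make the permission-based class concrete (the class the thesis contrasts its Ancestral approach with), the sketch below shows a minimal Ricart-Agrawala-style node in Python. It illustrates only the generic 2(n-1)-messages-per-entry pattern, with invented names, an assumed `send(peer_id, message)` callback, and no failure handling; it is not the DAD or MDA algorithm from the thesis.

```python
# Minimal sketch of a permission-based mutual exclusion node (Ricart-Agrawala
# style). All names are hypothetical; `send(peer_id, message)` is assumed to be
# provided by the surrounding system. No failure handling is shown.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PermissionNode:
    node_id: int
    peers: list[int]                                    # ids of all other nodes
    clock: int = 0                                      # Lamport clock for ordering requests
    request_ts: int | None = None                       # timestamp of our outstanding request
    in_cs: bool = False
    pending: set[int] = field(default_factory=set)      # peers that have not replied yet
    deferred: list[int] = field(default_factory=list)   # requests answered after leaving the CS

    def request_cs(self, send):
        """Ask every peer for permission: 2*(n-1) messages per CS entry."""
        self.clock += 1
        self.request_ts = self.clock
        self.pending = set(self.peers)
        for p in self.peers:
            send(p, ("REQUEST", self.request_ts, self.node_id))

    def on_reply(self, sender: int) -> bool:
        self.pending.discard(sender)
        self.in_cs = not self.pending                   # enter the CS once every peer has granted
        return self.in_cs

    def on_request(self, send, ts: int, sender: int):
        self.clock = max(self.clock, ts) + 1
        ours = (self.request_ts, self.node_id)
        # Defer the reply if we hold the CS or our own outstanding request is older.
        if self.in_cs or (self.request_ts is not None and ours < (ts, sender)):
            self.deferred.append(sender)
        else:
            send(sender, ("REPLY", self.node_id))

    def release_cs(self, send):
        self.in_cs, self.request_ts = False, None
        for p in self.deferred:                         # now grant all deferred permissions
            send(p, ("REPLY", self.node_id))
        self.deferred.clear()
```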

    Building a generalized distributed system model

    The key elements in the second year (1991-92) of our project are: (1) implementation of the distributed system prototype; (2) successful passing of the candidacy examination and acceptance of a PhD proposal by the funded student; (3) design of storage-efficient schemes for replicated distributed systems; and (4) modeling of gracefully degrading reliable computing systems. In the third year of the project (1992-93), we propose to: (1) complete the testing of the prototype; (2) enhance the functionality of the modules by enabling experimentation with more complex protocols; (3) use the prototype to verify the theoretically predicted performance of locking protocols, etc.; and (4) work on issues related to real-time distributed systems. This should result in efficient protocols for these systems.

    A Holistic Approach to Lowering Latency in Geo-distributed Web Applications

    User-perceived end-to-end latency of web applications has a significant impact on revenue for many businesses. The end-to-end latency of web applications is impacted by: (i) user-to-application-server (front-end) latency, which includes downloading and parsing web pages and retrieving further objects requested by JavaScript execution; and (ii) application and storage server (back-end) latency, which includes retrieving the metadata required for an initial rendering and subsequent content based on user actions. Improving the user-perceived performance of web applications is challenging, given their complex operating environments involving user-facing web servers, content distribution network (CDN) servers, multi-tiered application servers, and storage servers. Further, the application and storage servers are often deployed on multi-tenant cloud platforms that show high performance variability. While many novel approaches, such as SPDY and geo-replicated datastores, have been developed to improve performance, many of these solutions are specific to certain layers and may have different impacts on user-perceived performance. The primary goal of this thesis is to address the above challenges in a holistic manner, focusing specifically on improving the end-to-end latency of geo-distributed multi-tiered web applications. This thesis makes the following contributions: (i) First, it reduces user-facing latency by helping CDNs identify objects that are more critical for page-load latency and map them to the faster CDN cache layers. Through controlled experiments on real-world web pages, we show the potential of our approach to reduce latency by hundreds of milliseconds without affecting overall CDN miss rates. (ii) Next, it reduces back-end latency by optimally adapting the datastore replication policies (including the number and location of replicas) to the heterogeneity in workloads. We show the benefits of our replication models using real-world traces of Twitter, Wikipedia and Gowalla on an 8-datacenter Cassandra cluster deployed on EC2. (iii) Finally, it makes multi-tier applications resilient to the inherent performance variability in the cloud through fine-grained request redirection. We highlight the benefits of our approach by deploying three real-world applications on commercial cloud platforms.
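    As an illustration of the idea behind the first contribution (mapping objects critical to page-load latency onto faster CDN cache tiers), here is a minimal greedy assignment sketch. Object names, sizes, criticality scores and tier capacities are invented for illustration; this is not the thesis's actual placement policy.

```python
# Hypothetical sketch: greedily pin the objects most critical to page-load
# latency into the fastest CDN cache tiers, subject to each tier's capacity.
# Names, sizes and criticality scores below are invented for illustration.

def assign_to_tiers(objects, tiers):
    """objects: list of (name, size_bytes, criticality); higher criticality means
    more important for page-load latency. tiers: list of (tier_name, capacity_bytes),
    ordered fastest first. Returns {object_name: tier_name}."""
    placement = {}
    remaining = [[name, cap] for name, cap in tiers]
    # Most critical objects get first pick of the fastest tier that still fits them.
    for name, size, _crit in sorted(objects, key=lambda o: o[2], reverse=True):
        for tier in remaining:
            if tier[1] >= size:
                placement[name] = tier[0]
                tier[1] -= size
                break
    return placement

objects = [("index.html", 20_000, 0.9), ("app.js", 150_000, 0.8),
           ("hero.jpg", 500_000, 0.3), ("analytics.js", 40_000, 0.1)]
tiers = [("memory", 200_000), ("ssd", 10_000_000)]
print(assign_to_tiers(objects, tiers))
# {'index.html': 'memory', 'app.js': 'memory', 'hero.jpg': 'ssd', 'analytics.js': 'ssd'}
```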

    Resilient Threat-Adaptive Consensus

    Malicious and coordinated attacks are happening increasingly often and have targeted critical systems such as nuclear plants, public transportation systems, hospitals and governments. Because critical infrastructures must be resilient against advanced and persistent threats, a common architecture of choice to mitigate these hazards is distributed systems, more specifically Byzantine fault-tolerant state-machine replicated (BFT-SMR) systems. In this PhD thesis, we propose solutions to critical challenges in the field of distributed systems, focusing on creating adaptive algorithms and protocols that strengthen the resilience of state-of-the-art systems. The first challenge is how to ensure the security and reliability of critical infrastructures against advanced and persistent attacks at various threat levels. To address this, we present ThreatAdaptive, a novel BFT-SMR protocol that automatically adapts to changes in the anticipated and observed threats in an unattended manner. ThreatAdaptive proactively reconfigures the system to cope with the faults that must be expected given the imminent threats, thereby avoiding the limitations of traditional BFT-SMR protocols, which require either a high fault threshold by design or a trusted external reconfiguration entity. Our results show that ThreatAdaptive meets the latency and throughput of BFT baselines while adapting 30% faster than previous methods, providing a more efficient and secure solution for critical infrastructures. The second challenge is how to optimize the performance of a distributed system in the presence of unreliable nodes. To address this, we propose a method for automatic reconfiguration based on a 3D virtual coordinate system (VCS) that allows correct nodes to detect and eliminate inconsistent latencies and protect system performance against Byzantine attacks. We evaluate our reconfiguration baseline, Geometric, on three real-world networking datasets and show that it protects performance up to 78% better than previous solutions and provides the closest representation of real-world connections. Our proposed solutions provide a more reliable and secure approach to automatic reconfiguration in distributed systems. Overall, this thesis makes a significant contribution to the field of distributed systems by proposing novel solutions to two critical challenges: ensuring the security and reliability of critical infrastructures and optimizing the performance of distributed systems in the presence of unreliable nodes.
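    To illustrate the kind of threat-driven reconfiguration described above, the sketch below sizes a BFT replica group from the number of simultaneous Byzantine faults expected at the current threat level, using the classic n >= 3f + 1 bound. The threat-level-to-fault mapping is an invented assumption; this is not ThreatAdaptive's actual protocol.

```python
# Hypothetical sketch of threat-adaptive sizing for a BFT-SMR group: pick the
# replica count from the number of simultaneous Byzantine faults the current
# threat level leads us to expect (classic bound n >= 3f + 1). The threat-level
# table is invented for illustration; it is not ThreatAdaptive's actual policy.

THREAT_TO_FAULTS = {"low": 1, "elevated": 2, "severe": 4}   # assumed mapping

def replicas_needed(expected_faults: int) -> int:
    """Minimum group size that tolerates `expected_faults` Byzantine replicas."""
    return 3 * expected_faults + 1

def reconfigure(current_n: int, threat_level: str) -> int:
    f = THREAT_TO_FAULTS[threat_level]
    target_n = replicas_needed(f)
    if target_n > current_n:
        print(f"threat={threat_level}: grow group {current_n} -> {target_n}")
    elif target_n < current_n:
        print(f"threat={threat_level}: shrink group {current_n} -> {target_n}")
    return target_n

n = 4                            # tolerates f = 1
n = reconfigure(n, "elevated")   # -> 7, tolerates f = 2
n = reconfigure(n, "severe")     # -> 13, tolerates f = 4
```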

    On the Importance of Infrastructure-Awareness in Large-Scale Distributed Storage Systems

    Big data applications put significant latency and throughput demands on distributed storage systems. Meeting these demands requires storage systems to use a significant amount of infrastructure resources, such as network capacity and storage devices. Resource demands largely depend on the workloads and can vary significantly over time. Moreover, demand hotspots can move rapidly between different infrastructure locations. Existing storage systems are largely infrastructure-oblivious, as they are designed to support a broad range of hardware and deployment scenarios. Most only use basic configuration information about the infrastructure to make important placement and routing decisions. In the case of cloud-based storage systems, cloud services have their own infrastructure-specific limitations, such as minimum request sizes and a maximum number of concurrent requests. By ignoring infrastructure-specific details, these storage systems are unable to react to resource demand changes and may incur additional inefficiencies from performing redundant network operations. As a result, provisioning enough resources for these systems to address all possible workloads and scenarios would be cost-prohibitive. This thesis studies the performance problems in commonly used distributed storage systems and introduces novel infrastructure-aware design methods to improve their performance. First, it addresses the problem of slow reads due to network congestion that is induced by disjoint replica and path selection. Selecting a read replica separately from the network path can perform poorly if all paths to the pre-selected endpoints are congested. Second, this thesis looks at scalability limitations of consensus protocols that are commonly used in geo-distributed key-value stores and distributed ledgers. Due to their network-oblivious designs, existing protocols redundantly communicate over highly oversubscribed WAN links, which wastes network resources and limits consistent replication at large scale. Finally, this thesis addresses the need for a cloud-specific real-time storage system for capital market use cases. Public cloud infrastructures provide feature-rich and cost-effective storage services. However, existing real-time time-series databases are not built to take advantage of cloud storage services; therefore, they do not effectively utilize cloud services to provide high performance while minimizing deployment cost. This thesis presents three systems that address these problems by using infrastructure-aware design methods. Our performance evaluation of these systems shows that infrastructure-aware design is highly effective in improving the performance of large-scale distributed storage systems.
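    As a toy illustration of the infrastructure-aware idea behind the first problem (selecting the read replica and the network path jointly rather than independently), the sketch below scores every (replica, path) pair by estimated latency. The replica names, paths and latencies are invented; this is not the thesis's actual selection algorithm.

```python
# Hypothetical sketch of joint replica-and-path selection: score every
# (replica, path) pair by estimated read latency instead of picking the
# replica first and the path second. Latencies below are invented.

def pick_replica_and_path(candidates):
    """candidates: dict mapping replica -> list of (path, est_latency_ms).
    Returns the (replica, path, latency) triple with the lowest estimate."""
    best = None
    for replica, paths in candidates.items():
        for path, latency in paths:
            if best is None or latency < best[2]:
                best = (replica, path, latency)
    return best

candidates = {
    "replica-east": [("path-A", 42.0), ("path-B", 18.5)],   # path-A is congested
    "replica-west": [("path-C", 25.0)],
}
print(pick_replica_and_path(candidates))   # ('replica-east', 'path-B', 18.5)
```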

    Cost- and workload-driven data management in the cloud

    This thesis deals with the challenge of finding the right balance between consistency, availability, latency and costs, captured by the CAP/PACELC trade-offs, in the context of distributed data management in the Cloud. At the core of this work, cost- and workload-driven data management protocols, called CCQ protocols, are developed. First, this includes the development of C3, an adaptive consistency protocol that adjusts consistency at runtime by weighing consistency and inconsistency costs. Second, the development of Cumulus, an adaptive data partitioning protocol that adapts partitions to the application workload so that expensive distributed transactions are minimized or avoided. Third, the development of QuAD, a quorum-based replication protocol that constructs quorums so that, given a set of constraints, the best possible performance is achieved. The behavior of each CCQ protocol is steered by a cost model, which aims at reducing the costs and overhead of providing the desired data management guarantees. The CCQ protocols continuously assess their behavior and, if necessary, adapt it at runtime based on the application workload and the cost model. This property is crucial for applications deployed in the Cloud, as they are characterized by highly dynamic workloads and high scalability and availability demands. Dynamic adaptation of the behavior at runtime does not come for free and may generate considerable overhead that outweighs the gain of adaptation. The CCQ cost models therefore incorporate a control mechanism that avoids expensive and unnecessary adaptations that provide no benefit to applications. Adaptation is a distributed activity that requires coordination between the sites in a distributed database system. The CCQ protocols implement safe online adaptation approaches, which exploit the properties of 2PC and 2PL to ensure that all sites behave in accordance with the cost model, even in the presence of arbitrary failures. It is crucial to guarantee a globally consistent view of the behavior, as otherwise the effects of the cost models are nullified. The presented protocols are implemented as part of a prototypical database system. Their modular architecture allows for a seamless extension of the optimization capabilities at any level of their implementation. Finally, the protocols are quantitatively evaluated in a series of experiments executed in a real Cloud environment. The results show their feasibility and their ability to reduce application costs and to dynamically adjust behavior at runtime without violating correctness.
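    The sketch below illustrates the general flavor of a cost-driven consistency decision in the spirit of an adaptive protocol like C3: compare the estimated cost of coordinating every operation under strong consistency against the estimated penalty of conflicts under weak consistency. All rates and prices are invented assumptions; this is not the thesis's actual cost model.

```python
# Hypothetical sketch of a cost-driven consistency switch: weigh the estimated
# cost of running strongly consistent (extra coordination per operation) against
# the estimated cost of inconsistencies (penalty per conflicting update under
# weak consistency). All rates and prices below are invented for illustration.

def choose_consistency(ops_per_hour: float,
                       coordination_cost_per_op: float,
                       conflict_rate: float,
                       penalty_per_conflict: float) -> str:
    strong_cost = ops_per_hour * coordination_cost_per_op
    weak_cost = ops_per_hour * conflict_rate * penalty_per_conflict
    return "strong" if strong_cost <= weak_cost else "weak"

# Low contention: paying for coordination on every operation is not worth it.
print(choose_consistency(10_000, 0.002, 0.0001, 5.0))   # -> 'weak'
# High contention: inconsistency penalties dominate, so switch to strong.
print(choose_consistency(10_000, 0.002, 0.05, 5.0))     # -> 'strong'
```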

    Virginia Commonwealth University Professional Bulletin

    Professional programs bulletin for Virginia Commonwealth University for the academic year 2018-2019. It includes information on academic regulations, degree requirements, course offerings, faculty, the academic calendar, and tuition and expenses for graduate programs.