
    Design of deadlock detection and prevention algorithms in distributed systems

    A distributed system consists of a collection of processes that communicate with each other by exchanging messages to achieve a common goal. One of the key problems in distributed systems is the possibility of deadlock. Processes are said to be deadlocked when some processes are blocked on resource requests that can never be satisfied unless drastic system action is taken. Two distributed deadlock detection algorithms handling multiple outstanding requests are proposed and proven to be correct: they detect all cycles and do not detect false deadlocks. The algorithms are based on the concept of chasing the edges of the wait-for graph (probe-based). Simulation results show that the proposed algorithms perform very well compared to some existing algorithms. A deadlock prevention algorithm based on the notion of coloring the nodes of the wait-for graph is also proposed; it requires considerably less rollback than some existing algorithms.
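
    The probe-based (edge-chasing) idea can be illustrated with a short sketch. This is the classic Chandy-Misra-Haas style of probe propagation over a wait-for graph, given here only as an illustration; the paper's algorithms for multiple outstanding requests and its coloring-based prevention scheme are not reproduced.

        # Sketch of probe-based (edge-chasing) deadlock detection: a probe
        # (initiator, sender, receiver) is forwarded along wait-for edges, and a
        # deadlock is reported if the probe returns to its initiator. Illustrative
        # only; not the specific multi-request algorithms proposed in the paper.

        def detect_deadlock(initiator, wait_for):
            """wait_for maps each blocked process to the set of processes it waits on."""
            visited = set()
            frontier = [(initiator, initiator, dep) for dep in wait_for.get(initiator, ())]
            while frontier:
                init, sender, receiver = frontier.pop()     # one probe message
                if receiver == init:
                    return True                             # probe came back: cycle found
                if receiver in visited:
                    continue                                # already propagated from here
                visited.add(receiver)
                for dep in wait_for.get(receiver, ()):
                    frontier.append((init, receiver, dep))  # forward the probe
            return False

        # Example: P1 -> P2 -> P3 -> P1 forms a deadlock cycle.
        print(detect_deadlock("P1", {"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))  # True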

    UPC-CHECK: A scalable tool for detecting run-time errors in Unified Parallel C

    Unified Parallel C (UPC) is a language used to write parallel programs for shared and distributed memory parallel computers. UPC-CHECK is a scalable tool developed to automatically detect argument errors in UPC functions and deadlocks in UPC programs at run-time, and to issue high quality error messages that help programmers quickly fix those errors. The tool is easy to use and involves merely replacing the compiler command with upc-check. The tool uses a novel distributed algorithm for detecting argument and deadlock errors in collective operations. The run-time complexity of the algorithm has been proven to be O(1). The algorithm has been extended to detect deadlocks involving locks with a run-time complexity of O(T), where T is the number of threads waiting to acquire a lock. Error messages issued by UPC-CHECK were evaluated using the UPC RTED test suite for argument errors in UPC functions and deadlocks. Results of these tests show that the error messages issued by UPC-CHECK are excellent. The scalability of all the algorithms used was demonstrated using performance-evaluation test programs and the UPC NAS Parallel Benchmarks.
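
    One way a constant-time consistency check for collective operations can be organized is for every thread to publish the collective call it is about to enter and compare it with a single neighbouring thread. The sketch below illustrates that idea only; it is an assumption about how such a check can work, not UPC-CHECK's actual implementation, and the name check_collective is invented.

        # Hedged sketch of an O(1)-per-thread check before a collective operation:
        # each thread records the call it is about to enter and compares it with its
        # successor thread, so every thread performs a single comparison.
        # Illustration only; not UPC-CHECK's actual code.

        def check_collective(thread_id, num_threads, my_call, published_calls):
            """published_calls[t] holds the (operation, arguments) thread t is entering."""
            published_calls[thread_id] = my_call
            neighbour = (thread_id + 1) % num_threads
            other = published_calls.get(neighbour)
            if other is not None and other != my_call:
                raise RuntimeError(
                    f"thread {thread_id} entering {my_call[0]} but thread {neighbour} "
                    f"entered {other[0]}: mismatched collective (potential deadlock)")

        calls = {}
        check_collective(0, 2, ("upc_barrier", ()), calls)
        try:
            check_collective(1, 2, ("upc_all_broadcast", ("buf", 128)), calls)
        except RuntimeError as err:
            print(err)   # the mismatch is reported instead of deadlocking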

    A distributed deadlock detection and resolution algorithm using agents

    Deadlock is an intrinsic bottleneck in Distributed Real-Time Database Systems (DRTDBS). Deadlock detection and resolution algorithms are important because in DRTDBS, deadlocked transactions are prone to missing their deadlines. We propose an Agent Deadlock Detection and Resolution algorithm (ADCombine), a novel framework for distributed deadlock handling using stationary agents, to address the high overhead suffered by current agent-based algorithms. We test a combined deadlock detection and resolution algorithm that enables the multi-agent system to adjust its execution based on the changing system load, and that selects its victim transactions more judiciously. We demonstrate the advantages of ADCombine over existing algorithms that use agents or traditional edge-chasing through simulation experiments that measure overhead and performance under widely varying experimental conditions. Keywords: deadlock; distributed real-time database systems (DRTDBS); algorithm; multi-agent system.
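
    As a rough illustration of what judicious victim selection can look like in a real-time setting, the sketch below aborts the transaction in a detected cycle with the most deadline slack. This is a hypothetical heuristic for illustration only, not ADCombine's actual selection policy; choose_victim and its inputs are invented names.

        # Hypothetical victim-selection heuristic for resolving a detected deadlock
        # cycle in a real-time database: abort the transaction with the latest
        # deadline (most slack), breaking ties by the least work already invested.
        # Illustration only; not ADCombine's actual policy.

        def choose_victim(cycle, deadlines, work_done):
            """cycle: transaction ids in the deadlock; deadlines/work_done: per id."""
            return max(cycle, key=lambda t: (deadlines[t], -work_done[t]))

        victim = choose_victim(
            cycle=["T1", "T2", "T3"],
            deadlines={"T1": 120, "T2": 300, "T3": 180},
            work_done={"T1": 40, "T2": 5, "T3": 25})
        print(victim)  # T2: latest deadline, so aborting it risks the fewest misses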

    Dynamic deadlock detection under the OR requirement model

    Deadlock detection is one of the most discussed problems in the literature. Although several algorithms have been proposed, the problem is still open. In general, the correct operation of an algorithm depends on the requirement model being considered. This article introduces a deadlock detection algorithm for the OR model. The algorithm is complete, because it detects all deadlocks, and it is correct, because it does not detect false deadlocks. In addition, the algorithm supports dynamic changes in the wait-for graph on which it works. Once the algorithm has finished, at least every process that causes a deadlock knows that it is deadlocked. Using this property, possible extensions are suggested in order to resolve deadlocks.
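
    A standard centralized characterization for the OR model is that a blocked process is deadlocked exactly when it cannot reach any active (non-blocked) process in the wait-for graph. The sketch below only illustrates this condition; it is not the article's distributed, dynamic algorithm.

        # Centralized illustration of the OR-model deadlock condition: a blocked
        # process is deadlocked iff no active process is reachable from it in the
        # wait-for graph. The article's algorithm is distributed and tolerates
        # dynamic changes to the graph; this sketch only checks the condition.

        def deadlocked(process, wait_for, blocked):
            """wait_for[p]: processes p waits on (any single grant unblocks p)."""
            if process not in blocked:
                return False
            seen, stack = set(), [process]
            while stack:
                p = stack.pop()
                if p not in blocked:
                    return False       # an active process is reachable: no deadlock
                if p in seen:
                    continue
                seen.add(p)
                stack.extend(wait_for.get(p, ()))
            return True                # only blocked processes are reachable

        wait_for = {"P": {"Q"}, "Q": {"R", "P"}, "R": set()}
        print(deadlocked("P", wait_for, blocked={"P", "Q"}))  # False: R is active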

    Runtime MPI Correctness Checking with a Scalable Tools Infrastructure

    Increasing computational demand of simulations motivates the use of parallel computing systems. At the same time, this parallelism poses challenges to application developers. The Message Passing Interface (MPI) is a de-facto standard for distributed memory programming in high performance computing. However, its use also enables complex parallel programming errors such as races, communication errors, and deadlocks. Automatic tools can assist application developers in the detection and removal of such errors. This thesis considers tools that detect such errors during an application run and advances them towards a combination of precise checks (neither false positives nor false negatives) and scalability. This includes novel hierarchical checks that provide scalability, as well as a formal basis for a distributed deadlock detection approach. At the same time, the development of parallel runtime tools is challenging and time consuming, especially if scalability and portability are key design goals. Current tool development projects often create similar tool components, while component reuse remains low. To provide a perspective towards more efficient tool development, which simplifies scalable implementations, component reuse, and tool integration, this thesis proposes an abstraction for a parallel tools infrastructure along with a prototype implementation. This abstraction overcomes the use of multiple interfaces for different types of tool functionality, which limits flexible component reuse. Thus, this thesis advances runtime error detection tools and uses their redesign and their increased scalability requirements to apply and evaluate a novel tool infrastructure abstraction. The new abstraction ultimately allows developers to focus on their tool functionality, rather than on developing or integrating common tool components. The use of such an abstraction in a wide range of parallel runtime tool development projects could greatly increase component reuse, thus decreasing tool development time and cost. An application study with up to 16,384 application processes demonstrates the applicability of both the proposed runtime correctness concepts and the proposed tools infrastructure.
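
    A minimal example of the class of communication deadlock such runtime checkers aim to flag is shown below; it assumes the mpi4py bindings purely for brevity (the thesis targets MPI tools in general), and it hangs by construction.

        # Minimal MPI communication deadlock of the kind runtime correctness tools
        # flag: both ranks post a blocking receive before either sends, so neither
        # can make progress. Run with: mpiexec -n 2 python deadlock_example.py
        # (mpi4py is assumed here only for brevity.)
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        peer = 1 - rank                          # rank 0 pairs with rank 1

        data = comm.recv(source=peer, tag=0)     # blocks forever: peer also receives first
        comm.send(f"hello from {rank}", dest=peer, tag=0)   # never reached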

    Global state predicates in rough real-time

    Distributed systems are characterized by the fact that the constituent processes have neither common memory nor a common system clock. These processes communicate solely via message passing. While providing a number of benefits such as increased reliability, increased computational power, and geographic dispersion, this architecture significantly complicates many of the tasks of software development and verification, including evaluation of the program state. In the case of distributed systems, the program state is comprised of the local states of the constituent processes, as well as the state of the channels between processes, and is called the global state. With no common system clock, many distributed system protocols rely on the global ordering of local process events imposed by the message passing that occurs between processes. This leads to a partial global ordering of local process events, which can then be used to determine which process states could (or could not) have occurred simultaneously. Traditional predicate evaluation protocols evaluate predicates on the global state of a distributed computation using consistent global states. This evaluation is complicated by the fact that the event ordering imposed by message passing is only partial. A complete history of the global states that occurred during an execution cannot always be constructed. This introduces inefficiency into predicate detection protocols and prohibits detection of certain predicates. This dissertation explores the use of a rough global time base for global state predicate evaluation within distributed systems. By structuring the evaluation on the assumption that a global time base exists, we can develop simple and efficient protocols for both stable and unstable predicate evaluation. Further, we can evaluate certain predicates which are not easily evaluated using consistent global states. We demonstrate these advantages by developing protocols for detection of distributed termination, distributed deadlock detection, and detection of certain unstable predicates as they occur. As the global time base is rough, we can only detect unstable predicates which remain true for a sufficient duration. We additionally develop several formalizations which assist the protocol developer in dealing with the fact that the global time base is not perfect. We demonstrate the application of these formalizations within the protocols that we develop.
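
    The partial ordering described here is usually captured with vector clocks; the standard test below shows how two local states can be judged concurrent (able to have occurred simultaneously). It is given purely as background and is not one of the dissertation's rough-real-time protocols.

        # Standard vector-clock test: two local states could have occurred
        # simultaneously iff neither clock dominates the other. This illustrates the
        # partial order imposed by message passing, not the dissertation's protocols.

        def happened_before(a, b):
            """True if the state stamped a causally precedes the state stamped b."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def concurrent(a, b):
            """States are concurrent iff neither causally precedes the other."""
            return not happened_before(a, b) and not happened_before(b, a)

        print(concurrent((2, 1, 0), (1, 3, 0)))  # True: incomparable, could co-exist
        print(concurrent((1, 1, 0), (2, 1, 0)))  # False: the first precedes the second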

    Distributed deadlock detection in Concurrent C

    Call number: LD2668 .T4 CMSC 1988 H36. Master of Science, Computing and Information Science.

    Resource Management in Multi-Access Edge Computing (MEC)

    This PhD thesis investigates effective ways of managing the resources of a Multi-Access Edge Computing (MEC) platform in 5th Generation Mobile Communication (5G) networks. The main characteristics of MEC include its distributed nature, proximity to users, and high availability. Based on these key features, solutions have been proposed for effective resource management. In this research, two aspects of resource management in MEC have been addressed: the computational resources and the caching resources, which correspond to the services provided by the MEC. MEC is a new 5G enabling technology proposed to reduce latency by bringing cloud computing capability closer to end-user Internet of Things (IoT) and mobile devices. MEC would support latency-critical user applications such as driverless cars and e-health. These applications will depend on resources and services provided by the MEC. However, MEC has limited computational and storage resources compared to the cloud. Therefore, it is important to ensure reliable MEC network communication during resource provisioning by eliminating the chance of deadlock. Deadlock may occur when a huge number of devices contend for a limited amount of resources if adequate measures are not put in place. It is crucial to eradicate deadlock while scheduling and provisioning resources on MEC to achieve a highly reliable and readily available system that supports latency-critical applications. In this research, a deadlock avoidance resource provisioning algorithm has been proposed for industrial IoT devices using MEC platforms to ensure higher reliability of network interactions. The proposed scheme incorporates Banker's resource-request algorithm using Software Defined Networking (SDN) to reduce communication overhead. Simulation and experimental results have shown that system deadlock can be prevented by applying the proposed algorithm, which ultimately leads to more reliable network interaction between mobile stations and MEC platforms. Additionally, this research explores the use of MEC as a caching platform, as it is proclaimed a key technology for reducing service processing delays in 5G networks. Caching on MEC decreases service latency and improves data content access by allowing direct content delivery through the edge without fetching data from the remote server. Caching on MEC is also deemed an effective approach that guarantees greater reachability due to proximity to end-users. In this regard, a novel hybrid content caching algorithm has been proposed for MEC platforms to increase their caching efficiency. The proposed algorithm is a unification of a modified Belady's algorithm and a distributed cooperative caching algorithm to improve data access while reducing latency. A polynomial fit algorithm with Lagrange interpolation is employed to predict future request references for Belady's algorithm. Experimental results show that the proposed algorithm obtains 4% more cache hits than case study algorithms due to its selective caching approach. Results also show that the use of a cooperative algorithm can improve the total cache hits by up to 80%. Furthermore, this thesis has also explored another predictive caching scheme to further improve caching efficiency. The motivation was to investigate another predictive caching approach as an improvement on the former. A Predictive Collaborative Replacement (PCR) caching framework has been proposed as a result, which consists of three schemes. Each of the schemes addresses a particular problem. The proactive predictive scheme addresses the problem of continuous change in cache popularity trends. The collaborative scheme addresses the problem of cache redundancy in the collaborative space. Finally, the replacement scheme evicts cold cache blocks to increase the hit ratio. Simulation experiments have shown that the replacement scheme achieves 3% more cache hits than existing replacement algorithms such as Least Recently Used, Multi-Queue, and Frequency-Based Replacement. The PCR algorithm has been tested using a real dataset (the MovieLens 20M dataset) and compared with an existing contemporary predictive algorithm. Results show that PCR performs better, with a 25% increase in hit ratio and a 10% CPU utilization overhead.
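
    The deadlock-avoidance part builds on Banker's resource-request algorithm; a minimal sketch of the safety check at its core is given below, assuming a simple matrix formulation. The SDN-based provisioning protocol proposed in the thesis is not reproduced here.

        # Minimal sketch of the safety check at the heart of Banker's algorithm, which
        # the proposed MEC provisioning scheme builds on: a request is granted only if,
        # after the tentative allocation, every device can still run to completion in
        # some order. The SDN-based protocol around this check is not shown.

        def is_safe(available, allocation, need):
            """available[r]: free units of resource r;
            allocation[d][r] / need[d][r]: units held / still needed by device d."""
            work = list(available)
            finished = [False] * len(allocation)
            progressed = True
            while progressed:
                progressed = False
                for d, req in enumerate(need):
                    if not finished[d] and all(req[r] <= work[r] for r in range(len(work))):
                        for r in range(len(work)):        # d can finish and release
                            work[r] += allocation[d][r]   # what it currently holds
                        finished[d] = True
                        progressed = True
            return all(finished)                          # safe iff every device can finish

        # Two devices, one resource type, 3 free units: a safe completion order exists.
        print(is_safe([3], allocation=[[1], [2]], need=[[2], [4]]))  # True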

    Processing real-time transactions in a replicated database system

    A database system supporting a real-time application has to provide real-time information to the executing transactions. Each real-time transaction is associated with a timing constraint, typically in the form of a deadline. It is difficult to satisfy all timing constraints due to the consistency requirements of the underlying database. In scheduling the transactions, the aim is to process as many of them as possible within their deadlines. Replicated database systems possess desirable features for real-time applications, such as a high level of data availability and potentially improved response times for queries. On the other hand, multiple-copy updates lead to considerable overhead due to the communication required among the data sites holding the copies. In this paper, we investigate the impact of storing multiple copies of data on satisfying the timing constraints of real-time transactions. A detailed performance model of a distributed database system is employed in evaluating the effects of various workload parameters and design alternatives on system performance. The performance is expressed in terms of the fraction of satisfied transaction deadlines. A comparison of several real-time concurrency control protocols, which take different approaches to incorporating the timing constraints of transactions into scheduling, is also provided in the performance experiments. © 1994 Kluwer Academic Publishers
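
    A common way of incorporating deadlines into scheduling decisions is Earliest Deadline First priority assignment; the sketch below illustrates that general idea only and is not one of the specific concurrency control protocols compared in the paper.

        # Generic Earliest Deadline First (EDF) sketch: the most urgent transaction is
        # dispatched first, and transactions whose deadlines have already passed are
        # abandoned. Illustration of deadline-driven scheduling in general, not one of
        # the paper's concurrency control protocols or its performance model.
        import heapq

        class EDFScheduler:
            def __init__(self):
                self._queue = []                          # min-heap ordered by deadline

            def submit(self, deadline, transaction_id):
                heapq.heappush(self._queue, (deadline, transaction_id))

            def next_transaction(self, now):
                """Return the most urgent transaction, skipping those already late."""
                while self._queue:
                    deadline, txn = heapq.heappop(self._queue)
                    if deadline >= now:
                        return txn                        # still feasible: run it next
                return None

        scheduler = EDFScheduler()
        scheduler.submit(150, "T1"); scheduler.submit(90, "T2"); scheduler.submit(120, "T3")
        print(scheduler.next_transaction(now=100))        # T3: T2's deadline has passed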