
    A Cyclic Distributed Garbage Collector for Network Objects

    This paper presents an algorithm for distributed garbage collection and outlines its implementation within the Network Objects system. The algorithm is based on a reference-listing scheme, which is augmented by partial tracing in order to collect distributed garbage cycles. Processes may be dynamically organised into groups, according to appropriate heuristics, to reclaim distributed garbage cycles. The algorithm places no overhead on local collectors and suspends local mutators only briefly. Partial tracing of the distributed graph involves only objects thought to be part of a garbage cycle; no collaboration with other processes is required. The algorithm offers considerable flexibility, allowing expediency and fault tolerance to be traded against completeness.
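A reference-listing scheme of the kind the abstract describes can be sketched as follows; the class and method names are illustrative and not taken from the Network Objects implementation. The key idea is that the owner of an exported object records *which* client processes hold references, rather than a bare count.

```python
class ExportedObject:
    """A remotely referenced object under reference listing: instead of a
    reference count, the owner records which client processes hold a
    reference (illustrative sketch, names assumed)."""
    def __init__(self, name):
        self.name = name
        self.holders = set()          # the reference list

    def add_ref(self, client):
        self.holders.add(client)

    def remove_ref(self, client):
        # Also usable to purge a crashed client's entry wholesale,
        # which a plain reference count cannot express.
        self.holders.discard(client)

    def is_remotely_reachable(self):
        return bool(self.holders)

obj = ExportedObject("svc")
obj.add_ref("P1")
obj.add_ref("P2")
obj.remove_ref("P1")
obj.remove_ref("P2")                  # e.g. P2 is detected to have failed
assert not obj.is_remotely_reachable()
```

Because entries are keyed by process, the scheme tolerates duplicate and out-of-order reference messages, one reason reference listing is favoured for fault tolerance; cycles spanning processes still need the partial tracing described above.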

    Mitigation of liveness attacks in DAG-based ledgers

    The robust construction of the ledger data structure is an essential ingredient for the safe operation of a distributed ledger. While in traditional linear blockchain systems permission to append to the structure is leader-based, in Directed Acyclic Graph-based ledgers the writing access can be organised in a leaderless way. However, this leaderless approach relies on fair treatment of non-referenced blocks, i.e. tips, by honest block issuers. We study the impact of a deviation from the standard tip selection by a subset of block issuers whose aim is to halt the confirmation of honest blocks entirely. We provide models of this so-called orphanage of blocks and validate them through open-sourced simulation studies. A critical threshold for the adversary's issuance rate is shown to exist, above which the tip pool becomes unstable, while for values below it the orphanage decreases exponentially. We study the robustness of the protocol with an expiration time on tips, also called garbage collection, and with modification of the number of parent references per block. Comment: IEEE ICBC 202
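As a toy illustration of the expiration mechanism studied (names and parameters are ours, not the paper's): a tip that has gone unreferenced for longer than the expiration time is treated as orphaned and leaves the tip pool.

```python
# Toy model of tip expiration ("garbage collection") in a DAG ledger:
# tips older than max_age time units are dropped from the tip pool.
def orphaned(tips, now, max_age):
    """tips maps block id -> issue time; returns the expired tips."""
    return [tip for tip, issued_at in tips.items() if now - issued_at > max_age]

tip_pool = {"b1": 0, "b2": 5, "b3": 9}
print(orphaned(tip_pool, now=10, max_age=4))   # ['b1', 'b2']
```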

    Garbage Collection for General Graphs

    Garbage collection is moving from being a utility to a requirement of every modern programming language. With multi-core and distributed systems, most programs written recently are heavily multi-threaded and distributed. Distributed and multi-threaded programs are called concurrent programs. Manual memory management is cumbersome and difficult in concurrent programs. Concurrent programming is characterized by multiple independent processes/threads, communication between them, and uncertainty in the order of concurrent operations. This uncertainty in the order of operations is what makes manual memory management of concurrent programs difficult. A popular alternative to garbage collection in concurrent programs is to use smart pointers. Smart pointers can collect all garbage only if the developer identifies cycles being created in the reference graph; their use does not guarantee protection from memory leaks unless cycles can be detected as processes/threads create them. General garbage collectors, on the other hand, can avoid memory leaks, dangling pointers, and double-deletion problems in any programming environment without help from the programmer. Concurrent programming is used in shared-memory and distributed-memory systems. State-of-the-art shared-memory systems use a single concurrent garbage collector thread that processes the reference graph. Distributed-memory systems have very few complete garbage collection algorithms, and those that exist use global barriers, are centralized, and do not scale well. This thesis focuses on designing garbage collection algorithms for shared-memory and distributed-memory systems that satisfy the following properties: concurrent, parallel, scalable, localized (decentralized), low pause time, high promptness, no global synchronization, safe, complete, and operating in linear time.
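The cycle problem with reference counting that the abstract mentions can be demonstrated concretely in CPython, whose primary mechanism is reference counting backed by a tracing cycle collector. This sketch disables the tracing collector to mimic a smart-pointer-only environment:

```python
import gc
import weakref

class Node:
    """A node that owns a strong reference to its partner."""
    def __init__(self):
        self.partner = None

gc.disable()                  # leave only reference counting, as with smart pointers
a, b = Node(), Node()
a.partner, b.partner = b, a   # two-node reference cycle
probe = weakref.ref(a)        # observe liveness without owning a reference

del a, b                      # drop the only external references
assert probe() is not None    # leaked: refcounts never reach zero in a cycle

gc.collect()                  # a tracing collector finds the unreachable cycle
gc.enable()
assert probe() is None        # reclaimed
```

In C++ terms this is the classic `shared_ptr` cycle leak, conventionally broken by hand with `weak_ptr`; a general tracing collector needs no such programmer intervention.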

    Ensuring referential integrity under causal consistency

    Referential integrity (RI) is an important correctness property of a shared, distributed object storage system. It is sometimes thought that enforcing RI requires a strong form of consistency. In this paper, we argue that causal consistency suffices to maintain RI. We support this argument with pseudocode for a reference CRDT data type that maintains RI under causal consistency. QuickCheck has not found any errors in the model.
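A much-simplified sketch of the argument (not the paper's CRDT; all names here are illustrative): if a "create object" operation is always delivered before any "add reference" operation that causally depends on it, then no replica ever observes a dangling reference.

```python
class Replica:
    """Applies causally ordered operations; under causal delivery the
    creation of a target always precedes any reference to it."""
    def __init__(self):
        self.objects = set()
        self.refs = {}           # source object -> set of referenced targets

    def apply(self, op):
        kind, *args = op
        if kind == "create":
            self.objects.add(args[0])
        elif kind == "link":
            src, dst = args
            # Holds on every replica given causal delivery:
            assert dst in self.objects, "dangling reference"
            self.refs.setdefault(src, set()).add(dst)

# create(B) causally precedes link(A, B), so delivery preserves that order:
ops = [("create", "A"), ("create", "B"), ("link", "A", "B")]
r = Replica()
for op in ops:
    r.apply(op)
assert "B" in r.refs["A"]
```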

    Simulating Wide-area Replication

    We describe our experiences with simulating replication algorithms for use in far-flung distributed systems. The algorithms under scrutiny mimic epidemics. Epidemic algorithms seem to scale and adapt well to change (such as varying replica sets). The loose consistency guarantees they make seem most useful in applications where availability strongly outweighs correctness, e.g., distributed name service.
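A minimal push-gossip simulation illustrates the epidemic style of propagation (the model and parameters are ours, not the paper's):

```python
import random

def gossip_rounds(n_replicas, rounds, seed=0):
    """Each round, every replica holding the update pushes it to one
    uniformly random peer; returns how many replicas end up with it."""
    rng = random.Random(seed)
    has_update = [False] * n_replicas
    has_update[0] = True                  # update injected at one replica
    for _ in range(rounds):
        for i in range(n_replicas):
            if has_update[i]:
                has_update[rng.randrange(n_replicas)] = True
    return sum(has_update)

# With no rounds only the origin holds the update; coverage only grows:
assert gossip_rounds(50, 0) == 1
assert gossip_rounds(50, 5) <= gossip_rounds(50, 20) <= 50
```

The number of informed replicas roughly doubles each round until saturation, which is why epidemic protocols converge in O(log n) rounds with high probability while tolerating churn in the replica set.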

    Overview of Auditing Cloud Consistency

    Cloud storage services have become very popular due to their many advantages. To provide always-on access, a cloud service provider (CSP) maintains multiple copies of each piece of data on geographically distributed servers. A major disadvantage of this technique is that achieving strong consistency on a worldwide scale is very expensive. In this work, a novel consistency as a service (CaaS) model is presented, which involves a large data cloud and many small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that constitute an audit cloud can verify whether the data cloud provides the promised level of consistency. The system proposes a two-level auditing architecture, which requires only a loosely synchronized clock in the audit cloud. Algorithms are then designed to quantify the severity of violations with two metrics: the commonality of violations and the staleness of read values. Finally, a heuristic auditing strategy (HAS) is devised to reveal as many violations as possible. Experiments combining simulations and a real cloud deployment were performed to validate HAS. DOI: 10.17762/ijritcc2321-8169.15011
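As a simplified illustration of log-based consistency auditing (the paper's actual two-level architecture and metrics are richer than this sketch): a user's operation log can be scanned for reads that return a value older than the latest preceding write.

```python
def stale_reads(log):
    """log: list of (kind, key, value, timestamp), kind in {'read', 'write'}.
    Returns (key, timestamp) for each read that misses the latest write."""
    latest = {}
    violations = []
    for kind, key, value, ts in sorted(log, key=lambda op: op[3]):
        if kind == "write":
            latest[key] = value
        elif key in latest and value != latest[key]:
            violations.append((key, ts))
    return violations

log = [("write", "x", 1, 10),   # user writes x = 1 at t = 10
       ("read",  "x", 0, 12)]   # a read at t = 12 still sees the old value
print(stale_reads(log))          # [('x', 12)]
```

Detecting such violations across users is the harder part, which is why the architecture needs loosely synchronized clocks and log exchange within the audit cloud.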

    Integrating Naming and Addressing of Persistent data in Programming Language and Operating System Contexts

    There exist a number of desirable transparencies in distributed computing, viz., name transparency: having a uniform way of naming entities in the system, regardless of their type or physical make-up; location transparency: having a uniform way of addressing entities, regardless of their physical location; representation transparency: having a uniform way of representing data, which simplifies sharing data between applications written in different high-level languages and running on different hardware architectures (interoperability); and finally invocation transparency: having a uniform way of invoking operations on entities. The advent of persistence in programming language contexts has created a need for the integration of these four important concepts, viz., naming, addressing, representation, and manipulation of data in programming language and operating system contexts. This paper addresses the first three transparencies, postponing the fourth to a later paper. First, we draw up a list of requirements for constructing a persistent programming environment and relate this list to existing persistent object models, revealing their inadequacies. We then describe a new model which merges programming language and operating system naming contexts into a global name space which, while enforcing uniformity through the use of globally unique names, still allows the application of personal nicknames. Furthermore, we explain how persistent data is stored and retrieved using a client/server model of interaction, and how it can be acted upon correctly through the concept of typed data. We conclude by checking how well our model scores against the wish list, and by listing the current status and future directions for research.
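The combination of globally unique names with personal nicknames can be sketched as follows; the class names and structure are illustrative assumptions, not the paper's model.

```python
import uuid

class GlobalNameSpace:
    """A sketch of a global name space keyed by globally unique names."""
    def __init__(self):
        self.objects = {}                  # GUID -> object

    def register(self, obj):
        guid = str(uuid.uuid4())           # globally unique, location-independent
        self.objects[guid] = obj
        return guid

class UserContext:
    """Per-user nickname table layered over the uniform global names."""
    def __init__(self, gns):
        self.gns = gns
        self.nicknames = {}                # personal nickname -> GUID

    def nickname(self, name, guid):
        self.nicknames[name] = guid

    def resolve(self, name):
        return self.gns.objects[self.nicknames[name]]

gns = GlobalNameSpace()
guid = gns.register({"kind": "document"})
alice = UserContext(gns)
alice.nickname("report", guid)
assert alice.resolve("report")["kind"] == "document"
```

Because nicknames only ever map to globally unique names, two users may use different nicknames for the same entity without breaking the uniformity of the underlying name space.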