    A Peer-to-Peer Middleware Framework for Resilient Persistent Programming

    The persistent programming systems of the 1980s offered a programming model that integrated computation and long-term storage. In these systems, reliable applications could be engineered without requiring the programmer to write translation code to manage the transfer of data to and from non-volatile storage. More importantly, it simplified the programmer's conceptual model of an application, and avoided the many coherency problems that result from multiple cached copies of the same information. Although technically innovative, persistent languages were not widely adopted, perhaps due in part to their closed-world model. Each persistent store was located on a single host, and there were no flexible mechanisms for communication or transfer of data between separate stores. Here we re-open the work on persistence and combine it with modern peer-to-peer techniques in order to provide support for orthogonal persistence in resilient and potentially long-running distributed applications. Our vision is of an infrastructure within which an application can be developed and distributed with minimal modification, whereupon the application becomes resilient to certain failure modes. If a node, or the connection to it, fails during execution of the application, the objects are re-instantiated from distributed replicas, without their reference holders being aware of the failure. Furthermore, we believe that this can be achieved within a spectrum of application programmer intervention, ranging from minimal to totally prescriptive, as desired. The same mechanisms encompass an orthogonally persistent programming model. We outline our approach to implementing this vision, and describe current progress. Comment: Submitted to EuroSys 200
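
    A minimal sketch of the failure-masking idea described above, assuming a replicated store addressed by location-independent identifiers; all names (ReplicaStore, ResilientRef, fetch) are illustrative assumptions, not the paper's actual API.

```java
import java.util.List;

class StoreUnavailableException extends Exception {}

interface ReplicaStore {
    <T> T fetch(String guid) throws StoreUnavailableException;
}

class ResilientRef<T> {
    private final String guid;                 // location-independent object identity
    private final List<ReplicaStore> replicas; // known holders of replicas
    private transient T cached;                // locally re-instantiated copy

    ResilientRef(String guid, List<ReplicaStore> replicas) {
        this.guid = guid;
        this.replicas = replicas;
    }

    /** Resolve the reference, falling over to another replica if a node fails. */
    synchronized T get() {
        if (cached != null) return cached;
        for (ReplicaStore store : replicas) {
            try {
                cached = store.fetch(guid);    // re-instantiate from a surviving replica
                return cached;
            } catch (StoreUnavailableException e) {
                // node or connection failure: try the next replica, caller unaware
            }
        }
        throw new IllegalStateException("no replica reachable for " + guid);
    }
}
```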

    Models of higher-order, type-safe, distributed computation over autonomous persistent object stores

    A remote procedure call (RPC) mechanism permits the calling of procedures in another address space. RPC is a simple but highly effective mechanism for interprocess communication and nowadays enjoys great popularity as a tool for building distributed applications. This popularity is partly a result of its overall simplicity but also partly a consequence of more than 20 years of research in transparent distribution that has failed to deliver systems that meet the expectations of real-world application programmers. During the same 20 years, persistent systems have proved their suitability for building complex database applications by seamlessly integrating features traditionally found in database management systems into the programming language itself. Some research effort has been invested in distributed persistent systems, but the outcomes commonly suffer from the same problems found with transparent distribution. In this thesis I claim that a higher-order persistent RPC is useful for building distributed persistent applications. The proposed mechanism is: realistic in the sense that it uses current technology and tolerates partial failures; understandable by application programmers; and general enough to support the development of many classes of distributed persistent applications. In order to demonstrate the validity of these claims, I propose and have implemented three models for distributed higher-order computation over autonomous persistent stores. Each model has successively exposed new problems which have then been overcome by the next model. Together, the three models provide a general yet simple higher-order persistent RPC that is able to operate in realistic environments with partial failures. The real strength of this thesis is the demonstration of realism and simplicity. A higher-order persistent RPC was not only implemented but also used by programmers without experience of programming distributed applications. Furthermore, a distributed persistent application has been built using these models which would not have been feasible with a traditional (non-persistent) programming language.
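
    A rough sketch of what a higher-order persistent RPC call could look like: a procedure value is shipped to a remote store and applied to persistent data there. The interface and class names are assumptions for illustration, not the thesis's actual API.

```java
import java.io.Serializable;
import java.util.List;
import java.util.function.Function;

interface SerializableFunction<A, B> extends Function<A, B>, Serializable {}

class RemoteFailureException extends Exception {}   // models partial failure

interface RemoteStore {
    // Higher-order call: the function itself travels to the remote store
    // and is evaluated against the persistent root bound to `rootName`.
    <A, B> B apply(String rootName, SerializableFunction<A, B> proc)
            throws RemoteFailureException;
}

class Client {
    static int countOverdrawn(RemoteStore store) throws RemoteFailureException {
        // The lambda is a first-class procedure sent to the server, so no
        // hand-written translation code is needed for the data it touches.
        return store.apply("accounts",
                (SerializableFunction<List<Double>, Integer>) balances ->
                        (int) balances.stream().filter(b -> b < 0).count());
    }
}
```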

    Brief Announcement: Reaching Approximate Consensus When Everyone May Crash

    Fault-tolerant consensus is of great importance in distributed systems. This paper studies the asynchronous approximate consensus problem in the crash-recovery model with fair-loss links. In our model, up to f nodes may crash forever, while the rest may crash and recover intermittently. Each node is equipped with limited-size persistent storage that does not lose data across crashes. We present an algorithm that stores only three values in persistent storage: state, phase index, and a counter.
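
    An illustrative sketch of the small persistent footprint the announcement describes, where only a state value, a phase index, and a counter survive a crash; the field and method names are assumptions chosen for illustration, not the paper's algorithm.

```java
import java.io.Serializable;

class PersistentSlot implements Serializable {
    double state;    // current approximate-consensus estimate
    long   phase;    // phase (round) index
    long   counter;  // progress counter consulted on recovery

    PersistentSlot(double state, long phase, long counter) {
        this.state = state;
        this.phase = phase;
        this.counter = counter;
    }
}

interface StableStorage {
    void write(PersistentSlot slot);   // must be durable before the node proceeds
    PersistentSlot read();             // last durable slot, or null if none
}

class Node {
    private final StableStorage storage;
    Node(StableStorage storage) { this.storage = storage; }

    /** After an intermittent crash, resume from the durable slot rather than restarting. */
    PersistentSlot recover() {
        PersistentSlot slot = storage.read();
        return (slot != null) ? slot : new PersistentSlot(initialInput(), 0, 0);
    }

    private double initialInput() { return 0.0; }  // placeholder for the node's input value
}
```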

    CAPre: Code-Analysis based Prefetching for Persistent object stores

    Data prefetching aims to improve access times to data storage systems by predicting data records that are likely to be accessed by subsequent requests and retrieving them into a memory cache before they are needed. In the case of Persistent Object Stores, previous approaches to prefetching have been based either on predictions made through analysis of the store’s schema, which generates rigid predictions, or on monitoring access patterns to the store while applications are executed, which introduces memory and/or computation overhead. In this paper, we present CAPre, a novel prefetching system for Persistent Object Stores based on static code analysis of object-oriented applications. CAPre generates its predictions at compile time and does not introduce any overhead to the application execution. Moreover, CAPre is able to predict large numbers of objects that will be accessed in the near future, thus enabling the object store to perform parallel prefetching if the objects are distributed, in a much more aggressive way than schema-based prediction algorithms. We integrate CAPre into a distributed Persistent Object Store and run a series of experiments that show that it can reduce the execution time of applications by between 9% and over 50%, depending on the nature of the application and its persistent data model. This work has been supported by the European Union’s Horizon 2020 research and innovation programme under the BigStorage European Training Network (ETN) (grant H2020-MSCA-ITN-2014-642963), the Spanish Ministry of Science and Innovation (contract TIN2015-65316) and the Generalitat de Catalunya, Spain (contract 2014-SGR-1051).
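
    A rough sketch of the kind of transformation a code-analysis prefetcher could perform: before a traversal that the static analysis predicts, an asynchronous prefetch of the referenced objects is issued so the store can fetch them (possibly in parallel across nodes) ahead of use. The ObjectStore API shown here is an assumption, not CAPre's real interface.

```java
import java.util.Collection;
import java.util.List;

interface ObjectStore {
    void prefetchAsync(Collection<?> roots, String fieldPath); // hypothetical prefetch hint
}

class Order { double total; }
class Customer { List<Order> orders; }

class Report {
    // Original application code iterates customers and touches their orders.
    // The injected first line tells the store that the `orders` of every
    // customer will be accessed, enabling bulk, parallel retrieval.
    static double revenue(ObjectStore store, Collection<Customer> customers) {
        store.prefetchAsync(customers, "orders");   // inserted at compile time by the analysis
        double sum = 0.0;
        for (Customer c : customers)
            for (Order o : c.orders)
                sum += o.total;
        return sum;
    }
}
```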

    Limited Copies and Leased References for Distributed Persistent Objects

    As businesses become global organisations and as e-commerce opens up markets to customers across the Internet, demand grows for increasingly ambitious distributed software applications and platforms. Where these applications run over potentially huge collections of data, sophisticated management of data storage and communication is required. There is a need for well-integrated persistence and distribution support that considers the implications for long-term maintenance of valuable persistent data. Orthogonal persistence is intended to ease the programmer's job by providing support for data management that is integrated with a programming language. The simplicity of the orthogonal persistence model argues for its use in distributed systems, in order to make life simpler for the application programmer. PJRMI is an implementation of Java RMI for the orthogonally-persistent PJama platform. This dissertation addresses two problem areas raised by combining orthogonal persistence with support for distributed applications. These problem areas are illustrated by PJRMI. The first problem is raised as a consequence of attempting to provide the illusion of a persistent connection between stores. Distribution-related errors easily break this illusion. In an open system, it can be difficult to determine when an object should become persistent by remote reachability. In the long term, persistent references to remote objects threaten the maintainability of the persistent stores involved. A solution has been implemented to address the problems raised by maintaining persistent references between distributed stores. Greater autonomy of individual stores is achieved by limiting remote access to objects to a duration of time associated with a specific distributed application's lifetime. Within the application's lifetime, the benefits of persistent inter-store references for resilience are retained. The second problem is encountered when copying object graphs between stores. Large object graphs tend to build up in persistent stores over time. Copying such large object graphs can be prohibitively expensive in terms of resources and performance. A programmer may assume that the size of the graph they are copying is acceptable, based on their knowledge of a system in its infancy. However, the problem is that, in a long-lived system, their assumptions may be challenged, since the size of an object graph and the context in which it is used are more likely to change during a persistent object graph's lifetime. The combination of a typically statically-defined policy for passing objects to remote sites and programmer assumptions that fail to take into account the lifetime of an object can also result in other problems. These problems include failure to support different requirements on remote use of the same object graph by different applications during that object graph's lifetime. A solution has been implemented to address the problems raised by remote copying of large object graphs. Flexibility of control over such copying is achieved by separating copying policy from object definition. Choice of object-copying policy for a specific distributed application's lifetime provides control, while ensuring it is adaptable to changes in size of persistent object graphs over their lifetime and to changes in the context in which these graphs are used.
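
    A hedged sketch of the two ideas above: a remote reference whose validity is leased for the lifetime of a distributed application rather than persisting indefinitely, and a copy policy chosen per application instead of being fixed in the object definition. All names are illustrative assumptions, not the PJRMI implementation's API.

```java
import java.time.Instant;

class LeasedRef<T> {
    private final T target;
    private final Instant expiry;            // tied to the distributed application's lifetime

    LeasedRef(T target, Instant expiry) {
        this.target = target;
        this.expiry = expiry;
    }

    T get() {
        if (Instant.now().isAfter(expiry))
            throw new IllegalStateException("lease expired; remote object released");
        return target;                       // within the lease, behaves like a persistent reference
    }
}

enum CopyPolicy { BY_REFERENCE, SHALLOW_COPY, DEEP_COPY_BOUNDED }

interface CopyPolicyRegistry {
    // Policy is associated with the application and the type, not hard-wired
    // into the class, so it can change as persistent object graphs grow.
    CopyPolicy policyFor(Class<?> type);
}
```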

    Heterogeneous Indexing Register for Object Database

    Even though object-oriented persistent stores have not gained a large commercial adoption rate, they remain an interesting research field in many respects, including data integration. Persistent data integration is a very challenging goal in modern computer systems. This paper proposes an effective indexing integration scheme for distributed and heterogeneous data environments, using an object database as the central store.

    A Generic Storage API

    We present a generic API for the provision of storage facilities that can be tailored to produce various individually customised storage infrastructures. The paper identifies a candidate set of minimal storage system building blocks, which are sufficiently simple to avoid encapsulating policy where it cannot be customised by applications, and composable to build highly flexible storage architectures. Four main generic components are defined: the store, the namer, the caster and the interpreter. It is hypothesised that these are sufficiently general that they could act as building blocks for any information storage and retrieval system. The essential characteristics of each are defined by an interface, which may be implemented by multiple classes. Comment: Submitted to ACSC 200
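
    A minimal sketch of the four generic building blocks named in the abstract: the store, the namer, the caster and the interpreter. The method signatures below are assumptions chosen for illustration; the paper defines each component only by an interface, not by these exact operations.

```java
interface Data { byte[] bytes(); }              // opaque stored representation

interface Store {
    String put(Data data);                      // persist data, return a key for retrieval
    Data get(String key);
}

interface Namer {
    void bind(String name, String key);         // map human-level names to store keys
    String resolve(String name);
}

interface Caster {
    <T> T cast(Data data, Class<T> as);         // reinterpret raw data as a typed value
}

interface Interpreter {
    Data evaluate(Data program, Data input);    // apply a stored computation to stored data
}
```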