1,313 research outputs found

    Maintaining consistency in client-server database systems with client-side caching

    PhD thesis. Caching has been used in client-server database systems to improve the performance of applications. Much of the existing work has concentrated on caching techniques at the server side, since the underlying assumption has been that clients are “thin”, with application-level processing taking place mainly at the server. There is also a newer class of “thick client” applications in which clients access the database at the server but also perform a substantial amount of processing at the client side; here, client-side caching is needed to give applications good performance. This thesis presents a transactional cache consistency scheme suitable for systems with client-side caching. The scheme is based on the optimistic approach to concurrency control and provides serializability for committed transactions, in contrast to many modern systems that provide only snapshot isolation, which is weaker than serializability. A novel feature is that the processing load for validating transactions at commit time is shared between clients and the database server, thereby reducing the load on the server: read-only transactions can be validated at the client side without communicating with the server. Another feature is that the scheme permits disconnected operation, allowing clients with cached objects to work offline. The performance of the scheme is evaluated using simulation experiments. These demonstrate that for mostly read-only transaction loads, for which caching is most effective, the scheme outperforms the existing concurrency control scheme with client-side caching that is considered to be the best, and matches the performance of the widely used scheme that provides only snapshot isolation. The results also show that the scheme provides reasonable performance in a disconnected environment. Funding: Directorate General of Higher Education, Ministry of National Education, Indonesia.
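The client-side validation step lends itself to a compact illustration. The following is a minimal sketch, assuming a hypothetical client that tracks the version of every cached object it reads, receives invalidation notices from the server, and refreshes stale copies through an assumed `server.fetch(oid)` call; none of these names come from the thesis itself. It shows how a read-only transaction can commit entirely at the client, while an update transaction would still ship its read and write sets to the server.

```python
# Hypothetical sketch of client-side optimistic validation for read-only
# transactions; names and structure are illustrative, not the thesis's code.

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}           # oid -> (value, version) of the cached copy
        self.invalidated = set()  # oids the server has reported as stale

    def begin(self):
        return {}                 # transaction read set: oid -> version read

    def read(self, txn, oid):
        if oid not in self.cache or oid in self.invalidated:
            self.cache[oid] = self.server.fetch(oid)  # assumed: (value, version)
            self.invalidated.discard(oid)
        value, version = self.cache[oid]
        txn[oid] = version
        return value

    def commit_read_only(self, txn):
        # Local validation, no server round trip: commit iff nothing the
        # transaction read has since been invalidated or re-fetched at a
        # newer version; otherwise abort and retry against fresh copies.
        return all(oid not in self.invalidated and self.cache[oid][1] == v
                   for oid, v in txn.items())
```

Splitting the work this way is what moves part of the commit-time validation load off the server, as the abstract describes.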

    Reducing cross-domain call overhead using batched futures

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 95-96). By Phillip Lee Bogle.
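The catalog entry above carries no abstract, but the technique named in the title can be illustrated briefly: instead of crossing the domain boundary once per call, each call immediately returns a future and is queued, and the whole batch is sent in a single crossing when a result is actually demanded. The sketch below is a hypothetical illustration of that idea, not code from the thesis.

```python
# Hypothetical batched-futures sketch; all names are illustrative.

class Future:
    def __init__(self, batcher):
        self._batcher, self._ready, self._value = batcher, False, None

    def _resolve(self, value):
        self._ready, self._value = True, value

    def get(self):
        if not self._ready:
            self._batcher.flush()   # one crossing pays for every queued call
        return self._value

class Batcher:
    def __init__(self, server):
        self._server = server
        self._pending = []          # (method, args, future), in call order

    def call(self, method, *args):
        fut = Future(self)
        self._pending.append((method, args, fut))
        return fut                  # returns immediately; no crossing yet

    def flush(self):
        batch = [(m, a) for m, a, _ in self._pending]
        results = self._server.run_batch(batch)  # the single boundary crossing
        for (_, _, fut), r in zip(self._pending, results):
            fut._resolve(r)
        self._pending.clear()

class EchoServer:                   # stand-in for the remote domain
    def run_batch(self, batch):
        return [f"{m}{args}" for m, args in batch]

b = Batcher(EchoServer())
f1, f2 = b.call("m1", 1), b.call("m2", 2)
assert f2.get() == "m2(2,)"         # forces one flush; f1 is now ready too
assert f1.get() == "m1(1,)"
```

Calling code stays sequential while two calls cost one crossing instead of two.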

    Issues in building mobile-aware applications with the Rover Toolkit

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 69-73). By Joshua A. Tauber.

    Khazana: An infrastructure for building distributed services

    Technical report. Essentially all distributed systems, applications, and services at some level boil down to the problem of managing distributed shared state. Unfortunately, while the problem of managing distributed shared state is shared by many applications, there is no common means of managing the data; every application devises its own solution. We have developed Khazana, a distributed service exporting the abstraction of a distributed persistent globally shared store that applications can use to store their shared state. Khazana is responsible for performing many of the common operations needed by distributed applications, including replication, consistency management, fault recovery, access control, and location management. Using Khazana as a form of middleware, distributed applications can be quickly developed from corresponding uniprocessor applications through the insertion of Khazana data access and synchronization operations.
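A hedged sketch of the kind of store interface such a service could export is given below; the method names and the in-memory stand-in are assumptions for illustration, since the abstract does not spell out Khazana's actual API. The point is that an application keeps its shared state behind this one interface, and the service, not the application, handles replication, consistency, recovery, access control, and location.

```python
# Illustrative single-process stand-in for a globally shared persistent store;
# method names are assumptions, not Khazana's documented interface. A real
# implementation would replicate, locate, and recover this data transparently.

class SharedStore:
    def __init__(self):
        self._mem = bytearray()     # stands in for the global address space

    def alloc(self, nbytes: int) -> int:
        addr = len(self._mem)
        self._mem.extend(b"\x00" * nbytes)
        return addr                 # address meaningful to every client

    def read(self, addr: int, nbytes: int) -> bytes:
        return bytes(self._mem[addr:addr + nbytes])

    def write(self, addr: int, data: bytes) -> None:
        self._mem[addr:addr + len(data)] = data

store = SharedStore()
addr = store.alloc(16)
store.write(addr, b"shared state")   # another process would read this back
assert store.read(addr, 12) == b"shared state"
```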

    Community next steps for making globally unique identifiers work for biocollections data

    Biodiversity data are being digitized and made available online at a rapidly increasing rate, but current practices typically do not preserve linkages between these data, which impedes interoperation, provenance tracking, and the assembly of larger datasets. For data associated with biocollections, the biodiversity community has long recognized that an essential part of establishing and preserving linkages is to apply globally unique identifiers at the point when data are generated in the field and to persist these identifiers downstream, but this is seldom implemented in practice. There has been neither coalescence towards a single identifier solution (as in some other domains), nor even a set of recommended best practices and standards to support multiple identifier schemes sharing consistent responses. In order to further progress towards a broader community consensus, a group of biocollections and informatics experts assembled in Stockholm in October 2014 to discuss community next steps for overcoming the current roadblocks. The workshop participants divided into four groups, focusing on: identifier practice in current field biocollections; identifier application for legacy biocollections; identifiers as applied to biodiversity data records as they are published and made available in semantically marked-up publications; and cross-cutting identifier solutions that bridge these domains. The main outcome was consensus on key issues, including recognition of the differences between legacy and new biocollections processes, the need for identifier metadata profiles that can report information on identifier persistence missions, and the unambiguous indication of the type of object associated with the identifier. Current identifier characteristics are also summarized, and an overview of available schemes and practices is provided.
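As a concrete, entirely hypothetical illustration of two of the consensus points, an identifier metadata profile reporting a persistence mission and the type of the identified object might look like the following; none of these field names come from a published standard.

```python
# Hypothetical identifier metadata profile; field names and values are
# illustrative only, not an agreed biodiversity standard.

import uuid

profile = {
    # Applied at the point of data generation in the field, then persisted
    # downstream so linkages between derived records survive.
    "identifier": f"urn:uuid:{uuid.uuid4()}",
    "scheme": "UUID",                       # one of several coexisting schemes
    # Unambiguous indication of the type of object the identifier denotes.
    "object_type": "PhysicalSpecimen",
    # The "persistence mission": who stands behind the identifier, for how long.
    "persistence": {
        "steward": "Example Natural History Museum",  # hypothetical institution
        "commitment": "resolvable for at least 25 years",
    },
}
```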

    Garbage collection in a large, distributed object store

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (p. 93-97). By Umesh Maheshwari.

    Models of higher-order, type-safe, distributed computation over autonomous persistent object stores

    A remote procedure call (RPC) mechanism permits the calling of procedures in another address space. RPC is a simple but highly effective mechanism for interprocess communication and today enjoys great popularity as a tool for building distributed applications. This popularity is partly a result of its overall simplicity, but also partly a consequence of more than 20 years of research in transparent distribution that has failed to deliver systems meeting the expectations of real-world application programmers. During the same 20 years, persistent systems have proved their suitability for building complex database applications by seamlessly integrating features traditionally found in database management systems into the programming language itself. Some research effort has been invested in distributed persistent systems, but the outcomes commonly suffer from the same problems found with transparent distribution. In this thesis I claim that a higher-order persistent RPC is useful for building distributed persistent applications. The proposed mechanism is: realistic, in the sense that it uses current technology and tolerates partial failures; understandable by application programmers; and general enough to support the development of many classes of distributed persistent applications. In order to demonstrate the validity of these claims, I propose and have implemented three models of distributed higher-order computation over autonomous persistent stores. Each model has successively exposed new problems, which have then been overcome by the next model. Together, the three models provide a general yet simple higher-order persistent RPC that is able to operate in realistic environments with partial failures. The real strength of this thesis is the demonstration of realism and simplicity. A higher-order persistent RPC was not only implemented but also used by programmers without experience of programming distributed applications. Furthermore, a distributed persistent application has been built using these models that would not have been feasible with a traditional (non-persistent) programming language.
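A minimal sketch of what "higher-order" buys here: the procedure itself travels across the call, so behaviour can be shipped to the store that holds the data. The stand-in below simulates the remote side in-process and reduces failure handling to a status value; names are illustrative and do not reflect the three models developed in the thesis.

```python
# Hypothetical higher-order RPC sketch: the procedure is an argument. A real
# mechanism must marshal code between autonomous stores and survive partial
# failure; both are reduced to stand-ins here.

class RemoteStore:
    def __init__(self, state):
        self.state = state                 # the store's persistent state

    def rpc(self, proc, *args):
        try:
            return ("ok", proc(self.state, *args))
        except ConnectionError:            # partial failure reported, not masked
            return ("failed", None)

def deposit(state, amount):                # an ordinary first-class procedure
    state["balance"] += amount
    return state["balance"]

store = RemoteStore({"balance": 100})
status, value = store.rpc(deposit, 25)     # ships the behaviour to the data
assert (status, value) == ("ok", 125)
```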

    A spatiotemporal object-oriented data model for landslides (LOOM)

    LOOM (landslide object-oriented model) is presented here as a data structure for landslide inventories based on the object-oriented paradigm. It aims at the effective storage, in a single dataset, of the complex spatial and temporal relations between landslides recorded and mapped in an area, and at their manipulation. Spatial relations are handled through a hierarchical classification based on topological rules, and two levels of aggregation are defined: (i) landslide complexes, grouping spatially connected landslides of the same type, and (ii) landslide systems, merging landslides of any type that share a spatial connection. For the aggregation procedure, a minimal functional interaction between landslide objects is defined as a spatial overlap between objects. Temporal characterization of landslides is achieved by assigning to each object an exact date or a time range for its occurrence, integrating both the time-frame and the event-based approaches. The combination of spatial integrity and temporal characterization ensures the storage of vertical relations between landslides, so that the superimposition of events can easily be retrieved by querying the temporal dataset. The proposed methodology for landslide inventorying has been tested on selected case studies in the Cilento UNESCO Global Geopark (Italy). We demonstrate that the proposed LOOM model avoids data fragmentation, redundancy, and topological inconsistency between the digital data and the real-world features. This application proved powerful for reconstructing the gravity-induced deformation history of hillslopes, and thus for predicting their evolution.
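The two aggregation levels can be made concrete with a short sketch. Below, geometry is reduced to bounding boxes and the "minimal functional interaction" to a box-overlap test; class and field names are illustrative and are not the LOOM schema.

```python
# Hypothetical sketch of LOOM-style aggregation: complexes group spatially
# connected landslides of the same type, systems group connected landslides
# of any type. Geometry is simplified to bounding boxes for illustration.

from dataclasses import dataclass

@dataclass
class Landslide:
    lid: int
    ls_type: str         # e.g. "rotational slide", "earthflow"
    bbox: tuple          # (xmin, ymin, xmax, ymax)
    when: tuple          # exact date or (start, end) occurrence range

def overlaps(a: Landslide, b: Landslide) -> bool:
    """Minimal functional interaction: a spatial overlap between objects."""
    ax0, ay0, ax1, ay1 = a.bbox
    bx0, by0, bx1, by1 = b.bbox
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def aggregate(landslides, same_type_only):
    """Incremental connected-components grouping.
    same_type_only=True  -> landslide complexes (connected, same type)
    same_type_only=False -> landslide systems   (connected, any type)"""
    groups = []
    for ls in landslides:
        touching = [g for g in groups
                    if any(overlaps(ls, m) and
                           (not same_type_only or m.ls_type == ls.ls_type)
                           for m in g)]
        merged = [ls] + [m for g in touching for m in g]
        groups = [g for g in groups if g not in touching] + [merged]
    return groups

a = Landslide(1, "earthflow", (0, 0, 2, 2), ("1998-05-05",))
b = Landslide(2, "rotational slide", (1, 1, 3, 3), ("2010", "2012"))
assert len(aggregate([a, b], same_type_only=False)) == 1  # one system
assert len(aggregate([a, b], same_type_only=True)) == 2   # two complexes
```

Querying the temporal attribute of a system's members then recovers the superimposition of events, which is the retrieval the abstract describes.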

    Zephyr extensibility in small workstation-oriented computer networks

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references. By Jason T. Hunter.