12 research outputs found

    Hierarchical Replication Control

    Full text link
    We present a hierarchical locking algorithm that dynamically elects a primary server in a replicated file system at various granularities. We introduce two lock types: shallow locks, which control a single file or directory, and deep locks, which lock everything in the subtree rooted at a directory. Experimental results show that for typical use cases, deep locks can make the overhead of replication control negligible, even when replication servers are widely distributed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107961/1/citi-tr-06-3.pd
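    A minimal sketch of the shallow/deep lock distinction described in the abstract, assuming a toy in-memory namespace tree rather than the paper's distributed protocol; the names Node, acquire_shallow, and acquire_deep are illustrative assumptions, not the authors' API.

        # Illustrative sketch of shallow vs. deep locks on a namespace tree.
        # A shallow lock covers one node; a deep lock covers the whole subtree.
        # Toy single-process model, not the paper's replication control protocol.

        class Node:
            def __init__(self, name, parent=None):
                self.name = name
                self.parent = parent
                self.children = {}
                self.shallow_locked = False   # covers this file/directory only
                self.deep_locked = False      # covers everything rooted here

            def ancestors(self):
                n = self.parent
                while n is not None:
                    yield n
                    n = n.parent

        def can_lock(node):
            # A node is lockable only if it is not already locked and no
            # ancestor holds a deep lock over it.
            if node.shallow_locked or node.deep_locked:
                return False
            return not any(a.deep_locked for a in node.ancestors())

        def acquire_shallow(node):
            if not can_lock(node):
                return False
            node.shallow_locked = True
            return True

        def acquire_deep(node):
            # A deep lock additionally requires that nothing inside the
            # subtree is already locked.
            def subtree_free(n):
                if n.shallow_locked or n.deep_locked:
                    return False
                return all(subtree_free(c) for c in n.children.values())
            if not can_lock(node) or not subtree_free(node):
                return False
            node.deep_locked = True
            return True

    Under this model a deep lock at a directory makes the whole subtree writable by one primary without further lock traffic, which is why the abstract reports negligible replication-control overhead for typical workloads.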

    A replicated file system for Grid computing

    Full text link
    To meet the rigorous demands of large-scale data sharing in global collaborations, we present a replication scheme for NFSv4 that supports mutable replication without sacrificing strong consistency guarantees. Experimental evaluation indicates a substantial performance advantage over a single-server system. With the introduction of a hierarchical replication control protocol, the overhead of replication is negligible even when applications mostly write and replication servers are widely distributed. Evaluation with the NAS Grid Benchmarks demonstrates that our system provides comparable and often better performance than GridFTP, the de facto standard for Grid data sharing. Copyright © 2008 John Wiley & Sons, Ltd.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/60228/1/1286_ftp.pd

    Scalable techniques for memory-efficient CDN simulations

    Get PDF

    MFS: an Adaptive Distributed File System for Mobile Hosts

    Get PDF
    Mobility is a critical feature of computer systems, and while wireless networks are common, most applications that run on mobile hosts lack flexible mechanisms for data access in an environment with large and frequent variations in network connectivity. Such conditions arise, for example, in collaborative work applications, particularly when wireless and wired users share files or databases. In this paper, we describe techniques for adapting data access to network variability in the context of MFS, a client cache manager for a distributed file system. We show how MFS adapts to widely varying bandwidth levels through modeless adaptation, and we evaluate the benefit of mechanisms for improving file system performance and cache consistency using microbenchmarks and file system traces.
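    As a rough illustration of modeless adaptation (a toy sketch, not the MFS implementation), a client cache manager can scale its prefetching and write-back behaviour continuously with an estimated bandwidth instead of switching between discrete connected/disconnected modes; the smoothing factor, thresholds, and class name below are assumptions for the example only.

        # Toy sketch of modeless adaptation: behaviour varies continuously with
        # an estimated bandwidth rather than flipping between discrete modes.
        # The smoothing factor and policy knobs are illustrative assumptions.

        class AdaptiveCachePolicy:
            def __init__(self, alpha=0.3):
                self.alpha = alpha            # EWMA smoothing factor
                self.bandwidth = None         # bytes/second estimate

            def observe_transfer(self, nbytes, seconds):
                sample = nbytes / max(seconds, 1e-6)
                if self.bandwidth is None:
                    self.bandwidth = sample
                else:
                    # Exponentially weighted moving average of observed bandwidth.
                    self.bandwidth = self.alpha * sample + (1 - self.alpha) * self.bandwidth

            def prefetch_window(self, max_window=64):
                # Fetch more blocks ahead on fast links, fewer on slow ones.
                if self.bandwidth is None:
                    return 1
                mbit_per_s = self.bandwidth * 8 / 1_000_000
                return max(1, min(max_window, int(mbit_per_s)))

            def writeback_delay(self):
                # Seconds to hold dirty data before propagating it; longer on
                # slow links so repeated overwrites are absorbed locally.
                if not self.bandwidth:
                    return 60.0
                return min(60.0, 1_000_000 / self.bandwidth)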

    The Effectiveness of Cache Coherence Implemented on the Web

    Get PDF
    The popularity of the World Wide Web (Web) has generated so much network traffic that it has increased concerns as to how the Internet will scale to meet future demand. The increased population of users and the large size of files being transmitted have resulted in concerns for different types of Internet users. Server administrators want a manageable load on their servers. Network administrators need to eliminate unnecessary traffic, thereby allowing more bandwidth for useful information. End users desire faster document retrieval. Proxy caches decrease the number of messages that enter the network by satisfying requests before they reach the server. However, the use of proxies introduces a concern with how to maintain consistency among cached document versions. Existing consistency protocols used in the Web are proving to be insufficient to meet the growing needs of the World Wide Web population. For example, too many messages are due to caches guessing when their copy is inconsistent. One option is to apply the cache coherence strategies already in use for many other distributed systems, such as parallel computers. However, these methods are not satisfactory for the World Wide Web due to its larger size and range of users. This paper provides insight into the characteristics of document popularity and how often these popular documents change. The frequency of proxy accesses to documents is also studied to test the feasibility of providing coherence at the server. The main goal is to determine whether server invalidation is the most effective protocol to use on the Web today. We make recommendations based on how frequently documents change and are accessed
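    A minimal sketch of the server-invalidation idea the paper evaluates, under the assumption that the origin server tracks which proxies hold each document and pushes an invalidation when it changes, so proxies need not guess freshness with a time-to-live; the OriginServer and ProxyCache classes are hypothetical, not a described implementation.

        # Sketch of server-driven invalidation: the origin remembers which
        # proxies cache each URL and notifies them on update, instead of
        # letting proxies guess consistency with TTL-based polling.

        from collections import defaultdict

        class OriginServer:
            def __init__(self):
                self.documents = {}                    # url -> content
                self.interested = defaultdict(set)     # url -> proxies caching it

            def fetch(self, url, proxy):
                self.interested[url].add(proxy)        # remember who cached it
                return self.documents[url]

            def update(self, url, content):
                self.documents[url] = content
                # Push an invalidation to every proxy holding a cached copy.
                for proxy in self.interested.pop(url, set()):
                    proxy.invalidate(url)

        class ProxyCache:
            def __init__(self, origin):
                self.origin = origin
                self.cache = {}

            def get(self, url):
                if url not in self.cache:              # miss: go to the origin
                    self.cache[url] = self.origin.fetch(url, self)
                return self.cache[url]

            def invalidate(self, url):
                self.cache.pop(url, None)

    Whether this beats TTL-based guessing depends on exactly the quantities the paper measures: how often popular documents change relative to how often they are re-requested.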

    Binary vote assignment on grid quorum replication technique with association rule

    Get PDF
    One of the biggest challenges that data grid users face today is improving data management. Organizations need to provide current data to users who may be geographically remote and to handle a large volume of requests for data distributed across multiple sites. Storage, availability, and consistency are therefore important issues to address in order to allow efficient and safe data access from many different sites. One way to cope effectively with these challenges is replication, a useful technique for distributed database systems: data can be accessed from multiple locations, which increases availability and accessibility, and when one site fails, users can still access the same data at another site. Read-One-Write-All (ROWA), the Hierarchical Replication Scheme (HRS), and the Branch Replication Scheme (BRS) are popular techniques for replication and data management. However, these techniques have weaknesses in communication cost, that is, the total number of replication servers needed to replicate the data, and they do not consider the correlation between data during the fragmentation process. Knowledge about data correlation can be extracted from historical data using data mining techniques. Without a proper strategy, replication increases job execution time. In this research, a some-data-to-some-sites scheme called Binary Vote Assignment on Grid Quorum with Association Rule (BVAGQ-AR) is proposed to manage replication of meaningfully fragmented data in a distributed database environment with low communication cost and low processing time per transaction. The main feature of BVAGQ-AR is that it integrates replication with data mining, allowing meaningful knowledge to be extracted from large data sets. The BVAGQ-AR technique comprises the following steps. First, the data are mined with the Apriori algorithm for association rules, which discovers correlations between data. Second, the database is fragmented based on the results of the data mining analysis, so that replication can be done effectively while saving cost. The databases resulting from fragmentation are then allocated to their assigned sites. After allocation, each site holds a database file and is ready for transactions and replication. The experimental results show that BVAGQ-AR preserves data consistency with the lowest communication cost and processing time per transaction compared to BCSA, PRA, ROWA, HRS, and BRS.
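    A highly simplified sketch of the mine-fragment-allocate pipeline outlined in the abstract, using toy data: it counts frequently co-accessed item pairs (an Apriori-style first pass), groups correlated items into fragments, and assigns each fragment to a small set of replica sites. The thresholds, the pair-only mining, and the round-robin allocation are assumptions for illustration, not the BVAGQ-AR algorithm itself.

        # Toy sketch of: mine correlations -> fragment correlated data -> allocate
        # fragments to replica sites. Thresholds and placement rule are assumptions.

        from itertools import combinations
        from collections import Counter

        def frequent_pairs(transactions, min_support):
            # Apriori-style support counting, restricted to item pairs.
            counts = Counter()
            for t in transactions:
                for pair in combinations(sorted(set(t)), 2):
                    counts[pair] += 1
            n = len(transactions)
            return {p for p, c in counts.items() if c / n >= min_support}

        def fragment(items, pairs):
            # Group correlated items into fragments (connected components).
            frags = {item: {item} for item in items}
            for a, b in pairs:
                merged = frags[a] | frags[b]
                for item in merged:
                    frags[item] = merged
            return {frozenset(f) for f in frags.values()}

        def allocate(fragments, sites, copies=3):
            # Place each fragment on a small quorum of sites, round-robin.
            placement = {}
            for i, frag in enumerate(sorted(fragments, key=len, reverse=True)):
                placement[frag] = [sites[(i + k) % len(sites)] for k in range(copies)]
            return placement

        transactions = [["a", "b"], ["a", "b", "c"], ["c", "d"], ["a", "b"]]
        pairs = frequent_pairs(transactions, min_support=0.5)
        frags = fragment({"a", "b", "c", "d"}, pairs)
        plan = allocate(frags, sites=["s1", "s2", "s3", "s4"])

    In this toy run, items a and b are co-accessed often enough to land in one fragment, so their replicas travel together, which is the kind of cost saving the fragmentation step is meant to capture.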

    Strong consistency in cache augmented SQL systems

    Full text link

    Scalable consistency maintenance in content distribution networks using cooperative leases

    Full text link

    Modeling and acceleration of content delivery in world wide web

    Get PDF
    Ph.D. (Doctor of Philosophy)