Prospects of caching in a distributed digital library
Many independent publishers today offer digital libraries with full-text archives. In an attempt to provide a single user interface to a large set of archives, DTV's Article Database Service offers a consolidated interface to a geographically distributed set of archives. While this approach offers a tremendous functional advantage to a user, network delays and queuing delays in servers make the user-perceived interactive performance poor. In this paper, we study the prospects of caching articles at the client level as well as at intermediate points, as manifested by gateways that implement the interfaces to the many full-text archives. A central research question is the nature of the locality in user accesses to such a digital library. Using access logs to drive simulations, we find that client-side caching can result in a 20% hit rate. However, at the gateway level, where multiple users may access the same article, the temporal locality is poor and caching is not as relevant. We have also studied whether spatial locality can be exploited by loading into the cache all articles in an issue, volume, or journal when a single article is accessed, but found that spatial locality is also quite poor.
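The abstract does not include the simulator itself; the following is a minimal, hedged sketch in C of a trace-driven client-side cache simulation in the same spirit. The input format (one article identifier per line on stdin), the LRU policy, and the cache size are assumptions for illustration, not details taken from the paper.

/* Minimal sketch of a trace-driven cache simulation (not the paper's
 * simulator): article IDs are read one per line from stdin, an LRU
 * cache of CACHE_SLOTS entries is simulated, and the hit rate is
 * printed. IDs, cache size, and input format are illustrative. */
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 256
#define ID_LEN      128

static char cache[CACHE_SLOTS][ID_LEN]; /* slot 0 = most recently used */
static int  used = 0;

/* Returns 1 on hit, 0 on miss; either way the ID ends up in slot 0. */
static int access_article(const char *id)
{
    int i, hit = 0;

    for (i = 0; i < used; i++) {
        if (strcmp(cache[i], id) == 0) {
            hit = 1;
            break;
        }
    }
    if (!hit && used < CACHE_SLOTS)
        used++;
    /* On a miss with a full cache, reuse the last slot (evicting the
     * least recently used entry). */
    if (i >= used)
        i = used - 1;
    /* Shift the more recent entries down one slot and put the
     * accessed ID at the front. */
    memmove(cache[1], cache[0], (size_t)i * ID_LEN);
    strncpy(cache[0], id, ID_LEN - 1);
    cache[0][ID_LEN - 1] = '\0';
    return hit;
}

int main(void)
{
    char line[ID_LEN];
    long accesses = 0, hits = 0;

    while (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\r\n")] = '\0';
        if (line[0] == '\0')
            continue;
        accesses++;
        hits += access_article(line);
    }
    if (accesses > 0)
        printf("hit rate: %.1f%% (%ld/%ld)\n",
               100.0 * hits / accesses, hits, accesses);
    return 0;
}

Feeding such a program an access log with per-user streams would approximate the client-side experiment; replaying the merged log of all users would approximate the gateway-level one.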
Using Hoarding to Increase the Availability in Shared File Systems
Many mobile devices have reached the point where the users' (active) working set is smaller than the amount of storage available, and that trend is likely to continue. Currently these resources are made available for recording new data, but we think that better use could be made of this capacity. Hoarding data that has not previously been accessed could give better data coverage during disconnected operation, when wireless networks are unavailable or access to them is expensive. We gathered a trace from a university file system used by more than 5000 people over a period of 16 months. This trace is used to drive a simulation model of distributed file systems. This paper studies a novel hoarding scheme that uses the access profiles of other users to predict which files a user will need in the future. This hoarding scheme is shown to avoid between 30% and 75% of remote accesses to files that are accessed for the first time. Furthermore, hoarded but unused data can be expired, because we observe experimentally that the population shifts its focus each month.
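The hoarding algorithm is not spelled out in the abstract; as an illustrative sketch only, the C program below ranks files by how many other users have accessed them and hoards the most popular ones the local user does not yet hold. The data structures, the popularity metric, and the hoarding budget are assumptions, not the paper's scheme.

/* Minimal sketch (not the paper's algorithm) of profile-based hoarding:
 * given per-file access counts aggregated over the other users, pick
 * the most popular files the local user has not accessed yet as
 * hoarding candidates. Structures and thresholds are illustrative. */
#include <stdio.h>
#include <stdlib.h>

struct file_stat {
    const char *path;        /* file identifier */
    unsigned    other_users; /* how many other users accessed it */
    int         local_copy;  /* already cached/accessed locally? */
};

static int by_popularity(const void *a, const void *b)
{
    const struct file_stat *fa = a, *fb = b;
    return (int)fb->other_users - (int)fa->other_users; /* descending */
}

/* Print up to 'budget' hoarding candidates, most popular first. */
static void select_hoard_set(struct file_stat *files, size_t n, size_t budget)
{
    size_t i, chosen = 0;

    qsort(files, n, sizeof files[0], by_popularity);
    for (i = 0; i < n && chosen < budget; i++) {
        if (files[i].local_copy)
            continue; /* already available locally, no need to hoard */
        printf("hoard: %s (seen by %u other users)\n",
               files[i].path, files[i].other_users);
        chosen++;
    }
}

int main(void)
{
    /* Hypothetical per-file statistics as they might be derived
     * from a shared file-system trace. */
    struct file_stat files[] = {
        { "/proj/report.tex", 42, 0 },
        { "/bin/tool",        17, 1 },
        { "/data/set1.csv",    8, 0 },
        { "/tmp/scratch",      1, 0 },
    };
    select_hoard_set(files, sizeof files / sizeof files[0], 2);
    return 0;
}

The expiration result reported above would correspond to periodically dropping hoarded entries whose popularity counts have not been refreshed within roughly a month.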
Enhancing Garbage Collection Synchronization using Explicit Bit Barriers
Multicore architectures offer a convenient way to unlock concurrency between the application (called the mutator) and the garbage collector, yet efficient synchronization between the two by means of barriers is critical to exploit this concurrency. Hardware Transactional Memory (HTM), now commercially available, opens up new ways to synchronize with dramatically lower overhead for the mutator. Unfortunately, HTM-based schemes proposed to date either require specialized hardware support or impose severe overhead through the invocation of OS-level trap handlers. This paper proposes Explicit Bit Barriers (EBB), a novel approach for fast synchronization between the mutator and HTM-encapsulated relocation tasks. We compare the efficiency of EBBs with read barriers based on virtual memory that rely on OS-level trap handlers. We show that EBBs are nearly as efficient as schemes that need specialized hardware, but run on commodity Intel processors with TSX extensions.
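The abstract does not describe the EBB mechanism in detail; the C fragment below is a speculative sketch of the general idea, assuming an explicit per-object bit that the mutator's read barrier tests on its fast path, while relocation runs inside an Intel TSX transaction (the _xbegin/_xend intrinsics from immintrin.h, compiled with -mrtm) so that a conflicting mutator access aborts the relocation. Field names, the slow path, and the forwarding step are assumptions or omissions, not the paper's design.

/* Speculative sketch, not the paper's implementation: the collector
 * relocates an object inside a TSX transaction, while the mutator's
 * read barrier only checks an explicit "being-relocated" bit. A
 * concurrent mutator access to the object (or the bit) conflicts with
 * the transaction and aborts the relocation, so the mutator never
 * observes a half-moved object. Compile with: gcc -mrtm -c ebb_sketch.c */
#include <immintrin.h>
#include <stdatomic.h>
#include <string.h>

struct object {
    atomic_uint relocating;  /* explicit bit checked by the mutator */
    char payload[64];
};

/* Mutator-side read barrier: the fast path is a single bit test. */
static void *mutator_access(struct object *obj)
{
    if (atomic_load_explicit(&obj->relocating, memory_order_acquire))
        return NULL;           /* slow path: wait or help (omitted) */
    return obj->payload;       /* safe to use the current location */
}

/* Collector-side relocation task encapsulated in an HTM transaction. */
static int relocate(struct object *from, struct object *to)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        atomic_store_explicit(&from->relocating, 1, memory_order_relaxed);
        memcpy(to->payload, from->payload, sizeof from->payload);
        _xend();               /* commit: bit and copy become visible atomically */
        return 1;              /* relocation succeeded */
    }
    return 0;                  /* aborted, e.g. the mutator touched 'from' */
}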