    On Improving the Robustness of Partitionable Internet-Based Mobile Ad Hoc Networks

    Recent technological advances in portability, mobility support, and high-speed wireless communications, together with users' insatiable interest in accessing the Internet, have fueled the development of mobile wireless networks. The Internet-based mobile ad hoc network (IMANET) is emerging as a ubiquitous communication infrastructure that combines a mobile ad hoc network (MANET) and the Internet to provide universal information accessibility. However, communication performance may be seriously degraded by network partitions resulting from frequent changes of the network topology. In this paper, we propose an enhanced least recently used replacement policy as part of the aggregate cache mechanism to improve information accessibility and reduce access latency in the presence of network partitioning. The enhanced aggregate cache is analyzed and also evaluated by simulation. Extensive simulation experiments are conducted under various network topologies using three different mobility models: random waypoint, Manhattan grid, and modified random waypoint. The simulation results indicate that the proposed policy significantly improves communication performance in varying network topologies and relieves the network partition problem to a great extent.
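    The enhanced policy described above builds on plain least-recently-used (LRU) replacement inside each node's local cache. The sketch below shows only that LRU baseline; the class and method names are illustrative assumptions, and the partition-aware enhancements from the paper are not reproduced.

        from collections import OrderedDict

        class LRUCache:
            """Plain LRU replacement; illustrative baseline, not the paper's enhanced policy."""

            def __init__(self, capacity):
                self.capacity = capacity
                self.items = OrderedDict()  # key -> data, ordered from least to most recently used

            def get(self, key):
                if key not in self.items:
                    return None  # miss: a real aggregate cache would query peers or the gateway
                self.items.move_to_end(key)  # mark as most recently used
                return self.items[key]

            def put(self, key, value):
                if key in self.items:
                    self.items.move_to_end(key)
                elif len(self.items) >= self.capacity:
                    self.items.popitem(last=False)  # evict the least recently used entry
                self.items[key] = value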

    Cache Invalidation Strategies for Internet-based Vehicular Ad Hoc Networks

    Internet-based vehicular ad hoc network (Ivanet) is an emerging technique that combines a wired Internet and a vehicular ad hoc network (Vanet) to develop a ubiquitous communication infrastructure and improve universal information and service accessibility. A key design optimization technique in Ivanets is to cache frequently accessed data items in the local storage of vehicles. Since vehicles are not critically limited by storage/memory space or power consumption, selecting the proper data items for caching is not very critical. Rather, an important design issue is how to keep the cached copies valid when the original data items are updated. This is essential to provide fast access to valid data for fast-moving vehicles. In this paper, we propose a cooperative cache invalidation (CCI) scheme and its enhancement (ECCI) that take advantage of the underlying location management scheme to reduce the number of broadcast operations and the corresponding query delay. We develop an analytical model of the CCI and ECCI techniques for a fast estimate of performance trends and critical design parameters. Then, we modify two prior cache invalidation techniques to work in Ivanets: a poll-each-read (PER) scheme and an extended asynchronous (EAS) scheme. We compare the performance of the four cache invalidation schemes as a function of query interval, cache update interval, and data size through extensive simulation. Our simulation results indicate that the proposed schemes can reduce the query delay by up to 69%, increase the cache hit rate by up to 57%, and have the lowest communication overhead compared to the prior PER and EAS schemes.
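    The central question the abstract raises is how a vehicle decides whether a cached copy is still valid before answering a query. The sketch below illustrates that decision in its simplest form: compare the copy's version against the latest invalidation information, and fall back to polling the origin when the copy is stale. All names are hypothetical stand-ins; this is not the CCI/ECCI protocol itself.

        # Illustrative cache-validity check; names and structures are assumptions.
        class CachedItem:
            def __init__(self, item_id, data, version):
                self.item_id = item_id
                self.data = data
                self.version = version  # version of the original item when it was cached

        def answer_query(item, invalidation_report, poll_server):
            """Serve a query from cache when the copy is known to be valid.

            invalidation_report maps item_id -> latest announced version, standing in
            for periodic invalidation broadcasts; poll_server(item_id) stands in for a
            poll-each-read style fallback that always contacts the origin server.
            """
            latest_version = invalidation_report.get(item.item_id, item.version)
            if latest_version == item.version:
                return item.data  # cached copy still valid: no round trip needed
            fresh_data, fresh_version = poll_server(item.item_id)
            item.data, item.version = fresh_data, fresh_version  # refresh the local copy
            return fresh_data

        # Example with a stubbed origin server:
        server_store = {"road-42": ("congested", 3)}
        cached = CachedItem("road-42", "clear", 2)
        report = {"road-42": 3}  # a newer version has been announced
        print(answer_query(cached, report, lambda i: server_store[i]))  # -> congested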

    Introducing a Data Sliding Mechanism for Cooperative Caching in Manycore Architectures

    In this paper, we propose a new cooperative caching method that improves the cache miss rate for manycore microarchitectures. The work is motivated by some limitations of recent adaptive cooperative caching proposals. Elastic Cooperative Caching (ECC) is a dynamic memory partitioning mechanism that allows sharing cache across cooperative nodes according to the application behavior. However, it is mainly limited by its cache eviction rate in the case of a highly stressed neighborhood. Another system, the adaptive Set-Granular Cooperative Caching (ASCC), is based on finer set-based mechanisms for better adaptability. However, heavy localized cache loads are not efficiently managed. In this context, we propose a cooperative caching strategy that consists of sliding data to close neighbors. When a cache receives a request to store a neighbor's private block, it spills its least recently used private data to a close neighbor. Thus, solicited saturated nodes slide local blocks to their respective neighbors so that free cache space is always available. We also propose a new Priority-based Data Replacement policy to decide efficiently which blocks should be spilled, and a new mechanism, called the Best Neighbor selector, to choose the host destination. A first analytic performance evaluation shows that the proposed cache management policies halve the average global communication rate. Since frequent accesses are concentrated in neighboring zones, this efficiently improves on-chip traffic. Finally, our evaluation shows that the cache miss rate is improved: each tile keeps its most frequently accessed data within one hop, instead of ejecting it off-chip. The proposed techniques notably reduce the cache miss rate under high solicitation of the cooperative zone, as shown in the experiments.
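    The sliding mechanism described above can be pictured as a chain of spills: a saturated tile evicts its own least recently used block to its best neighbor rather than off-chip. The sketch below is a minimal illustration of that idea under assumed names (Tile, accept_block, best_neighbor); it is not the paper's actual hardware interface, and the priority-based replacement policy is reduced to plain LRU.

        from collections import OrderedDict

        class Tile:
            """Minimal cooperative cache node; illustrative only."""

            def __init__(self, name, capacity, neighbors=None):
                self.name = name
                self.capacity = capacity
                self.blocks = OrderedDict()       # block id -> data, kept in LRU order
                self.neighbors = neighbors or []  # nearby tiles willing to cooperate

            def free_slots(self):
                return self.capacity - len(self.blocks)

            def best_neighbor(self):
                # Stand-in for a "Best Neighbor" selector: pick the least loaded neighbor.
                return max(self.neighbors, key=lambda n: n.free_slots(), default=None)

            def accept_block(self, block_id, data, hops_left=2):
                """Store a spilled block, sliding the local LRU block one hop away if full."""
                if self.free_slots() == 0:
                    victim_id, victim_data = self.blocks.popitem(last=False)  # local LRU victim
                    target = self.best_neighbor()
                    if target is not None and hops_left > 0:
                        target.accept_block(victim_id, victim_data, hops_left - 1)
                    # else: in a real design the victim would be written back off-chip
                self.blocks[block_id] = data

        # Example: tile A is full, so storing another block slides A's LRU block to B.
        a, b = Tile("A", capacity=1), Tile("B", capacity=1)
        a.neighbors = [b]
        a.accept_block("x", 1)   # fills A
        a.accept_block("y", 2)   # A slides "x" to B, then stores "y"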