569 research outputs found

    Data consistency for cooperative caching in mobile environments

    2006–2007 > Academic research: refereed > Publication in refereed journal > Version of Record, Published

    Efficient Cache Invalidation in Mobile Environments

    In a mobile environment, caching data items at the mobile clients is important because it reduces data access time and bandwidth utilization. While caching is desirable, it may cause data inconsistency between the server and the mobile clients if their communication is disconnected for a period of time. To ensure coherence between the source items and their cached copies, the server can broadcast invalidation reports to the mobile clients, which then use the reports to update their cached data items. Cache invalidation is indeed an effective approach to maintaining such data coherence. This paper presents a new cache invalidation strategy which is shown through experimental evaluation to maintain data consistency between the server and mobile clients more efficiently than existing invalidation strategies.
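The broadcast-report mechanism described above can be sketched as follows. This is a minimal illustration of timestamp-based invalidation reports, not the paper's specific strategy; the `ClientCache` class and the `BROADCAST_WINDOW` constant are assumptions for the example.

```python
BROADCAST_WINDOW = 10.0  # seconds of update history one report covers (assumption)

class ClientCache:
    """Mobile client cache driven by periodic server invalidation reports."""

    def __init__(self):
        self.items = {}          # item_id -> cached value
        self.last_report = 0.0   # timestamp of the last report processed

    def apply_report(self, report_time, updated_ids):
        # If the client was disconnected longer than the window the report
        # covers, updates may have been missed: the whole cache is suspect.
        if report_time - self.last_report > BROADCAST_WINDOW:
            self.items.clear()
        else:
            # Otherwise drop only the items the server reports as updated.
            for item_id in updated_ids:
                self.items.pop(item_id, None)
        self.last_report = report_time
```

A short connection outage costs only the reported items; a long disconnection forces a full cache drop, which is the consistency-versus-bandwidth trade-off broadcast invalidation navigates.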

    ABMMCCS: Application based multi-level mobile cache consistency scheme

    Maintaining cache consistency in a mobile computing system is a critical issue due to the inherent limitations of the mobile environment, such as limited network bandwidth and mobile device energy. Most existing schemes for mobile cache consistency support only one level of consistency, either strict or weak, which is not suitable all the time, as different mobile application systems have different consistency requirements on their data. In addition, the majority of schemes restrict cached data to read-only use, which limits the functionality of the caching system. This paper proposes a new scheme, the Application Based Multi-Level Mobile Cache Consistency Scheme (ABMMCCS), to maintain mobile cache consistency in a single-cell wireless network. The main idea of ABMMCCS is to suit various real mobile application systems by supporting multiple levels of consistency based on the application requirements, while saving mobile client energy and reducing network bandwidth consumption. Initial evaluation results show that ABMMCCS reduces the number of uplink messages issued by the mobile client, which helps save mobile client energy and better utilize the limited network bandwidth.
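The multi-level idea can be illustrated with a small read path that picks between strict, bounded-staleness and weak consistency per request. The `Level` names, the `delta` bound and the `fetch` callable are hypothetical stand-ins for illustration, not ABMMCCS's actual protocol.

```python
from enum import Enum

class Level(Enum):
    STRICT = 1   # validate with the server on every read
    DELTA = 2    # tolerate bounded staleness of up to `delta` seconds
    WEAK = 3     # serve from the cache whenever the item is present

class MultiLevelCache:
    def __init__(self, fetch, delta=5.0):
        self.fetch = fetch       # uplink to the server (assumed callable)
        self.delta = delta
        self.store = {}          # key -> (value, cached_at)
        self.uplinks = 0         # uplink messages cost client energy

    def read(self, key, level, now):
        entry = self.store.get(key)
        fresh = entry is not None and (
            level is Level.WEAK or
            (level is Level.DELTA and now - entry[1] <= self.delta))
        if fresh:
            return entry[0]      # no uplink: saves energy and bandwidth
        self.uplinks += 1        # strict reads (or stale entries) go uplink
        value = self.fetch(key)
        self.store[key] = (value, now)
        return value
```

Applications with weak requirements never touch the uplink on a cache hit, which is the mechanism by which a multi-level scheme can reduce uplink message counts.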

    An Effective Service Mechanism to Achieve Low Query Latency along with reduced Negative Acknowledgement in iVANET: An Approach to Improve Quality of Service in iVANET

    The Internet-based vehicular ad hoc network (iVANET) combines the wired Internet and vehicular ad hoc networks (VANETs) to develop a new generation of ubiquitous communication. The Internet is usually applied in vehicle-to-infrastructure (V2I) solutions, whereas ad hoc networks are used in vehicle-to-vehicle (V2V) communication. Since vehicular networks are characterized by a high-speed, dynamically changing network topology, latency is one of the hot issues in VANETs; it is proportional to the distance between the data source and the remote vehicle and to the mechanism involved in accessing the source memory. If that distance is cleverly reduced by a redefined caching technique together with a suitable cache lookup mechanism, latency is likely to be reduced by a significant factor in iVANET. This paper studies and analyzes various cache invalidation schemes, including state-of-the-art ones, and proposes a novel idea for improving network performance with respect to query latency and negative acknowledgements in iVANET. The roles of the mediatory network component are redefined with an associative service mechanism which guarantees reduced query latency as well as fewer negative acknowledgements in the iVANET environment.
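The role of the mediatory component can be sketched as a roadside cache that answers queries locally on a hit and returns a negative acknowledgement only when the origin also lacks the item. The `RoadsideUnit` class and its dict-backed origin are illustrative assumptions, not the paper's mechanism.

```python
class RoadsideUnit:
    """Mediatory cache between vehicles (V2V/V2I) and the wired Internet."""

    def __init__(self, origin):
        self.origin = origin     # assumption: a dict standing in for the data source
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def query(self, key):
        if key in self.cache:
            self.hits += 1       # served locally: only the short V2I hop
            return self.cache[key]
        self.misses += 1         # full round-trip to the wired Internet
        value = self.origin.get(key)
        if value is None:
            return None          # negative acknowledgement back to the vehicle
        self.cache[key] = value
        return value
```

Every hit replaces an Internet round-trip with a one-hop V2I exchange, which is the distance-reduction effect the abstract attributes to caching at the mediatory component.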

    Constructing Efficient Cache Invalidation Schemes in Mobile Environments

    Cache invalidation is an effective approach to maintaining data consistency between the server and mobile clients in a mobile environment. This paper presents two new cache invalidation schemes which are designed according to real situations and are therefore able to meet the more practical needs of a mobile environment. The ABI+HCQU scheme divides data into different groups based on their utilization rates (hot/cold/query/update) and adapts their broadcasting intervals (ABI) accordingly to suit actual needs. The SWRCC+MUVI (sleep/wakeup/recovery/check/confirm + modified/uncertain/valid/invalid) scheme aims to solve the validity problem of cached data after a client is disconnected from the server. The new cache invalidation schemes are shown through experimental evaluation to outperform most existing schemes in terms of data access time, cache miss rates and bandwidth consumption. (International conference, 16–18 December 2007, Shanghai, China.)
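The hot/cold grouping with adaptive broadcast intervals can be illustrated roughly: items queried more often are rebroadcast at shorter intervals, so popular data appears more frequently in the broadcast cycle. This sketch, including the `schedule` function and its median-based hot/cold split, is an illustrative assumption and not the ABI+HCQU algorithm itself.

```python
def schedule(items, slots, base_interval=8):
    """Build a broadcast cycle where hot items get short intervals.

    items: {item_id: query_rate}; slots: number of time slots in one cycle.
    Items at or above the median query rate are treated as hot and
    rebroadcast four times as often as cold items.
    """
    median = sorted(items.values())[len(items) // 2]
    interval = {i: (base_interval // 4 if rate >= median else base_interval)
                for i, rate in items.items()}
    # An item is broadcast in every slot that is a multiple of its interval.
    cycle = []
    for t in range(slots):
        due = sorted(i for i in items if t % interval[i] == 0)
        cycle.append(due)
    return cycle
```

Shortening intervals for hot data cuts the expected wait for the most popular items, while cold data still cycles through at the base interval rather than consuming bandwidth every few slots.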

    An efficient cache invalidation strategy in mobile environments

    We present a new cache invalidation strategy able to maintain data consistency between the server and mobile clients in an efficient way in mobile communications. (International conference, 29–31 March 2004, Fukuoka, Japan.)

    Inefficiencies in the Cache Hierarchy: A Sensitivity Study of Cacheline Size with Mobile Workloads

    With the rising number of cores in mobile devices, the cache hierarchy in mobile application processors gets deeper and the cache sizes get bigger. However, the cacheline size has remained relatively constant over the last decade in mobile application processors. In this work, we investigate whether the cacheline size in mobile application processors is due for a refresh, by looking at inefficiencies in the cache hierarchy which tend to be exacerbated when increasing the cacheline size: false sharing and cacheline utilization. Firstly, we look at false sharing, which is more likely to arise at larger cacheline sizes and can severely impact performance. False sharing occurs when non-shared data structures, mapped onto the same cacheline, are accessed by threads running on different cores, causing avoidable invalidations and subsequent misses. False sharing has been found in various places, such as scientific workloads and real applications. We find that, whilst increasing the cacheline size does increase false sharing, it remains negligible compared to known cases of false sharing in scientific workloads, due to the limited level of thread-level parallelism in mobile workloads. Secondly, we look at cacheline utilization, which measures the number of bytes in a cacheline actually used by the processor. This effect has been investigated under various names for a multitude of server and desktop applications. A low cacheline utilization implies that very little of each fetched cacheline is used by the processor, wasting bandwidth and energy in moving data across the memory hierarchy. The energy cost associated with data movement is much higher than that of logic operations, increasing the need for cache efficiency, especially on an energy-constrained platform like a mobile device. We find that the cacheline utilization of mobile workloads is low in general, and decreases when increasing the cacheline size.
When increasing the cacheline size from 64 bytes to 128 bytes, the number of misses is reduced by 10%-30%, depending on the workload. However, because of the low cacheline utilization, this more than doubles the amount of unused traffic to the L1 caches. Using cacheline utilization as a metric in this way illustrates an important point: if a change in cacheline size were assessed only on its local effects, it would appear purely advantageous, since the miss rate decreases. At system level, however, the change increases the stress on the bus and the amount of energy wasted on unused traffic. Using cacheline utilization as a metric underscores the need for system-level research when changing characteristics of the cache hierarchy.
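The cacheline-utilization metric can be computed from an access trace as the fraction of fetched bytes that were actually touched. The sketch below, with its simplified one-fetch-per-distinct-line model, is an illustrative assumption rather than the authors' measurement methodology.

```python
def cacheline_utilization(accesses, line_size):
    """Fraction of bytes in fetched cachelines actually touched.

    accesses: iterable of (address, size) byte accesses.
    Simplified model: each distinct line touched counts as one fetch
    of line_size bytes over the whole trace.
    """
    touched = {}                           # line number -> set of byte offsets used
    for addr, size in accesses:
        for b in range(addr, addr + size):
            touched.setdefault(b // line_size, set()).add(b % line_size)
    fetched = len(touched) * line_size     # bytes moved into the cache
    used = sum(len(offsets) for offsets in touched.values())
    return used / fetched if fetched else 0.0
```

Running the same trace at a larger line size shows the effect the abstract describes: the same touched bytes are spread over bigger fetches, so utilization drops and the unused share of the traffic grows.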

    Pervasive Data Access in Wireless and Mobile Computing Environments

    The rapid advance of wireless and portable computing technology has brought a lot of research interest and momentum to the area of mobile computing. One of the research focuses is pervasive data access: with wireless connections, users can access information at any place at any time. However, various constraints such as limited client capability, limited bandwidth, weak connectivity, and client mobility impose many challenging technical issues. In past years, tremendous research effort has been put into addressing the issues related to pervasive data access, and a number of interesting research results have been reported in the literature. This survey paper reviews important work in two major dimensions of pervasive data access: data broadcast and client caching. In addition, data access techniques aimed at various application requirements (such as time, location, semantics and reliability) are covered.