
    Hit and Bandwidth Optimal Caching for Wireless Data Access Networks

    For many data access applications, the availability of the most up-to-date information is a fundamental and rigid requirement. Despite many technological improvements, wireless channels (or bandwidth) remain the scarcest, and hence most expensive, resource in wireless networks. Data access from remote sites depends heavily on these expensive resources. Owing to affordable smart mobile devices and the tremendous popularity of various Internet-based services, demand for data from these mobile devices is growing very fast. In many cases, it is becoming impossible for wireless data service providers to satisfy the demand for data using current network infrastructures. An efficient caching scheme at the client side can ease the problem by reducing the amount of data transferred over the wireless channels. However, an update event makes the associated cached data objects obsolete and useless for the applications. The frequencies of data updates, as well as of data accesses, play essential roles in cache access and replacement policies. Intuitively, frequently accessed but infrequently updated objects should be given higher preference for preservation in the cache. However, modeling this intuition is challenging, particularly in a network environment where updates are injected by both the server and the clients distributed across the network. In this thesis, we strive to make three inter-related contributions. Firstly, we propose two enhanced cache access policies. The access policies ensure strong consistency of the cached data objects through proactive or reactive interactions with the data server. At the same time, these policies collect information about the access and update frequencies of hosted objects to facilitate efficient deployment of the cache replacement policy. Secondly, we design a replacement policy which acts as the decision maker when a new object must be accommodated in a fully occupied cache. The statistical information collected by the access policies drives this decision-making process, which is modeled around the idea of preserving frequently accessed but less frequently updated objects in the cache. Thirdly, we show analytically that a cache management scheme combining the proposed replacement policy with either of the cache access policies guarantees an optimal amount of data transmission by increasing the number of effective hits in the cache system. Results from both analysis and our extensive simulations demonstrate that the proposed policies outperform the popular Least Frequently Used (LFU) policy in terms of both effective hits and bandwidth consumption. Moreover, our flexible system model makes the proposed policies equally applicable to applications for existing 3G, as well as upcoming LTE, LTE Advanced and WiMAX wireless data access networks.
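
    The replacement idea described in this abstract can be summarised in a few lines of code. The following is a minimal illustrative sketch, not the thesis's actual algorithm: the class and method names, and the simple accesses/(1 + updates) score, are assumptions introduced here.

```python
from dataclasses import dataclass


@dataclass
class CacheEntry:
    key: str
    value: object
    accesses: int = 1   # access frequency observed by the access policy
    updates: int = 0    # update frequency (server- or client-injected)


class FrequencyAwareCache:
    """Keeps frequently accessed but infrequently updated objects."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: dict[str, CacheEntry] = {}

    def _score(self, e: CacheEntry) -> float:
        # Higher score = more worth keeping: many accesses, few updates.
        return e.accesses / (1 + e.updates)

    def access(self, key: str):
        e = self.entries.get(key)
        if e is not None and e.value is not None:
            e.accesses += 1
            return e.value
        return None  # miss: caller fetches from the server, then calls put()

    def invalidate(self, key: str):
        # An update event makes the cached copy obsolete.
        e = self.entries.get(key)
        if e is not None:
            e.updates += 1
            e.value = None

    def put(self, key: str, value: object):
        if key in self.entries:                 # refresh an invalidated entry
            self.entries[key].value = value
            return
        if len(self.entries) >= self.capacity:  # evict the lowest-scoring object
            victim = min(self.entries.values(), key=self._score)
            del self.entries[victim.key]
        self.entries[key] = CacheEntry(key, value)
```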

    An integrated soft- and hard-programmable multithreaded architecture


    Contributions to Time-bounded Problem Solving Using Knowledge-based Techniques

    Time-bounded computations represent a major challenge for knowledge-based techniques. Being primarily non-algorithmic in nature, such techniques suffer from an obvious open-endedness, in the sense that the demands on time and other resources for a particular task cannot be predicted in advance. Consequently, the efficiency of traditional knowledge-based techniques in solving time-bounded problems is not at all guaranteed. Artificial Intelligence researchers working in real-time problem solving have generally tried to avoid this difficulty by improving the speed of computation (through code optimisation or dedicated hardware) or by using heuristics. However, most of these shortcuts are likely to be inappropriate or unsuitable in complicated real-time applications, so there is a need for more systematic and/or general measures. We propose a two-fold improvement over traditional knowledge-based techniques for tackling this problem. Firstly, a cache-based architecture should be used in choosing the best alternative approach (when there are two or more) compatible with the time constraints. This cache differs from traditional caches used in other branches of computer science in that it can hold not just "ready to use" values but also knowledge suggesting which AI technique will be most suitable to meet a temporal demand in a given context. The second improvement is in processing the cached knowledge itself. We propose a technique which can be called "knowledge interpolation" and which can be applied to different forms of knowledge (such as symbolic values, rules and cases) when the keys used for cache access do not exactly match the labels of any cell of the cache. The research reported in this thesis comprises the development of the cache-based architecture and the interpolation techniques, studies of their requisites and representational issues, and their complementary roles in achieving time-bounded performance. Ground operations control of an airport and allocating resources for short-wave radio communications are the two domains in which our proposed methods are studied.
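
    To make the "knowledge interpolation" idea concrete, the sketch below shows a cache that blends its two nearest cells when a lookup key has no exact match. It assumes numeric keys and values and a linear interpolation rule; these are illustrative assumptions, not the technique developed in the thesis, which also covers symbolic values, rules and cases.

```python
from bisect import bisect_left
from typing import Optional


class InterpolatingKnowledgeCache:
    """Cache that interpolates when a lookup key has no exact match."""

    def __init__(self):
        self.keys = []    # sorted cache labels (numeric, by assumption)
        self.values = []  # cached results aligned with self.keys

    def store(self, key: float, value: float):
        i = bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            self.values[i] = value            # overwrite an existing cell
        else:
            self.keys.insert(i, key)          # keep labels sorted
            self.values.insert(i, value)

    def lookup(self, key: float) -> Optional[float]:
        if not self.keys:
            return None
        i = bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]             # exact hit
        if i == 0 or i == len(self.keys):
            return None                       # outside the cached range
        k0, k1 = self.keys[i - 1], self.keys[i]
        v0, v1 = self.values[i - 1], self.values[i]
        w = (key - k0) / (k1 - k0)
        return v0 + w * (v1 - v0)             # blend the two nearest cells
```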

    The effect of an optical network on-chip on the performance of chip multiprocessors

    Optical networks on-chip (ONoCs) have been proposed to reduce power consumption and increase bandwidth density in high performance chip multiprocessors (CMPs), compared to electrical NoCs. However, since buffering in an ONoC is not viable, the end-to-end message path needs to be acquired in advance, during which time the message is buffered at the network ingress. This waiting latency is therefore a combination of path setup latency and contention, and forms a significant part of the total message latency. Many proposed ONoCs, such as Single Writer, Multiple Reader (SWMR), avoid path setup latency at the expense of an increased optical component count. In contrast, this thesis investigates a simple circuit-switched ONoC with a lower component count, in which nodes need to request a channel before transmission. To hide the path setup latency, a coherence-based message predictor is proposed that sets up circuits before message arrival. Firstly, the effect of latency and bandwidth on application performance is thoroughly investigated using full-system simulations of shared memory CMPs. It is shown that the latency of an ideal NoC affects CMP performance more than the NoC bandwidth. Increasing the number of wavelengths per channel decreases the serialisation latency and improves the performance of both ONoC types. With 2 or more wavelengths modulating at 25 Gbit/s, the ONoCs outperform a conventional electrical mesh (maximal speedup of 20%), and the SWMR ONoC outperforms the circuit-switched ONoC. Next, coherence-based prediction techniques are proposed to reduce the waiting latency. The ideal coherence-based predictor reduces the waiting latency by 42%; a more streamlined predictor (smaller than an L1 cache) reduces it by 31%. Without prediction, the message latency in the circuit-switched ONoC is 11% larger than in the SWMR ONoC. Applying the realistic predictor reverses this: the message latency in the SWMR ONoC is then 18% larger than in the predictive circuit-switched ONoC.
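
    The sketch below illustrates, in simplified form, how a coherence-based predictor could pre-establish circuits in a circuit-switched ONoC: on observing a coherence request at its destination, it requests the channel for the expected data reply before that reply is injected. The message names (GETS/GETX), the prediction rule and the interface are assumptions made for illustration, not the predictor designed in the thesis.

```python
from collections import deque


class CircuitPredictor:
    """Pre-establishes optical circuits predicted from coherence traffic."""

    def __init__(self, noc):
        self.noc = noc                   # assumed to expose request_channel(src, dst)
        self.pending = deque(maxlen=64)  # circuits set up speculatively

    def observe(self, msg_type: str, src: int, dst: int):
        """Called when a coherence message from src is delivered at dst."""
        if msg_type in ("GETS", "GETX"):
            # A read/write request arriving at the home node usually triggers
            # a data reply in the opposite direction: set that circuit up now,
            # so the reply does not wait for path setup at the network ingress.
            self.noc.request_channel(src=dst, dst=src)
            self.pending.append((dst, src))

    def on_injection(self, src: int, dst: int) -> bool:
        """True if a circuit was already set up for this outgoing message."""
        if (src, dst) in self.pending:
            self.pending.remove((src, dst))
            return True                  # waiting latency hidden by prediction
        return False                     # fall back to an ordinary channel request
```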

    Mobile IP movement detection optimisations in 802.11 wireless LANs

    The IEEE 802.11 standard was developed to support the establishment of highly flexible wireless local area networks (wireless LANs). However, when an 802.11 mobile node moves from a wireless LAN on one IP network to a wireless LAN on a different network, an IP layer handoff occurs. During the handoff, the mobile node's IP settings must be updated in order to re-establish its IP connectivity at the new point of attachment. The Mobile IP protocol allows a mobile node to perform an IP handoff without breaking its active upper-layer sessions. Unfortunately, these handoffs introduce large latencies into a mobile node's traffic, during which packets are lost. As a result, the mobile node's upper-layer sessions and applications suffer significant disruptions due to this handoff latency. One of the main components of a Mobile IP handoff is the movement detection process, whereby a mobile node senses that it is attached to a new IP network. This procedure contributes significantly to the total Mobile IP handoff latency and the resulting disruption. This study investigates different mechanisms that aim to lower movement detection delays and thereby improve Mobile IP performance. These mechanisms are considered specifically within the context of 802.11 wireless LANs. In general, a mobile node detects attachment to a new network when a periodic IP level broadcast (advertisement) is received from that network. It will be shown that eliminating this dependence on periodic advertisements, and relying instead on external information from the 802.11 link layer, results in both faster and more efficient movement detection. Furthermore, a hybrid system is proposed that incorporates several techniques to ensure that movement detection performs reliably within a variety of different network configurations. An evaluation framework is designed and implemented that supports the assessment of a wide range of movement detection mechanisms. This test bed allows Mobile IP handoffs to be analysed in detail, with specific focus on the movement detection process. The performance of several movement detection optimisations is compared using handoff latency and packet loss as metrics. The evaluation framework also supports real-time Voice over IP (VoIP) traffic, which is used to ascertain the effects that different movement detection techniques have on the output voice quality. These evaluations provide not only a quantitative performance analysis of these movement detection mechanisms, but also a qualitative assessment based on a VoIP application.
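
    A minimal sketch of the contrast between advertisement-driven and link-layer-assisted movement detection is given below: instead of waiting for the next periodic agent advertisement, the node reacts to an 802.11 reassociation event and immediately solicits one. The class, method and event names are illustrative assumptions, not the hybrid system implemented in this work.

```python
class MovementDetector:
    """Detects attachment to a new IP network after an 802.11 handoff."""

    def __init__(self, mip_client):
        self.mip = mip_client       # assumed to expose solicit() and register(prefix)
        self.current_ap = None      # BSSID of the current access point
        self.current_prefix = None  # network prefix of the current agent

    def on_advertisement(self, prefix: str):
        # Classic detection: a periodic advertisement carrying a new prefix
        # means the node has moved to a new IP network.
        if prefix != self.current_prefix:
            self.current_prefix = prefix
            self.mip.register(prefix)

    def on_reassociation(self, new_ap: str):
        # Link-layer trigger: the 802.11 reassociation has completed, so
        # solicit an advertisement immediately instead of waiting for the
        # next periodic broadcast.
        if new_ap != self.current_ap:
            self.current_ap = new_ap
            self.mip.solicit()
```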