29 research outputs found

    A performance model of speculative prefetching in distributed information systems

    Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modelling. We use improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). The performance maximization problem is expressed as a stretch knapsack problem. We develop an algorithm to maximize the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration between speculative prefetching and caching is also investigated, albeit under the assumption of equal item sizes.
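    The knapsack-style selection described in this abstract can be illustrated with a toy 0/1-knapsack analogue: choose which items to prefetch, within a time budget, so as to maximize the expected access-time saving. The item names, probabilities, and times below are hypothetical, and the exhaustive search stands in for the paper's search-space-reduction apparatus; the paper's stretch knapsack formulation is more general.

    ```python
    def select_prefetches(items, budget):
        """Choose a subset of items to prefetch within a time budget,
        maximising expected access-time saving (probability * fetch time).

        items: list of (name, probability, fetch_time) tuples.
        budget: time available for speculative prefetching.
        """
        n = len(items)
        best_gain, best_set = 0.0, []
        for mask in range(1 << n):  # exhaustive search; fine for small n
            chosen = [items[i] for i in range(n) if mask >> i & 1]
            cost = sum(t for _, _, t in chosen)
            if cost > budget:
                continue  # does not fit in the prefetch window
            gain = sum(p * t for _, p, t in chosen)
            if gain > best_gain:
                best_gain, best_set = gain, [name for name, _, _ in chosen]
        return best_set, best_gain
    ```

    For example, with a budget of 3 time units, a high-probability large item plus a cheap filler item can beat prefetching the large item alone.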

    Effect of speculative prefetching on network load in distributed systems

    Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper, on the other hand, investigates the performance of speculative prefetching. When prefetching is performed speculatively, there is bound to be an increase in the network load. Furthermore, the prefetched items must compete for space with existing cache occupants. These two factors, increased load and eviction of potentially useful cache entries, are considered in the analysis. We obtain the following conclusion: to maximise the improvement in access time, prefetch exclusively all items with access probabilities exceeding a certain threshold.
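    The concluding policy is simple to state in code. The sketch below is a minimal illustration, assuming a hypothetical mapping of items to predicted access probabilities; the threshold value itself is a placeholder, whereas in the paper it emerges from the analysis of network load and cache eviction.

    ```python
    def threshold_prefetch(access_probs, threshold):
        """Return exactly the items whose predicted access probability
        exceeds `threshold`, per the policy stated in the conclusion.

        access_probs: dict mapping item -> probability of next access.
        """
        return {item for item, p in access_probs.items() if p > threshold}
    ```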

    Innovation capability and its role in enhancing the relationship between TQM practices and innovation performance

    Innovation plays a critical role in predicting the long-term survival of organizations, determining an organization’s success and sustaining its global competitiveness, especially in an environment where technologies, competitive position and customer demands can change almost overnight, and where the life-cycles of products and services are becoming shorter. Therefore, the main purpose of this paper is to extend the existing knowledge as to the relationship between TQM practices and innovation performance by exploring the expected role of innovation capability as a mediator to enhance this relationship. At the same time, this study attempted to shed light on how to improve the innovation performance of manufacturing companies in Malaysia. The results indicated that innovation capability mediates the relationship between TQM practices and innovation performance. More importantly, this study supports the findings of past studies that questioned the role of TQM practices in improving innovation performance. Finally, in light of the obtained results, several recommendations were introduced to assist decision makers in manufacturing companies.

    Overview of transient liquid phase and partial transient liquid phase bonding


    Investigation of a prefetch model for low bandwidth networks

    We investigate speculative prefetching under a model in which prefetching is neither aborted nor preempted by demand fetch but instead gets equal priority in network bandwidth utilisation. We argue that the non-abortive assumption is appropriate for wireless networks where bandwidth is low and latency is high, and the non-preemptive assumption is appropriate for the Internet where prioritization is not always possible. This paper assumes the existence of an access model to provide some knowledge about future accesses and investigates analytically the performance of a prefetcher that utilises this knowledge. In mobile computing, because resources are severely constrained, performance prediction is as important as access prediction. For uniform retrieval time, we derive a theoretical limit of improvement in access time due to prefetching. This leads to the formulation of an optimal algorithm for prefetching one access ahead. For non-uniform retrieval time, two different types of prefetching of multiple documents, namely mainline and branch prefetch, are evaluated against prefetch of a single document. In mainline prefetch, the most probable sequence of future accesses is prefetched. In branch prefetch, a set of different alternatives for future accesses is prefetched. Under some conditions, mainline prefetch may give slight improvement in user-perceived access time over single prefetch with nominal extra retrieval cost, where retrieval cost is defined as the expected network time wasted in non-useful prefetch. Branch prefetch performs better than mainline prefetch but incurs more retrieval cost.
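    The distinction between the two multi-document strategies can be sketched as follows. Here future accesses are modelled as a hypothetical probability tree mapping each candidate item to its access probability and its own subtree of successors; the structure and numbers are illustrative, not the paper's model.

    ```python
    def mainline_prefetch(candidates, depth):
        """Mainline prefetch: follow the single most probable chain of
        future accesses, up to `depth` items.

        candidates: dict mapping item -> (probability, successor dict).
        """
        chain, node = [], candidates
        for _ in range(depth):
            if not node:
                break
            item, (p, children) = max(node.items(), key=lambda kv: kv[1][0])
            chain.append(item)
            node = children  # descend along the most probable branch
        return chain

    def branch_prefetch(candidates, k):
        """Branch prefetch: fetch the k most probable alternatives for
        the immediate next access, rather than one deep chain."""
        ranked = sorted(candidates.items(), key=lambda kv: kv[1][0],
                        reverse=True)
        return [item for item, _ in ranked[:k]]
    ```

    With the same budget of two documents, mainline prefetch commits to one deep path while branch prefetch hedges across siblings, which matches the trade-off described above: branch prefetch covers more alternatives but wastes more network time when the chain guess would have been right.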

    Performance modelling of speculative prefetching for compound requests in low bandwidth networks

    To improve the accuracy of access prediction, a prefetcher for web browsing should recognize the fact that a web page is a compound. By this term we mean that a user request for a single web page may require the retrieval of several multimedia items. Our prediction algorithm builds an access graph that captures the dynamics of web navigation rather than merely attaching probabilities to hypertext structure. When it comes to making prefetch decisions, most previous studies in speculative prefetching resort to simple heuristics, such as prefetching an item with access probabilities larger than a manually tuned threshold. The paper takes a different approach. Specifically, it models the performance of the prefetcher and develops a prefetch policy based on a theoretical analysis of the model. In the analysis, we derive a formula for the expected improvement in access time when prefetch is performed in anticipation of a compound request. We then develop an algorithm that integrates prefetch and cache replacement decisions so as to maximize this improvement. We present experimental results to demonstrate the effectiveness of compound-based prefetching in low bandwidth networks.
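    The compound view can be illustrated with a small sketch: the expected saving from prefetching a predicted page aggregates over its embedded items, discounting anything already cached. The function, item names, and fetch times below are hypothetical simplifications, not the paper's derived formula.

    ```python
    def expected_improvement(page_prob, item_times, already_cached):
        """Expected access-time saving from prefetching a compound page:
        the page's access probability times the total fetch time of its
        component items that are not yet in the cache.

        page_prob: probability the page is requested next.
        item_times: dict mapping component item -> fetch time.
        already_cached: set of items already resident in the cache.
        """
        return page_prob * sum(t for item, t in item_times.items()
                               if item not in already_cached)
    ```

    This is why treating the page as a compound matters: a page whose components are mostly cached contributes little expected saving even if its access probability is high.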

    Resource-aware speculative prefetching in wireless networks

    Mobile users connected to wireless networks expect performance comparable to that of wired networks for interactive multimedia applications. Satisfying Quality of Service (QoS) requirements for such applications in wireless networks is a challenging problem due to limitations of low bandwidth, high error rate and frequent disconnections of wireless channels. In addition, wireless networks suffer from varying bandwidth. In this paper we investigate object prefetching during times of connectedness and bandwidth availability to enhance user-perceived connectedness. This paper presents an access model that is suitable for multimedia access in wireless networks. Access modelling for the purpose of predicting future accesses in the context of speculative prefetching has received much attention in the literature. The model recognizes that a web page, instead of just a single file, is typically a compound of several files. When it comes to making prefetch decisions, most previous studies in speculative prefetching resort to simple heuristics, such as prefetching an item with access probabilities larger than a manually tuned threshold. This paper takes a different approach. Specifically, it models the performance of the prefetcher, taking into account access predictions and resource parameters, and develops a prefetch policy based on a theoretical analysis of the model. Since the analysis considers cache as one of the resource parameters, the resulting policy integrates prefetch and cache replacement decisions. The paper investigates the effect of prefetching on network load. In order to make effective use of available resources and maximize access improvement, it is beneficial to prefetch all items with access probabilities exceeding a certain threshold.