121 research outputs found

    Distributed proxy cache replacement algorithm for improving web server performance

    The performance of web processing needs to increase to keep pace with the growth of Internet usage, and one way to do so is to use a cache on the web proxy server. This study examines the implementation of a proxy cache replacement algorithm to increase cache hits in the proxy server. The study was conducted by building a clustered, or distributed, web server system using eight web server nodes. The system improved latency by 90% and increased throughput by a factor of 5.33.
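
    The abstract does not name the specific replacement policy or how requests are routed across the eight nodes, so the following is a minimal illustrative sketch only: it assumes hash-based routing to eight nodes, each running a simple LRU cache, and the class names and capacity are hypothetical.

    # Illustrative sketch only: assumes hash-based request routing across
    # eight proxy nodes, each holding a simple LRU cache keyed by URL.
    from collections import OrderedDict
    import hashlib

    class LRUProxyCache:
        def __init__(self, capacity=1000):
            self.capacity = capacity
            self.entries = OrderedDict()  # url -> cached response body

        def get(self, url):
            if url in self.entries:
                self.entries.move_to_end(url)   # mark as most recently used
                return self.entries[url]        # cache hit
            return None                         # cache miss

        def put(self, url, body):
            self.entries[url] = body
            self.entries.move_to_end(url)
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least recently used

    NODES = [LRUProxyCache() for _ in range(8)]   # eight proxy/web server nodes

    def route(url):
        # Map each URL to one of the eight nodes by hashing it.
        idx = int(hashlib.md5(url.encode()).hexdigest(), 16) % len(NODES)
        return NODES[idx]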

    CACHE MANAGEMENT SCHEMES FOR USER EQUIPMENT CONTEXTS IN 5TH GENERATION CLOUD RADIO ACCESS NETWORKS

    Advances in cellular network technology continue to develop to address increasing demands from the growing number of devices resulting from the Internet of Things, or IoT. IoT has brought forth countless new devices competing for service on cellular networks. The latest in cellular technology is 5th Generation Cloud Radio Access Networks, or 5G C-RAN, an architectural design created specifically to meet novel and necessary requirements for better performance, reduced latency of service, and scalability. Because this design includes a virtual cache, useful cache management schemes and protocols are needed, which ultimately provide users with better performance on the cellular network. This paper explores several cache management schemes and analyzes their performance in comparison to each other. They include a probability-based scoring scheme for cache elements; a hierarchical, or tiered, approach aimed at separating the cache into different levels or sections; and enhancements to previously existing approaches, including reverse random marking as well as a scheme based on an exponential decay model. These schemes aim to offer better hit ratios, reduced latency of request service, preferential treatment based on users’ service levels and mobility, and a reduction in network traffic compared to other traditional and classic caching mechanisms.
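
    Of the schemes listed above, the exponential decay model lends itself to a short sketch. The scoring formula, decay rate, and class name below are assumptions made for illustration, not the paper's actual scheme.

    # Illustrative sketch of score-based eviction using an exponential decay
    # model: recently and frequently accessed entries score highest, and
    # scores decay with time since the last access.
    import math
    import time

    class DecayCache:
        def __init__(self, capacity=100, decay_rate=0.01):
            self.capacity = capacity
            self.decay_rate = decay_rate
            self.store = {}   # key -> (value, hit_count, last_access_time)

        def _score(self, hits, last_access, now):
            # Assumed scoring formula: hits weighted by exponential decay.
            return hits * math.exp(-self.decay_rate * (now - last_access))

        def get(self, key):
            now = time.time()
            if key in self.store:
                value, hits, _ = self.store[key]
                self.store[key] = (value, hits + 1, now)
                return value
            return None

        def put(self, key, value):
            now = time.time()
            if len(self.store) >= self.capacity and key not in self.store:
                # Evict the entry with the lowest decayed score.
                victim = min(self.store,
                             key=lambda k: self._score(self.store[k][1],
                                                       self.store[k][2], now))
                del self.store[victim]
            self.store[key] = (value, 1, now)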

    On the design of efficient caching systems

    Content distribution is currently the prevalent Internet use case, accounting for the majority of global Internet traffic and growing exponentially. There is general consensus that the most effective method to deal with the large amount of content demand is the deployment of massively distributed caching infrastructures as the means to localise content delivery traffic. Solutions based on caching have already been widely deployed through Content Delivery Networks. Ubiquitous caching is also a fundamental aspect of the emerging Information-Centric Networking paradigm, which aims to rethink the current Internet architecture for long-term evolution. Distributed content caching systems are expected to grow substantially in the future, in terms of both footprint and traffic carried, and, as such, will become substantially more complex and costly. This thesis addresses the problem of designing scalable and cost-effective distributed caching systems that will be able to efficiently support the expected massive growth of content traffic, and it makes three distinct contributions. First, it produces an extensive theoretical characterisation of sharding, a widely used technique that allocates data items to the resources of a distributed system according to a hash function. Based on the findings unveiled by this analysis, two systems are designed that contribute to the above objective. The first is a framework and related algorithms for efficient load-balanced content caching. This solution provides qualitative advantages over previously proposed solutions, such as ease of modelling and availability of knobs to fine-tune performance, as well as quantitative advantages, such as a 2x increase in cache hit ratio and a 19-33% reduction in load imbalance while maintaining latency comparable to other approaches. The second is the design and implementation of a caching node enabling 20 Gbps speeds based on inexpensive commodity hardware. We believe these contributions significantly advance the state of the art in distributed caching systems.
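
    As a minimal sketch of the sharding technique characterised above, the snippet below assigns content keys to cache nodes with a hash function and reports a simple load-imbalance ratio; the node count and the imbalance metric are illustrative assumptions, not the thesis's actual framework.

    # Sharding sketch: items are mapped to cache nodes by hashing their keys.
    import hashlib
    from collections import Counter

    def shard(item_key, num_nodes):
        # Hash the item key and map it to one of the cache nodes.
        digest = hashlib.sha256(item_key.encode()).hexdigest()
        return int(digest, 16) % num_nodes

    def load_imbalance(item_keys, num_nodes):
        # Ratio of the most loaded node to the average load; 1.0 is perfect balance.
        counts = Counter(shard(k, num_nodes) for k in item_keys)
        average = len(item_keys) / num_nodes
        return max(counts.values()) / average

    # Example: distribute 100,000 synthetic content identifiers over 16 nodes.
    keys = [f"/content/{i}" for i in range(100_000)]
    print(load_imbalance(keys, 16))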

    A Web Cache Replacement Strategy for Safety-Critical Systems

    A Safety-Critical System (SCS), such as a spacecraft, is usually a complex system. It produces a large amount of test data during a comprehensive testing process, and this data is often managed by a comprehensive test data query system. The primary factor affecting the management experience of such a system is the performance of querying the test data, and it is a big challenge to manage and maintain the huge and complex test data. To address this challenge, a web cache replacement algorithm that can effectively improve query performance and reduce network latency is needed. However, a general-purpose web cache replacement algorithm usually cannot be directly applied to this type of system due to its low hit rate and low byte hit rate. In order to improve the hit rate and byte hit rate, data stream mining technology is introduced, and a new web cache algorithm, GDSF-DST (Greedy Dual-Size Frequency with Data Stream Technology), is proposed for the Safety-Critical System (SCS) based on the original GDSF algorithm. The experimental results show that, compared with traditional state-of-the-art algorithms, GDSF-DST achieves competitive performance and improves the hit rate and byte hit rate by about 20%.
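
    The abstract names GDSF as the base policy. The sketch below shows only the standard GDSF priority and eviction rule; the data-stream-mining extensions of GDSF-DST are not described in the abstract and are omitted, and the retrieval cost is assumed constant for simplicity.

    # Baseline GDSF (Greedy Dual-Size Frequency): each object gets priority
    # H = L + F * C / S, where L is an inflation clock, F the access
    # frequency, C the (assumed constant) cost, and S the object size.
    class GDSFCache:
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.clock = 0.0                  # inflation value L
            self.objects = {}                 # key -> (size, freq, priority)

        def _priority(self, freq, size, cost=1.0):
            return self.clock + freq * cost / size

        def get(self, key):
            if key in self.objects:
                size, freq, _ = self.objects[key]
                freq += 1
                self.objects[key] = (size, freq, self._priority(freq, size))
                return True                   # hit
            return False                      # miss

        def put(self, key, size):
            if size > self.capacity:
                return                        # object too large to cache
            while self.used + size > self.capacity and self.objects:
                # Evict the lowest-priority object and inflate the clock.
                victim = min(self.objects, key=lambda k: self.objects[k][2])
                v_size, _, v_priority = self.objects.pop(victim)
                self.used -= v_size
                self.clock = v_priority
            self.objects[key] = (size, 1, self._priority(1, size))
            self.used += size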

    The structure-sensitivity of memory access: evidence from Mandarin Chinese

    The present study examined the processing of the Mandarin Chinese long-distance reflexive ziji to evaluate the role that syntactic structure plays in the memory retrieval operations that support sentence comprehension. Using the multiple-response speed-accuracy tradeoff (MR-SAT) paradigm, we measured the speed with which comprehenders retrieve an antecedent for ziji. Our experimental materials contrasted sentences where ziji's antecedent was in the local clause with sentences where ziji's antecedent was in a distant clause. Time course results from MR-SAT suggest that ziji dependencies with syntactically distant antecedents are slower to process than syntactically local dependencies. To aid in interpreting the SAT data, we present a formal model of the antecedent retrieval process and derive quantitative predictions about the time course of antecedent retrieval. The modeling results support the Local Search hypothesis: during syntactic retrieval, comprehenders initially limit memory search to the local syntactic domain. We argue that the Local Search hypothesis has important implications for theories of locality effects in sentence comprehension. In particular, our results suggest that not all locality effects may be reduced to the effects of temporal decay and retrieval interference.
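
    The abstract does not reproduce the model's equations. For reference only, time-course data in the speed-accuracy tradeoff paradigm are conventionally fitted with a shifted exponential approach to an asymptote, and slower retrieval of a distant antecedent would surface as a smaller rate or a later intercept:

    d'(t) = \lambda \left(1 - e^{-\beta (t - \delta)}\right) \text{ for } t > \delta, \text{ and } 0 \text{ otherwise,}

    where \lambda is asymptotic accuracy, \beta is the rate at which accuracy grows toward the asymptote, and \delta is the intercept at which accuracy first departs from chance.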