Client-Side Page Element Web-Caching
When a user explores different pages of a website, the server typically sends each requested page in full, even if only a portion of it differs from the page currently displayed. That is, two pages on the same website may share elements such as the search bar, sidebar, navigation controls, and advertisements, yet this shared content is retransmitted with every request. Users spend most of their time on the front end, waiting for all of a page's components to download. Nowadays, server-side caching of page elements is often done with tools like memcached. The aim of this project is to explore element-level web page caching on the client side: our goal is to develop a system that caches the most common HTML parts of a website's pages and reuses them in subsequent pages, reducing the amount of data transmitted. A similar effect may currently be attainable using frames or object tags; however, the UI semantics of these tags differ from those of a single integrated HTML file and could cause usability issues. We therefore want to explore solutions that are transparent to the end user -- the solution must behave just like a single, fixed web page. To assess the benefit of client-side caching and determine its effect on response time, we made our server setup as realistic as possible; in particular, Squid was used as a front-end load balancer when we tested our client-side caching.
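The idea of reusing shared page elements can be sketched as a small simulation. This is a hypothetical illustration, not the project's actual system: fragments are keyed by a content hash, and only fragments not already cached count as transmitted.

```python
import hashlib

# Hypothetical sketch: a client-side cache keyed by the hash of each HTML
# fragment. On a new page, only fragments not already cached need to be
# fetched; shared elements (nav bar, sidebar, ads) are reused locally.

class FragmentCache:
    def __init__(self):
        self.store = {}           # fragment hash -> fragment HTML
        self.bytes_saved = 0      # bytes served from cache instead of the network

    def key(self, fragment: str) -> str:
        return hashlib.sha256(fragment.encode()).hexdigest()

    def assemble(self, fragments):
        """Given a page as a list of fragments, return the full page HTML,
        counting bytes that did not need to be retransmitted."""
        page = []
        for frag in fragments:
            k = self.key(frag)
            if k in self.store:
                self.bytes_saved += len(frag)   # reused, not retransmitted
            else:
                self.store[k] = frag            # first sight: "download" it
            page.append(self.store[k])
        return "".join(page)

cache = FragmentCache()
page1 = cache.assemble(["<nav>menu</nav>", "<main>home</main>"])
page2 = cache.assemble(["<nav>menu</nav>", "<main>about</main>"])
print(cache.bytes_saved)  # 15 -- the shared <nav> fragment reused on page2
```

The reassembled page is a single HTML document, so the result stays transparent to the user, unlike frames or object tags.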
Evaluating the impact of caching on the energy consumption and performance of progressive web apps
Context. Since today's mobile devices have limited battery life, the energy consumption of the software running on them can play a strong role in the success of mobile-based businesses. Progressive Web Applications (PWAs) are built using common web technologies like HTML, CSS, and JavaScript and are commonly used to provide a better user experience to mobile users. Caching is the main technique used by PWA developers for optimizing network usage and for providing a meaningful experience even when the user's device is offline. Goal. This paper aims at assessing the impact of caching on both the energy consumption and performance of PWAs. Method. We conducted an empirical experiment targeting 9 real PWAs developed by third-party developers. The experiment is designed as a 1-factor, 2-treatments study, with the usage of caching as the single factor and the status of the cache as treatments (empty vs. populated cache). The response variables of the experiment are (i) the energy consumption of the mobile device and (ii) the page load time of the PWAs. The experiment is executed on a real Android device running the Mozilla Firefox browser. Results. Our results show that PWAs do not consume significantly different amounts of energy when loaded with either an empty or a populated cache. However, the page load time of PWAs is significantly lower when the cache is already populated, with a medium effect size. Conclusions. This study confirms that PWAs are promising in terms of energy consumption and provides evidence that caching can be safely exploited by PWA developers concerned with energy consumption. The study also provides empirical evidence that caching is an effective technique for improving the user experience in terms of page load time of PWAs.
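The empty-versus-populated-cache comparison can be illustrated with a toy cache-first loading model. This is not the paper's experimental setup: resource names and the per-fetch round-trip cost below are invented for illustration.

```python
# Toy model (not the paper's setup): cache-first loading where each network
# fetch costs one round trip and a cache hit costs none. Resource names and
# the 100 ms round-trip time are made-up illustrative values.

def load_page(resources, cache, rtt_ms=100):
    """Return simulated page load time in milliseconds."""
    elapsed = 0
    for res in resources:
        if res not in cache:
            cache[res] = True      # fetch over the network, then cache
            elapsed += rtt_ms
    return elapsed

resources = ["app.html", "app.js", "app.css", "logo.png"]

cache = {}
t_empty = load_page(resources, cache)   # empty cache: every resource hits the network
t_warm = load_page(resources, cache)    # populated cache: every resource is a hit
print(t_empty, t_warm)  # 400 0
```

Even this crude model reflects the paper's finding on the load-time side: a populated cache removes network round trips, so page load time drops sharply.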
DOH: A Content Delivery Peer-to-Peer Network
Many SMEs and non-profit organizations suffer when their Web servers become unavailable due to flash-crowd effects when their web site becomes popular. One solution to the flash-crowd problem is to place the web site on a scalable CDN (Content Delivery Network) that replicates the content and distributes the load in order to improve its response time. In this paper, we present our approach to building a scalable Web hosting environment as a CDN on top of a structured peer-to-peer system of collaborative web servers integrated to share the load and to improve overall system performance, scalability, availability, and robustness. Unlike cluster-based solutions, it can run on heterogeneous hardware over geographically dispersed areas. To validate and evaluate our approach, we have developed a system prototype called DOH (DKS Organized Hosting), a CDN implemented on top of the DKS (Distributed K-nary Search) structured P2P system with DHT (Distributed Hash Table) functionality [9]. The prototype is implemented in Java, using the DKS middleware, the Jetty web server, and a modified JavaFTP server. The proposed CDN design has been evaluated by simulation and by experiments on the prototype.
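The core DHT idea behind such a system can be sketched with consistent hashing. This is an illustrative stand-in, not the actual DKS protocol, and the node names are hypothetical: each hosted URL is hashed onto a ring, and the first node clockwise from that point serves it, so every peer resolves the same owner without a central directory.

```python
import hashlib
from bisect import bisect_right

# Illustrative DHT-style placement (not the actual DKS protocol): hash both
# node names and content keys onto a 32-bit ring; the owner of a key is the
# first node at or after the key's position, wrapping around.

def h(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % 2**32

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def lookup(self, key: str) -> str:
        positions = [p for p, _ in self.points]
        i = bisect_right(positions, h(key)) % len(self.points)
        return self.points[i][1]

# Hypothetical peer names; any peer computes the same owner for a URL.
ring = Ring(["server-a", "server-b", "server-c"])
owner = ring.lookup("http://example.org/index.html")
print(owner in {"server-a", "server-b", "server-c"})  # True
```

Because placement is purely a function of the hash, adding or removing a peer only remaps the keys adjacent to it on the ring, which is what makes such systems run well on heterogeneous, geographically dispersed hardware.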
An Optimal Trade-off between Content Freshness and Refresh Cost
Caching is an effective mechanism for reducing bandwidth usage and alleviating server load. However, the use of caching entails a compromise between content freshness and refresh cost. Excessive refreshing achieves a high degree of content freshness but at a greater cost in system resources; conversely, deficient refreshing sacrifices content freshness but saves resource usage. To address this freshness-cost problem, we formulate the refresh-scheduling problem with a generic cost model and use it to determine the optimal refresh frequency that gives the best trade-off between refresh cost and content freshness. We prove the existence and uniqueness of an optimal refresh frequency under the assumptions that content updates arrive according to a Poisson process and that the age-related cost increases monotonically with decreasing freshness. In addition, we provide an analytic comparison of system performance under fixed refresh scheduling and random refresh scheduling, showing that, given the same average refresh frequency, the two scheduling policies are mathematically equivalent in terms of the long-run average cost.
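The freshness-cost trade-off can be made concrete with a simplified stand-in for the paper's generic cost model. The symbols below (refresh cost rate c_r, age-cost weight c_a, Poisson update rate lam) and the specific age term are illustrative assumptions, not the paper's formulation: total cost C(f) = c_r·f + c_a·lam/(2f) is convex in the refresh frequency f, so it has a unique minimizer f* = sqrt(c_a·lam / (2·c_r)).

```python
import math

# Simplified illustrative cost model (not the paper's generic one):
#   C(f) = c_r * f            cost of refreshing at frequency f
#        + c_a * lam / (2*f)  age-related cost under Poisson updates of rate lam
# C is convex in f > 0, so the optimum f* = sqrt(c_a * lam / (2 * c_r)) is unique.

def cost(f, c_r, c_a, lam):
    return c_r * f + c_a * lam / (2 * f)

def optimal_frequency(c_r, c_a, lam):
    return math.sqrt(c_a * lam / (2 * c_r))

c_r, c_a, lam = 1.0, 4.0, 2.0            # made-up parameter values
f_star = optimal_frequency(c_r, c_a, lam)  # sqrt(4 * 2 / 2) = 2.0
# Sanity check: the analytic optimum beats nearby frequencies.
assert cost(f_star, c_r, c_a, lam) <= cost(f_star * 0.9, c_r, c_a, lam)
assert cost(f_star, c_r, c_a, lam) <= cost(f_star * 1.1, c_r, c_a, lam)
print(f_star)  # 2.0
```

The convexity is what drives the existence-and-uniqueness result: the refresh term grows linearly in f while the age term shrinks in f, so the two cost components cross at exactly one balance point.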
WebWave: Globally Load Balanced Fully Distributed Caching of Hot Published Documents
Document publication service over a network as large as the Internet challenges us to harness available server and network resources to meet fast-growing demand. In this paper, we show that large-scale dynamic caching can be employed to globally minimize server idle time, and hence maximize the aggregate server throughput of the whole service. To be efficient, scalable and robust, a successful caching mechanism must have three properties: (1) maximize the global throughput of the system, (2) find cache copies without recourse to a directory service or to a discovery protocol, and (3) be completely distributed in the sense of operating only on the basis of local information.
In this paper, we develop a precise definition, which we call tree load-balance (TLB), of what it means for a mechanism to satisfy these three goals. We present an algorithm that computes TLB off-line, and a distributed protocol that induces a load distribution that converges quickly to a TLB one. Both algorithms place cache copies of immutable documents on the routing tree that connects the cached document's home server to its clients, thus enabling requests to stumble on cache copies en route to the home server.
Harvard University; The Saudi Cultural Mission to the U.S.A.
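The "stumble on cache copies en route" behaviour can be sketched as a walk up a routing tree. The topology and node names below are hypothetical, not WebWave's: a request travels from a client toward the document's home server and is satisfied by the first node on the path that holds a cache copy.

```python
# Hypothetical routing tree rooted at the home server (node names invented):
# client -> edge -> regional -> home. Cache copies sit on this path, so a
# request is served by the first copy it encounters on the way up.

parent = {"client": "edge", "edge": "regional", "regional": "home"}
cache_copies = {"home", "regional"}   # "regional" holds a replica

def serve(start: str):
    """Walk toward the root; return (serving node, hops traversed)."""
    node, hops = start, 0
    while node not in cache_copies:
        node = parent[node]
        hops += 1
    return node, hops

print(serve("client"))  # ('regional', 2) -- the request never reaches "home"
```

No directory lookup or discovery protocol is needed: the routing path itself finds the copy, which is exactly property (2) above, and each node decides locally whether to hold a copy, which is property (3).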
Query Load Balancing by Caching Search Results in Peer-to-Peer Information Retrieval Networks
For peer-to-peer web search engines, it is important to keep the delay between receiving a query and providing search results within a range acceptable to the end user. How to achieve this remains an open challenge. One way to reduce delays is to cache search results for queries and allow peers to access each other's caches. In this paper, we explore the limitations of search-result caching in large-scale peer-to-peer information retrieval networks by simulating such networks with increasing levels of realism. We find that cache hit ratios of at least thirty-three percent are attainable.
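Why a substantial hit ratio is plausible can be shown with a toy simulation. The parameters below (vocabulary size, query count, Zipf-like skew) are invented and far simpler than the paper's increasingly realistic simulations, which also model churn and partial cache views; the point is only that skewed query popularity alone already yields hit ratios well above one third.

```python
import random

# Toy simulation (parameters invented, not the paper's): queries drawn from
# a Zipf-like popularity distribution hit a shared search-result cache; a
# repeated query is a hit, a first-seen query is a miss that gets cached.

random.seed(0)
num_queries, vocab = 10_000, 1_000
weights = [1 / (rank + 1) for rank in range(vocab)]   # Zipf-like skew

cache, hits = set(), 0
for q in random.choices(range(vocab), weights=weights, k=num_queries):
    if q in cache:
        hits += 1
    else:
        cache.add(q)   # cache the result for subsequent identical queries

print(hits / num_queries > 0.33)  # True: a skewed workload easily exceeds 1/3 hits
```

Because a handful of popular queries account for most of the traffic, the cache pays for itself quickly; the harder questions the paper tackles are how peer churn and distributed cache placement erode this idealized ratio.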