988 research outputs found

    Leveraging Program Analysis to Reduce User-Perceived Latency in Mobile Applications

    Reducing network latency in mobile applications is an effective way of improving the mobile user experience and has tangible economic benefits. This paper presents PALOMA, a novel client-centric technique for reducing network latency by prefetching HTTP requests in Android apps. Our work leverages string analysis and callback control-flow analysis to automatically instrument apps using PALOMA's rigorous formulation of scenarios that address "what" and "when" to prefetch. PALOMA has been shown to incur significant runtime savings (several hundred milliseconds per prefetchable HTTP request), both when applied to a reusable evaluation benchmark we have developed and to real applications.
    Comment: ICSE 201
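The core idea above, fetch an HTTP response before the user's callback needs it and serve it from a local cache on the real request, can be illustrated with a minimal sketch. This is a generic illustration, not PALOMA's instrumentation; the `PrefetchCache` class and `fetch` callback are hypothetical names.

```python
import threading

class PrefetchCache:
    """Minimal sketch of callback-triggered prefetching: issue a request
    ahead of the user action, then serve the cached response on demand."""

    def __init__(self, fetch):
        self.fetch = fetch          # function: url -> response body
        self.cache = {}
        self.lock = threading.Lock()

    def prefetch(self, url):
        # In PALOMA, string analysis decides *what* URL to fetch and
        # callback control-flow analysis decides *when* (a trigger point
        # that runs before the callback that will need the response).
        def worker():
            body = self.fetch(url)
            with self.lock:
                self.cache[url] = body
        t = threading.Thread(target=worker)
        t.start()
        return t

    def get(self, url):
        # On the actual request, a cache hit avoids the network round trip.
        with self.lock:
            if url in self.cache:
                return self.cache[url]
        return self.fetch(url)
```

The several-hundred-millisecond savings reported by the paper correspond to the round trip avoided whenever `get` hits the cache.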

    Architecture for Cooperative Prefetching in P2P Video-on- Demand System

    Most P2P VoD schemes have focused on service architectures and overlay optimization without considering segment rarity or the performance of prefetching strategies. As a result, they cannot adequately support VCR-oriented service in heterogeneous environments where clients use free VCR controls. Despite the remarkable popularity of VoD systems, no prior work studies the performance gap between different prefetching strategies. In this paper, we analyze and explain the performance of different prefetching strategies. Our analytical characterization brings not only a better understanding of several fundamental tradeoffs in prefetching strategies, but also important insights into the design of P2P VoD systems. On the basis of this analysis, we propose a cooperative prefetching strategy called "cooching". In this strategy, the segments requested during VCR interactions are prefetched into the session beforehand, using information collected through gossip. We evaluate our strategy through extensive simulations. The results indicate that the proposed strategy outperforms existing prefetching mechanisms.
    Comment: 13 Pages, IJCN
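A gossip-informed, rarity-aware prefetch decision of the kind the abstract describes can be sketched as follows. This is a hypothetical illustration of the rarest-first idea under gossiped availability reports, not the paper's "cooching" algorithm; all names are assumptions.

```python
from collections import Counter

def choose_prefetch(needed_segments, gossip_reports):
    """Pick the rarest still-needed segment, using availability
    information gathered via gossip from other peers.

    needed_segments: ordered list of segment ids this session may jump to
    gossip_reports:  iterable of sets, one per gossiping peer, listing
                     the segments that peer holds
    """
    replica_counts = Counter()
    for peer_segments in gossip_reports:
        replica_counts.update(peer_segments)
    # Rarest-first: a segment with few replicas is the one most at risk
    # of stalling a VCR jump, so prefetch it before the user seeks to it.
    return min(needed_segments, key=lambda s: replica_counts[s])
```

Prefetching the rarest segment first hedges against the case where a VCR seek lands on a segment that few peers can serve.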

    Evaluation, Analysis and adaptation of web prefetching techniques in current web

    Abstract: This dissertation is focused on the study of the prefetching technique applied to the World Wide Web. This technique consists of processing (e.g., downloading) a web request before the user actually makes it. By doing so, the waiting time perceived by the user can be reduced, which is the main goal of web prefetching techniques. The study of the state of the art on web prefetching showed the heterogeneity that exists in its performance evaluation. This heterogeneity mainly concerns four issues: i) there was no open framework to simulate and evaluate the already proposed prefetching techniques; ii) no uniform selection of the performance indexes to be maximized, or even of their definition; iii) no comparative studies of prediction algorithms taking into account the costs and benefits of web prefetching at the same time; and iv) the evaluation of techniques under very different or insufficiently significant workloads. During the research work, we have contributed to homogenizing the evaluation of prefetching performance by developing an open simulation framework that reproduces in detail all the aspects that impact prefetching performance. In addition, prefetching performance metrics have been analyzed in order to clarify their definitions and identify the most meaningful ones from the user's point of view. We also proposed an evaluation methodology to consider the cost and the benefit of prefetching at the same time. Finally, the importance of using current workloads to evaluate prefetching techniques has been highlighted; otherwise, wrong conclusions could be drawn. The potential benefits of each web prefetching architecture were analyzed, finding that collaborative predictors could reduce almost all the latency perceived by users. The first step toward developing a collaborative predictor is to make predictions at the server, so this thesis is focused on an architecture with a server-located predictor. The environment conditions that can be found in the web are als…
    Doménech I De Soria, J. (2007). Evaluation, Analysis and adaptation of web prefetching techniques in current web [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1841
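The cost/benefit performance indexes the dissertation argues should be evaluated together can be illustrated with a small sketch. The names follow the common web-prefetching survey vocabulary (precision, recall, traffic overhead); this is an illustration under those assumptions, not the thesis's exact definitions.

```python
def prefetch_metrics(prefetched, requested):
    """Cost/benefit indexes for one prefetching run.

    prefetched: objects the predictor chose to prefetch
    requested:  objects the user actually requested afterwards
    """
    prefetched, requested = set(prefetched), set(requested)
    hits = prefetched & requested

    # Benefit side: how many prefetches were useful, and how much of the
    # user's demand they covered.
    precision = len(hits) / len(prefetched) if prefetched else 0.0
    recall = len(hits) / len(requested) if requested else 0.0

    # Cost side: transfers that were never requested (extra traffic).
    wasted = len(prefetched - requested)
    return precision, recall, wasted
```

Evaluating precision alone (as some early studies did) hides the traffic cost, which is exactly the heterogeneity problem the dissertation addresses.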

    Characterizing Deep-Learning I/O Workloads in TensorFlow

    The performance of Deep-Learning (DL) computing frameworks relies on the performance of data ingestion and checkpointing. In fact, during training, a considerably high number of relatively small files are first loaded and pre-processed on CPUs and then moved to the accelerator for computation. In addition, checkpointing and restart operations are carried out to allow DL computing frameworks to restart quickly from a checkpoint. Because of this, I/O affects the performance of DL applications. In this work, we characterize the I/O performance and scaling of TensorFlow, an open-source programming framework developed by Google and specifically designed for solving DL problems. To measure TensorFlow I/O performance, we first design a micro-benchmark to measure TensorFlow reads, and then use a TensorFlow mini-application based on AlexNet to measure the performance cost of I/O and checkpointing in TensorFlow. To improve checkpointing performance, we design and implement a burst buffer. We find that increasing the number of threads increases TensorFlow bandwidth by a maximum of 2.3x and 7.8x on our benchmark environments. The use of the TensorFlow prefetcher results in a complete overlap of computation on the accelerator with the input pipeline on the CPU, eliminating the effective cost of I/O on overall performance. Using a burst buffer to checkpoint to fast, small-capacity storage and copy the checkpoints asynchronously to slower, large-capacity storage resulted in a performance improvement of 2.6x with respect to checkpointing directly to the slower storage on our benchmark environment.
    Comment: Accepted for publication at pdsw-DISCS 201
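The pipeline overlap the abstract credits to the TensorFlow prefetcher, a background thread stages the next batches while the consumer works on the current one, can be sketched framework-free with a bounded queue. This is a generic double-buffering illustration, not TensorFlow's implementation; the function name is an assumption.

```python
import queue
import threading

def prefetching_iterator(source, buffer_size=2):
    """Yield items from `source` while a background thread reads ahead.

    The bounded queue acts as the prefetch buffer: the producer (think
    CPU input pipeline) runs up to `buffer_size` items ahead of the
    consumer (think accelerator compute step), overlapping I/O with work.
    """
    buf = queue.Queue(maxsize=buffer_size)
    DONE = object()  # sentinel marking end of the stream

    def producer():
        for item in source:
            buf.put(item)   # blocks when the buffer is full (backpressure)
        buf.put(DONE)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = buf.get()
        if item is DONE:
            return
        yield item
```

When the per-item read time is below the per-item compute time, the consumer never waits on I/O, which is the "complete overlap" effect measured in the paper.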

    Key factors in web latency savings in an experimental prefetching system

    Although Internet service providers and communications companies continuously offer higher and higher bandwidths, users still complain about the high latency they perceive when downloading pages from the web. Therefore, latency can be considered the main web performance metric from the user's point of view. Many studies have demonstrated that web prefetching can be an interesting technique for reducing such latency at the expense of slightly increasing network traffic. In this context, this paper presents an empirical study that investigates the maximum benefits web users can expect from prefetching techniques in the current web. Unlike previous theoretical studies, this work considers a realistic prefetching architecture using real traces. In this way, the influence of real implementation constraints is considered and analyzed. The results obtained show that web prefetching could improve page latency by up to 52% in the studied traces. © Springer Science+Business Media, LLC 2011
    De La Ossa Perez, B. A.; Sahuquillo Borrás, J.; Pont Sanjuan, A.; Gil Salinas, J. A. (2012). Key factors in web latency savings in an experimental prefetching system. Journal of Intelligent Information Systems, 39(1), 187-207. doi:10.1007/s10844-011-0188-x
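The headline figure of the study, page latency improved by up to 52%, is a relative saving over the non-prefetching baseline, which a one-line helper makes explicit. This is an illustration of the metric, not code from the paper.

```python
def latency_saving_percent(base_latency_ms, prefetch_latency_ms):
    """Per-page latency saving as a percentage of the baseline latency
    measured without prefetching. A 52% saving means the prefetching
    system delivered the page in 48% of the baseline time."""
    return 100.0 * (base_latency_ms - prefetch_latency_ms) / base_latency_ms
```

For example, a page that takes 1000 ms without prefetching and 480 ms with it yields a 52% saving, the upper bound reported for the studied traces.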