12 research outputs found

    An Optimal Trade-off between Content Freshness and Refresh Cost

    Full text link
    Caching is an effective mechanism for reducing bandwidth usage and alleviating server load. However, the use of caching entails a compromise between content freshness and refresh cost. Excessive refreshing yields a high degree of content freshness at a greater cost in system resources; conversely, insufficient refreshing saves resources but degrades content freshness. To address this freshness-cost problem, we formulate the refresh scheduling problem with a generic cost model and use it to determine an optimal refresh frequency that gives the best trade-off between refresh cost and content freshness. We prove the existence and uniqueness of an optimal refresh frequency under the assumptions that content updates arrive according to a Poisson process and that the age-related cost increases monotonically with decreasing freshness. In addition, we provide an analytic comparison of system performance under fixed refresh scheduling and random refresh scheduling, showing that, for the same average refresh frequency, the two schedules are mathematically equivalent in terms of the long-run average cost.
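    As a rough illustration of this trade-off, the sketch below computes the long-run average cost of a fixed refresh period under Poisson updates and locates its minimizer by grid search. The update rate lam, per-refresh cost c_r, and age-cost rate c_a are illustrative assumptions, not values or formulas taken from the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper)
lam = 2.0   # Poisson update rate at the source (updates per unit time)
c_r = 1.0   # cost charged per refresh
c_a = 5.0   # age-related cost per unit time the copy is stale

def avg_cost(T):
    """Long-run average cost per unit time for a fixed refresh period T.

    Within one cycle of length T the copy becomes stale at the first
    source update (Exp(lam) after the refresh), so the expected stale
    time per cycle is T - (1 - exp(-lam*T)) / lam.  The refresh cost is
    paid once per cycle.
    """
    expected_stale = T - (1.0 - np.exp(-lam * T)) / lam
    return c_r / T + c_a * expected_stale / T

# Crude grid search for the cost-minimizing refresh period
Ts = np.linspace(0.01, 10.0, 10_000)
costs = avg_cost(Ts)
T_opt = Ts[np.argmin(costs)]
print(f"optimal refresh period ~ {T_opt:.3f}, cost ~ {costs.min():.3f}")
```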

    Speculative Validation of Web Objects for Further Reducing the User-Perceived Latency

    Full text link

    Temporal update dynamics under blind sampling

    Full text link
    Network applications commonly maintain local copies of remote data sources in order to provide caching, indexing, and data-mining services to their clients. Modeling the performance of these systems and predicting future updates usually requires knowledge of the inter-update distribution at the source, which can only be estimated through blind sampling – periodic downloads and comparison against previous copies. In this paper, we first introduce a stochastic modeling framework for this problem, in which the update and sampling processes are both renewal processes. We then show that all previous approaches are biased unless the observation rate tends to infinity or the update process is Poisson. To overcome these issues, we propose four new algorithms that achieve various levels of consistency, depending on the amount of temporal information revealed by the source and the capabilities of the download process.
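    A minimal simulation sketch of the bias described above, assuming Pareto inter-update times and a naive estimator that counts detected changes per unit time; the distribution, parameters, and estimator are illustrative choices, not the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumptions): Pareto(alpha) inter-update times at the
# source, periodic "blind" sampling every DELTA time units.
alpha, x_m = 1.5, 0.5          # Pareto shape / scale of inter-update gaps
DELTA = 2.0                    # sampling period
HORIZON = 200_000.0

# Generate renewal update times
gaps = (rng.pareto(alpha, size=2_000_000) + 1.0) * x_m
updates = np.cumsum(gaps)
updates = updates[updates < HORIZON]
true_rate = len(updates) / HORIZON

# Blind sampling: each download only reveals whether >= 1 update occurred
# since the previous copy (by comparison), not how many.
sample_times = np.arange(DELTA, HORIZON, DELTA)
counts = np.searchsorted(updates, sample_times)   # updates seen by each sample
changed = np.diff(np.concatenate(([0], counts))) > 0

# Naive estimator: detected changes per unit time
naive_rate = changed.sum() / HORIZON
print(f"true update rate  : {true_rate:.3f}")
print(f"naive blind sample: {naive_rate:.3f}  (biased low: multiple updates "
      f"inside one sampling interval collapse to a single detection)")
```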

    Optimization and Evaluation of Service Speed and Reliability in Modern Caching Applications

    Get PDF
    The performance of caching systems in general, and Internet caches in particular, is evaluated by means of user-perceived service speed, reliability of downloaded content, and system scalability. In this dissertation, we focus on optimizing the speed of service as well as evaluating the reliability and quality of data sent to users. To optimize service speed, the first part of the dissertation seeks optimal replacement policies: download delays are a direct product of document availability at the cache, and in demand-driven caches the cache content is completely determined by the replacement policy. Many ad-hoc policies that exploit document sizes, retrieval latency, reference probabilities, and temporal locality of requests have been proposed in the literature, but the problem of finding optimal policies under these factors has not been pursued in any systematic manner. Here, we take a step in that direction: under the Independent Reference Model, we show that a simple Markov stationary policy minimizes the long-run average metric induced by non-uniform documents under optional cache replacement. We then use this result to propose a framework for operating caches under multiple performance metrics, by solving a constrained caching problem with a single constraint.

    The second part of the dissertation studies data reliability and cache consistency: a cached object is termed consistent if it is identical to the master document at the origin server at the time it is served to users. Cached objects become stale after the master is modified, and stale copies continue to be served until the cache is refreshed, subject to network transmission delays. However, Internet consistency algorithms are usually evaluated through cache hit rate and network traffic load, neither of which reflects data staleness. To remedy this, we formalize a framework and a novel hit* rate measure that captures consistent downloads from the cache. To demonstrate this methodology, we calculate the hit and hit* rates produced by two TTL algorithms, under zero and non-zero delays, and evaluate the hit and hit* rates in applications.
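    The distinction between hit and hit* rates can be illustrated with a small single-object simulation; the Poisson request/update rates, the TTL value, and the zero-delay assumption below are illustrative stand-ins, not the dissertation's analytical results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative single-object simulation (assumptions): Poisson requests with
# rate mu, Poisson source updates with rate lam, a TTL cache with zero
# network delay.  A "hit" serves whatever copy the cache holds inside the
# TTL window; a "hit*" is a hit whose copy still matches the master.
mu, lam, TTL = 5.0, 1.0, 0.8
N_REQ = 200_000

req_times = np.cumsum(rng.exponential(1.0 / mu, N_REQ))
upd_times = np.cumsum(rng.exponential(1.0 / lam, int(lam * req_times[-1] * 2) + 10))

fetched_at = -np.inf        # time the cached copy was downloaded
hits = consistent_hits = 0
for t in req_times:
    if t - fetched_at <= TTL:            # copy still fresh per the TTL rule
        hits += 1
        # hit*: no source update between the download and this request
        upd_between = (np.searchsorted(upd_times, t)
                       - np.searchsorted(upd_times, fetched_at))
        consistent_hits += (upd_between == 0)
    else:                                # TTL expired: re-fetch from origin
        fetched_at = t

print(f"hit  rate: {hits / N_REQ:.3f}")
print(f"hit* rate: {consistent_hits / N_REQ:.3f}")
```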

    Cacheability study for web content delivery

    Get PDF
    Master's thesis (Master of Science)

    Distributed Synchronization Under Data Churn

    Get PDF
    Nowadays an increasing number of applications need to maintain local copies of remote data sources in order to provide services to their users. Because of the dynamic nature of the sources, an application has to synchronize its copies with the remote sources constantly to provide reliable service. Instead of push-based synchronization, we focus on the pull-based strategy, because it does not require source cooperation and has been widely adopted by existing systems. The scalability of pull-based synchronization comes at the expense of increased inconsistency of the copied content. We model this system under non-Poisson update/refresh processes, obtain sample-path averages of various staleness-cost metrics, generalize previous results, and study their statistical properties. Computing staleness requires knowledge of the inter-update distribution at the source, which can only be estimated through blind sampling – periodic downloads and comparison against previous copies. We show that all previous approaches are biased unless the observation rate tends to infinity or the update process is Poisson. To overcome these issues, we propose four new algorithms that achieve various levels of consistency, depending on the amount of temporal information revealed by the source and the capabilities of the download process. We then apply our freshness results to P2P replication systems, extending them to several more difficult settings: cascaded replication, cooperative caching, and redundant querying from clients. Surprisingly, we discover that optimal cooperation involves just a single peer and that redundant querying can hurt the system's ability to handle load (i.e., it may lead to lower scalability).
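    A short sample-path sketch of the staleness idea, assuming Weibull (non-Poisson) inter-update times and a fixed pull period; the distribution, parameters, and the fraction-of-time-stale metric are illustrative assumptions rather than the thesis's exact model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sample-path estimate (assumptions): the source updates at the
# renewal times of a Weibull process, the local copy is refreshed (pulled)
# every R time units, and the staleness metric is the fraction of time the
# copy differs from the source.
HORIZON = 100_000.0
R = 1.0                                   # pull (refresh) period
shape, scale = 0.7, 2.0                   # bursty, non-Poisson update gaps

gaps = scale * rng.weibull(shape, size=200_000)
updates = np.cumsum(gaps)
updates = updates[updates < HORIZON]

stale_time = 0.0
for k in range(int(HORIZON / R)):
    start, end = k * R, (k + 1) * R
    # the first update inside this refresh interval makes the copy stale
    # until the next pull at time `end`
    i = np.searchsorted(updates, start)
    if i < len(updates) and updates[i] < end:
        stale_time += end - updates[i]

print(f"sample-path average staleness: {stale_time / HORIZON:.3f}")
```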

    Neural network methods in analysing and modelling time varying processes

    Get PDF
    Statistical data analysis is applied in many fields in order to gain understanding of the complex behaviour of the system or process under interest. For this goal, observations are collected from the process, and models are built in an effort to capture the essential structure in the observed data. In many applications, e.g. process control and pattern recognition, the modeled process is time-dependent, and thus modeling the temporal context is essential. This thesis considers neural network methods in statistical data analysis and especially in temporal sequence processing (TSP). Neural networks are a class of statistical models applicable to many tasks, from data exploration to regression and classification. Neural networks suitable for TSP can model time-dependent phenomena, typically by utilizing delay lines or recurrent connections within the network. The Recurrent Self-Organizing Map (RSOM) is an unsupervised neural network model capable of processing pattern sequences. The application of the RSOM with local models to temporal sequence prediction is presented: the RSOM divides the input pattern sequences into clusters, and local models are estimated for these clusters. Case studies consider time series prediction problems, where the RSOM model performs better than a model based on the conventional Self-Organizing Map, since the RSOM can capture temporal context from the pattern sequence.

    As another application, a neural network model for optimizing a Web cache is proposed. Web caches store recently requested Web objects and are typically shared by many clients. A caching policy decides which objects are removed when the storage space is full. In the proposed approach, a model predicts the value of each cached object from features extracted from the object. Only syntactic features are used, which enables efficient estimation and application of the model. The caching policy can then be optimized based on the predicted values and a cost model designed according to the objectives of the caching. A case study presents the different stages and decisions made during data analysis and model building. The results suggest that the proposed approach is useful in this application.
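    A schematic of the value-based replacement idea: a tiny feed-forward scorer ranks cached objects and the lowest-scoring one is evicted. The network weights are placeholders for a trained model, and the feature set (size, recency, past hits) is an assumption for illustration, not the thesis's feature list.

```python
import numpy as np

# Placeholder weights standing in for a trained value-prediction network
# (illustrative assumption, not the thesis's model).
W1 = np.array([[ 0.8, -0.5,  0.3],
               [-0.6,  0.9,  0.4]])       # 2 hidden units x 3 features
b1 = np.zeros(2)
W2 = np.array([1.0, 1.0])
b2 = 0.0

def predict_value(features):
    """Score an object from [log_size, seconds_since_last_access, past_hits]."""
    h = np.tanh(W1 @ features + b1)
    return float(W2 @ h + b2)

class ValueCache:
    """Replacement policy: evict the object the model values least."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                    # url -> feature vector

    def admit(self, url, features):
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda u: predict_value(self.store[u]))
            del self.store[victim]
        self.store[url] = np.asarray(features, dtype=float)

cache = ValueCache(capacity=2)
cache.admit("/a.html", [2.1, 10.0, 5.0])
cache.admit("/b.css",  [1.2, 300.0, 1.0])
cache.admit("/c.js",   [3.0, 5.0, 9.0])   # triggers one eviction
print(sorted(cache.store))
```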

    Quality-centric design for Peer-to-Peer live video streaming

    Get PDF
    The use of Peer-to-Peer (P2P) networks is a scalable way to offer video services over the Internet. This document focuses on the definition, development, and evaluation of a P2P architecture for distributing live video. The overall design of the network is guided by quality of experience (Quality of Experience, QoE), whose main component in this case is the video quality perceived by end users, rather than the traditional design based on quality of service (Quality of Service, QoS) found in most systems. To measure perceived video quality automatically and in real time, we extended the recently proposed Pseudo-Subjective Quality Assessment (PSQA) methodology. Two main lines of research are developed. First, we propose a multi-source video distribution technique that can be optimized to maximize perceived quality under frequent failures and that requires very little signaling (unlike existing systems). We develop a methodology, based on PSQA, that gives fine-grained control over how the video signal is split into parts and how much redundancy is added, as a function of the dynamics of the network's users. In this way the robustness of the system can be improved as much as desired, within the capacity limit of the communication channel. Second, we present a structured mechanism to control the network topology. The selection of which users serve which others is important for the robustness of the network, especially when users are heterogeneous in their capacities and connection times. Our design maximizes the expected overall quality (evaluated using PSQA) by selecting a topology that improves the robustness of the system. We also study how to extend the network with two complementary services: Video on Demand (VoD) and a MyTV service. The challenge in these services is how to perform efficient searches over the video library, given its highly dynamic content. We present a caching strategy for searches in these services that maximizes the total number of correct answers to queries, considering a particular content dynamic and bandwidth constraints. Our overall design considers realistic scenarios, where the test cases and configuration parameters come from real data of a reference service in production. Our prototype is fully functional, free to use, and based on well-proven open source technologies.
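    A back-of-the-envelope sketch of the redundancy trade-off described above, assuming the stream is split into k substreams plus n - k redundant ones served by independent peers that fail with probability p during the playback window; the erasure-style "k of n" recovery rule and all parameter values are illustrative assumptions, not the system's actual coding scheme.

```python
from math import comb

# Illustrative model (assumptions): the stream is playable if at least k of
# the n substreams arrive; each substream is lost independently with
# probability p; the bandwidth overhead of the redundancy is n / k.
def playable_prob(k, n, p):
    """P(at least k of n independent pieces arrive), piece loss prob = p."""
    return sum(comb(n, i) * (1 - p) ** i * p ** (n - i) for i in range(k, n + 1))

k, p = 4, 0.2
for redundancy in range(0, 5):
    n = k + redundancy
    print(f"n={n}  overhead={n / k:.2f}x  P(playable)={playable_prob(k, n, p):.4f}")
```

    The table this prints shows the basic design lever: each extra redundant substream buys robustness against peer departures at the price of proportionally more upload bandwidth.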