33 research outputs found

    Real Benefits of Active Caching in the WWW

    Get PDF
    We present our research on active caching in the WWW. Based on our analysis of WWW content, we draw key conclusions about the properties of the service and its significance, including the percentage of Web content that is potentially outdated once an object's lifetime has passed. This paper provides reasons for using active caching…

    An Obstacle Detection System Using Depth Information and Region Growing for Blind

    Get PDF
    To help visually impaired people perceive obstacles effectively and avoid them in unfamiliar environments, we propose an obstacle detection method based on depth information. First, we use the edge characteristics of the depth image to segment obstacles at different depths. We then remove unnecessary ground information with a gradient threshold, label the obstacles with a region growing algorithm, and box them out with rectangular windows. The algorithm also displays the distance between the Kinect sensor and the center of each obstacle on the frame. Experimental results show that the proposed method is more robust than alternatives, with an average processing speed of only 0.08 seconds per frame.
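    The pipeline above (depth edges, ground removal by gradient threshold, region labeling, bounding boxes) can be sketched in a few lines of NumPy/OpenCV. This is a minimal illustration under assumed parameters, not the authors' implementation; the connected-components pass stands in for their region growing step, and the threshold values are placeholders.

        import numpy as np
        import cv2

        def detect_obstacles(depth_mm, grad_thresh=8.0, min_area=400):
            """Rough sketch: find obstacle boxes in a depth frame (uint16, mm)."""
            depth = depth_mm.astype(np.float32)
            # Ground removal by gradient threshold: the floor shows a steady
            # vertical depth gradient, while vertical obstacle surfaces do not.
            gy = np.gradient(depth, axis=0)
            mask = ((depth > 0) & (np.abs(gy) < grad_thresh)).astype(np.uint8)
            # Stand-in for region growing: label connected foreground regions.
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(
                mask, connectivity=8)
            boxes = []
            for i in range(1, n):  # label 0 is the background
                x, y, w, h, area = stats[i]
                if area < min_area:
                    continue
                cx, cy = map(int, centroids[i])
                dist_m = depth[cy, cx] / 1000.0  # distance to the region center
                boxes.append(((x, y, w, h), dist_m))
            return boxes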

    Protocols to Improve the Performance of Web Applications

    Get PDF
    The HTTP protocol has been revised a couple of times, most recently more than a decade ago. However, the characteristics of today's Web, the requirements of its users, and the sheer scale of usage have pushed the resources served over it to a point where some limitations inherent in the protocol's original design have become apparent. For this reason the IETF (Internet Engineering Task Force), within the HTTPbis Working Group, is analyzing fundamental modifications, adjustments, and improvements toward a future HTTP 2.0 standard. One recent proposal is the SPDY protocol, whose primary goal is to improve the performance of the Web service; it has since become the basis on which the working group operates. This research project sets out to study the aforementioned deficiencies of HTTP and the available improvements, among them SPDY; to produce an academic study of some of the conditions that optimize its deployment, from both a technical and a general point of view, including an analysis of geographic conditions; and, finally, to develop software that facilitates these analyses and serves as support tooling for an eventual migration to SPDY or HTTP/2.0. Area: Architecture, Networks and Operating Systems. Red de Universidades con Carreras en Informática (RedUNCI)
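    As a side note on the migration tooling the project mentions: whether a server speaks HTTP/2 can be probed from Python's standard library via ALPN negotiation. A minimal sketch (the host is just an example):

        import socket
        import ssl

        def negotiated_protocol(host, port=443):
            # Offer h2 (HTTP/2) and HTTP/1.1 via ALPN; report the server's choice.
            ctx = ssl.create_default_context()
            ctx.set_alpn_protocols(["h2", "http/1.1"])
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.selected_alpn_protocol()

        print(negotiated_protocol("www.example.org"))  # 'h2' if HTTP/2 is offered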

    Challenges and the Solutions for Multimedia Metadata Sharing in Networks

    Get PDF

    Rough Set Granularity in Mobile Web Pre-Caching

    Get PDF
    Mobile Web pre-caching (Web prefetching and caching) is an approach to performance enhancement under the storage limitations of mobile devices…

    Basis Token Consistency: A Practical Mechanism for Strong Web Cache Consistency

    Full text link
    With web caching and cache-related services like CDNs and edge services playing an increasingly significant role in the modern internet, the weak consistency and coherence provisions in current web protocols are becoming an increasingly significant problem and are drawing the attention of the standards community [LCD01]. Toward this end, we present definitions of consistency and coherence for web-like environments, that is, distributed client-server information systems where the semantics of interactions with resources are more general than the read/write operations found in memory hierarchies and distributed file systems. We then briefly review proposed mechanisms that strengthen the consistency of caches in the web, focusing on their conceptual contributions and their weaknesses in real-world practice. These insights motivate a new mechanism, which we call "Basis Token Consistency" (BTC); when implemented at the server, this mechanism allows any client (independent of the presence and conformity of any intermediaries) to maintain a self-consistent view of the server's state. This is accomplished by annotating responses with additional per-resource application information that allows client caches to recognize the obsolescence of currently cached entities and to identify responses from other caches that are already stale in light of what has been seen. The mechanism requires no deviation from the existing client-server communication model and does not require servers to maintain any additional per-client state. We discuss how our mechanism could be integrated into a fragment-assembling Content Management System (CMS), and present a simulation-driven performance comparison between the BTC algorithm and the Time-To-Live (TTL) heuristic. National Science Foundation (ANI-9986397, ANI-0095988)
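    The client-side bookkeeping the abstract describes can be pictured as follows. This is a hypothetical sketch, not the paper's wire format: basis tokens are modeled as per-resource version counters attached to each response, and a response is obsolete once any of its tokens falls behind a version the client has already observed.

        from dataclasses import dataclass

        @dataclass
        class Response:
            url: str
            body: str
            tokens: dict  # basis name -> version, annotated by the server

        class BTCClientCache:
            def __init__(self):
                self.latest = {}  # highest version seen for each basis token
                self.store = {}   # url -> cached Response

            def is_stale(self, resp):
                # Obsolete if any basis token is older than a version already seen.
                return any(ver < self.latest.get(basis, 0)
                           for basis, ver in resp.tokens.items())

            def put(self, resp):
                # Learn the newest versions from any response, even one served
                # by a non-conforming intermediary cache.
                for basis, ver in resp.tokens.items():
                    self.latest[basis] = max(self.latest.get(basis, 0), ver)
                # Drop cached entities invalidated by the new tokens.
                self.store = {u: r for u, r in self.store.items()
                              if not self.is_stale(r)}
                if not self.is_stale(resp):
                    self.store[resp.url] = resp

        cache = BTCClientCache()
        cache.put(Response("/a", "page A", {"price:42": 1}))
        cache.put(Response("/b", "page B", {"price:42": 2}))  # evicts /a

    Any response carrying a newer version of a shared basis token thus invalidates every cached page built on the older version, which is how a client keeps a self-consistent view without the server holding per-client state.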

    A taxonomy of web prediction algorithms

    Full text link
    Web prefetching techniques are an attractive solution for reducing user-perceived latency. These techniques are driven by a prediction engine or algorithm that guesses the next actions of web users. A large number of prediction algorithms have been proposed since the first prefetching approach was published, although only in the last two or three years have they begun to be successfully implemented in commercial products. These algorithms can be implemented in any element of the web architecture and can use a wide variety of information as input, which affects their structure, data system, computational resources and accuracy. Knowledge of the input information, and an understanding of how it can be handled to make predictions, can help improve the design of current prediction engines and consequently of prefetching techniques. This paper analyzes fifty of the most relevant algorithms proposed over 15 years of prefetching research and proposes a taxonomy in which the algorithms are classified according to the input data they use. For each group, the main advantages and shortcomings are highlighted. © 2012 Elsevier Ltd. All rights reserved. This work has been partially supported by the Spanish Ministry of Science and Innovation under Grant TIN2009-08201, Generalitat Valenciana under Grant GV/2011/002 and Universitat Politecnica de Valencia under Grant PAID-06-10/2424. Domenech, J.; De La Ossa Perez, B.A.; Sahuquillo Borrás, J.; Gil Salinas, J.A.; Pont Sanjuan, A. (2012). A taxonomy of web prediction algorithms. Expert Systems with Applications, 39(9), 8496-8502. https://doi.org/10.1016/j.eswa.2012.01.140
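    To make the notion of a prediction engine concrete: a minimal illustration (ours, not one of the fifty surveyed algorithms) is a first-order Markov predictor that learns page-to-page transitions from access logs and proposes prefetch candidates above a confidence threshold.

        from collections import defaultdict, Counter

        class MarkovPredictor:
            """First-order Markov prediction engine: learn page transitions
            from access logs and suggest likely next requests to prefetch."""
            def __init__(self):
                self.transitions = defaultdict(Counter)

            def train(self, session):
                # session: ordered list of URLs requested by one user
                for cur, nxt in zip(session, session[1:]):
                    self.transitions[cur][nxt] += 1

            def predict(self, current, k=3, min_conf=0.2):
                counts = self.transitions[current]
                total = sum(counts.values())
                if total == 0:
                    return []
                # Return up to k candidates whose estimated probability clears
                # the confidence threshold, to keep prefetch precision acceptable.
                return [(url, n / total) for url, n in counts.most_common(k)
                        if n / total >= min_conf]

        predictor = MarkovPredictor()
        predictor.train(["/", "/news", "/news/42", "/"])
        predictor.train(["/", "/news", "/sports"])
        print(predictor.predict("/news"))  # e.g. [('/news/42', 0.5), ('/sports', 0.5)]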