15 research outputs found

    JPIP proxy server with prefetching strategies based on user-navigation model and semantic map

    The efficient transmission of high-resolution images, and in particular the interactive transmission of images in a client-server scenario, is an important aspect of many applications. Among the current image compression standards, JPEG2000 excels for its interactive transmission capabilities. In general, three mechanisms are employed to optimize the transmission of images when using the JPEG2000 Interactive Protocol (JPIP): 1) packet re-sequencing at the server; 2) prefetching at the client; and 3) proxy servers along the network infrastructure. To avoid congesting the network, prefetching mechanisms are not commonly employed when many clients within a local area network (LAN) browse images from a remote server. Aiming to maximize the responsiveness of all the clients within a LAN, this work proposes the use of prefetching strategies at the proxy server rather than at the clients. The main insight behind the proposed prefetching strategies is a user-navigation model and a semantic map that predict the future requests of the clients. Experimental results indicate that introducing these strategies into a JPIP proxy server notably enhances the browsing experience of the end-users.
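
    The abstract gives no implementation details, but the core idea lends itself to a short illustration. The Python sketch below shows one plausible way a proxy could combine a user-navigation model (probabilities over pan and zoom moves) with a semantic map (per-region interest scores) to rank prefetch candidates during idle time; the Window class, the move set, and the scoring rule are assumptions made for this example, not the authors' implementation.

```python
# Hypothetical sketch of proxy-side prefetching. The navigation model, the
# semantic map, and the scoring rule below are illustrative stand-ins for the
# strategies described in the abstract, not the authors' implementation.

from dataclasses import dataclass


@dataclass(frozen=True)
class Window:
    """A client window of interest: tile coordinates and resolution level."""
    x: int
    y: int
    level: int


# User-navigation model: assumed probability of each move after a request.
NAVIGATION_MODEL = {
    "pan_right": 0.30, "pan_left": 0.20, "pan_down": 0.15,
    "pan_up": 0.10, "zoom_in": 0.15, "zoom_out": 0.10,
}

# Effect of each move on the window of interest (dx, dy, dlevel).
MOVES = {
    "pan_right": (1, 0, 0), "pan_left": (-1, 0, 0),
    "pan_down": (0, 1, 0), "pan_up": (0, -1, 0),
    "zoom_in": (0, 0, 1), "zoom_out": (0, 0, -1),
}


def rank_prefetch_candidates(current: Window, semantic_map: dict) -> list:
    """Rank the windows a client is most likely to request next.

    The score combines how users tend to navigate (navigation model) with how
    interesting each region of the image is (semantic map).
    """
    scored = []
    for move, p_move in NAVIGATION_MODEL.items():
        dx, dy, dl = MOVES[move]
        candidate = Window(current.x + dx, current.y + dy, current.level + dl)
        interest = semantic_map.get((candidate.x, candidate.y), 0.1)
        scored.append((p_move * interest, candidate))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [window for _, window in scored]

# During idle time the proxy would fetch the JPIP data-bins covering the
# top-ranked windows, so they are already cached when a client asks for them.
```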

    Aplicación de técnicas de aprendizaje automático a la gestión y optimización de cachés de teselas para la aceleración de servicios de mapas en las infraestructuras de datos espaciales [Application of machine learning techniques to the management and optimization of tile caches for accelerating map services in spatial data infrastructures]

    The rapid proliferation of web map services has created a need for ever more scalable services. In response, tiled map services have emerged as a scalable alternative to traditional map services, enabling caching mechanisms or even serving the maps from a collection of pre-generated images. However, the storage requirements and start-up time of these services are often prohibitive when the cartography to be served covers a large geographic area at a high number of scales. For this reason, such services are usually offered through partial caches that contain only a subset of the cartography. Guaranteeing an acceptable Quality of Service (QoS) requires suitable policies for maintaining and managing these map caches: 1) initial population (seeding) strategies for the cache; 2) dynamic loading algorithms driven by user requests; 3) cache replacement policies. However, only a small number of such strategies are specific to map services. Most of the strategies applied to these services are borrowed from other domains, such as traditional Web proxies, and do not take into account the spatial component of the map objects they manage. This thesis addresses that gap by designing new algorithms specific to this application domain that optimize the performance of map services. Given the large number of objects managed by these caches and their heterogeneity in terms of layers, representation scales, and so on, an effort has been made to keep the proposed strategies automatic or semi-automatic, requiring as little human intervention as possible. Two novel strategies for the initial population of a map cache are proposed. One uses a descriptive model built from the service's past request logs. The other is based on a predictive model that identifies the geographic phenomena driving user requests, parameterized either by an OLS (Ordinary Least Squares) regression analysis or by an intelligent system based on neural networks. Important contributions are also made regarding the replacement strategies of these caches. On the one hand, an intelligent system based on neural networks is proposed that estimates the future access popularity of the objects it manages from certain of their properties: recency of reference, frequency of reference, and the size of the referenced tile. On the other hand, a strategy named Spatial-LFU is proposed, a variant of the Perfect-LFU strategy that is simplified by exploiting the spatial correlation between requests.
    Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática
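
    As a rough illustration of the Spatial-LFU idea mentioned above, the Python sketch below evicts the tile whose spatial neighborhood has received the fewest requests. Aggregating request counts over the parent tile at the next-coarser zoom level is an assumption made for this example; the abstract does not specify how the spatial correlation between requests is exploited.

```python
# Illustrative sketch of a Spatial-LFU-style eviction policy. The thesis
# abstract only states the idea (a Perfect-LFU variant simplified by exploiting
# the spatial correlation between requests); aggregating request counts over
# the parent tile at the next-coarser zoom level is an assumption made here.

from collections import defaultdict


class SpatialLFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.tiles = {}                      # (z, x, y) -> tile payload
        self.region_hits = defaultdict(int)  # parent tile -> request count

    @staticmethod
    def _parent(key):
        """Coarser tile covering this one, used as a spatial neighborhood."""
        z, x, y = key
        return (z - 1, x // 2, y // 2)

    def get(self, key):
        self.region_hits[self._parent(key)] += 1
        return self.tiles.get(key)

    def put(self, key, payload):
        if key not in self.tiles and len(self.tiles) >= self.capacity:
            # Evict the tile whose neighborhood is least requested, instead of
            # tracking exact per-tile frequencies as Perfect-LFU would.
            victim = min(self.tiles,
                         key=lambda k: self.region_hits[self._parent(k)])
            del self.tiles[victim]
        self.tiles[key] = payload
```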

    Optimizing Sparse Linear Algebra on Reconfigurable Architecture

    The rise of cloud computing and deep machine learning in recent years has led to a tremendous growth of workloads that are not only large, but also have highly sparse representations. A large fraction of machine learning problems are formulated as sparse linear algebra problems in which the entries in the matrices are mostly zeros. Not surprisingly, optimizing linear algebra algorithms to take advantage of this sparseness is critical for efficient computation on these large datasets. This thesis presents a detailed analysis of several approaches to sparse matrix-matrix multiplication, a core computation of linear algebra kernels. While the arithmetic count of operations on the nonzero elements of the matrices is the same regardless of the algorithm used to perform matrix-matrix multiplication, there is significant variation in the overhead of navigating the sparse data structures to match the nonzero elements with the correct indices. This work explores approaches to minimize these overheads as well as the number of memory accesses for sparse matrices. To provide concrete numbers, the thesis examines inner-product, outer-product, and row-wise algorithms on Transmuter, a flexible accelerator that can reconfigure its cache and crossbars at runtime to meet the demands of the program. This thesis shows how the reconfigurability of Transmuter can improve complex workloads that contain multiple kernels with varying compute and memory requirements, such as the GraphSAGE deep neural network and the Sinkhorn algorithm for optimal transport distance. Finally, we examine a novel Transmuter feature: register-to-register queues for efficient communication between its processing elements, enabling systolic-array-style computation for signal processing algorithms.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169877/1/dohypark_1.pd
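
    To make the index-matching overhead concrete, here is a minimal row-wise (Gustavson-style) sparse matrix-matrix multiply over CSR inputs. It is only an illustration of the algorithm family discussed in the thesis, which is evaluated on the Transmuter accelerator rather than in software like this.

```python
# Minimal row-wise (Gustavson-style) sparse matrix-matrix multiply over CSR
# inputs, shown only to make the index-matching overhead concrete. The thesis
# evaluates this family of algorithms on the Transmuter accelerator, not in
# plain Python like this.

def spgemm_rowwise(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val, n_rows):
    """Compute C = A * B, with A and B given as CSR arrays (ptr, idx, val)."""
    c_ptr, c_idx, c_val = [0], [], []
    for i in range(n_rows):
        accum = {}  # column index -> partial sum for row i of C
        # For every nonzero A[i, k], scatter A[i, k] * B[k, :] into the accumulator.
        for aa in range(a_ptr[i], a_ptr[i + 1]):
            k, a_ik = a_idx[aa], a_val[aa]
            for bb in range(b_ptr[k], b_ptr[k + 1]):
                j = b_idx[bb]
                accum[j] = accum.get(j, 0.0) + a_ik * b_val[bb]
        for j in sorted(accum):  # emit row i of C in column order
            c_idx.append(j)
            c_val.append(accum[j])
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val
```

    The per-row accumulator is the kind of auxiliary structure whose maintenance drives the overhead discussed above: the multiply-adds on nonzeros are identical across the inner-product, outer-product, and row-wise formulations, but the cost of matching indices and the pattern of memory accesses differ substantially.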

    Cartography

    Terrestrial space is where natural and social systems interact. Cartography is an essential tool for understanding the complexity of these systems, their interaction, and their evolution, which gives it an important place in the modern world. The book presents several contributions from different areas and activities showing the importance of cartography to the perception and organization of territory. Whether learning from the past or understanding the present, the use of cartography is presented as a way of looking at almost every theme of knowledge.

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    ReCoSoC is intended to be a periodic annual meeting for exposing and discussing gathered expertise as well as state-of-the-art research on SoC-related topics, through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account emerging techniques and architectures that explore the synergy between flexible on-chip communication and system reconfigurability.

    Fundamentals

    Volume 1 establishes the foundations of this new field. It goes through all the steps from data collection, through data summarization and clustering, to the different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and how scalability can be enhanced on diverse computing architectures, ranging from embedded systems to large computing clusters.

    Prefetching Tiled Internet Data Using a Neighbor Selection Markov Chain

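    Only the title of this entry is available, so the sketch below is speculative: it shows one way a neighbor-selection Markov chain could drive tile prefetching, with the state being the direction of the client's last pan and the transition counts learned online from the request stream. Every name and the four-direction move set are assumptions for illustration.

```python
# Speculative sketch (only the title of this work is listed here): a neighbor
# selection Markov chain for tile prefetching. The state is the direction of
# the client's previous pan, transition counts are learned online from the
# request stream, and the predicted neighbor is the prefetch target. The class
# name, the four-direction move set, and the API are all assumptions.

from collections import defaultdict

DIRECTIONS = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}


class NeighborMarkovPrefetcher:
    def __init__(self):
        # transitions[previous_move][next_move] = observed count
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_tile = None
        self.last_move = None

    def record_request(self, tile):
        """Update the chain with a requested tile, given as (x, y)."""
        if self.last_tile is not None:
            delta = (tile[0] - self.last_tile[0], tile[1] - self.last_tile[1])
            move = next((d for d, v in DIRECTIONS.items() if v == delta), None)
            if move is not None:
                if self.last_move is not None:
                    self.transitions[self.last_move][move] += 1
                self.last_move = move
        self.last_tile = tile

    def predict_prefetch(self):
        """Return the neighboring tile most likely to be requested next."""
        if self.last_tile is None or self.last_move is None:
            return None
        counts = self.transitions[self.last_move]
        if not counts:
            return None
        dx, dy = DIRECTIONS[max(counts, key=counts.get)]
        return (self.last_tile[0] + dx, self.last_tile[1] + dy)
```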

    Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems

    This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies, held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center, March 23-26, 1998. As one of an ongoing series, this conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, and vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.