328 research outputs found

    Block-Recurrent Transformers

    Full text link
    We introduce the Block-Recurrent Transformer, which applies a transformer layer in a recurrent fashion along a sequence and has linear complexity with respect to sequence length. Our recurrent cell operates on blocks of tokens rather than single tokens during training, and leverages parallel computation within a block in order to make efficient use of accelerator hardware. The cell itself is strikingly simple: it is merely a transformer layer that uses self-attention and cross-attention to efficiently compute a recurrent function over a large set of state vectors and tokens. Our design was inspired in part by LSTM cells, and it uses LSTM-style gates, but it scales the typical LSTM cell up by several orders of magnitude. Our implementation of recurrence has the same cost in both computation time and parameter count as a conventional transformer layer, but offers dramatically improved perplexity in language modeling tasks over very long sequences. Our model outperforms a long-range Transformer-XL baseline by a wide margin, while running twice as fast. We demonstrate its effectiveness on PG19 (books), arXiv papers, and GitHub source code. Our code has been released as open source. Comment: Update to NeurIPS camera-ready version.
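    The abstract describes the mechanism only at a high level, so the following is a minimal sketch of a block-recurrent cell, assuming PyTorch; the dimensions, state size, and exact gate wiring are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a block-recurrent cell (assumptions: PyTorch, one shared
# feed-forward network, sigmoid input/forget gates on the state update).
import torch
import torch.nn as nn


class BlockRecurrentCell(nn.Module):
    def __init__(self, d_model=512, num_heads=8, num_state=64):
        super().__init__()
        self.d_model = d_model
        self.num_state = num_state
        # Self-attention over the tokens of the current block.
        self.self_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # Cross-attention: tokens attend to the recurrent state, and the state
        # attends to the block's tokens.
        self.token_cross = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.state_cross = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        # LSTM-style gates controlling how the state vectors are overwritten.
        self.input_gate = nn.Linear(d_model, d_model)
        self.forget_gate = nn.Linear(d_model, d_model)

    def init_state(self, batch_size):
        return torch.zeros(batch_size, self.num_state, self.d_model)

    def forward(self, block, state):
        # block: (batch, block_len, d_model); state: (batch, num_state, d_model)
        h, _ = self.self_attn(block, block, block)    # attention within the block
        c, _ = self.token_cross(block, state, state)  # tokens read the state
        tokens_out = block + self.ffn(h + c)
        s, _ = self.state_cross(state, block, block)  # state reads the tokens
        i = torch.sigmoid(self.input_gate(s))
        f = torch.sigmoid(self.forget_gate(s))
        next_state = f * state + i * torch.tanh(s)    # gated recurrent update
        return tokens_out, next_state


# Usage: cut the sequence into blocks and carry the state forward.
cell = BlockRecurrentCell()
state = cell.init_state(batch_size=2)
for block in torch.randn(2, 512, 512).split(128, dim=1):
    out, state = cell(block, state)
```

    Because the state holds a fixed number of vectors, applying the cell block by block keeps cost linear in sequence length, which is the property the abstract highlights.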

    A survey and classification of storage deduplication systems

    Get PDF
    The automatic elimination of duplicate data in a storage system, commonly known as deduplication, is increasingly accepted as an effective technique to reduce storage costs. Thus, it has been applied to different storage types, including archives and backups, primary storage, within solid-state disks, and even to random access memory. Although the general approach to deduplication is shared by all storage types, each poses specific challenges and leads to different trade-offs and solutions. This diversity is often misunderstood, thus underestimating the relevance of new research and development. The first contribution of this paper is a classification of deduplication systems according to six criteria that correspond to key design decisions: granularity, locality, timing, indexing, technique, and scope. This classification identifies and describes the different approaches used for each of them. As a second contribution, we describe which combinations of these design decisions have been proposed and found more useful for the challenges in each storage type. Finally, outstanding research challenges and unexplored design points are identified and discussed. This work is funded by the European Regional Development Fund (ERDF) through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the Fundação para a Ciência e a Tecnologia (FCT; Portuguese Foundation for Science and Technology) within project RED FCOMP-01-0124-FEDER-010156, and by the FCT PhD scholarship SFRH-BD-71372-2010.
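    As a concrete illustration of those design decisions, the toy sketch below (Python) assumes one particular combination: fixed-size chunk granularity, inline timing, a full in-memory fingerprint index, and exact identical-chunk detection; all names are hypothetical.

```python
# Toy inline deduplication store: fixed-size chunks, SHA-256 fingerprints,
# full in-memory index. Real systems vary every one of these choices.
import hashlib

CHUNK_SIZE = 4096  # granularity: fixed-size chunks


class DedupStore:
    def __init__(self):
        self.index = {}   # fingerprint -> chunk bytes (indexing: full index)
        self.files = {}   # file name -> ordered list of fingerprints

    def write(self, name, data):
        refs = []
        for off in range(0, len(data), CHUNK_SIZE):
            chunk = data[off:off + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()  # technique: exact duplicate detection
            if fp not in self.index:                # timing: inline, before the data lands
                self.index[fp] = chunk
            refs.append(fp)
        self.files[name] = refs

    def read(self, name):
        return b"".join(self.index[fp] for fp in self.files[name])


store = DedupStore()
store.write("a.bin", b"x" * 10000)
store.write("b.bin", b"x" * 10000)   # duplicate chunks are stored only once
assert store.read("b.bin") == b"x" * 10000
```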

    Peer to Peer Information Retrieval: An Overview

    Get PDF
    Peer-to-peer technology is widely used for file sharing. In the past decade a number of prototype peer-to-peer information retrieval systems have been developed. Unfortunately, none of these have seen widespread real-world adoption and thus, in contrast with file sharing, information retrieval is still dominated by centralised solutions. In this paper we provide an overview of the key challenges for peer-to-peer information retrieval and the work done so far. We want to stimulate and inspire further research to overcome these challenges. This will open the door to the development and large-scale deployment of real-world peer-to-peer information retrieval systems that rival existing centralised client-server solutions in terms of scalability, performance, user satisfaction and freedom.

    A Nine Month Progress Report on an Investigation into Mechanisms for Improving Triple Store Performance

    No full text
    This report considers the requirement for fast, efficient, and scalable triple stores as part of the effort to produce the Semantic Web. It summarises relevant information in the major background field of Database Management Systems (DBMS), and provides an overview of the techniques currently in use amongst the triple store community. The report concludes that for individuals and organisations to be willing to provide large amounts of information as openly accessible nodes on the Semantic Web, storage and querying of the data must be cheaper and faster than it is currently. Experiences from the DBMS field can be used to maximise triple store performance, and suggestions are provided for lines of investigation in areas of storage, indexing, and query optimisation. Finally, work packages are provided describing expected timetables for further study of these topics.
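    To make the storage and indexing discussion concrete, here is a minimal in-memory triple store sketch in Python using SPO/POS/OSP permutation indexes, one common indexing technique; the report itself surveys a broader range of approaches, and the class and method names here are hypothetical.

```python
# Minimal triple store with three permutation indexes, so lookups with
# different bound terms can all start from an index rather than a full scan.
from collections import defaultdict


class TripleStore:
    def __init__(self):
        self.spo = defaultdict(lambda: defaultdict(set))  # subject -> predicate -> objects
        self.pos = defaultdict(lambda: defaultdict(set))  # predicate -> object -> subjects
        self.osp = defaultdict(lambda: defaultdict(set))  # object -> subject -> predicates

    def add(self, s, p, o):
        self.spo[s][p].add(o)
        self.pos[p][o].add(s)
        self.osp[o][s].add(p)

    def query(self, s=None, p=None, o=None):
        # Choose the index whose leading terms are bound in the query pattern.
        if s is not None and p is not None:
            return [(s, p, x) for x in self.spo[s][p] if o is None or x == o]
        if p is not None:
            return [(x, p, y) for y, subs in self.pos[p].items()
                    for x in subs if o is None or y == o]
        if o is not None:
            return [(x, y, o) for x, preds in self.osp[o].items()
                    for y in preds if s is None or x == s]
        if s is not None:
            return [(s, y, x) for y, objs in self.spo[s].items() for x in objs]
        return [(x, y, z) for x, ps in self.spo.items()
                for y, objs in ps.items() for z in objs]


store = TripleStore()
store.add("ex:Alice", "ex:knows", "ex:Bob")
print(store.query(p="ex:knows"))  # answered from the POS index
```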

    Efficient Cache Attacks on AES, and Countermeasures

    Get PDF
    We describe several software side-channel attacks based on inter-process leakage through the state of the CPU's memory cache. This leakage reveals memory access patterns, which can be used for cryptanalysis of cryptographic primitives that employ data-dependent table lookups. The attacks allow an unprivileged process to attack other processes running in parallel on the same processor, despite partitioning methods such as memory protection, sandboxing, and virtualization. Some of our methods require only the ability to trigger services that perform encryption or MAC using the unknown key, such as encrypted disk partitions or secure network links. Moreover, we demonstrate an extremely strong type of attack, which requires knowledge of neither the specific plaintexts nor ciphertexts and works by merely monitoring the effect of the cryptographic process on the cache. We discuss in detail several attacks on AES and experimentally demonstrate their applicability to real systems, such as OpenSSL and Linux's dm-crypt encrypted partitions (in the latter case, the full key was recovered after just 800 writes to the partition, taking 65 milliseconds). Finally, we discuss a variety of countermeasures which can be used to mitigate such attacks.
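    The first-round attack exploits the fact that an AES T-table index in round one is plaintext_byte XOR key_byte, and that a cache probe reveals only which cache line (a group of table entries) was touched. The Python sketch below simulates that principle rather than performing a real timing measurement; the 64-byte line and 4-byte entry sizes are assumptions, and all names are hypothetical.

```python
# Simulated first-round cache-line leak: the "probe" directly reports which
# T-table line was accessed instead of inferring it from timing.
import random

LINE_BYTES = 64                                # assumed cache line size
ENTRY_BYTES = 4                                # assumed 4-byte T-table entries
ENTRIES_PER_LINE = LINE_BYTES // ENTRY_BYTES   # 16 entries per line

SECRET_KEY_BYTE = 0xA7                         # value the attacker narrows down


def observed_line(plaintext_byte):
    # In round one the table index is p XOR k; line granularity hides the low bits.
    return (plaintext_byte ^ SECRET_KEY_BYTE) // ENTRIES_PER_LINE


candidates = set(range(256))
for _ in range(32):                            # a handful of known plaintexts
    p = random.randrange(256)
    line = observed_line(p)
    candidates = {k for k in candidates if (p ^ k) // ENTRIES_PER_LINE == line}

# Only guesses sharing the key byte's upper nibble remain (16 candidates);
# further leakage (e.g. from later rounds) is needed to pin down the rest.
print(sorted(hex(k) for k in candidates))
```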

    Real-time quality visualization of medical models on commodity and mobile devices

    Get PDF
    This thesis concerns the visualization of medical models on commodity and mobile devices. Mechanisms for medical image acquisition such as MRI, CT, and micro-CT scanners are continuously evolving, to the point of producing volume datasets of large resolutions (> 512^3). As these datasets grow in resolution, their processing and visualization become more and more expensive due to their computational requirements. For this reason, special techniques such as data pre-processing (filtering, construction of multi-resolution structures, etc.) and sophisticated algorithms have to be introduced at different points of the visualization pipeline to achieve the best visual quality without compromising performance. The problem of managing big datasets comes from the fact that we have limited computational resources. Not long ago, the only physicians rendering volumes were radiologists. Nowadays, the outcome of a diagnosis is the data itself, and medical doctors need to render it on commodity PCs (even patients may want to render the data, and the DVDs are commonly accompanied by DICOM viewer software). Furthermore, with the increasing use of technology in daily clinical tasks, small devices such as mobile phones and tablets can fit the needs of medical doctors in some specific areas. Visualizing diagnostic images of patients is more challenging on these devices than on desktop computers, as they generally have more restrictive hardware specifications. The goal of this Ph.D. thesis is the real-time, quality visualization of medium to large medical volume datasets (resolutions >= 512^3 voxels) on mobile phones and commodity devices. To address this problem, we use multi-resolution techniques that downsample the full-resolution datasets to produce coarser representations which are easier to handle. We have focused our efforts on the application of volume visualization in clinical practice, so we have a particular interest in creating solutions that require short pre-processing times to quickly provide specialists with the data, maximize the preservation of features and the visual quality of the final images, achieve high frame rates that allow interactive visualization, and make efficient use of the computational resources. The contributions of this thesis comprise improvements in several stages of the visualization pipeline. The techniques we propose are located in the stages of multi-resolution generation, transfer function design, and the GPU ray casting algorithm itself.
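    As a minimal illustration of the multi-resolution generation stage, the sketch below builds a pyramid of progressively downsampled volumes with NumPy by averaging 2x2x2 voxel blocks; the thesis's actual schemes and filters are more elaborate, and the volume size is kept small here only so the example runs quickly.

```python
# Build a multi-resolution pyramid by repeated 2x downsampling (box filter).
import numpy as np


def build_pyramid(volume, levels=3):
    """Return [full-res, half-res, quarter-res, ...] copies of the volume."""
    pyramid = [volume]
    for _ in range(levels):
        v = pyramid[-1]
        # Average each 2x2x2 block of voxels into a single coarser voxel.
        v = v.reshape(v.shape[0] // 2, 2,
                      v.shape[1] // 2, 2,
                      v.shape[2] // 2, 2).mean(axis=(1, 3, 5))
        pyramid.append(v)
    return pyramid


volume = np.random.rand(64, 64, 64).astype(np.float32)  # stand-in for a CT/MRI dataset
for level, v in enumerate(build_pyramid(volume)):
    print(level, v.shape)  # a renderer picks the level the device can afford
```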


    Positional Delta Trees to reconcile updates with read-optimized data storage

    Get PDF
    We investigate techniques that marry the high read-only analytical query performance of compressed, replicated column storage (“read-optimized” databases) with the ability to handle a high-throughput update workload. Today’s large RAM sizes and the growing gap between sequential and random IO disk throughput bring this once-elusive goal within reach, as it has become possible to buffer enough updates in memory to allow background migration of these updates to disk, where efficient sequential IO is amortized among many updates. Our key goal is that read-only queries always see the latest database state, yet are not (significantly) slowed down by the update processing. To this end, we propose the Positional Delta Tree (PDT), which is designed to minimize the overhead of on-the-fly merging of differential updates into (index) scans on stale disk-based data. We describe the PDT data structure and its basic operations (lookup, insert, delete, modify) and provide a detailed study of their performance. Further, we propose a storage architecture called Replicated Mirrors, which replicates tables in multiple orders, storing each table copy mirrored in both column- and row-wise data formats, and uses PDTs to handle updates. Experiments in the MonetDB/X100 system show that this integrated architecture is able to achieve our main goals.
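    A real PDT is a counted tree tuned for cheap positional lookups, but the flat Python sketch below (all names hypothetical) shows the core idea the abstract describes: keep differential updates keyed by the position they apply to in the stable, read-optimized table, and merge them on the fly during a scan.

```python
# Flat stand-in for a positional delta structure: a sorted list of
# (stable_position, op, value) deltas merged into the scan of the stable table.
import bisect


class PositionalDeltas:
    def __init__(self):
        self.deltas = []                     # kept sorted by stable position

    def insert(self, pos, value):
        bisect.insort(self.deltas, (pos, "ins", value))

    def delete(self, pos):
        bisect.insort(self.deltas, (pos, "del", ""))

    def scan(self, stable_rows):
        """Yield the current table image: stable rows plus pending deltas."""
        d = 0
        for pos, row in enumerate(stable_rows):
            keep = True
            # Apply every delta anchored at this stable position.
            while d < len(self.deltas) and self.deltas[d][0] == pos:
                _, op, value = self.deltas[d]
                d += 1
                if op == "ins":
                    yield value
                else:                        # "del": suppress the stale row
                    keep = False
            if keep:
                yield row
        for _, op, value in self.deltas[d:]: # inserts past the end of the table
            if op == "ins":
                yield value


pdt = PositionalDeltas()
pdt.insert(1, "row between 0 and 1")
pdt.delete(2)
print(list(pdt.scan(["r0", "r1", "r2", "r3"])))
# ['r0', 'row between 0 and 1', 'r1', 'r3']
```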