39 research outputs found

    Partial Replica Location And Selection For Spatial Datasets

    As the size of scientific datasets continues to grow, we will not be able to store enormous datasets on a single grid node, but must distribute them across many grid nodes. The implementation of partial or incomplete replicas, which represent only a subset of a larger dataset, has been an active topic of research. Partial spatial replicas extend this functionality to spatial data, allowing us to distribute a spatial dataset in pieces over several locations. We investigate solutions to the partial spatial replica selection problem. First, we describe and develop two designs for a Spatial Replica Location Service (SRLS), which must return the set of replicas that intersect a query region. Integrating a relational database, a spatial data structure and grid computing software, we build a scalable solution that works well even for several million replicas. In our SRLS, we improve performance by designing an R-tree structure in the backend database and by aggregating several queries into one larger query, which reduces overhead. We also use the Morton space-filling curve during R-tree construction, which improves spatial locality. In addition, we describe R-tree Prefetching (RTP), which effectively utilizes modern multi-processor architectures. Second, we present and implement a fast replica selection algorithm in which a set of partial replicas is chosen from a set of candidates so that retrieval performance is maximized. Using an R-tree-based heuristic algorithm, we achieve O(n log n) complexity for this NP-complete problem. We describe a model of disk access performance that takes filesystem prefetching into account and is sufficiently accurate for spatial replica selection. Making a few simplifying assumptions, we present a fast replica selection algorithm for partial spatial replicas. The algorithm uses a greedy approach that attempts to maximize performance by choosing a collection of replica subsets that allow fast data retrieval by a client machine. Experiments show that the solution found by our algorithm achieves on average at least 91% and 93.4% of the performance of the optimal solution in 4-node and 8-node tests, respectively.
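    The Morton (Z-order) curve mentioned in the abstract maps 2-D coordinates to a 1-D key by interleaving coordinate bits, so spatially close cells tend to receive nearby keys. A minimal sketch of the idea (not the dissertation's implementation; the function name and the 16-bit coordinate width are illustrative assumptions):

```python
def morton_encode(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y (x in the even positions,
    y in the odd positions) to form a Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # bit i of x -> position 2i
        key |= ((y >> i) & 1) << (2 * i + 1)  # bit i of y -> position 2i + 1
    return key

# Sorting replica bounding-box corners (or grid cells) by their Morton
# key before bulk-loading an R-tree keeps spatially close entries in the
# same tree nodes, which is the locality benefit the abstract refers to.
cells = [(x, y) for x in range(4) for y in range(4)]
cells.sort(key=lambda c: morton_encode(*c))
```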

    Data-driven Neuroscience: Enabling Breakthroughs Via Innovative Data Management

    Scientists in all disciplines increasingly rely on simulations to develop a better understanding of the subject they are studying. For example, the neuroscientists we collaborate with in the Blue Brain project have started to simulate the brain on a supercomputer. The level of detail of their models is unprecedented, as they model details at the subcellular level (e.g., the neurotransmitter). This level of detail, however, also leads to a true data deluge, and the neuroscientists have only a few tools to efficiently analyze the data. This demonstration showcases three innovative spatial management solutions that have a substantial impact on computational neuroscience and other disciplines, in that they allow scientists to build, analyze and simulate bigger and more detailed models. More particularly, we visualize the novel query execution strategy of FLAT, an index for the scalable and efficient execution of range queries on increasingly detailed spatial models. FLAT is used to build and analyze models of the brain. We furthermore demonstrate how SCOUT uses previous query results to prefetch spatial data with high accuracy and thereby speeds up the analysis of spatial models. We finally demonstrate TOUCH, a novel in-memory spatial join that speeds up the model building process.

    SCOUT: Prefetching for Latent Structure Following Queries

    Today's scientists are quickly moving from in vitro to in silico experimentation: they no longer analyze natural phenomena in a petri dish, but instead build models and simulate them. Managing and analyzing the massive amounts of data involved in simulations is a major task, yet scientists lack the tools to efficiently work with data of this size. One problem many scientists share is the analysis of the massive spatial models they build. For several types of analysis they need to interactively follow the structures in the spatial model, e.g., the arterial tree, neuron fibers, etc., and issue range queries along the way. Each query takes a long time to execute, and the total time for executing a sequence of queries significantly delays data analysis. Prefetching the spatial data reduces the response time considerably, but known approaches do not prefetch with high accuracy. We develop SCOUT, a structure-aware method for prefetching data along interactive spatial query sequences. SCOUT uses an approximate graph model of the structures involved in past queries and attempts to identify what particular structure the user follows. Our experiments with neuroscience data show that SCOUT prefetches with an accuracy from 71% to 92%, which translates to a speedup of 4x-15x. SCOUT also improves prefetching accuracy on datasets from other scientific domains, such as medicine and biology.
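    As a rough illustration of structure-following prefetching (a simplified sketch, not SCOUT's actual model; the toy graph, `prefetch_next`, and the "don't backtrack" heuristic are all assumptions): represent the structure as a graph of region nodes and, given the last two queried nodes, prefetch the neighbors that continue the path rather than the one just visited.

```python
# Toy structure graph: nodes are region ids, edges connect adjacent
# segments of a structure such as an arterial tree or a neuron fiber.
GRAPH = {
    "a": ["b"],
    "b": ["a", "c", "d"],
    "c": ["b", "e"],
    "d": ["b"],
    "e": ["c"],
}

def prefetch_next(prev: str, current: str) -> list:
    """Given the last two queried nodes, suggest regions to prefetch:
    all neighbors of the current node except the one we came from."""
    return [n for n in GRAPH[current] if n != prev]

# A user following the structure a -> b would next reach c or d,
# so both are prefetched while the current query executes.
print(prefetch_next("a", "b"))  # ['c', 'd']
```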

    On the classification and evaluation of prefetching schemes


    Advancement of Computing on Large Datasets via Parallel Computing and Cyberinfrastructure

    Large datasets require efficient processing, storage and management so that useful information can be extracted for innovation and decision-making. This dissertation demonstrates novel approaches and algorithms using a virtual memory approach, parallel computing and cyberinfrastructure. First, we introduce a tailored user-level virtual memory system for parallel algorithms that can process large raster data files in a desktop computer environment with limited memory. The application area for this portion of the study is the development of parallel terrain analysis algorithms that use multi-threading to take advantage of common multi-core processors for greater efficiency. Second, we present two novel parallel WaveCluster algorithms that perform cluster analysis by using the discrete wavelet transform to reduce large data to coarser representations that are smaller and more easily managed than the original data in size and complexity. Finally, this dissertation demonstrates an HPC gateway service that abstracts away many of the details and complexities involved in the use of HPC systems, including authentication, authorization, and data and job management.
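    The WaveCluster idea above relies on the discrete wavelet transform to coarsen gridded data before clustering. A minimal one-level 2-D Haar approximation (averaging each 2x2 block) sketches that reduction; the function name and the plain-list grid representation are illustrative, not the dissertation's code:

```python
def haar_coarsen(grid):
    """One level of a 2-D Haar approximation: each 2x2 block of the
    input grid is replaced by its average, halving each dimension."""
    rows, cols = len(grid), len(grid[0])
    return [
        [
            (grid[r][c] + grid[r][c + 1] +
             grid[r + 1][c] + grid[r + 1][c + 1]) / 4.0
            for c in range(0, cols, 2)
        ]
        for r in range(0, rows, 2)
    ]

# A 4x4 grid shrinks to 2x2; dense regions remain dense at the coarser
# scale, which is what WaveCluster exploits when it clusters the
# transformed (smaller) representation instead of the raw data.
g = [[4, 4, 0, 0],
     [4, 4, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
print(haar_coarsen(g))  # [[4.0, 0.0], [0.0, 1.0]]
```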

    Flexible multi-layer virtual machine design for virtual laboratory in distributed systems and grids.

    We propose a flexible Multi-layer Virtual Machine (MVM) design intended to improve efficiency in distributed and grid computing and to overcome known problems in traditional virtual machine architectures and those used in distributed and grid systems. This thesis presents a novel approach to building a virtual laboratory to support e-science by adapting MVMs within distributed systems and grids, providing enhanced flexibility and reconfigurability by raising the level of abstraction. The MVM consists of three layers: an OS-level VM, queue VMs, and component VMs. Together, the MVMs provide the virtualized resources, virtualized networks, and reconfigurable components layer for virtual laboratories. We demonstrate how our reconfigurable virtual machine allows software designers and developers to reuse parallel communication patterns. In our framework, virtual machines can be created on demand, and their applications can be distributed at the source-code level, compiled and instantiated at runtime. (Abstract shortened by UMI.) Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .K56. Source: Masters Abstracts International, Volume: 44-03, page: 1405. Thesis (M.Sc.)--University of Windsor (Canada), 2005.

    Application of machine learning techniques to the management and optimization of tile caches for the acceleration of map services in spatial data infrastructures

    The rapid proliferation of Web map services has driven the need for ever more scalable services. In response, tiled map services have emerged as a scalable alternative to traditional map services, enabling caching mechanisms or even serving maps from a collection of pre-generated images. However, the storage requirements and start-up time of these services are often prohibitive when the cartography to be served covers a large geographic area at a high number of scales. These services are therefore usually offered through partial caches that contain only a subset of the cartography. Guaranteeing an acceptable Quality of Service (QoS) requires suitable policies for maintaining and managing these map caches: 1) strategies for the initial population, or seeding, of the cache; 2) algorithms for dynamic loading in response to user requests; 3) cache replacement policies. However, few such strategies are specific to map services. Most strategies applied to these services are borrowed from other domains, such as traditional Web proxies, and do not take into account the spatial component of the map objects they manage. This thesis addresses that gap by designing new algorithms, specific to this application domain, that optimize the performance of map services. Given the large number of objects managed by these caches and their heterogeneity in terms of layers, representation scales, etc., an effort has been made to make the designed strategies automatic or semi-automatic, requiring as little human intervention as possible.
Two novel strategies are proposed for the initial population of a map cache. One uses a descriptive model built from the service's past request logs. The other is based on a predictive model that identifies the geographic phenomena driving user requests, parameterized either through an OLS (Ordinary Least Squares) regression analysis or through an intelligent system based on neural networks. Significant contributions have also been made regarding replacement strategies for these caches. On the one hand, an intelligent system based on neural networks is proposed that estimates future access popularity from certain properties of the managed objects: recency of reference, frequency of reference, and the size of the referenced tile. On the other hand, a strategy named Spatial-LFU is proposed, a variant of Perfect-LFU simplified by exploiting the spatial correlation among requests.
Departamento de Teoría de la Señal y Comunicaciones e Ingeniería Telemática
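    The abstract does not spell out Spatial-LFU's details, but its stated idea (Perfect-LFU simplified by exploiting spatial correlation) might be sketched as follows: instead of keeping one frequency counter per tile, keep one counter per coarser parent tile, so spatially correlated requests share a count. Everything below (the class name, the parent-tile aggregation, the zoom-level arithmetic) is an illustrative assumption, not the thesis's algorithm:

```python
class SpatialLFUCache:
    """Toy tile cache: LFU-style eviction where the popularity of a tile
    (z, x, y) is approximated by a shared counter on its parent tile at
    zoom z - 1, exploiting spatial correlation among requests."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = {}        # (z, x, y) -> tile payload
        self.parent_freq = {}  # (z - 1, x // 2, y // 2) -> request count

    @staticmethod
    def _parent(tile):
        z, x, y = tile
        return (z - 1, x // 2, y // 2)

    def request(self, tile, payload=None):
        p = self._parent(tile)
        self.parent_freq[p] = self.parent_freq.get(p, 0) + 1
        if tile in self.tiles:
            return self.tiles[tile]           # cache hit
        if len(self.tiles) >= self.capacity:  # evict the coldest neighborhood
            victim = min(self.tiles,
                         key=lambda t: self.parent_freq[self._parent(t)])
            del self.tiles[victim]
        self.tiles[tile] = payload
        return payload
```

Compared with Perfect-LFU's per-tile counters, a scheme like this keeps roughly a quarter as many counters, at the cost of coarser popularity estimates.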