132 research outputs found
The impact of spatial data redundancy on SOLAP query performance
Geographic Data Warehouses (GDW) are one of the main technologies used in decision-making processes and spatial analysis, and the literature proposes several conceptual and logical data models for GDW. However, little effort has been focused on studying how spatial data redundancy affects SOLAP (Spatial On-Line Analytical Processing) query performance over GDW. In this paper, we investigate this issue. First, we compare redundant and non-redundant GDW schemas and conclude that redundancy leads to severe performance losses. We then analyze indexing, aiming at improving SOLAP query performance on a redundant GDW. Comparisons of the SB-index approach, the star-join aided by R-tree, and the star-join aided by GiST indicate that the SB-index reduces the query-processing elapsed time by 25% to 99% for SOLAP queries defined over the spatial predicates of intersection, enclosure, and containment and applied to roll-up and drill-down operations. We also investigate the impact of increasing data volume on performance. The increase did not impair the SB-index, which continued to greatly reduce the elapsed time in query processing. Performance tests also show that the SB-index is far more compact than the star-join, requiring at most 0.20% of its volume. Moreover, we propose a specific enhancement of the SB-index to deal with spatial data redundancy, which improved performance by 80% to 91% on redundant GDW schemas.
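The core idea behind an SB-index-style evaluation can be pictured with a minimal sketch (hypothetical and heavily simplified, not the paper's implementation; schema and data invented for illustration): the spatial predicate is resolved once against the minimum bounding rectangles (MBRs) of the spatial dimension members, and the surviving keys then act as a conventional predicate over the fact table.

```python
# Hypothetical, heavily simplified sketch: resolve the spatial predicate
# against dimension-member MBRs once, then filter facts by ordinary keys.
from typing import NamedTuple

class MBR(NamedTuple):
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def intersects(a: MBR, b: MBR) -> bool:
    return not (a.xmax < b.xmin or b.xmax < a.xmin or
                a.ymax < b.ymin or b.ymax < a.ymin)

# One (surrogate key, MBR) entry per spatial dimension member.
sb_entries = [(1, MBR(0, 0, 10, 10)), (2, MBR(26, 26, 30, 30))]
fact_rows = [(1, 500.0), (2, 700.0), (1, 250.0)]  # (spatial key, measure)

window = MBR(5, 5, 25, 25)                         # query window
keys = {k for k, mbr in sb_entries if intersects(mbr, window)}
print(sum(m for k, m in fact_rows if k in keys))   # 750.0: only key 1 hits
```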
Algorithms and Data Structures for Automated Change Detection and Classification of Sidescan Sonar Imagery
During Mine Warfare (MIW) operations, MIW analysts perform change detection by visually comparing historical sidescan sonar imagery (SSI) with recently collected SSI in an attempt to identify objects (which might be explosive mines) placed at sea since the last time the area was surveyed. This dissertation presents a data structure and three algorithms, developed by the author, that are part of an automated change detection and classification (ACDC) system. MIW analysts at the Naval Oceanographic Office are currently using ACDC to reduce the time needed to perform change detection. The introductory chapter gives background on change detection and ACDC, and describes how SSI is produced from raw sonar data. Chapter 2 presents the author's Geospatial Bitmap (GB) data structure, which stores information geographically and is used by all three algorithms; in a polygon-smoothing algorithm, the GB ran 1.3x to 48.4x faster than a sparse matrix data structure. Chapter 3 describes the GB clustering algorithm, the author's repeatable, order-independent clustering method; tests show that the time to cluster a set of points is not affected by the distribution or the order of the points. In Chapter 4, the author presents his real-time computer-aided detection (CAD) algorithm, which automatically detects mine-like objects on the seafloor in SSI; on real SSI data, this GB-based CAD algorithm performs comparably to or better than other, non-real-time CAD algorithms. Chapter 5 presents the author's computer-aided search (CAS) algorithm, which helps MIW analysts locate mine-like features that are geospatially close to previously detected features; compared with a great-circle-distance algorithm, the CAS performs geospatial searching 1.75x faster on large datasets. Finally, the concluding chapter gives details on how the completed ACDC system will function and discusses the author's future research to develop additional algorithms and data structures for ACDC.
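As a rough illustration of what a bitmap keyed by geography might look like (an invented toy, not the author's GB design), the sketch below marks grid cells containing detections and tests membership with bit operations:

```python
# Invented toy, not the author's GB design: a bitmap keyed by geography,
# where each set bit marks a grid cell containing a detected feature.
class GeoBitmap:
    def __init__(self, cell_deg: float = 0.01):  # cell size is arbitrary
        self.cell = cell_deg
        self.rows = {}               # row index -> int used as bit vector

    def _cell(self, lat: float, lon: float):
        return int(lat / self.cell), int(lon / self.cell)

    def set(self, lat: float, lon: float) -> None:
        r, c = self._cell(lat, lon)
        self.rows[r] = self.rows.get(r, 0) | (1 << c)

    def test(self, lat: float, lon: float) -> bool:
        r, c = self._cell(lat, lon)
        return bool(self.rows.get(r, 0) >> c & 1)

bm = GeoBitmap()
bm.set(30.12, 88.56)          # record a detection
print(bm.test(30.12, 88.56))  # True: same cell
print(bm.test(30.40, 88.56))  # False: different cell
```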
Query Workload-Aware Index Structures for Range Searches in 1D, 2D, and High-Dimensional Spaces
Most current database management systems are optimized for single-query execution. Yet queries often come as part of a query workload. There is therefore a need for index structures that can take into consideration the existence of multiple queries in a query workload and efficiently produce accurate results for the entire workload. These index structures should scale to large amounts of data as well as large query workloads.

The main objective of this dissertation is to design scalable index structures that are optimized for range query workloads. Range queries are an important type of query with wide-ranging applications, yet no existing index structures are optimized for the efficient execution of range query workloads, and unique challenges arise for range queries in 1D, 2D, and high-dimensional spaces. In this work, I introduce novel cost models, index selection algorithms, and storage mechanisms that tackle these challenges and efficiently process a given range query workload in 1D, 2D, and high-dimensional spaces. In particular, I introduce the index structures HCS (for 1D spaces), cSHB (for 2D spaces), and PSLSH (for high-dimensional spaces), designed specifically to handle range query workloads and the unique challenges arising from their respective spaces. I experimentally show the effectiveness of these index structures by comparing them with state-of-the-art techniques.
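The notion of a workload-aware cost model can be illustrated with a toy 1D example (invented for illustration, not the dissertation's HCS/cSHB/PSLSH algorithms): given candidate bucket boundaries, the expected cost of a workload is the frequency-weighted number of buckets each range query must touch, so boundaries aligned with frequent query ranges score better.

```python
# Toy workload-aware cost model: score candidate 1D partitions by the
# frequency-weighted number of buckets each range query must scan.
import bisect

def expected_cost(boundaries, workload):
    """boundaries: sorted bucket edges; workload: list of ((lo, hi), freq).
    Ranges are half-open [lo, hi)."""
    cost = 0.0
    for (lo, hi), freq in workload:
        first = bisect.bisect_right(boundaries, lo)
        last = bisect.bisect_left(boundaries, hi)
        cost += freq * (last - first + 1)   # buckets the query overlaps
    return cost

workload = [((10, 20), 0.7), ((50, 90), 0.3)]
print(expected_cost([0, 25, 50, 75, 100], workload))  # 1.3
print(expected_cost([0, 10, 20, 90, 100], workload))  # 1.0: aligned, cheaper
```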
Low-latency, query-driven analytics over voluminous multidimensional, spatiotemporal datasets
Ubiquitous data collection from sources such as remote sensing equipment, networked observational devices, location-based services, and sales tracking has led to the accumulation of voluminous datasets; IDC projects that by 2020 we will generate 40 zettabytes of data per year, while Gartner and ABI estimate 20-35 billion new devices will be connected to the Internet in the same time frame. The storage and processing requirements of these datasets far exceed the capabilities of modern computing hardware, which has led to the development of distributed storage frameworks that can scale out by assimilating more computing resources as necessary. While challenging in its own right, storing and managing voluminous datasets is only the precursor to a broader field of study: extracting knowledge, insights, and relationships from the underlying datasets. The basic building block of this knowledge discovery process is analytic queries, encompassing both query instrumentation and evaluation. This dissertation is centered around query-driven exploratory and predictive analytics over voluminous, multidimensional datasets. Both of these types of analysis represent a higher-level abstraction over classical query models; rather than indexing every discrete value for subsequent retrieval, our framework autonomously learns the relationships and interactions between dimensions in the dataset (including time series and geospatial aspects), and makes the information readily available to users. This functionality includes statistical synopses, correlation analysis, hypothesis testing, probabilistic structures, and predictive models that not only enable the discovery of nuanced relationships between dimensions, but also allow future events and trends to be predicted. This requires specialized data structures and partitioning algorithms, along with adaptive reductions in the search space and management of the inherent trade-off between timeliness and accuracy. The algorithms presented in this dissertation were evaluated empirically on real-world geospatial time-series datasets in a production environment, and are broadly applicable across other storage frameworks.
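One building block behind such query-driven synopses can be sketched in a few lines (a generic illustration, not the dissertation's actual structures): a per-partition summary updated in one pass with Welford's online algorithm, so means and variances are available without retaining the raw observations.

```python
# Generic one-pass statistical synopsis of the kind a partition might
# keep (Welford's online mean/variance; illustrative only).
class Synopsis:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.min, self.max = float("inf"), float("-inf")

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)   # running sum of squares
        self.min, self.max = min(self.min, x), max(self.max, x)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

s = Synopsis()
for reading in [21.3, 22.1, 20.8, 23.5]:   # e.g., sensor observations
    s.update(reading)
print(round(s.mean, 3), round(s.variance, 3), s.min, s.max)
```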
SOLAM: A Novel Approach of Spatial Aggregation in SOLAP Systems
In the context of a data-driven approach aimed at detecting the factors responsible for the transmission of diseases and explaining their emergence or re-emergence, we propose SOLAM (Spatial On-Line Analytical Mining), an extension of Spatial On-Line Analytical Processing (SOLAP) with Spatial Data Mining (SDM) techniques. Our approach integrates EPISOLAP, a system tailored for epidemiological surveillance, with a spatial generalization method that allows the predictive evaluation of health risk in the presence of hazards, with awareness of the vulnerability of the exposed population. The proposed architecture is a single integrated decision-making platform for knowledge discovery from spatial databases. Spatial generalization methods allow exploring the data at different semantic and spatial scales while removing unnecessary dimensions. The method works by selecting and deleting attributes of low importance in data characterization, thus producing zones of homogeneous characteristics that are then merged.
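That last step can be pictured with a small invented example (not the SOLAM implementation): once low-importance attributes are dropped, zones whose remaining characteristics coincide collapse into one generalized zone.

```python
# Illustrative generalization step: drop low-importance attributes, then
# merge zones whose remaining characteristics coincide.
from collections import defaultdict

zones = [
    {"id": "Z1", "humidity": "high", "altitude": "low",  "soil": "clay"},
    {"id": "Z2", "humidity": "high", "altitude": "low",  "soil": "sand"},
    {"id": "Z3", "humidity": "low",  "altitude": "high", "soil": "clay"},
]
low_importance = {"soil"}   # attributes judged irrelevant for the analysis

merged = defaultdict(list)
for z in zones:
    key = tuple((k, v) for k, v in sorted(z.items())
                if k != "id" and k not in low_importance)
    merged[key].append(z["id"])

for traits, ids in merged.items():
    print(dict(traits), "->", ids)   # Z1 and Z2 merge into one zone
```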
Representation and Exploitation of Event Sequences
The Ten Commandments, the thirty best smartphones on the market, the FBI's five most wanted people. Our life is ruled by sequences: sequences of thoughts, of numbers, of events... a history book is nothing more than a compilation of events, and our favorite film is just a sequence of scenes. All of them have something in common: relevant information can be extracted from them. Frequently, by accumulating some datum over the elements of a sequence we reach hidden information (e.g., the number of passengers transported by a bus on a journey is the sum of the passengers who boarded over the sequence of stops made); other times, reordering the elements by some of their characteristics eases access to the elements of interest (e.g., the books published in 2019 can be ordered chronologically, by author, by literary genre, or even by a combination of characteristics); and in every case one seeks to store them in the smallest space possible.
Thus, this thesis proposes technological solutions for the storage and subsequent processing of event sequences, focusing on three fundamental aspects found in any application that needs to manage them: compressed and dynamic storage, aggregation or accumulation over the elements of the sequence, and reordering of the elements by their different characteristics or dimensions.
The first contribution of this work is a compact structure for the dynamic compression of event sequences. It allows any sequence to be compressed in a single pass, that is, it can compress in real time as elements arrive. This contribution is a milestone in the world of compression since, to date, it is the first proposal of a general-purpose variable-to-variable dynamic compressor.
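To make "compressing as elements arrive" tangible, the sketch below uses an LZW-style adaptive dictionary, chosen only because it is a well-known single-pass scheme; the thesis's actual proposal is a different, variable-to-variable compressor.

```python
# One-pass adaptive compression in the LZW style, shown only to
# illustrate compressing as elements arrive; the thesis proposes a
# different, variable-to-variable scheme.
def lzw_stream(symbols):
    """Yield codes for a stream of symbols in a single pass."""
    dictionary = {chr(i): i for i in range(256)}
    current = ""
    for s in symbols:
        if current + s in dictionary:
            current += s
        else:
            yield dictionary[current]
            dictionary[current + s] = len(dictionary)  # learn on the fly
            current = s
    if current:
        yield dictionary[current]

print(list(lzw_stream("abababab")))   # [97, 98, 256, 258, 98]
```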
Regarding aggregation, a data-warehouse-like structure is presented that stores information about any characteristic of the events in a sequence in an aggregated, compact, and accessible way. Following the philosophy of current data warehouses, we avoid repeating cumulative operations and speed up aggregate queries by preprocessing the information and keeping it in this separate structure.
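The precompute-once principle can be illustrated with plain prefix sums over the bus example from above (a toy stand-in, not the thesis's compact structure): any range aggregate then costs two lookups instead of a fresh accumulation.

```python
# Toy stand-in for the aggregation structure: store running totals once,
# so any range aggregate is answered with two lookups.
from itertools import accumulate

boarded = [3, 0, 5, 2, 4, 1]         # passengers boarding at each stop
prefix = [0, *accumulate(boarded)]   # built once, in a single pass

def boarded_between(first_stop, last_stop):
    """Total passengers boarding in stops [first_stop, last_stop]."""
    return prefix[last_stop + 1] - prefix[first_stop]

print(boarded_between(1, 4))   # 0 + 5 + 2 + 4 = 11, no re-accumulation
```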
Finally, this thesis addresses the problem of indexing event sequences considering their different characteristics and possible reorderings. A new approach is presented that, through compact structures, simultaneously keeps the elements of a sequence ordered by different characteristics. Thus, it is possible to consult the information and perform operations on the elements of the sequence under any possible reordering in a simple and efficient way.
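A bare-bones way to picture multiple simultaneous orderings (plain permutation arrays here; the thesis achieves this with compact structures) is to keep one copy of the sequence plus one permutation per characteristic:

```python
# One copy of the sequence plus a permutation per characteristic, instead
# of duplicating the data for every ordering (illustrative only).
books = [("C", "novel", 3), ("A", "essay", 1), ("B", "novel", 2)]
#        (author, genre, publication order)

by_author = sorted(range(len(books)), key=lambda i: books[i][0])
by_genre = sorted(range(len(books)), key=lambda i: books[i][1])

print([books[i] for i in by_author])  # A, B, C
print([books[i] for i in by_genre])   # essays before novels
```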
Compact data structures for large and complex datasets
In this thesis, we study the problem of processing large and complex collections of data, presenting new data structures and algorithms that allow us to store and analyze them efficiently. We focus on three main domains: processing of multidimensional data, representation of spatial information, and analysis of scientific data.

The common nexus is the use of compact data structures, which combine, in a single data structure, a compressed representation of the data and the structures needed to access it. The goal is to manage data directly in compressed form, and in this way keep data compressed at all times, even in main memory. This yields two benefits: we can manage larger datasets in main memory, and we make better use of the memory hierarchy.
In the first part, we propose a compact data structure for multidimensional databases where the domain of each dimension is hierarchical. It allows efficient queries of aggregate information at different levels of each dimension. A typical application environment for our solution would be an OLAP system.
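The kind of query this supports can be pictured with a toy roll-up (plain dictionaries invented for illustration; the thesis answers such queries over a compact structure):

```python
# Toy roll-up along two hierarchical dimensions; plain dictionaries
# stand in for the thesis's compact structure.
from collections import defaultdict

# Fact rows: (city, month, sales); hierarchies: city->country, month->year.
facts = [("Boston", "2016-01", 10), ("Boston", "2016-02", 7),
         ("Lyon", "2016-01", 5), ("Lyon", "2017-03", 8)]
city_to_country = {"Boston": "USA", "Lyon": "France"}

def roll_up(city_level, month_level):
    """Aggregate sales at the chosen level of each dimension."""
    out = defaultdict(int)
    for city, month, sales in facts:
        c = city if city_level == "city" else city_to_country[city]
        m = month if month_level == "month" else month[:4]
        out[(c, m)] += sales
    return dict(out)

print(roll_up("country", "year"))
# {('USA', '2016'): 17, ('France', '2016'): 5, ('France', '2017'): 8}
```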
Second, we focus on the representation of spatial information, specifically raster data, which are commonly used in geographic information systems (GIS) to represent spatial attributes (such as the altitude of a terrain, the average temperature, etc.). The new method answers several typical spatial queries with better response times than the state of the art, while saving space in both main memory and disk. We also present a framework to run a spatial join between raster and vector datasets that uses the compact data structure presented earlier in this part of the thesis.
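A representative query of this kind, shown naively below for illustration (the thesis answers it directly over the compressed raster), selects the cells of a window whose values fall in a range:

```python
# Naive form of a typical raster query: cells inside a window whose
# value lies in a range (plain arrays here, invented data).
raster = [[620, 640, 655],
          [610, 700, 720],
          [605, 690, 710]]   # e.g., terrain altitude per cell

def window_range_query(r0, c0, r1, c1, lo, hi):
    """Cells in window [r0..r1] x [c0..c1] with value in [lo, hi]."""
    return [(r, c, raster[r][c])
            for r in range(r0, r1 + 1)
            for c in range(c0, c1 + 1)
            if lo <= raster[r][c] <= hi]

print(window_range_query(0, 0, 2, 1, 600, 650))
# [(0, 0, 620), (0, 1, 640), (1, 0, 610), (2, 0, 605)]
```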
Finally, we present a solution for the computation of empirical moments from a set of trajectories of a continuous-time stochastic process observed over a given period of time; the empirical autocovariance function is an example of such an operation. We propose a method that compresses sequences of floating-point numbers representing Brownian motion trajectories, although it can be used in other similar areas. In addition, we introduce a new algorithm for the calculation of the autocovariance that uses a single trajectory at a time, instead of loading the whole dataset, reducing memory consumption during the calculation.
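The one-trajectory-at-a-time idea can be sketched as follows (a simplified illustration with invented toy data; the thesis additionally keeps the trajectories compressed): only running sums are maintained, so memory stays constant in the number of trajectories.

```python
# Empirical autocovariance computed one trajectory at a time: only
# running sums are kept, never the full dataset (simplified sketch).
def autocovariance(trajectories, s, t):
    """C(s,t) over trajectories observed at fixed time indices s and t."""
    n = sum_s = sum_t = sum_st = 0.0
    for traj in trajectories:        # may be a generator: one at a time
        xs, xt = traj[s], traj[t]
        n += 1
        sum_s += xs
        sum_t += xt
        sum_st += xs * xt
    return sum_st / n - (sum_s / n) * (sum_t / n)

paths = ([0.0, 0.1 * k, 0.2 * k] for k in range(1, 5))  # toy trajectories
print(autocovariance(paths, 1, 2))   # ≈ 0.025
```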