AT-GIS: highly parallel spatial query processing with associative transducers
Users in many domains, including urban planning, transportation, and environmental science, want to execute analytical queries over continuously updated spatial datasets. Current solutions for large-scale spatial query processing either rely on extensions to RDBMSs, which entail expensive loading and indexing phases when the data changes, or on distributed map/reduce frameworks running on resource-hungry compute clusters. Both solutions struggle with the sequential bottleneck of parsing complex, hierarchical spatial data formats, which frequently dominates query execution time. Our goal is to fully exploit the parallelism offered by modern multicore CPUs for parsing and query execution, thus providing the performance of a cluster with the resources of a single machine. We describe AT-GIS, a highly parallel spatial query processing system that scales linearly to a large number of CPU cores. AT-GIS integrates the parsing and querying of spatial data using a new computational abstraction called associative transducers (ATs). ATs can form a single data-parallel pipeline for computation without requiring the spatial input data to be split into logically independent blocks. Using ATs, AT-GIS can execute spatial query operators in parallel on the raw input data in multiple formats, without any pre-processing. On a single 64-core machine, AT-GIS provides 3× the performance of an 8-node Hadoop cluster with 192 cores for containment queries, and 10× for aggregation queries.
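The containment queries benchmarked above follow a map/merge structure: each worker filters a slice of the records against a query window and partial counts are summed. The sketch below illustrates only that structure over in-memory points; the actual AT mechanism, which parses raw files in parallel, is not shown, and the bounding-box coordinates and function names are hypothetical.

```python
# Illustrative sketch: a data-parallel containment count over point records.
# This shows the map/merge shape of the query, not AT-GIS's associative
# transducers; BBOX and all names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

BBOX = (-74.3, 40.5, -73.7, 40.9)  # query window: (min_x, min_y, max_x, max_y)

def count_contained(chunk, bbox=BBOX):
    # Count the points of one chunk that fall inside the query window.
    min_x, min_y, max_x, max_y = bbox
    return sum(1 for x, y in chunk if min_x <= x <= max_x and min_y <= y <= max_y)

def parallel_containment(points, workers=4):
    # Split the records into one chunk per worker, then merge partial counts.
    chunks = [points[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(count_contained, chunks))
```

Because containment counting is associative, the per-chunk results can be merged in any order, which is what makes this query embarrassingly parallel on a multicore machine.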
A Grammar for Reproducible and Painless Extract-Transform-Load Operations on Medium Data
Many interesting data sets available on the Internet are of a medium size: too big to fit into a personal computer's memory, but not so large that they won't fit comfortably on its hard disk. In the coming years, data sets of this magnitude will inform vital research in a wide array of application domains. However, due to a variety of constraints they are cumbersome to ingest, wrangle, analyze, and share in a reproducible fashion. These obstructions hamper thorough peer review and thus disrupt the forward progress of science. We propose a predictable and pipeable framework for R (the state-of-the-art statistical computing environment) that leverages SQL (the venerable database architecture and query language) to make reproducible research on medium data a painless reality.
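The pattern the abstract describes, keeping medium data on disk in a SQL store while scripting the extract-transform-load steps, can be sketched in a few lines. This is a hedged illustration using Python's built-in sqlite3, not the paper's R framework; the table and column names are invented for the example.

```python
# Hypothetical sketch of an extract-transform-load pipeline backed by SQL.
# sqlite3 stands in for the paper's R/SQL framework; the schema is invented.
import csv
import io
import sqlite3

def etl(raw_csv, db_path=":memory:"):
    # extract: parse the raw text; transform: coerce field types;
    # load: persist into a SQL table that need not fit in memory.
    rows = [(r["city"], int(r["pop"])) for r in csv.DictReader(io.StringIO(raw_csv))]
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS cities (city TEXT, pop INTEGER)")
    con.executemany("INSERT INTO cities VALUES (?, ?)", rows)
    con.commit()
    return con

con = etl("city,pop\nOslo,700000\nBergen,280000")
total = con.execute("SELECT SUM(pop) FROM cities").fetchone()[0]
```

Pointing `db_path` at a file instead of `:memory:` keeps the loaded data on disk, so downstream analyses can query it repeatedly without re-running the ingest step, which is the reproducibility benefit the abstract argues for.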
The use of alternative data models in data warehousing environments
Data Warehouses are increasing their data volume at an accelerated rate; high disk space consumption, slow query response times, and complex database administration are common problems in these environments. The lack of a proper data model and an adequate architecture specifically targeted at these environments is the root cause of these problems. Inefficient management of stored data includes duplicate values at the column level and poor management of data sparsity, which derives from low data density and affects the final size of Data Warehouses. It has been demonstrated that the Relational Model and relational technology are not the best techniques for managing duplicates and data sparsity. The novelty of this research is to compare several data models with respect to their data density and data sparsity management in order to optimise Data Warehouse environments. The Binary-Relational, the Associative/Triple Store, and the Transrelational models have been investigated, and based on the research results a novel Alternative Data Warehouse Reference architectural configuration has been defined. For the Transrelational model, no database implementation existed; it was therefore necessary to develop an instantiation of its storage mechanism, and as far as could be determined this is the first public-domain instantiation of the storage mechanism for the Transrelational model available.
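The column-level duplication the abstract criticises is commonly attacked by storing each distinct value once and replacing the column with small integer codes. The sketch below shows that generic dictionary-encoding idea only; it is not an implementation of the Transrelational or Binary-Relational storage mechanisms, and the function names are invented.

```python
# Hedged sketch: dictionary-encoding a column so duplicate values are
# stored once. Generic illustration only, not the Transrelational model.
def dict_encode(column):
    # Each distinct value is kept once in `values`; the column itself
    # becomes a list of integer codes into that dictionary.
    values, codes, index = [], [], {}
    for v in column:
        if v not in index:
            index[v] = len(values)
            values.append(v)
        codes.append(index[v])
    return values, codes

def dict_decode(values, codes):
    # Reconstruct the original column from the dictionary and codes.
    return [values[c] for c in codes]
```

For a low-cardinality column (few distinct values repeated across many rows), the dictionary plus codes is far smaller than the raw column, which is the data-density gain the compared models aim for.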
Integrating analytics with relational databases
The database research community has made tremendous strides in developing powerful database engines that allow for efficient analytical query processing. However, these powerful systems have gone largely unused by analysts and data scientists. This poor adoption is caused primarily by the state of database-client integration. In this thesis we attempt to overcome this challenge by investigating how we can facilitate efficient and painless integration of analytical tools and relational database management systems. We focus our investigation on the three primary methods for database-client integration: client-server connections, in-database processing, and embedding the database inside the client application.
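The third integration method listed, embedding the database inside the client application, can be illustrated with Python's built-in sqlite3, where the engine runs in the same process as the analysis code and no client-server round trips occur. This is a stand-in example; the thesis's own system choices are not shown.

```python
# Illustration of an embedded database engine: sqlite3 runs inside the
# client process, so queries avoid client-server protocol overhead.
import sqlite3

con = sqlite3.connect(":memory:")  # the engine lives in this process
con.execute("CREATE TABLE t (v INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

# Aggregation happens inside the embedded engine; only the scalar result
# crosses into the client's Python objects.
mean = con.execute("SELECT AVG(v) FROM t").fetchone()[0]
```

Pushing the aggregation into the engine, rather than fetching all rows and averaging in Python, is a small instance of the transfer-cost trade-off that distinguishes the three integration methods.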