Performance comparison of point and spatial access methods
In the past few years a large number of multidimensional point access methods, also called
multiattribute index structures, have been suggested, all of them claiming good performance. Since no
performance comparison of these structures under arbitrary (strongly correlated, nonuniform, in short
"ugly") data distributions and under various types of queries has been performed, database
researchers and designers were hesitant to use any of these new point access methods. As shown in
a recent paper, such point access methods are not only important in traditional database applications.
In new applications such as CAD/CIM and geographic or environmental information systems, access
methods for spatial objects are needed. As recently shown, such access methods are based on point
access methods in terms of functionality and performance. Our performance comparison naturally
consists of two parts. In part I we compare multidimensional point access methods, whereas in
part II spatial access methods for rectangles are compared. In part I we present a survey and
classification of existing point access methods. Then we carefully select the following four methods
for implementation and performance comparison under seven different data files (distributions) and
various types of queries: the 2-level grid file, the BANG file, the hB-tree and a new scheme, called
the BUDDY hash tree. To our surprise, one method, the BUDDY hash tree, emerged as the clear
winner: it exhibits at least 20% better average performance than its competitors and is
robust under ugly data and queries. In part II we compare spatial access methods for rectangles.
After presenting a survey and classification of existing spatial access methods we carefully selected
the following four methods for implementation and performance comparison under six different data
files (distributions) and various types of queries: the R-tree, the BANG file, PLOP hashing and the
BUDDY hash tree. The results showed two winners: the BANG file and the BUDDY hash tree.
This comparison is a first step towards a standardized testbed or benchmark. We offer our data and
query files to each designer of a new point or spatial access method so that they can run their
implementation in our testbed.
Heuristics and multi-dimensional physical database design
An expert system approach has recently been used in parameter selection for VSAM (Virtual Storage Access Method) file organisation [AL87a]. This system has been developed to aid in-house users in applying relevant facts and heuristics to optimise VSAM file design. Multi-dimensional physical
database design is more sophisticated and complicated than VSAM file design, and the expert system approach can be applied to select and tune physical database designs for various applications.
A great deal of work has been done in developing diverse algorithms or access methods to organise automated information on secondary storage devices [FA86b] [FR86] [FR88] [GU84] [HU88a] [KS88a] [KS86] [L087] [NI84] [OR88b] [OR86] [OT85] [R081], etc. However, little work has been done to enable designers to select an access method which matches a projected application profile (features and requirements) against the perceived strengths and weaknesses of candidate algorithms. This thesis considers a number of grid-based algorithms and makes expert assessments of each according to its strengths and weaknesses. It analyses the features of various access methods and, using expert knowledge, matches the features of a range of m-d (multi-dimensional) algorithms with the corresponding characteristics of an application. The knowledge-based system presented in this thesis can be applied either manually or in computerised form to give a systematic approach to m-d algorithm selection. A system is proposed to (1) heuristically select an initial algorithm; (2) describe how the selection process is evaluated against actual m-d algorithm performance; and (3) show how the results of the evaluation can be used to refine the expert knowledge embodied in the selection system. Heuristic assessments are given for several m-d access algorithms. Examples are
presented to show how these heuristics are used to select an m-d access algorithm for a specific application. It is reasonable to suppose that the initial heuristic assessments are not entirely accurate. A tuning mechanism for the system heuristics is given in section 4.9; the system selection process is thereby able to adjust to real-world results. Finally, we present a simple example to illustrate how the proposed system works.
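The selection step (1) described above can be illustrated with a minimal weighted-scoring sketch. Everything here is a hypothetical placeholder, not content from the thesis: the method names, the application characteristics, and the 1-5 heuristic scores are invented to show the shape of a knowledge-based matching process.

```python
# Hypothetical heuristic scores (1-5) rating how well each access
# method handles a given application characteristic. These values
# are illustrative assumptions, not assessments from the thesis.
HEURISTICS = {
    "grid file": {"uniform data": 5, "skewed data": 2, "range queries": 4},
    "BANG file": {"uniform data": 4, "skewed data": 4, "range queries": 3},
    "hB-tree":   {"uniform data": 3, "skewed data": 5, "range queries": 4},
}

def select_method(profile):
    """Pick the access method whose heuristic scores best match the
    application profile, given as a weight per characteristic."""
    def score(method):
        # Weighted sum of the method's scores over the profile's features.
        return sum(weight * HEURISTICS[method].get(feature, 0)
                   for feature, weight in profile.items())
    return max(HEURISTICS, key=score)

# A workload dominated by skewed data, with some range queries,
# would steer the selector towards the method scoring best on skew.
choice = select_method({"skewed data": 3, "range queries": 1})
```

A real system along the thesis's lines would then compare such initial selections against measured performance and adjust the score table, which is the tuning loop of steps (2) and (3).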
Accurate sampling-based cardinality estimation for complex graph queries
Accurately estimating the cardinality (i.e., the number of answers) of complex queries plays a central role in
database systems. This problem is particularly difficult in graph databases, where queries often involve a large
number of joins and self-joins. Recently, Park et al. [54] surveyed seven state-of-the-art cardinality estimation
approaches for graph queries. The results of their extensive empirical evaluation show that a sampling method
based on the WanderJoin online aggregation algorithm [46] consistently offers superior accuracy.
We extended the framework by Park et al. [54] with three additional datasets and repeated their experiments.
Our results showed that WanderJoin is indeed very accurate, but it can often take a large number of samples
and thus be very slow. Moreover, when queries are complex and data distributions are skewed, it often fails
to find valid samples and estimates the cardinality as zero. Finally, complex graph queries often go beyond
simple graph matching and involve arbitrary nesting of relational operators such as disjunction, difference,
and duplicate elimination. None of the methods considered by Park et al. [54] is applicable to such queries.
In this paper we present a novel approach for estimating the cardinality of complex graph queries. Our
approach is inspired by WanderJoin, but, unlike all approaches known to us, it can process complex queries with
arbitrary operator nesting. Our estimator is strongly consistent, meaning that the average of repeated estimates
converges with probability one to the actual cardinality. We present optimisations of the basic algorithm
that aim to reduce the chance of producing zero estimates and improve accuracy. We show empirically that
our approach is both accurate and quick on complex queries and large datasets. Finally, we discuss how to
integrate our approach into a simple dynamic programming query planner, and we confirm empirically that
our planner produces high-quality plans that can significantly reduce end-to-end query evaluation times.
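The core idea behind a WanderJoin-style estimator, as summarised in the abstract, is that each random walk through the query pattern is weighted by the inverse of its sampling probability, so the average over repeated walks converges to the true cardinality. The sketch below illustrates this on a tiny invented edge-labelled graph and a hypothetical 2-hop pattern; it is a simplified illustration of the general technique, not the paper's algorithm.

```python
import random

# Toy edge-labelled graph (invented data): adjacency lists per label.
edges_follows = {"a": ["b", "c"], "b": ["c"], "c": []}
edges_likes = {"b": ["x"], "c": ["x", "y"]}

def wander_join_estimate(n_walks=20000, seed=0):
    """Estimate the cardinality of the 2-hop pattern
    (u)-follows->(v)-likes->(w) by inverse-probability-weighted
    random walks, in the style of WanderJoin."""
    rng = random.Random(seed)
    starts = list(edges_follows)  # candidate bindings for u
    total = 0.0
    for _ in range(n_walks):
        u = rng.choice(starts)            # sampled with prob 1/len(starts)
        vs = edges_follows[u]
        if not vs:
            continue                      # failed walk contributes 0
        v = rng.choice(vs)                # prob 1/len(vs)
        ws = edges_likes.get(v, [])
        if not ws:
            continue                      # failed walk contributes 0
        rng.choice(ws)                    # prob 1/len(ws)
        # Weight the completed walk by the inverse of its probability.
        total += len(starts) * len(vs) * len(ws)
    return total / n_walks
```

On this graph the pattern has exactly 5 answers, and the estimator's mean converges to that value. The abstract's observation about zero estimates is also visible here: walks that reach a node with no outgoing matching edge contribute nothing, and on skewed data most walks can fail this way.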
LIPIcs, Volume 261, ICALP 2023, Complete Volume
LIPIcs, Volume 251, ITCS 2023, Complete Volume