The Ubiquitous B-tree: Volume II
Major developments relating to the B-tree from early 1979 through the fall of 1986 are presented. This updates the well-known article, The Ubiquitous B-Tree by Douglas Comer (Computing Surveys, June 1979). After a basic overview of B and B+ trees, recent research is cited as well as descriptions of nine B-tree variants developed since Comer's article. The advantages and disadvantages of each variant over the basic B-tree are emphasized. Also included are a discussion of concurrency control issues in B-trees and a speculation on the future of B-trees.
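As a rough illustration of the lookup procedure shared by the basic B-tree and the variants the survey covers, here is a minimal Python sketch; the `Node` layout and recursive search are illustrative assumptions, not any particular variant from the article:

```python
from bisect import bisect_left

class Node:
    """A minimal B-tree node: sorted keys plus, for internal nodes, children.

    In a real B-tree each node holds up to some fixed fan-out of keys and
    children; that bookkeeping is omitted here for clarity.
    """
    def __init__(self, keys, children=None):
        self.keys = keys          # sorted list of keys in this node
        self.children = children  # None for leaf nodes

def btree_search(node, key):
    """Return True if key is stored in the subtree rooted at node."""
    i = bisect_left(node.keys, key)
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if node.children is None:     # reached a leaf without finding the key
        return False
    return btree_search(node.children[i], key)  # descend into the i-th child

# A tiny hand-built tree: root [20] with two leaf children.
root = Node([20], [Node([5, 10]), Node([30, 40])])
```

In a B+ tree, by contrast, all records live in the leaves and internal nodes hold only separator keys, so the search would always descend to a leaf before answering.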
Search and Rescue under the Forest Canopy using Multiple UAVs
We present a multi-robot system for GPS-denied search and rescue under the
forest canopy. Forests are particularly challenging environments for
collaborative exploration and mapping, in large part due to the existence of
severe perceptual aliasing which hinders reliable loop closure detection for
mutual localization and map fusion. Our proposed system features unmanned
aerial vehicles (UAVs) that perform onboard sensing, estimation, and planning.
When communication is available, each UAV transmits compressed tree-based
submaps to a central ground station for collaborative simultaneous localization
and mapping (CSLAM). To overcome high measurement noise and perceptual
aliasing, we use the local configuration of a group of trees as a distinctive
feature for robust loop closure detection. Furthermore, we propose a novel
procedure based on cycle consistent multiway matching to recover from incorrect
pairwise data associations. The returned global data association is guaranteed
to be cycle consistent, and is shown to improve both precision and recall
compared to the input pairwise associations. The proposed multi-UAV system is
validated both in simulation and during real-world collaborative exploration
missions at NASA Langley Research Center. Comment: IJRR revision
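The cycle-consistency idea behind the multiway matching step can be sketched as follows. This is only a toy consistency check over hypothetical landmark-ID dictionaries, not the paper's actual recovery procedure: it verifies that composing the A→B and B→C associations reproduces the direct A→C association.

```python
def composed(ab, bc):
    """Compose two pairwise associations (dicts mapping landmark IDs)."""
    return {a: bc[b] for a, b in ab.items() if b in bc}

def cycle_consistent(ab, bc, ac):
    """Check cycle consistency over the triangle A -> B -> C -> A.

    The associations are consistent if, wherever both are defined,
    going A -> B -> C agrees with the direct A -> C match.
    """
    comp = composed(ab, bc)
    return all(ac.get(a) == c for a, c in comp.items() if a in ac)

# Hypothetical tree-landmark matches between three submaps.
ab = {1: 'x', 2: 'y'}        # submap A landmarks -> submap B landmarks
bc = {'x': 10, 'y': 20}      # submap B landmarks -> submap C landmarks
```

An inconsistent triangle (e.g. A→C mapping landmark 2 to 99 instead of 20) signals an incorrect pairwise association that the multiway matching procedure would reject.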
evtree: Evolutionary Learning of Globally Optimal Classification and Regression Trees in R
Commonly used classification and regression tree methods like the CART algorithm are recursive partitioning methods that build the model in a forward stepwise search. Although this approach is known to be an efficient heuristic, the results of recursive tree methods are only locally optimal, as splits are chosen to maximize homogeneity at the next step only. An alternative way to search over the parameter space of trees is to use global optimization methods like evolutionary algorithms. This paper describes the "evtree" package, which implements an evolutionary algorithm for learning globally optimal classification and regression trees in R. Computationally intensive tasks are fully computed in C++ while the "partykit" (Hothorn and Zeileis 2011) package is leveraged for representing the resulting trees in R, providing unified infrastructure for summaries, visualizations, and predictions. "evtree" is compared to "rpart" (Therneau and Atkinson 1997), the open-source CART implementation, and conditional inference trees ("ctree", Hothorn, Hornik, and Zeileis 2006). The usefulness of "evtree" is illustrated in a textbook customer classification task and a benchmark study of predictive accuracy, in which "evtree" achieved results at least similar to, and in most cases better than, the recursive algorithms "rpart" and "ctree". Keywords: machine learning, classification trees, regression trees, evolutionary algorithms, R
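To illustrate the global-search idea behind evtree, here is a toy Python sketch that evolves a single split threshold for a decision stump via mutation and survivor selection. This is only an analogy: evtree's actual variation operators act on whole trees and are implemented in C++, and every name below is illustrative.

```python
import random

def stump_accuracy(threshold, xs, ys):
    """Fraction of points classified correctly by the rule: x <= threshold -> class 0."""
    preds = [0 if x <= threshold else 1 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

def evolve_threshold(xs, ys, pop_size=20, generations=50, seed=0):
    """Toy evolutionary search: mutate candidate thresholds, keep the fittest.

    Unlike a greedy one-step split search, the population explores the whole
    range of thresholds, which is the sense in which the search is 'global'.
    """
    rng = random.Random(seed)
    lo, hi = min(xs), max(xs)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # rank by fitness and keep the better half as survivors
        pop.sort(key=lambda t: -stump_accuracy(t, xs, ys))
        survivors = pop[: pop_size // 2]
        # offspring are Gaussian-mutated copies of the survivors
        pop = survivors + [t + rng.gauss(0, (hi - lo) * 0.1) for t in survivors]
    return max(pop, key=lambda t: stump_accuracy(t, xs, ys))

# Two well-separated classes: any threshold between 3 and 10 is perfect.
xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
```

For a single split this is overkill (a greedy scan finds the optimum), but for full trees the split chosen at the root constrains all later splits, which is exactly where a population-based global search can beat forward stepwise partitioning.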
The PGM-index: a multicriteria, compressed and learned approach to data indexing
The recent introduction of learned indexes has shaken the foundations of the
decades-old field of indexing data structures. Combining, or even replacing,
classic design elements such as B-tree nodes with machine learning models has
proven to give outstanding improvements in the space footprint and time
efficiency of data systems. However, these novel approaches are based on
heuristics, thus they lack any guarantees both in their time and space
requirements. We propose the Piecewise Geometric Model index (shortly,
PGM-index), which achieves guaranteed I/O-optimality in query operations,
learns an optimal number of linear models, and its peculiar recursive
construction makes it a purely learned data structure, rather than a hybrid of
traditional and learned indexes (such as RMI and FITing-tree). We show that the
PGM-index improves the space of the FITing-tree by 63.3% and of the B-tree by
more than four orders of magnitude, while achieving their same or even better
query time efficiency. We complement this result by proposing three variants of
the PGM-index. First, we design a compressed PGM-index that further reduces its
space footprint by exploiting the repetitiveness at the level of the learned
linear models it is composed of. Second, we design a PGM-index that adapts
itself to the query distribution, resulting in the first known distribution-aware learned index. Finally, given its flexibility in the offered space-time trade-offs, we propose the multicriteria PGM-index, which efficiently auto-tunes itself in a few seconds, over hundreds of millions of keys, to the possibly evolving space-time constraints imposed by the application of use.
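The core piecewise-linear-model idea can be sketched as follows. The greedy segmentation below is a deliberate simplification (the PGM-index computes an optimal number of ε-bounded segments with a streaming algorithm, and recurses over the segment endpoints); all names and the error bound are illustrative.

```python
from bisect import bisect_right

EPS = 2  # maximum allowed error, in positions, of each linear model

def build_segments(keys, eps=EPS):
    """Greedily fit linear models key -> position with error at most eps.

    Each segment is (first_key, first_pos, slope). Keys must be sorted
    and distinct.
    """
    segments = []
    i = 0
    while i < len(keys):
        j = i + 1
        slope = 0.0
        while j < len(keys):
            s = (j - i) / (keys[j] - keys[i])  # candidate slope through endpoints
            # accept only if the model covers every key in [i, j] within eps
            if all(abs(i + s * (keys[k] - keys[i]) - k) <= eps
                   for k in range(i, j + 1)):
                slope = s
                j += 1
            else:
                break
        segments.append((keys[i], i, slope))
        i = j
    return segments

def lookup(keys, segments, key, eps=EPS):
    """Predict the position of key, then scan the eps-sized window around it."""
    idx = max(0, bisect_right([s[0] for s in segments], key) - 1)
    first_key, first_pos, slope = segments[idx]
    pred = round(first_pos + slope * (key - first_key))
    lo = max(0, pred - eps)
    hi = min(len(keys), pred + eps + 1)
    for p in range(lo, hi):
        if keys[p] == key:
            return p
    return None

keys = [2, 3, 5, 8, 13, 21, 34, 55]
segs = build_segments(keys)
```

Since every model is ε-bounded, a query touches only an O(ε)-sized window regardless of how many keys the index covers, which is the source of the I/O-optimality guarantee the abstract refers to.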
We remark to the reader that this paper is an extended and improved version of our previous paper titled "Superseding traditional indexes by orchestrating learning and geometry" (arXiv:1903.00507).