
    Improved Incremental Randomized Delaunay Triangulation

    We propose a new data structure to compute the Delaunay triangulation of a set of points in the plane. It combines good worst-case complexity, fast behavior on real data, and small memory occupation. The location structure is organized into several levels. The lowest level consists of the triangulation itself; each higher level contains the triangulation of a small sample of the level below. Point location is done by marching in a triangulation to determine the nearest neighbor of the query at that level; the march then restarts from that neighbor at the level below. Using a small sample (3%) keeps memory occupation low, while the march and the use of the nearest neighbor to change levels locate the query quickly. Comment: 19 pages, 7 figures. Proc. 14th Annu. ACM Sympos. Comput. Geom., 106--115, 1998
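    As a rough illustration of this location structure, the sketch below builds a hierarchy of Delaunay triangulations over ~3% random samples and answers a query with a greedy nearest-neighbour walk at each level, using the result as the starting point one level down. It assumes NumPy and SciPy; the names build_hierarchy and locate_nearest are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_hierarchy(points, sample_ratio=0.03, min_size=20, seed=0):
    """Level 0 is the full point set; each higher level triangulates a ~3%
    random sample of the level below and remembers the sampled indices so a
    vertex can be followed down one level."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    levels = [(pts, Delaunay(pts), None)]
    while sample_ratio * len(levels[-1][0]) >= min_size:
        below = levels[-1][0]
        idx = np.flatnonzero(rng.random(len(below)) < sample_ratio)
        if len(idx) < 3:                      # a triangulation needs >= 3 points
            break
        levels.append((below[idx], Delaunay(below[idx]), idx))
    return levels

def _walk_nearest(pts, tri, query, start):
    """March along Delaunay edges: repeatedly move to the neighbour closest
    to the query; a local minimum in a Delaunay triangulation is the nearest vertex."""
    indptr, indices = tri.vertex_neighbor_vertices
    cur, cur_d = start, np.sum((pts[start] - query) ** 2)
    while True:
        neigh = indices[indptr[cur]:indptr[cur + 1]]
        d = np.sum((pts[neigh] - query) ** 2, axis=1)
        best = int(np.argmin(d))
        if d[best] >= cur_d:
            return cur
        cur, cur_d = int(neigh[best]), d[best]

def locate_nearest(levels, query):
    """Descend the hierarchy; the nearest vertex found at one level seeds
    the walk at the level below. Returns a level-0 point index."""
    query = np.asarray(query, dtype=float)
    hint = 0
    for pts, tri, down in reversed(levels):
        hint = _walk_nearest(pts, tri, query, hint)
        if down is not None:
            hint = int(down[hint])            # same point, re-indexed one level down
    return hint

# Example: nearest neighbour of (0.5, 0.5) among 20,000 random points.
pts = np.random.default_rng(1).random((20000, 2))
levels = build_hierarchy(pts)
print(locate_nearest(levels, (0.5, 0.5)))
```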

    Analysis of approximate nearest neighbor searching with clustered point sets

    We present an empirical analysis of data structures for approximate nearest neighbor searching. We compare the well-known optimized kd-tree splitting method against two alternative splitting methods. The first, called the sliding-midpoint method, attempts to balance the goals of producing subdivision cells of bounded aspect ratio while not producing any empty cells. The second, called the minimum-ambiguity method, is a query-based approach: in addition to the data points, it is given a training set of query points for preprocessing, and it employs a simple greedy algorithm to select the splitting plane that minimizes the average amount of ambiguity in the choice of the nearest neighbor for the training points. We provide an empirical analysis comparing these two methods against the optimized kd-tree construction for a number of synthetically generated data and query sets. We demonstrate that for clustered data and query sets, these algorithms can provide significant improvements over the standard kd-tree construction for approximate nearest neighbor searching. Comment: 20 pages, 8 figures. Presented at ALENEX '99, Baltimore, MD, Jan 15-16, 1999
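    A minimal sketch of the sliding-midpoint rule described above, assuming NumPy and a plain recursive build; the Node and build names are illustrative, not the library's API. The split is placed at the midpoint of the cell's longest side, and if one side would be empty the plane slides to the nearest point so that no empty cell is created.

```python
import numpy as np

class Node:
    def __init__(self, points=None, dim=None, cut=None, left=None, right=None):
        self.points = points                  # leaf payload (None for internal nodes)
        self.dim, self.cut = dim, cut
        self.left, self.right = left, right

def sliding_midpoint_split(points, lo, hi):
    """Split the cell [lo, hi] at the midpoint of its longest side; if all
    points fall on one side, slide the cut to the extreme point so that a
    single point (or ties) ends up alone on the other side."""
    dim = int(np.argmax(hi - lo))
    cut = 0.5 * (lo[dim] + hi[dim])
    c = points[:, dim]
    left = c < cut
    if not left.any():                        # all points right of the midpoint:
        cut = c.min()                         # slide right to the leftmost point,
        left = c <= cut                       # which moves into the left cell
    elif left.all():                          # all points left of the midpoint:
        cut = c.max()                         # slide left to the rightmost point,
        left = c < cut                        # which stays alone in the right cell
    return dim, cut, left

def build(points, lo, hi, leaf_size=8):
    """Recursive build; assumes the points are not all coincident along the
    splitting dimension (otherwise a leaf should be forced)."""
    if len(points) <= leaf_size:
        return Node(points=points)
    dim, cut, left = sliding_midpoint_split(points, lo, hi)
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[dim] = cut
    lo_right[dim] = cut
    return Node(dim=dim, cut=cut,
                left=build(points[left], lo, hi_left, leaf_size),
                right=build(points[~left], lo_right, hi, leaf_size))

# Example: build over 1,000 random 4-dimensional points.
pts = np.random.default_rng(0).random((1000, 4))
tree = build(pts, pts.min(axis=0).copy(), pts.max(axis=0).copy())
```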

    Angle Tree: Nearest Neighbor Search in High Dimensions with Low Intrinsic Dimensionality

    We propose an extension of tree-based space-partitioning indexing structures for data with low intrinsic dimensionality embedded in a high-dimensional space. We call this extension an Angle Tree. Our extension can be applied both to classical kd-trees and to the more recent rp-trees. The key idea of our approach is to store the angle (the "dihedral angle") between the data region (which is a low-dimensional manifold) and the random hyperplane that splits the region (the "splitter"). We show that the dihedral angle can be used to obtain a tight lower bound on the distance between the query point and any point on the opposite side of the splitter. This in turn can be used to efficiently prune the search space. We introduce a novel randomized strategy to efficiently calculate the dihedral angle with a high degree of accuracy. Experiments and analysis on real and synthetic data sets show that the Angle Tree is the most efficient known indexing structure for nearest neighbor queries in terms of preprocessing and space usage while achieving high accuracy and fast search time. Comment: To be submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence
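    A hedged sketch of how such a dihedral angle can tighten pruning, under the simplifying assumption (ours, not necessarily the paper's) that both the data in a node and the query lie on a k-dimensional flat: if the query sits at margin m from the splitting hyperplane and theta is the angle between the flat and the splitter, every point of the flat on the far side is at distance at least m / sin(theta), which is never smaller than the ordinary margin-based bound. Function names are illustrative.

```python
import numpy as np

def flat_basis(points, k):
    """Orthonormal basis (d x k) of the best-fit k-dimensional flat through
    the points (plain PCA via SVD)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T

def dihedral_angle(basis, normal):
    """Angle between the flat and the hyperplane with unit `normal`;
    sin(theta) is the norm of the normal's projection onto the flat."""
    return np.arcsin(np.clip(np.linalg.norm(basis.T @ normal), 0.0, 1.0))

def far_side_lower_bound(query, normal, offset, theta):
    """Lower bound on the distance from `query` to any point of the flat on
    the opposite side of {x : x . normal = offset}. The usual kd/rp-tree
    bound is the margin alone; dividing by sin(theta) tightens it when the
    flat is nearly parallel to the splitter."""
    margin = abs(float(query @ normal) - offset)
    return margin / max(np.sin(theta), 1e-12)

# Example: 2-d data embedded in 50 dimensions, split by a random hyperplane.
rng = np.random.default_rng(0)
embed = np.linalg.qr(rng.normal(size=(50, 2)))[0]       # random 2-d flat in R^50
data = rng.normal(size=(5000, 2)) @ embed.T
normal = rng.normal(size=50); normal /= np.linalg.norm(normal)
offset = float(np.median(data @ normal))
theta = dihedral_angle(flat_basis(data, 2), normal)
print(far_side_lower_bound(data[0], normal, offset, theta))
```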

    Algorithms for Stable Matching and Clustering in a Grid

    We study a discrete version of a geometric stable marriage problem originally proposed in a continuous setting by Hoffman, Holroyd, and Peres, in which points in the plane are stably matched to cluster centers, as prioritized by their distances, so that each cluster center is apportioned a set of points of equal area. We show that, for a discretization of the problem to an n×n grid of pixels with k centers, the problem can be solved in time O(n^2 log^5 n), and we experiment with two slower but more practical algorithms and a hybrid method that switches from one of these algorithms to the other to gain greater efficiency than either algorithm alone. We also show how to combine geometric stable matchings with a k-means clustering algorithm, so as to provide a geometric political-districting algorithm that views distance in economic terms, and we experiment with weighted versions of stable k-means in order to improve the connectivity of the resulting clusters. Comment: 23 pages, 12 figures. To appear (without the appendices) at the 18th International Workshop on Combinatorial Image Analysis, June 19-21, 2017, Plovdiv, Bulgaria
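    As a rough illustration, the sketch below implements the simplest (quadratic-time) approach to the discrete problem, assuming both pixels and centers rank each other by the same Euclidean distance: scanning pixel-center pairs in increasing distance order and assigning while the center still has unused quota yields the stable matching. This is not the O(n^2 log^5 n) algorithm of the paper, and the names are illustrative.

```python
import numpy as np

def stable_grid_matching(n, centers):
    """Match each pixel of an n-by-n grid to one of k centers so that every
    center receives (almost) equal area and no pixel/center pair would both
    prefer each other to their assignment (stability under distance)."""
    centers = np.asarray(centers, dtype=float)
    k = len(centers)
    ys, xs = np.mgrid[0:n, 0:n]
    pixels = np.column_stack([xs.ravel() + 0.5, ys.ravel() + 0.5])
    quota = np.full(k, (n * n) // k)
    quota[: (n * n) % k] += 1                 # spread any remainder pixels
    dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    label = np.full(n * n, -1)
    # Because both sides use the same distance, taking the globally closest
    # still-available pairs first produces the stable matching.
    for flat in np.argsort(dist, axis=None):
        p, c = divmod(int(flat), k)
        if label[p] == -1 and quota[c] > 0:
            label[p] = c
            quota[c] -= 1
    return label.reshape(n, n)

# Example: 64x64 grid, three centers; each center receives ~4096/3 pixels.
labels = stable_grid_matching(64, [[10, 10], [50, 20], [30, 55]])
print(np.bincount(labels.ravel()))
```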

    Search of low-dimensional magnetics on the basis of structural data: spin-1/2 antiferromagnetic zigzag chain compounds In2VO5, beta-Sr(VOAsO4)2, (NH4,K)2VOF4 and alpha-ZnV3O8

    Full text link
    A new technique for searching for low-dimensional compounds on the basis of structural data is presented. The sign and strength of all magnetic couplings at distances up to 12 Å were calculated in five newly predicted antiferromagnetic zigzag spin-1/2 chain compounds: In2VO5, beta-Sr(VOAsO4)2, (NH4)2VOF4, K2VOF4 and alpha-ZnV3O8. In the compound In2VO5 the zigzag spin chains are found to be frustrated, since the ratio J2/J1 of the competing antiferromagnetic (AF) nearest-neighbour (J1) and next-nearest-neighbour (J2) couplings equals 1.68, which exceeds the Majumdar-Ghosh point of 1/2. In the other compounds, the zigzag spin chains behave as simple AF single chains, since the ratio J2/J1 is close to zero. The interchain couplings are analyzed in detail. Comment: 14 pages, 6 figures, 1 table, minor changes
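    For the frustration criterion quoted above, a small hedged helper (illustrative, not from the paper): an AF J1-J2 zigzag chain is frustrated beyond the Majumdar-Ghosh point when alpha = J2/J1 exceeds 1/2, while alpha close to zero corresponds to a simple AF chain. The absolute couplings below are placeholders that merely reproduce the quoted ratio of 1.68.

```python
def classify_zigzag_chain(j1, j2, mg_point=0.5, tol=0.05):
    """Classify an AF J1-J2 zigzag chain by alpha = J2/J1 relative to the
    Majumdar-Ghosh point alpha = 1/2 (both couplings antiferromagnetic)."""
    alpha = j2 / j1
    if alpha > mg_point:
        return f"frustrated (alpha = {alpha:.2f} > {mg_point})"
    if alpha < tol:
        return f"simple AF chain (alpha = {alpha:.2f} ~ 0)"
    return f"weakly frustrated (alpha = {alpha:.2f})"

# In2VO5: placeholder couplings chosen so that J2/J1 = 1.68, as in the abstract.
print(classify_zigzag_chain(j1=1.0, j2=1.68))
```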