
    A Fast Algorithm for Well-Spaced Points and Approximate Delaunay Graphs

    We present a new algorithm that produces a well-spaced superset of points conforming to a given input set in any dimension, with guaranteed optimal output size. We also provide an approximate Delaunay graph on the output points. Our algorithm runs in expected time $O(2^{O(d)}(n\log n + m))$, where $n$ is the input size, $m$ is the output point set size, and $d$ is the ambient dimension. The constants depend only on the desired element quality bounds. To gain this new efficiency, the algorithm approximately maintains the Voronoi diagram of the current set of points by storing a superset of the Delaunay neighbors of each point. By retaining the quality of the Voronoi diagram while avoiding storage of the full Voronoi diagram, a simple exponential dependence on $d$ is obtained in the running time. Thus, if one only wants the approximate-neighbors structure of a refined Delaunay mesh conforming to a set of input points, the algorithm returns a graph of size $2^{O(d)}m$ in $2^{O(d)}(n\log n + m)$ expected time. If $m$ is superlinear in $n$, then we can produce a hierarchically well-spaced superset of size $2^{O(d)}n$ in $2^{O(d)}n\log n$ expected time.
    Comment: Full version
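    As a loose illustration of the neighbor-list idea (not the paper's algorithm), the sketch below maintains, for each inserted point, a candidate set intended to be a superset of its Delaunay neighbors; the harvest-from-a-seed step and the factor-2 distance cutoff are assumptions made for brevity.

```python
import math

def dist(p, q):
    return math.dist(p, q)

class ApproxNeighborGraph:
    """Per-point candidate neighbor sets, erring on the side of keeping edges.
    Assumes at least one seed point is already present when inserting."""

    def __init__(self):
        self.neighbors = {}  # point (tuple) -> set of candidate neighbors

    def insert(self, p, seed):
        # Harvest candidates from a nearby seed point and its own list,
        # instead of walking the full Voronoi diagram.
        candidates = ({seed} | self.neighbors.get(seed, set())) - {p}
        r = min(dist(p, q) for q in candidates)
        # Keep everything within a constant factor (2 is an arbitrary choice)
        # of the nearest candidate, so true Delaunay neighbors are unlikely
        # to be dropped -- a superset, at the price of some extra edges.
        kept = {q for q in candidates if dist(p, q) <= 2.0 * r}
        self.neighbors[p] = kept
        for q in kept:
            self.neighbors.setdefault(q, set()).add(p)
```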

    DAzLE: The Dark Ages z (redshift) Lyman-alpha Explorer

    DAzLE is a near-infrared narrowband differential imager being built by the Institute of Astronomy, Cambridge, in collaboration with the Anglo-Australian Observatory. It is a special-purpose instrument designed with a sole aim: the detection of redshifted Lyman-alpha emission from star-forming galaxies at z>7. DAzLE will use pairs of high-resolution (R=1000) narrowband filters to exploit low-background `windows' in the near-infrared sky emission spectrum. This will enable it to reach sensitivities of ~2E-21 W/m^2, thereby allowing the detection of z>7 galaxies with star formation rates as low as a few solar masses per year. The design of the instrument, and in particular of the crucial narrowband filters, is presented. The predicted performance of DAzLE, including the sensitivity, volume coverage and expected number counts, is discussed. The current status of the DAzLE project, and its projected timeline, are also presented.
    Comment: 11 pages, 7 figures, to appear in Proceedings of SPIE Vol. 5492, Ground-based Instrumentation for Astronomy
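    The wavelength arithmetic behind the design is easy to check; a minimal sketch follows, where the 1.06-micron filter wavelength is an illustrative value, not a DAzLE specification.

```python
# Redshift-wavelength relation for Lyman-alpha (rest 1215.67 Angstroms):
# at z > 7 the line lands in the near-infrared, where narrowband filters
# can be matched to low-background windows between sky emission lines.
LYA_REST_A = 1215.67  # Lyman-alpha rest wavelength in Angstroms

def observed_wavelength(z):
    """Observed wavelength (Angstroms) of Lyman-alpha emitted at redshift z."""
    return (1.0 + z) * LYA_REST_A

def redshift_for_filter(wavelength_a):
    """Redshift that places Lyman-alpha at a given observed wavelength."""
    return wavelength_a / LYA_REST_A - 1.0

print(observed_wavelength(7.0))      # ~9725 A: z = 7 is already in the near-IR
print(redshift_for_filter(10600.0))  # ~7.72: a 1.06-micron window (illustrative)
```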

    On the complexity of range searching among curves

    Modern tracking technology has made the collection of large numbers of densely sampled trajectories of moving objects widely available. We consider a fundamental problem encountered when analysing such data: given $n$ polygonal curves $S$ in $\mathbb{R}^d$, preprocess $S$ into a data structure that answers queries with a query curve $q$ and radius $\rho$ for the curves of $S$ that have Fréchet distance at most $\rho$ to $q$. We initiate a comprehensive analysis of the space/query-time trade-off for this data structuring problem. Our lower bounds imply that any data structure in the pointer model that achieves $Q(n) + O(k)$ query time, where $k$ is the output size, has to use roughly $\Omega\left((n/Q(n))^2\right)$ space in the worst case, even if queries are mere points (for the discrete Fréchet distance) or line segments (for the continuous Fréchet distance). More importantly, we show that more complex queries and input curves lead to additional logarithmic factors in the lower bound. Roughly speaking, the number of logarithmic factors added is linear in the number of edges added to the query and input curve complexity. This means that the space/query-time trade-off worsens by a factor exponential in the input and query complexity. This behaviour addresses an open question in the range searching literature: whether it is possible to avoid the additional logarithmic factors in the space and query time of a multilevel partition tree. We answer this question negatively. On the positive side, we show that we can build data structures for the Fréchet distance by using semialgebraic range searching. Our solution for the discrete Fréchet distance is in line with the lower bound, as the number of levels in the data structure is $O(t)$, where $t$ denotes the maximal number of vertices of a curve. For the continuous Fréchet distance, the number of levels increases to $O(t^2)$.
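    For context on the query predicate itself, the sketch below computes the discrete Fréchet distance between two point sequences with the standard dynamic program of Eiter and Mannila; it is background material, not the data structure from the paper.

```python
from functools import lru_cache
import math

def discrete_frechet(P, Q):
    """Discrete Frechet distance between point sequences P and Q."""
    P, Q = list(P), list(Q)

    def d(i, j):
        return math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): discrete Frechet distance of prefixes P[:i+1], Q[:j+1]
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

# A range query with radius rho keeps curve S iff discrete_frechet(q, S) <= rho.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (2, 1)]))  # ~1.414
```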

    Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder

    In this paper, we present a hierarchical path planning framework called SG-RL (subgoal graphs-reinforcement learning) to plan rational paths for agents maneuvering in continuous and uncertain environments. By "rational", we mean (1) efficient path planning that eliminates first-move lags; (2) collision-free and smooth paths for agents whose kinematic constraints are satisfied. SG-RL works in a two-level manner. At the first level, SG-RL uses a geometric path-planning method, i.e., Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG-RL uses an RL method, i.e., Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies which can generate kinematically feasible and collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG overcomes the sparse-reward and local-minimum limitations that RL agents face; thus, LSPI can be used to generate paths in complex environments. The second advantage is that, when the environment changes slightly (e.g., unexpected obstacles appearing), SG-RL does not need to reconstruct subgoal graphs and replan subgoal sequences using SSG, since LSPI can deal with uncertainties by exploiting its generalization ability to handle changes in environments. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps with relatively low action-switching frequencies and shorter path lengths, and that SG-RL can deal with small changes in environments. We further demonstrate that the design of reward functions and the types of training environments are important factors for learning feasible policies.
    Comment: 20 pages
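    The two-level control flow reads roughly as below. This is a hedged sketch: plan_subgoals, lspi_policy, and the env methods are hypothetical stand-ins, not the paper's API.

```python
def sg_rl_navigate(env, start, goal, plan_subgoals, lspi_policy, max_steps=10_000):
    """High level: a subgoal sequence from the subgoal graph (SSG).
    Low level: a learned (e.g., LSPI) policy steers between adjacent subgoals."""
    subgoals = plan_subgoals(start, goal)  # abstract path on the subgoal graph
    state = env.reset(start)
    for sg in subgoals:
        for _ in range(max_steps):
            if env.reached(state, sg):       # hand over to the next subgoal
                break
            action = lspi_policy(state, sg)  # kinematically feasible local motion
            # The environment may contain unexpected obstacles; the learned
            # policy generalizes to them without replanning the subgoal sequence.
            state = env.step(action)
        else:
            raise RuntimeError("low-level policy failed to reach subgoal")
    return state
```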

    Optimization of Spatial Joins Using Filters

    Present-day technical applications that rely on database systems show that new techniques must be integrated into database management systems to support these applications efficiently. This paper discusses one such technique in the context of supporting a Geographic Information System. It is known that the use of filters on geometric objects has a significant impact on the processing of 2-way spatial join queries. For this purpose, filters require approximations of objects. Queries can be optimized by filtering data not with just one but with several filters. Existing join methods are based on a combination of filters and a spatial index. The index is used to reduce the cost of the filter step and to minimize the cost of retrieving geometric objects from disk. In this paper we examine n-way spatial joins. Complex n-way spatial join queries require solving several 2-way joins of intermediate results. In this case, we examine not only the profit gained from using both filters and spatial indices but also the additional cost these techniques incur. For 2-way joins of base relations, these costs are considered part of physical database design. We focus on the criteria for mutually comparing filters, not on those for spatial indices. Important aspects of a multi-step filter-based n-way spatial join method are described, together with performance experiments. The winning join method uses several filters with approximations that are constructed by rotating two parallel lines around the object.
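    A minimal filter-and-refine join illustrating why filters pay off: a cheap approximation test prunes most pairs before the expensive exact predicate. The axis-aligned bounding box used here is the simplest such approximation, standing in for the rotating-parallel-lines filter of the paper, and exact_intersects is a placeholder for the real geometric predicate.

```python
def mbr(obj):
    """Axis-aligned minimum bounding rectangle of a list of (x, y) points."""
    xs, ys = zip(*obj)
    return min(xs), min(ys), max(xs), max(ys)

def mbr_overlap(a, b):
    """True iff two rectangles (x0, y0, x1, y1) intersect."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

def spatial_join(R, S, exact_intersects):
    # Filter step: cheap approximation test discards most candidate pairs.
    candidates = [(r, s) for r in R for s in S
                  if mbr_overlap(mbr(r), mbr(s))]
    # Refinement step: exact (expensive) geometry test on the survivors only.
    return [(r, s) for r, s in candidates if exact_intersects(r, s)]
```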