
    Avoiding the Global Sort: A Faster Contour Tree Algorithm

    We revisit the classical problem of computing the \emph{contour tree} of a scalar field $f:\mathbb{M} \to \mathbb{R}$, where $\mathbb{M}$ is a triangulated simplicial mesh in $\mathbb{R}^d$. The contour tree is a fundamental topological structure that tracks the evolution of level sets of $f$ and has numerous applications in data analysis and visualization. All existing algorithms begin with a global sort of at least all critical values of $f$, which can require (roughly) $\Omega(n \log n)$ time. Existing lower bounds show that there are pathological instances where this sort is required. We present the first algorithm whose time complexity depends on the contour tree structure, and avoids the global sort for non-pathological inputs. If $C$ denotes the set of critical points in $\mathbb{M}$, the running time is roughly $O(\sum_{v \in C} \log \ell_v)$, where $\ell_v$ is the depth of $v$ in the contour tree. This matches all existing upper bounds, but is a significant improvement when the contour tree is short and fat. Specifically, our approach ensures that any comparison made is between nodes in the same descending path in the contour tree, allowing us to argue strong optimality properties of our algorithm. Our algorithm requires several novel ideas: partitioning $\mathbb{M}$ into well-behaved portions, a local growing procedure to iteratively build contour trees, and the use of heavy path decompositions for the time complexity analysis.
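    For context, below is a minimal Python sketch of the classical sorted-sweep, union-find construction of a join tree (one half of the contour tree). This is the $O(n \log n)$ baseline whose global sort the paper above avoids, not the paper's algorithm; the inputs `values` (per-vertex $f$) and `neighbors` (mesh adjacency) are assumptions of this sketch.

```python
def join_tree(values, neighbors):
    """values[v] = f(v); neighbors[v] = vertices adjacent to v in the mesh."""
    n = len(values)
    order = sorted(range(n), key=lambda v: values[v], reverse=True)  # the global sort
    processed = [False] * n
    uf = list(range(n))        # union-find over already-processed vertices
    lowest = list(range(n))    # component root -> lowest vertex of that branch so far
    below = {}                 # join-tree arcs: node -> next node below it (toward the minimum)

    def find(x):
        while uf[x] != x:
            uf[x] = uf[uf[x]]  # path halving
            x = uf[x]
        return x

    for v in order:            # sweep from the highest value down
        processed[v] = True
        for u in neighbors[v]:
            if not processed[u]:
                continue
            ru = find(u)
            if ru != v:        # v is always the root of its own growing component
                below[lowest[ru]] = v  # attach that branch's lowest vertex above v
                uf[ru] = v
        lowest[v] = v          # v is now the lowest vertex of the merged component
    return below               # the global minimum ends up with no outgoing arc
```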

    Maintaining Contour Trees of Dynamic Terrains

    We consider maintaining the contour tree $\mathbb{T}$ of a piecewise-linear triangulation $\mathbb{M}$ that is the graph of a time-varying height function $h: \mathbb{R}^2 \rightarrow \mathbb{R}$. We carefully describe the combinatorial changes in $\mathbb{T}$ that happen as $h$ varies over time and how these changes relate to topological changes in $\mathbb{M}$. We present a kinetic data structure that maintains the contour tree of $h$ over time. Our data structure maintains certificates that fail only when $h(v) = h(u)$ for two adjacent vertices $v$ and $u$ in $\mathbb{M}$, or when two saddle vertices lie on the same contour of $\mathbb{M}$. A certificate failure is handled in $O(\log n)$ time. We also show how our data structure can be extended to handle a set of general update operations on $\mathbb{M}$ and how it can be applied to maintain topological persistence pairs of time-varying functions.
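    A minimal sketch of the generic kinetic-data-structure pattern such a result builds on: each comparison the structure depends on becomes a certificate with a failure time, and failures are popped from a priority queue. Linear vertex trajectories and the `on_failure` repair callback are assumptions made here for illustration; the paper's actual certificates and $O(\log n)$ repair step are not reproduced.

```python
import heapq

def failure_time(av, bv, au, bu, now):
    """Earliest t > now at which the linear heights a_v + b_v*t and a_u + b_u*t meet."""
    if bv == bu:
        return None
    t = (au - av) / (bv - bu)
    return t if t > now else None

def run_kds(trajectories, edges, t_end, on_failure):
    """trajectories: {vertex: (a, b)}; edges: mesh edges (v, u); on_failure: repair callback."""
    events = []
    for v, u in edges:
        t = failure_time(*trajectories[v], *trajectories[u], 0.0)
        if t is not None:
            heapq.heappush(events, (t, v, u))
    while events and events[0][0] <= t_end:
        t, v, u = heapq.heappop(events)
        on_failure(v, u, t)    # placeholder: the structure is repaired at each failure
        t2 = failure_time(*trajectories[v], *trajectories[u], t)
        if t2 is not None:     # reschedule the certificate if it can fail again
            heapq.heappush(events, (t2, v, u))
```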

    Localization in Unstructured Environments: Towards Autonomous Robots in Forests with Delaunay Triangulation

    Autonomous harvesting and transportation is a long-term goal of the forest industry. One of the main challenges is the accurate localization of both vehicles and trees in a forest. Forests are unstructured environments where it is difficult to find a group of significant landmarks for current fast feature-based place recognition algorithms. This paper proposes a novel approach in which local observations are matched to a general tree map using the Delaunay triangulation as the representation format. Instead of point-cloud-based matching methods, we utilize a topology-based method. First, tree trunk positions are registered during a prior run by a forest harvester. Second, the resulting map is Delaunay triangulated. Third, a local submap of the autonomous robot is registered, triangulated, and matched using triangular similarity maximization to estimate the position of the robot. We test our method on a dataset collected at a forestry site in Lieksa, Finland. A total of 2100 m of harvester path was recorded by an industrial harvester with a 3D laser scanner and a geolocation unit fixed to the frame. Our experiments show a location accuracy with a standard deviation of 12 cm, with real-time data processing for speeds not exceeding 0.5 m/s. This accuracy and speed limit are realistic during forest operations.
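    A rough sketch of the triangle-matching idea, not the authors' exact pipeline: triangulate trunk positions with `scipy.spatial.Delaunay`, describe each triangle by its sorted side lengths, and pair each submap triangle with the most similar map triangle. The descriptor and the `tol` threshold are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_descriptors(points):
    """Return (triangles, sorted-side-length descriptor per triangle)."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    desc = []
    for a, b, c in tri.simplices:
        p, q, r = points[a], points[b], points[c]
        desc.append(sorted([np.linalg.norm(p - q),
                            np.linalg.norm(q - r),
                            np.linalg.norm(r - p)]))
    return tri.simplices, np.array(desc)

def match_triangles(map_pts, submap_pts, tol=0.2):
    """Pair each submap triangle with its closest map triangle by side-length descriptor."""
    map_tris, map_desc = triangle_descriptors(map_pts)
    sub_tris, sub_desc = triangle_descriptors(submap_pts)
    matches = []
    for i, d in enumerate(sub_desc):
        dists = np.linalg.norm(map_desc - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < tol:
            matches.append((sub_tris[i], map_tris[j]))
    return matches
```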

    Sea-Rise Flooding on Massive Dynamic Terrains


    Exploring multiple viewshed analysis using terrain features and optimisation techniques

    The calculation of viewsheds is a routine operation in geographic information systems and is used in a wide range of applications. Many of these involve the siting of features, such as radio masts, which are part of a network, and yet the selection of sites is normally done separately for each feature. Selecting a series of locations that collectively maximise the visual coverage of an area is a combinatorial problem and as such cannot be solved directly except for trivial cases. In this paper, two strategies for tackling this problem are explored. The first is to restrict the search to key topographic points in the landscape such as peaks, pits and passes. The second is to use heuristics which have been applied to other maximal coverage spatial problems such as location-allocation. The results show that the use of these two strategies reduces the computing time needed by two orders of magnitude, but at the cost of a loss of 10% in the area viewed. Three different heuristics were used, of which Simulated Annealing produced the best results. However, its improvement over a much simpler fast-descent swap heuristic was very slight and came at the cost of greatly increased running times. © 2004 Elsevier Ltd. All rights reserved.
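    A small sketch of the swap-based simulated annealing heuristic described above, assuming precomputed viewsheds (one set of visible cells per candidate site); the cooling schedule and parameters are illustrative, not the paper's settings.

```python
import math
import random

def coverage(selected, viewsheds):
    """Number of distinct cells visible from the selected sites."""
    covered = set()
    for s in selected:
        covered |= viewsheds[s]
    return len(covered)

def anneal_viewpoints(viewsheds, k, iters=10000, t0=1.0, cooling=0.999):
    """Pick k sites (keys of viewsheds) that approximately maximise joint coverage."""
    sites = list(viewsheds)
    current = random.sample(sites, k)
    best, best_cov = list(current), coverage(current, viewsheds)
    temp = t0
    for _ in range(iters):
        # propose swapping one selected site for an unselected one
        out_site = random.choice(current)
        in_site = random.choice([s for s in sites if s not in current])
        candidate = [s for s in current if s != out_site] + [in_site]
        delta = coverage(candidate, viewsheds) - coverage(current, viewsheds)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = candidate
        cov = coverage(current, viewsheds)
        if cov > best_cov:
            best, best_cov = list(current), cov
        temp *= cooling
    return best, best_cov
```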

    Surface networks

    © Copyright CASA, UCL. The desire to understand and exploit the structure of continuous surfaces is common to researchers in a range of disciplines. A few examples of the varied surfaces forming an integral part of modern subjects include terrain, population density, surface atmospheric pressure, physico-chemical surfaces, computer graphics, and metrological surfaces. The focus of the work here is a group of data structures called Surface Networks, which abstract 2-dimensional surfaces by storing only the most important (also called fundamental, critical or surface-specific) points and lines in the surfaces. Surface networks are intelligent and “natural” data structures because they store a surface as a framework of “surface” elements, unlike the DEM or TIN data structures. This report presents an overview of previous work and the ideas being developed by the authors of this report. The research on surface networks has fou…
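    A minimal sketch of what a surface network stores, per the description above: critical points (peaks, pits, passes) and the critical lines joining them, rather than a full DEM or TIN. Field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalPoint:
    x: float
    y: float
    height: float
    kind: str                      # "peak", "pit", or "pass"

@dataclass
class SurfaceNetwork:
    points: list = field(default_factory=list)         # CriticalPoint instances
    ridge_lines: list = field(default_factory=list)    # (pass_index, peak_index) pairs
    channel_lines: list = field(default_factory=list)  # (pass_index, pit_index) pairs
```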

    The GeoClaw software for depth-averaged flows with adaptive refinement

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis, dam break problems, and storm surge. Documentation and download information are available at www.clawpack.org/geoclaw.
    Comment: 18 pages, 11 figures. Animations and source code for some examples at http://www.clawpack.org/links/awr10. Significantly modified from the original posting to incorporate suggestions of the referee.
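    For reference, the two-dimensional shallow water equations mentioned above as the simplest depth-averaged model can be written in the standard textbook form with bottom topography $b(x,y)$ and gravitational constant $g$ (shown here for context, not quoted from the GeoClaw documentation):

```latex
\begin{aligned}
  h_t + (hu)_x + (hv)_y &= 0,\\
  (hu)_t + \bigl(hu^2 + \tfrac{1}{2} g h^2\bigr)_x + (huv)_y &= -\,g h\, b_x,\\
  (hv)_t + (huv)_x + \bigl(hv^2 + \tfrac{1}{2} g h^2\bigr)_y &= -\,g h\, b_y,
\end{aligned}
```

    where $h$ is the flow depth and $(u,v)$ the depth-averaged velocity.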
