
    Comparison of Persistent Homologies for Vector Functions: from continuous to discrete and back

    The theory of multidimensional persistent homology was initially developed in the discrete setting, and involved the study of simplicial complexes filtered through an ordering of the simplices. Later, stability properties of multidimensional persistence have been proved to hold when topological spaces are filtered by continuous functions, i.e. for continuous data. This paper aims to provide a bridge between the continuous setting, where stability properties hold, and the discrete setting, where actual computations are carried out. More precisely, a stability-preserving method is developed to compare rank invariants of vector functions obtained from discrete data. These advances confirm that multidimensional persistent homology is an appropriate tool for shape comparison in computer vision and computer graphics applications. The results are supported by numerical tests.

    Comment: This version contains new experiments, described in Section 5.3. The previous version was presented at the Workshop on Computational Topology, November 7-11, 2011, Fields Institute. http://amsacta.unibo.it/3205
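    As a point of reference for the discrete setting the paper builds on, here is a minimal sketch of the simplest one-parameter case: 0-dimensional persistent homology of a scalar function on a graph, computed with union-find. This is not the paper's method (which concerns rank invariants of vector-valued functions); all names and the data layout are illustrative assumptions.

```python
def persistence_0d(values, edges):
    """values: list of function values, one per vertex.
    edges: list of (u, v) vertex-index pairs.
    Returns (birth, death) pairs for 0-dimensional classes."""
    n = len(values)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Lower-star filtration: an edge enters at the max of its endpoint values.
    order = sorted(edges, key=lambda e: max(values[e[0]], values[e[1]]))
    pairs = []
    for u, v in order:
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        # Elder rule: the younger component (larger birth value) dies.
        if values[ru] > values[rv]:
            ru, rv = rv, ru
        pairs.append((values[rv], max(values[u], values[v])))
        parent[rv] = ru
    return pairs

# Example: a path graph carrying a double-well function.
print(persistence_0d([0.0, 2.0, 1.0, 3.0], [(0, 1), (1, 2), (2, 3)]))
```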

    Spatial Decompositions for Geometric Interpolation and Efficient Rendering

    Interpolation is fundamental in many applications that are based on multidimensional scalar or vector fields. In such applications, it is possible to sample points from the field, for example, through the numerical solution of some mathematical model. Because point sampling may be computationally intensive, it is desirable to store samples in a data structure and estimate the values of the field at intermediate points through interpolation. We present methods based on building dynamic spatial data structures in which the samples are computed on demand, and adaptive strategies are used to avoid oversampling. We first show how to apply this approach to accelerate realistic rendering through ray-tracing. Ray-tracing can be formulated as a sampling and reconstruction problem, where rays in 3-space are modeled as points in a 4-dimensional parameter space. Sample rays are associated with various geometric attributes, which are then used in rendering. We collect and store a relatively sparse set of sampled rays, and use inexpensive interpolation methods to approximate the attribute values for other rays. We present two data structures: (1) the ray interpolant tree (RI-tree), which is based on a kd-tree-like subdivision of space, and (2) the simplex decomposition tree (SD-tree), which is based on a hierarchical regular simplicial mesh and improves the functionality of the RI-tree by guaranteeing continuity. For compact storage as well as efficient neighbor computation in the mesh, we present a pointerless representation of the SD-tree. An essential element of this approach is the development of a location code that enables efficient access and navigation of the data structure. For this purpose we introduce a location code, called an LPTcode, that uniquely encodes the geometry of each simplex of the hierarchy. We present rules to compute the neighbors of a given simplex efficiently through the use of this code. We show how to traverse the associated tree and how to answer point location and interpolation queries. Our algorithms work in arbitrary dimensions. We also demonstrate the use of the SD-tree for rendering atmospheric effects. We present empirical evidence that our methods can produce renderings of good quality significantly faster than simple ray-tracing.
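    To illustrate the on-demand sampling idea, here is a hedged sketch reduced to one dimension for brevity; it is not the RI-tree itself. A cell answers a query by interpolating its corner samples, subdividing only where linear interpolation disagrees with a true sample by more than a tolerance. The names `field` and `tol` are assumptions, and a real tree would cache corner samples rather than recompute them.

```python
import math

def sample_1d(field, lo, hi, x, tol, depth=0, max_depth=12):
    """Estimate field(x) on [lo, hi] by interpolating corner samples,
    refining the cell only where the linear estimate is poor."""
    f_lo, f_hi = field(lo), field(hi)   # corner samples (cached in a real tree)
    mid = 0.5 * (lo + hi)
    linear_mid = 0.5 * (f_lo + f_hi)    # interpolant at the midpoint
    if depth == max_depth or abs(field(mid) - linear_mid) <= tol:
        # Interpolation is accurate enough: answer without refining.
        t = (x - lo) / (hi - lo)
        return (1 - t) * f_lo + t * f_hi
    # Otherwise recurse into the half containing the query point.
    if x <= mid:
        return sample_1d(field, lo, mid, x, tol, depth + 1, max_depth)
    return sample_1d(field, mid, hi, x, tol, depth + 1, max_depth)

print(sample_1d(math.sin, 0.0, 3.14, 1.0, tol=1e-3))
```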

    Effective Visualizations of the Uncertainty in Hurricane Forecasts

    The track forecast cone developed by the U.S. National Hurricane Center is the representation most widely adopted by the general public, the news media, and government officials to enhance viewers' understanding of the forecasts and their underlying uncertainties. However, research has experimentally shown that it has limitations that result in misconceptions of the uncertainty it conveys. Most importantly, the area covered by the cone tends to be misinterpreted as the region affected by the hurricane. In addition, the cone summarizes forecasts for the next three days into a single representation and thus makes it difficult for viewers to accurately determine crucial time-specific information. To address these limitations, this research develops novel alternative visualizations. It begins by developing a technique that generates and smoothly interpolates robust statistics from ensembles of hurricane predictions, creating visualizations that inherently include the spatial uncertainty by displaying three levels of positional storm-strike risk at a specific point in time. To address the misconception of the area covered by the cone, this research develops time-specific visualizations depicting spatial information based on a sampling technique that selects a small, representative subset from an ensemble of points; it also allows depictions of such important storm characteristics as size and intensity. Further, this research generalizes the representative sampling framework to process ensembles of forecast tracks, selecting a subset of tracks that accurately preserves the original distributions of available storm characteristics while keeping appropriately defined spatial separations. This framework supports an additional hurricane visualization that portrays prediction uncertainties implicitly by directly showing the members of the subset without visual clutter. We collaborated on cognitive studies suggesting that these visualizations enhance viewers' ability to understand the forecasts because they are potentially interpreted more like uncertainty distributions. In addition to benefiting the field of hurricane forecasting, this research potentially benefits the visualization community more generally. For instance, the representative sampling framework for processing 2D points developed here can be applied to enhance standard scatter plots and density plots by reducing the size of data sets. Further, because the idea of direct ensemble displays can be extended to more general numerical simulations, it has potential impact on a wide range of ensemble visualizations.
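    One plausible ingredient of representative subset selection with spatial separation is greedy farthest-point sampling, sketched below for 2D ensemble points. This is only an illustration, not the dissertation's algorithm, which additionally preserves the distributions of storm characteristics such as size and intensity.

```python
import numpy as np

def farthest_point_subset(points, k, seed=0):
    """points: (n, 2) array of ensemble positions. Returns k indices
    greedily chosen to maximize spatial separation."""
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = [int(rng.integers(n))]                # arbitrary first member
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                 # farthest from current subset
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

ensemble = np.random.default_rng(1).normal(size=(500, 2))
print(farthest_point_subset(ensemble, 5))
```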

    Diamond-based models for scientific visualization

    Hierarchical spatial decompositions are a basic modeling tool in a variety of application domains including scientific visualization, finite element analysis and shape modeling and analysis. A popular class of such approaches is based on the regular simplex bisection operator, which bisects simplices (e.g. line segments, triangles, tetrahedra) along the midpoint of a predetermined edge. Regular simplex bisection produces adaptive simplicial meshes of high geometric quality, while simplifying the extraction of crack-free, or conforming, approximations to the original dataset. Efficient multiresolution representations for such models have been achieved in 2D and 3D by clustering sets of simplices sharing the same bisection edge into structures called diamonds. In this thesis, we introduce several diamond-based approaches for scientific visualization. We first formalize the notion of diamonds in arbitrary dimensions in terms of two related simplicial decompositions of hypercubes. This enables us to enumerate the vertices, simplices, parents and children of a diamond. In particular, we identify the number of simplices involved in conforming updates to be factorial in the dimension and group these into a linear number of subclusters of simplices that are generated simultaneously. The latter form the basis for a compact pointerless representation for conforming meshes generated by regular simplex bisection and for efficiently navigating the topological connectivity of these meshes. Secondly, we introduce the supercube as a high-level primitive on such nested meshes based on the atomic units within the underlying triangulation grid. We propose the use of supercubes to associate information with coherent subsets of the full hierarchy and demonstrate the effectiveness of such a representation for modeling multiresolution terrain and volumetric datasets. Next, we introduce Isodiamond Hierarchies, a general framework for spatial access structures on a hierarchy of diamonds that exploits the implicit hierarchical and geometric relationships of the diamond model. We use an isodiamond hierarchy to encode irregular updates to a multiresolution isosurface or interval volume in terms of regular updates to diamonds. Finally, we consider nested hypercubic meshes, such as quadtrees, octrees and their higher dimensional analogues, through the lens of diamond hierarchies. This allows us to determine the relationships involved in generating balanced hypercubic meshes and to propose a compact pointerless representation of such meshes. We also provide a local diamond-based triangulation algorithm to generate high-quality conforming simplicial meshes.
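    The regular simplex bisection operator is easiest to see in 2D, where it coincides with newest-vertex bisection of triangles. The sketch below uses an assumed vertex-ordering convention (apex first, bisection edge opposite it) purely for illustration; diamonds arise when triangles sharing the same bisection edge are clustered together.

```python
def bisect(tri):
    """tri = (apex, v1, v2), each a 2D point tuple, with the bisection
    edge (v1, v2) opposite the apex. Returns the two children; the
    edge midpoint becomes each child's new apex."""
    apex, v1, v2 = tri
    mid = tuple(0.5 * (a + b) for a, b in zip(v1, v2))
    # In each child, an old bisection-edge endpoint pairs with the old
    # apex to form the child's bisection edge.
    return (mid, v1, apex), (mid, v2, apex)

# Right isoceles root triangle; its hypotenuse is the bisection edge.
root = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
left, right = bisect(root)
print(left, right)
```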

    Principal component and Voronoi skeleton alternatives for curve reconstruction from noisy point sets

    Surface reconstruction from noisy point samples must take into consideration the stochastic nature of the sample. In other words, geometric algorithms reconstructing the surface or curve should not insist on following each sampled point literally. Instead, they must interpret the sample as a "point cloud" and try to build the surface as passing through the best possible (in the statistical sense) geometric locus that represents the sample. This work presents two new methods to find a piecewise linear approximation from a Nyquist-compliant stochastic sampling of a quasi-planar C¹ curve C(u): ℝ → ℝ³ whose velocity vector never vanishes. One of the methods articulates, in an entirely new way, Principal Component Analysis (statistical) and Voronoi-Delaunay (deterministic) approaches: it uses these two methods to calculate the best possible tape-shaped polygon covering the planarised point set, and then approximates the manifold by the medial axis of such a polygon. The other method applies Principal Component Analysis to find a direct piecewise linear approximation of C(u). A complexity comparison of these two methods is presented, along with a qualitative comparison with previously developed ones. It turns out that the method solely based on Principal Component Analysis is simpler and more robust for non-self-intersecting curves. For self-intersecting curves, the Voronoi-Delaunay-based medial axis approach is more robust, at the price of higher computational complexity. An application is presented in the integration of meshes originating in range images of an art piece; such an application reaches the point of complete reconstruction of a unified mesh.
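    A minimal sketch of the PCA ingredient: estimate the principal direction of the noisy point set and order the points by their projection onto it, yielding a piecewise linear polyline. This global version only works for gentle, non-self-intersecting curves; the paper's methods work on local neighborhoods and are considerably more robust. All names here are illustrative.

```python
import numpy as np

def pca_polyline(points):
    """points: (n, 3) array sampled noisily along a curve.
    Returns the points ordered along the first principal axis."""
    centered = points - points.mean(axis=0)
    # Right singular vectors give the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    t = centered @ vt[0]           # parameter along the main axis
    return points[np.argsort(t)]

rng = np.random.default_rng(0)
u = np.linspace(0, 1, 50)
curve = np.c_[u, 0.2 * np.sin(4 * u), np.zeros_like(u)]
noisy = curve + rng.normal(scale=0.01, size=curve.shape)
print(pca_polyline(noisy)[:3])
```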

    Deformable Simplicial Complexes

    In this dissertation we present a novel method for deformable interface tracking in 2D and 3D: deformable simplicial complexes (DSC). Deformable interfaces are used in several applications, such as fluid simulation, image analysis, reconstruction, and structural optimization. In the DSC method, the interface (a curve in 2D; a surface in 3D) is represented explicitly as a piecewise linear curve or surface. However, the domain is also discretized: triangulated in 2D, tetrahedralized in 3D. This way, the interface can alternatively be represented as the set of edges (in 2D) or triangles (in 3D) separating simplices marked as outside from those marked as inside. Such an approach allows for robust topological adaptivity. Other advantages of deformable simplicial complexes include space adaptivity, the ability to handle and preserve sharp features, and the possibility of topology control. We demonstrate these strengths in several applications. In particular, a novel, DSC-based fluid dynamics solver was developed during the PhD project. Because DSC maintains an explicit interface representation, this solver deals with surface tension more easily. One particular advantage of DSC is that, as an alternative to topology adaptivity, topology control is also possible. This is exploited in the construction of cut loci on tori, where a front expands from a single point on a torus and stops when it self-intersects.
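    The interface representation described above can be sketched directly in 2D: given a triangulation whose triangles carry inside/outside labels, the interface is the set of edges shared by two triangles with different labels. The data layout below is an illustrative assumption, not the dissertation's implementation.

```python
def interface_edges(triangles, labels):
    """triangles: list of (i, j, k) vertex-index triples.
    labels: parallel list of True (inside) / False (outside).
    Returns the edges separating inside from outside."""
    edge_owner = {}
    boundary = []
    for tri, inside in zip(triangles, labels):
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            key = (min(a, b), max(a, b))       # undirected edge
            if key in edge_owner:
                if edge_owner[key] != inside:  # labels differ: interface edge
                    boundary.append(key)
            else:
                edge_owner[key] = inside
    return boundary

tris = [(0, 1, 2), (1, 3, 2)]
print(interface_edges(tris, [True, False]))    # -> [(1, 2)]
```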

    Field D* pathfinding in weighted simplicial complexes

    The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra’s shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, each weighted with a value representing the cost of traversing it. Robust solutions to this problem are computationally expensive, since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region; Dijkstra or A* is then applied to compute shortest paths. The connectivity of these graphs is high, so such techniques are not particularly well suited to environments where the weights and representation change frequently. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient, since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes: specifically, triangulations in 2D and tetrahedral meshes in 3D.
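    The linear-interpolation step at the heart of Field D*-style planners can be sketched for a single weighted triangle: the cost of reaching node s through the opposite edge (a, b), whose endpoints carry known costs, is minimized over the crossing point a + t(b - a). The objective is convex in t, so ternary search suffices. The names, the 2D setting, and the use of a single triangle are illustrative assumptions, not the thesis's algorithm.

```python
import math

def edge_cost(s, a, b, g_a, g_b, w, iters=60):
    """Minimal cost of reaching point s across edge (a, b) of a
    triangle with traversal weight w, assuming cost varies linearly
    along the edge (the Field D* interpolation assumption)."""
    def cost(t):
        x = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        return g_a + t * (g_b - g_a) + w * math.dist(s, x)

    lo, hi = 0.0, 1.0
    for _ in range(iters):          # ternary search on a convex function
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cost(m1) <= cost(m2):
            hi = m2
        else:
            lo = m1
    return cost(0.5 * (lo + hi))

print(edge_cost(s=(0.0, 0.0), a=(1.0, -1.0), b=(1.0, 1.0),
                g_a=2.0, g_b=0.0, w=1.5))
```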