
    Spatial Decompositions for Geometric Interpolation and Efficient Rendering

    Interpolation is fundamental in many applications that are based on multidimensional scalar or vector fields. In such applications, it is possible to sample points from the field, for example, through the numerical solution of some mathematical model. Because point sampling may be computationally intensive, it is desirable to store samples in a data structure and estimate the values of the field at intermediate points through interpolation. We present methods based on building dynamic spatial data structures in which the samples are computed on demand, and adaptive strategies are used to avoid oversampling. We first show how to apply this approach to accelerate realistic rendering through ray-tracing. Ray-tracing can be formulated as a sampling and reconstruction problem, where rays in 3-space are modeled as points in a 4-dimensional parameter space. Sample rays are associated with various geometric attributes, which are then used in rendering. We collect and store a relatively sparse set of sampled rays, and use inexpensive interpolation methods to approximate the attribute values for other rays. We present two data structures: (1) the ray interpolant tree (RI-tree), which is based on a kd-tree-like subdivision of space, and (2) the simplex decomposition tree (SD-tree), which is based on a hierarchical regular simplicial mesh and improves the functionality of the RI-tree by guaranteeing continuity. For compact storage as well as efficient neighbor computation in the mesh, we present a pointerless representation of the SD-tree. An essential element of this approach is a location code that enables efficient access and navigation of the data structure. For this purpose we introduce the LPTcode, a location code that uniquely encodes the geometry of each simplex of the hierarchy. We present rules to compute the neighbors of a given simplex efficiently through the use of this code. We show how to traverse the associated tree and how to answer point location and interpolation queries. Our algorithms work in arbitrary dimensions. We also demonstrate the use of the SD-tree for rendering atmospheric effects. We present empirical evidence that our methods can produce renderings of good quality significantly faster than simple ray-tracing.
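
    As a minimal, purely illustrative sketch of the underlying idea (not the RI-tree itself): rays can be mapped to points in a 4-dimensional parameter space via a two-plane parameterization, sparse samples stored in a kd-tree, and attributes of new rays approximated by inverse-distance weighting of nearby samples. The class and function names are assumptions, and scipy's cKDTree stands in for the adaptive structures of the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def ray_to_4d(origin, direction, z0=0.0, z1=1.0):
            """Map a ray to (u, v, s, t): its intersections with the planes z=z0 and z=z1."""
            t0 = (z0 - origin[2]) / direction[2]   # assumes the ray is not parallel to the planes
            t1 = (z1 - origin[2]) / direction[2]
            return np.concatenate([origin[:2] + t0 * direction[:2],
                                   origin[:2] + t1 * direction[:2]])

        class RaySampleCache:
            def __init__(self, sample_fn):
                self.sample_fn = sample_fn          # expensive exact evaluation, e.g. tracing the ray
                self.points, self.values = [], []

            def add_sample(self, origin, direction):
                self.points.append(ray_to_4d(origin, direction))
                self.values.append(self.sample_fn(origin, direction))

            def interpolate(self, origin, direction, k=4, eps=1e-12):
                tree = cKDTree(np.asarray(self.points))     # rebuilt per query only for brevity
                d, idx = tree.query(ray_to_4d(origin, direction), k=k)
                w = 1.0 / (np.asarray(d) + eps)             # inverse-distance weights
                return np.average(np.asarray(self.values)[idx], axis=0, weights=w)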

    Conformal n-dimensional bisection for local refinement of unstructured simplicial meshes

    In n-dimensional adaptive applications, conformal simplicial meshes must be locally modified. One systematic local modification is to bisect the prescribed simplices while surrounding simplices are bisected to ensure conformity. Although there are many conformal bisection strategies, practitioners prefer the method known as newest vertex bisection. This method guarantees key advantages for adaptivity whenever the mesh has a structure called reflectivity. Unfortunately, it is not known (i) how to extract a reflection structure from any unstructured conformal mesh in three or more dimensions. Fortunately, a conformal bisection method is suitable for adaptivity if it almost fulfills the newest vertex bisection advantages. These advantages are almost met by an existing multi-stage strategy in three dimensions. However, it is not known (ii) how to perform multi-stage bisection in more than three dimensions. This thesis aims to demonstrate that n-dimensional conformal bisection is possible for local refinement of unstructured conformal meshes. To this end, it proposes the following contributions. First, it proposes the first 4-dimensional two-stage method, showing that multi-stage bisection is possible beyond three dimensions. Second, following this possibility, the thesis proposes the first n-dimensional multi-stage method, and thus answers question (ii). Third, it guarantees the first 3-dimensional method that features the newest vertex bisection advantages, showing that these advantages are possible beyond two dimensions. Fourth, extending this possibility, the thesis guarantees the first n-dimensional marking method that extracts a reflection structure from any unstructured conformal mesh, and thus answers question (i). This answer proves that local refinement with newest vertex bisection is possible in any dimension. Fifth, the thesis shows that the proposed multi-stage method almost fulfills the advantages of newest vertex bisection. Finally, to visualize four-dimensional meshes, it proposes a simple tool to slice pentatopic meshes. In conclusion, this thesis demonstrates that conformal bisection is possible for local refinement in two or more dimensions. To this end, it proposes two novel methods for unstructured conformal meshes, methods that will enable adaptive applications on n-dimensional complex geometry.
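
    As a minimal 2D sketch of the newest vertex bisection rule discussed above (the data layout and names are illustrative assumptions): each triangle carries a refinement edge between its first two vertices, bisection inserts that edge's midpoint as the newest vertex of both children, and the two remaining original edges become the children's refinement edges. Conformity handling (first bisecting the neighbor that shares the refinement edge) is omitted for brevity.

        import numpy as np

        def bisect(tri, vertices):
            """tri = (i0, i1, i2): indices into vertices; the refinement edge is i0-i1."""
            i0, i1, i2 = tri
            vertices.append((vertices[i0] + vertices[i1]) / 2.0)  # midpoint of the refinement edge
            im = len(vertices) - 1                                # newest vertex of both children
            # Each child's refinement edge (its first two vertices) is the edge opposite im.
            return (i2, i0, im), (i1, i2, im)

        # Example: bisect the unit right triangle across its hypotenuse.
        verts = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
        child_a, child_b = bisect((1, 2, 0), verts)   # refinement edge: (1,0)-(0,1)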

    Diamond-based models for scientific visualization

    Hierarchical spatial decompositions are a basic modeling tool in a variety of application domains including scientific visualization, finite element analysis and shape modeling and analysis. A popular class of such approaches is based on the regular simplex bisection operator, which bisects simplices (e.g. line segments, triangles, tetrahedra) along the midpoint of a predetermined edge. Regular simplex bisection produces adaptive simplicial meshes of high geometric quality, while simplifying the extraction of crack-free, or conforming, approximations to the original dataset. Efficient multiresolution representations for such models have been achieved in 2D and 3D by clustering sets of simplices sharing the same bisection edge into structures called diamonds. In this thesis, we introduce several diamond-based approaches for scientific visualization. We first formalize the notion of diamonds in arbitrary dimensions in terms of two related simplicial decompositions of hypercubes. This enables us to enumerate the vertices, simplices, parents and children of a diamond. In particular, we identify the number of simplices involved in conforming updates to be factorial in the dimension and group these into a linear number of subclusters of simplices that are generated simultaneously. The latter form the basis for a compact pointerless representation for conforming meshes generated by regular simplex bisection and for efficiently navigating the topological connectivity of these meshes. Second, we introduce the supercube as a high-level primitive on such nested meshes based on the atomic units within the underlying triangulation grid. We propose the use of supercubes to associate information with coherent subsets of the full hierarchy and demonstrate the effectiveness of such a representation for modeling multiresolution terrain and volumetric datasets. Next, we introduce Isodiamond Hierarchies, a general framework for spatial access structures on a hierarchy of diamonds that exploits the implicit hierarchical and geometric relationships of the diamond model. We use an isodiamond hierarchy to encode irregular updates to a multiresolution isosurface or interval volume in terms of regular updates to diamonds. Finally, we consider nested hypercubic meshes, such as quadtrees, octrees and their higher dimensional analogues, through the lens of diamond hierarchies. This allows us to determine the relationships involved in generating balanced hypercubic meshes and to propose a compact pointerless representation of such meshes. We also provide a local diamond-based triangulation algorithm to generate high-quality conforming simplicial meshes.
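
    As a minimal sketch of the regular simplex bisection operator in arbitrary dimension, here in Maubach's formulation, one standard realization of the operator described above (the diamond clustering and pointerless encodings of the thesis are not shown): a tagged simplex (verts, k) is split at the midpoint of the edge verts[0]-verts[k], and the tag cycles through the dimensions.

        import numpy as np

        def bisect(verts, k):
            """verts: list of n+1 vertex arrays; k tags the bisection edge verts[0]-verts[k]."""
            n = len(verts) - 1
            z = (verts[0] + verts[k]) / 2.0                # midpoint of the tagged edge
            child1 = verts[:k] + [z] + verts[k + 1:]       # keeps x0, ..., x_{k-1}
            child2 = verts[1:k + 1] + [z] + verts[k + 1:]  # keeps x1, ..., x_k
            k_next = k - 1 if k > 1 else n                 # tag cycles n, n-1, ..., 1, n, ...
            return (child1, k_next), (child2, k_next)

        # Example: the first bisection of a Kuhn tetrahedron splits the cube diagonal.
        tet = [np.array(v, dtype=float) for v in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]]
        (c1, k1), (c2, k2) = bisect(tet, k=3)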

    Book of Abstracts of the Sixth SIAM Workshop on Combinatorial Scientific Computing

    Book of Abstracts of CSC14, edited by Bora Uçar. The Sixth SIAM Workshop on Combinatorial Scientific Computing, CSC14, was organized at the Ecole Normale Supérieure de Lyon, France, on 21-23 July 2014. This two-and-a-half-day event marked the sixth in a series that started ten years ago in San Francisco, USA. The CSC14 Workshop's focus was on combinatorial mathematics and algorithms in high-performance computing, broadly interpreted. The workshop featured three invited talks, 27 contributed talks and eight poster presentations. All three invited talks focused on two fields of research: randomized algorithms for numerical linear algebra and network analysis. The contributed talks and the posters targeted modeling, analysis, bisection, clustering, and partitioning of graphs, applied in the context of networks, sparse matrix factorizations, iterative solvers, fast multipole methods, automatic differentiation, high-performance computing, and linear programming. The workshop was held at the premises of the LIP laboratory of ENS Lyon and was generously supported by the LABEX MILYON (ANR-10-LABX-0070, Université de Lyon, within the program ''Investissements d'Avenir'' ANR-11-IDEX-0007 operated by the French National Research Agency), and by SIAM.

    Sparse Grid Methods for Higher Dimensional Approximation

    This thesis is concerned with sparse grid methods for the solution of higher-dimensional problems. It presents three new aspects of sparse grids: extensions of the elementary tools for working with sparse grids; an analysis of both the inherent limitations and the advantages of sparse grids, specifically for the application of density approximation (Fokker-Planck equation); and a new approach to the dimension- and spatially-adaptive representation of functions of effectively low dimension. The first contribution comprises the first error bound (known to the author) for inhomogeneous boundary conditions in sparse grid approximation; an extended library of operations for performing addition, multiplication and composition of sparse grid representations; and an adaptive collocation approach for approximate integral transforms with arbitrary kernels. The analysis uses condition numbers for the data error and thereby generalizes the estimates previously known in the literature. Moreover, the consistency error of such operations is taken into account for the first time, and an adaptive method is proposed to control it, which in particular remedies previously existing weaknesses and makes the method reliable. The second contribution is a study of the dimension-dependent cost/benefit coefficients that arise in the solution of Fokker-Planck equations and the associated approximation of probability densities. Both theoretical bounds and a posteriori error measurements are presented for a representative case study of linear Fokker-Planck equations and the normal distribution on R^d, and the dimension-dependent coefficients arising in interpolation and best approximation (both in L2 and in solving the equation with a Galerkin method) are examined. The focus is on regular sparse grids, adaptive sparse grids, and sparse grids optimized specifically for the energy norm. In particular, conclusions about inherent limitations, but also about advantages over classical full grid methods, are discussed. The third contribution of this thesis is the first approach to dimension-adaptive refinement designed specifically for approximation problems. The approach remedies known difficulties with premature termination that were observed in previous attempts to generalize the successful dimension adaptivity from the field of sparse grid quadrature. The method allows a systematic reduction of the degrees of freedom for functions that effectively depend on only a few (subsets of) coordinates. It combines the successful spatially adaptive sparse grid technique from the field of approximation with the equally successful dimension-adaptive refinement from the field of sparse grid quadrature. The dependence on different (subsets of) coordinates is captured by means of weighted spaces with the help of the ANOVA decomposition. The thesis presents new a priori optimized sparse grid spaces that allow optimal approximation of function spaces with weighted mixed second derivatives and known weights. The construction yields the well-known regular sparse grids with weight-dependent levels for each subset of coordinates (ANOVA components). For unknown weights, a new a posteriori dimension-adaptive method is presented which, in contrast to known methods, explicitly identifies and accounts for ANOVA components, and thus achieves higher reliability when used for approximation applications. Besides purely dimension-adaptive approximation, the method also enables, for the first time, coupled spatially- and dimension-adaptive refinement. The thesis presents the methodology and verifies its reliability by means of dimension-adaptive interpolation and the dimension-adaptive solution of partial differential equations.
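
    As a minimal illustration of the regular sparse grid construction underlying the thesis (without boundary points, and far short of the adaptive and operation machinery described above): the 1D hierarchical points at level l are the odd multiples j*2^{-l}, and the sparse grid of level n in d dimensions keeps only the level multi-indices with |l|_1 <= n + d - 1.

        from itertools import product

        def sparse_grid_points(d, n):
            points = []
            for levels in product(range(1, n + 1), repeat=d):
                if sum(levels) > n + d - 1:
                    continue                            # the sparse-grid level-sum cutoff
                axes = [[j * 2.0 ** -l for j in range(1, 2 ** l, 2)] for l in levels]
                points.extend(product(*axes))           # tensor product within one level block
            return points

        # d = 2, n = 3: 17 points instead of the 49 interior points of the full grid.
        print(len(sparse_grid_points(2, 3)))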

    Locally optimal Delaunay-refinement and optimisation-based mesh generation

    The field of mesh generation concerns the development of efficient algorithmic techniques to construct high-quality tessellations of complex geometrical objects. In this thesis, I investigate the problem of unstructured simplicial mesh generation for problems in two- and three-dimensional spaces, in which meshes consist of collections of triangular and tetrahedral elements. I focus on the development of efficient algorithms and computer programs to produce high-quality meshes for planar, surface and volumetric objects of arbitrary complexity. I develop and implement a number of new algorithms for mesh construction based on the Frontal-Delaunay paradigm - a hybridisation of conventional Delaunay-refinement and advancing-front techniques. I show that the proposed algorithms are a significant improvement on existing approaches, typically outperforming the Delaunay-refinement technique in terms of both element shape- and size-quality, while offering significantly improved theoretical robustness compared to advancing-front techniques. I verify experimentally that the proposed methods achieve the same element shape- and size-guarantees that are typically associated with conventional Delaunay-refinement techniques. In addition to mesh construction, methods for mesh improvement are also investigated. I develop and implement a family of techniques designed to improve the element shape quality of existing simplicial meshes, using a combination of optimisation-based vertex smoothing, local topological transformation and vertex insertion techniques. These operations are interleaved according to a new priority-based schedule, and I show that the resulting algorithms are competitive with existing state-of-the-art approaches in terms of mesh quality, while offering significant improvements in computational efficiency. Optimised C++ implementations for the proposed mesh generation and mesh optimisation algorithms are provided in the JIGSAW and JITTERBUG software libraries.
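
    As a minimal sketch of the classical Delaunay-refinement loop that the Frontal-Delaunay method builds on (boundary handling, encroachment tests and the frontal point-placement strategy of the thesis are omitted; scipy serves only as a convenient Delaunay kernel, not as the JIGSAW implementation): triangles whose radius-edge ratio exceeds a threshold are eliminated by inserting their circumcenters and re-triangulating.

        import numpy as np
        from scipy.spatial import Delaunay

        def circumcenter_radius(a, b, c):
            # Solve |x - a| = |x - b| = |x - c| via the standard 2x2 linear system.
            A = 2.0 * np.array([b - a, c - a])
            rhs = np.array([b @ b - a @ a, c @ c - a @ a])
            x = np.linalg.solve(A, rhs)
            return x, np.linalg.norm(x - a)

        def refine(points, ratio_max=1.0, max_insertions=100):
            pts = np.asarray(points, dtype=float)
            for _ in range(max_insertions):
                bad = None
                for simplex in Delaunay(pts).simplices:
                    a, b, c = pts[simplex]
                    center, r = circumcenter_radius(a, b, c)
                    shortest = min(np.linalg.norm(b - a), np.linalg.norm(c - b),
                                   np.linalg.norm(a - c))
                    if r / shortest > ratio_max:        # radius-edge quality test
                        bad = center
                        break
                if bad is None:
                    return pts                          # every triangle meets the bound
                pts = np.vstack([pts, bad])             # insert the offending circumcenter
            return pts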

    Geometric and Algebraic Combinatorics

    The 2015 Oberwolfach meeting “Geometric and Algebraic Combinatorics” was organized by Gil Kalai (Jerusalem), Isabella Novik (Seattle), Francisco Santos (Santander), and Volkmar Welker (Marburg). It covered a wide variety of aspects of Discrete Geometry, Algebraic Combinatorics with geometric flavor, and Topological Combinatorics. Some of the highlights of the conference included (1) counterexamples to the topological Tverberg conjecture, and (2) the latest results around the Heron-Rota-Welsh conjecture.

    Computational and Theoretical Issues of Multiparameter Persistent Homology for Data Analysis

    The basic goal of topological data analysis is to apply topology-based descriptors to understand and describe the shape of data. In this context, homology is one of the most relevant topological descriptors, well-appreciated for its discrete nature, computability and dimension independence. A further development is provided by persistent homology, which makes it possible to track homological features along a one-parameter increasing sequence of spaces. Multiparameter persistent homology, also called multipersistent homology, is an extension of the theory of persistent homology motivated by the need to analyze data naturally described by several parameters, such as vector-valued functions. Multipersistent homology presents several issues in terms of the feasibility of computations over real-sized data, and theoretical challenges in the evaluation of possible descriptors. The focus of this thesis is on the interplay between persistent homology theory and discrete Morse theory. Discrete Morse theory provides methods for reducing the computational cost of homology and persistent homology by considering the discrete Morse complex generated by the discrete Morse gradient in place of the original complex. The work of this thesis addresses the problem of computing multipersistent homology, to make this tool usable in real application domains. This requires both computational optimizations towards applications to real-world data, and theoretical insights for finding and interpreting suitable descriptors. Our computational contribution consists in a new Morse-inspired and fully discrete preprocessing algorithm. We show the feasibility of our preprocessing over real datasets, and evaluate the impact of the proposed algorithm as a preprocessing step for computing multipersistent homology. A theoretical contribution of this thesis is a new notion of optimality for such a preprocessing in the multiparameter context. We show that the proposed notion generalizes an already known optimality notion from the one-parameter case. Under this definition, we show that the algorithm we propose as a preprocessing is optimal in low-dimensional domains. In the last part of the thesis, we consider preliminary applications of the proposed algorithm in the context of topology-based multivariate visualization, by tracking critical features generated by a discrete gradient field compatible with the multiple scalar fields under study. We discuss similarities and differences between such critical features and those of state-of-the-art techniques in topology-based multivariate data visualization.
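
    As a minimal sketch of the input side of multipersistent homology (the discrete-gradient preprocessing of the thesis is not shown, and the names are illustrative): for a vector-valued function sampled on the vertices of a simplicial complex, each simplex enters the multifiltration at the componentwise maximum of its vertex values, a lower-star-style construction.

        import numpy as np

        def multifiltration_grades(simplices, vertex_values):
            """simplices: vertex-index tuples; vertex_values: (n, p) array, one row per vertex.
            Returns each simplex's entry grade: the componentwise max over its vertices."""
            vals = np.asarray(vertex_values, dtype=float)
            return {tuple(s): tuple(vals[list(s)].max(axis=0)) for s in simplices}

        # Two scalar fields on a filled triangle and its faces:
        complex_ = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2), (0, 1, 2)]
        values = [[0.0, 2.0], [1.0, 0.0], [2.0, 1.0]]
        grades = multifiltration_grades(complex_, values)   # e.g. edge (0, 1) enters at (1.0, 2.0)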