
    Flipping Cubical Meshes

    We define and examine flip operations for quadrilateral and hexahedral meshes, similar to the flipping transformations previously used in triangular and tetrahedral mesh generation. Comment: 20 pages, 24 figures. Expanded journal version of paper from the 10th International Meshing Roundtable. This version removes some unwanted paragraph breaks from the previous version; the text is unchanged.

    Finding Hexahedrizations for Small Quadrangulations of the Sphere

    This paper tackles the challenging problem of constrained hexahedral meshing. An algorithm is introduced to build combinatorial hexahedral meshes whose boundary facets exactly match a given quadrangulation of the topological sphere. This algorithm is the first practical solution to the problem. It is able to compute small hexahedral meshes of quadrangulations for which the previously known best solutions could only be built by hand or contained thousands of hexahedra. These challenging quadrangulations include the boundaries of transition templates that are critical for the success of general hexahedral meshing algorithms. The algorithm proposed in this paper is dedicated to building combinatorial hexahedral meshes of small quadrangulations and ignores the geometrical problem. The key idea of the method is to exploit the equivalence between quad flips in the boundary and the insertion of hexahedra glued to this boundary. The tree of all sequences of flipping operations is explored, searching for a path that transforms the input quadrangulation Q into a new quadrangulation for which a hexahedral mesh is known. When a small hexahedral mesh exists, a sequence transforming Q into the boundary of a cube is found; otherwise, a set of pre-computed hexahedral meshes is used. A novel approach to deal with the large number of problem symmetries is proposed. Combined with an efficient backtracking search, it allows small shellable hexahedral meshes to be found for all even quadrangulations with up to 20 quadrangles. All 54,943 such quadrangulations were meshed using no more than 72 hexahedra. This algorithm is also used to find a construction to fill arbitrary domains, thereby proving that any ball-shaped domain bounded by n quadrangles can be meshed with no more than 78n hexahedra. This very significantly lowers the previous upper bound of 5396n. Comment: Accepted for SIGGRAPH 2019.

    On orienting edges of unstructured two- and three-dimensional meshes

    Finite element codes typically use data structures that represent unstructured meshes as collections of cells, faces, and edges, each of which requires an associated coordinate system. One then needs to store how the coordinate system of each edge relates to that of neighboring cells. On the other hand, we can simplify data structures and algorithms if we can a priori orient coordinate systems in such a way that the coordinate systems on the edges follow uniquely from those on the cells by rule. Such rules require that every unstructured mesh allows assigning directions to edges that satisfy the convention in adjacent cells. We show that the convention chosen for unstructured quadrilateral meshes in the deal.II library always allows meshes to be oriented. It can therefore be used to make codes simpler, faster, and less bug-prone. We present an algorithm that orients meshes in O(N) operations. We then show that consistent orientations are not always possible for 3d hexahedral meshes. Thus, cells generally need to store the direction of adjacent edges, but our approach also allows the characterization of cases where this is not necessary. The 3d extension of our algorithm either orients edges consistently, or aborts, both within O(N) steps.
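
    The consistency question above can be phrased as parity constraints between edges and solved with union-find. The sketch below uses a simplified "opposite edges of a quad point the same way" convention, which is an assumption for illustration, not the actual deal.II rule; `orient_edges` is a hypothetical helper that either directs every edge or reports that no consistent orientation exists, in near-linear time.

```python
def orient_edges(quads):
    """Try to direct every edge of a quad mesh so that in each quad
    (a, b, c, d) opposite edges are parallel: a->b with d->c, and
    a->d with b->c.  Returns {(u, v): flip} with u < v, where flip 0
    means the edge is directed u->v and 1 means v->u, or None if no
    consistent orientation exists."""
    parent, parity = {}, {}  # union-find with parity over edges

    def find(e):
        parent.setdefault(e, e)
        parity.setdefault(e, 0)
        if parent[e] == e:
            return e, 0
        root, p = find(parent[e])
        parity[e] ^= p      # parity of e relative to the root
        parent[e] = root    # path compression
        return root, parity[e]

    def canon(u, v):
        # canonical undirected key plus a flag: is the stated
        # direction u->v reversed relative to (min, max)?
        return ((u, v), 0) if u < v else ((v, u), 1)

    def constrain(e1, f1, e2, f2):
        # require dir(e1) XOR f1 == dir(e2) XOR f2
        r1, p1 = find(e1)
        r2, p2 = find(e2)
        if r1 == r2:
            return (p1 ^ p2) == (f1 ^ f2)
        parent[r2] = r1
        parity[r2] = p1 ^ p2 ^ f1 ^ f2
        return True

    for a, b, c, d in quads:
        for (u1, v1), (u2, v2) in (((a, b), (d, c)), ((a, d), (b, c))):
            if not constrain(*canon(u1, v1), *canon(u2, v2)):
                return None
    return {e: find(e)[1] for e in list(parent)}

# Two quads of a 2x3 grid sharing the edge {2, 3}
print(orient_edges([(0, 1, 3, 2), (2, 3, 5, 4)]) is not None)  # True
```

    In 3d, analogous constraints can conflict, matching the paper's finding that hexahedral meshes are not always orientable; the `return None` branch is where such cases would surface.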

    Capturing Shape Information with Multi-Scale Topological Loss Terms for 3D Reconstruction

    Reconstructing 3D objects from 2D images is challenging for both our brains and machine learning algorithms. To support this spatial reasoning task, contextual information about the overall shape of an object is critical. However, such information is not captured by established loss terms (e.g. Dice loss). We propose to complement geometrical shape information by including multi-scale topological features, such as connected components, cycles, and voids, in the reconstruction loss. Our method uses cubical complexes to calculate topological features of 3D volume data and employs an optimal transport distance to guide the reconstruction process. This topology-aware loss is fully differentiable, computationally efficient, and can be added to any neural network. We demonstrate the utility of our loss by incorporating it into SHAPR, a model for predicting the 3D cell shape of individual cells based on 2D microscopy images. Using a hybrid loss that leverages both geometrical and topological information of single objects to assess their shape, we find that topological information substantially improves the quality of reconstructions, thus highlighting its ability to extract more relevant features from image datasets. Comment: Accepted at the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2022).
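
    The simplest topological feature mentioned above, the number of connected components, can be computed directly from voxel data. The following is a minimal pure-Python sketch using breadth-first search with 6-connectivity; `count_components` is an illustrative helper, not part of SHAPR or of the paper's persistent-homology machinery.

```python
from collections import deque

def count_components(volume):
    """Count 6-connected components of foreground voxels in a 3D
    binary volume given as nested lists indexed volume[z][y][x]."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    seen = set()
    components = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not volume[z][y][x] or (z, y, x) in seen:
                    continue
                components += 1  # new component: flood-fill it
                queue = deque([(z, y, x)])
                seen.add((z, y, x))
                while queue:
                    cz, cy, cx = queue.popleft()
                    for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        n = (cz + dz, cy + dy, cx + dx)
                        if (0 <= n[0] < nz and 0 <= n[1] < ny
                                and 0 <= n[2] < nx
                                and volume[n[0]][n[1]][n[2]]
                                and n not in seen):
                            seen.add(n)
                            queue.append(n)
    return components

# Two isolated voxels in opposite corners form two components
vol = [[[0] * 3 for _ in range(3)] for _ in range(3)]
vol[0][0][0] = 1
vol[2][2][2] = 1
print(count_components(vol))  # 2
```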

    A 36-Element Solution To Schneiders' Pyramid Hex-Meshing Problem And A Parity-Changing Template For Hex-Mesh Revision

    In this paper, we present a solution that uses the smallest number of hexahedra to build a pyramid, which is the key block required for one type of automatic hex-meshing method to be successful. When the initial result of a hex-meshing program is not appropriate for specific applications, some templates are used for revision. The templates reported thus far are parity-preserving, which means that the parity of the number of hexahedra in a mesh is unchanged after a revision following the templates. We present a parity-changing template that makes the template set complete and more effective. These two findings were obtained with a program that we developed for this study, which serves as a tool for researchers to observe the characteristics of small hexahedral meshes. Comment: 7 pages, 12 figures.

    IST Austria Thesis

    This thesis considers two examples of reconfiguration problems: flipping edges in edge-labelled triangulations of planar point sets and swapping labelled tokens placed on vertices of a graph. In both cases the studied structures – all the triangulations of a given point set or all token placements on a given graph – can be thought of as vertices of the so-called reconfiguration graph, in which two vertices are adjacent if the corresponding structures differ by a single elementary operation – by a flip of a diagonal in a triangulation or by a swap of tokens on adjacent vertices, respectively. We study the reconfiguration of one instance of a structure into another via (shortest) paths in the reconfiguration graph. For triangulations of point sets in which each edge has a unique label and a flip transfers the label from the removed edge to the new edge, we prove a polynomial-time testable condition, called the Orbit Theorem, that characterizes when two triangulations of the same point set lie in the same connected component of the reconfiguration graph. The condition was first conjectured by Bose, Lubiw, Pathak and Verdonschot. We additionally provide a polynomial-time algorithm that computes a reconfiguring flip sequence, if it exists. Our proof of the Orbit Theorem uses topological properties of a certain high-dimensional cell complex that has the usual reconfiguration graph as its 1-skeleton. In the context of token swapping on a tree graph, we make partial progress on the problem of finding shortest reconfiguration sequences. We disprove the so-called Happy Leaf Conjecture and demonstrate the importance of swapping tokens that are already placed at the correct vertices. We also prove that a generalization of the problem to weighted coloured token swapping is NP-hard on trees but solvable in polynomial time on paths and stars.
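
    For intuition on the path case in the final sentence: with distinct unit-weight tokens on a path, every swap acts on adjacent vertices, so a shortest reconfiguration sequence is a sorting by adjacent transpositions and its length is the number of inversions. The sketch below covers only this plain unweighted case, not the weighted coloured variant studied in the thesis.

```python
def min_swaps_on_path(tokens):
    """Minimum number of adjacent swaps needed to bring token i to
    vertex i on a path graph.  Each swap removes at most one inversion,
    and bubble sort removes exactly one per swap, so the answer is the
    inversion count of the arrangement (computed here in O(n^2))."""
    n = len(tokens)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if tokens[i] > tokens[j])

print(min_swaps_on_path([2, 0, 1]))  # 2: [2,0,1] -> [0,2,1] -> [0,1,2]
```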

    Persistent-Homology-based Machine Learning and its Applications -- A Survey

    A suitable feature representation that can both preserve the intrinsic information of the data and reduce its complexity and dimensionality is key to the performance of machine learning models. Deeply rooted in algebraic topology, persistent homology (PH) provides a delicate balance between data simplification and intrinsic structure characterization, and has been applied successfully to various areas. However, the combination of PH and machine learning has been hindered by three challenges, namely the topological representation of data, PH-based distance measurements or metrics, and PH-based feature representation. With the development of topological data analysis, progress has been made on all three problems, but the results remain widely scattered across the literature. In this paper, we provide a systematic review of PH and of PH-based supervised and unsupervised models from a computational perspective. Our emphasis is on the recent development of mathematical models and tools, including PH software and PH-based functions, feature representations, kernels, and similarity models. Essentially, this paper can serve as a roadmap for the practical application of PH-based machine learning tools. Further, we consider different topological feature representations in different machine learning models, and investigate their impact on protein secondary structure classification. Comment: 42 pages; 6 figures; 9 tables.

    Dense point sets have sparse Delaunay triangulations

    The spread of a finite set of points is the ratio between the longest and shortest pairwise distances. We prove that the Delaunay triangulation of any set of n points in R^3 with spread D has complexity O(D^3). This bound is tight in the worst case for all D = O(sqrt{n}). In particular, the Delaunay triangulation of any dense point set has linear complexity. We also generalize this upper bound to regular triangulations of k-ply systems of balls, unions of several dense point sets, and uniform samples of smooth surfaces. On the other hand, for any n and D = O(n), we construct a regular triangulation of complexity Omega(nD) whose n vertices have spread D. Comment: 31 pages, 11 figures. Full version of SODA 2002 paper. Also available at http://www.cs.uiuc.edu/~jeffe/pubs/screw.htm
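
    The spread defined in the first sentence is straightforward to compute by brute force over all pairs; a minimal sketch (O(n^2) time, fine for small point sets):

```python
from itertools import combinations
from math import dist

def spread(points):
    """Ratio of the longest to the shortest pairwise distance."""
    dists = [dist(p, q) for p, q in combinations(points, 2)]
    return max(dists) / min(dists)

# Unit square: shortest pairwise distance 1, longest sqrt(2)
print(spread([(0, 0), (1, 0), (0, 1), (1, 1)]))  # ≈ 1.4142
```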

    Random lattice particle modeling of fracture processes in cementitious materials

    The capability of representing fracture processes in non-homogeneous media is of great interest among the scientific community for at least two reasons: the first stems from the fact that the use of composite materials is ubiquitous within structural applications, since the advantages of the constituents can be exploited to improve material performance; the second consists of the need to assess the non-linear post-peak behavior of such structures to properly determine margins of safety with respect to strong excitations (e.g. earthquakes, blast or impact loadings). Different kinds of theories and methodologies have been developed in the last century in order to model such phenomena, starting from linear elastic equivalent methods, then moving to plastic theories and fracture mechanics. Among the different modeling techniques available, in recent years lattice models have established themselves as a powerful tool for simulating failure modes and crack paths in heterogeneous materials. The basic idea dates back to the pioneering work of Hrennikoff: a continuum medium can be modeled through the interaction of one-dimensional elements (e.g. springs or beams) spatially arranged in different ways. The set of nodes that interconnect the elements can be regularly or irregularly placed inside the domain, leading to regular or random lattices. It has been shown that lattices with regular geometry can strongly bias the direction of cracking, leading to incorrect results. A variety of lattice models have been developed. Such models have seen a wide field of applications, ranging from aerodynamics (using Lattice-Boltzmann models) to heat transfer, crystallography and many others. Every material used in civil and infrastructure engineering is constituted of different phases. This is because the features of different constituents are usually combined in order to obtain greater advantages with respect to the original materials.
Even structural steel, which is usually thought of as a homogeneous continuum-type medium, includes carbon particles that can be seen as inhomogeneities at the microscopic level. The mechanical behavior of concrete, which is the main object of the present work, is strongly affected not only by the presence of inclusions (i.e. the aggregate pieces) but also by their arrangement. For this reason, the explicit, statistical representation of their presence is of great interest in simulations of concrete behavior. Lattice models can directly account for the presence of different phases, and so are advantageous from this perspective. The definition of such models and their implementation in a computer program, together with validation against laboratory tests, will be presented. The present work will briefly review the state of the art and the basic principles of these models, starting from the geometrical and computational tools needed to build the simulations. The implementation of this technique in the Matlab environment will be presented, highlighting the theoretical background. The numerical results will be validated against two complementary experimental campaigns, which focused on the meso- and macro-scales of concrete. While the aim of this work is the representation of quasi-brittle fracture processes in cementitious materials such as concrete, the discussed approach is general, and therefore valid for the representation of damage and crack growth in a variety of different materials.
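
    The idea of a random lattice can be illustrated with a toy proximity rule: scatter nodes at random and connect every pair closer than a cutoff. This is a deliberately simplified sketch, and `random_lattice` is a hypothetical helper; the model discussed here is built in Matlab and uses a more principled meso-scale construction of node placement and connectivity.

```python
import random
from math import dist

def random_lattice(n, cutoff, seed=0):
    """Toy random lattice: n nodes placed uniformly in the unit square,
    with a strut between every pair of nodes closer than `cutoff`.
    Irregular node placement avoids the crack-direction bias that
    regular lattice geometries are known to introduce."""
    rng = random.Random(seed)  # seeded for reproducibility
    nodes = [(rng.random(), rng.random()) for _ in range(n)]
    struts = [(i, j) for i in range(n) for j in range(i + 1, n)
              if dist(nodes[i], nodes[j]) < cutoff]
    return nodes, struts

nodes, struts = random_lattice(30, 0.25)
print(len(nodes), len(struts))
```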

    Semantic models for texturing volume objects
