123 research outputs found

    A Density Control Based Adaptive Hexahedral Mesh Generation Algorithm

    A density control based adaptive hexahedral mesh generation algorithm for three-dimensional models is presented in this paper. The first step of the algorithm is to identify the characteristic boundary of the solid model to be meshed. Secondly, the refinement fields are constructed and modified according to the conformal refinement templates and used as a metric to generate an initial grid structure. Thirdly, a jagged core mesh is generated by removing all the elements exterior to the solid model. Fourthly, all the surface nodes of the jagged core mesh are matched to the surfaces of the model through a node projection process. Finally, mesh quality, in terms of both topology and shape, is improved using the corresponding optimization techniques.
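
    As a hedged illustration of the third and fourth steps above (carving the jagged core mesh and projecting its surface nodes), the following Python sketch uses a uniform grid with a sphere standing in for the solid model; the refinement fields, conformal templates, and optimization steps of the actual algorithm are not reproduced, and all names are illustrative.

```python
# Toy sketch, not the paper's implementation: keep the grid cells whose centroids lie
# inside the solid (step 3, "jagged core mesh"), then project points onto the model
# surface (step 4, node projection). A unit sphere stands in for the solid model.
import numpy as np

def core_cell_centroids(n=10, radius=1.0):
    """Centroids of the grid cells retained after removing exterior elements."""
    h = 2.0 * radius / n
    idx = (np.arange(n) + 0.5) * h - radius          # cell-centre coordinates per axis
    cx, cy, cz = np.meshgrid(idx, idx, idx, indexing="ij")
    centroids = np.stack([cx, cy, cz], axis=-1).reshape(-1, 3)
    return centroids[np.linalg.norm(centroids, axis=1) < radius]

def project_to_surface(points, radius=1.0):
    """Snap boundary nodes of the jagged core onto the model surface (here: the sphere)."""
    points = np.asarray(points, dtype=float)
    norms = np.linalg.norm(points, axis=1, keepdims=True)
    return radius * points / np.maximum(norms, 1e-12)

core = core_cell_centroids()
print(len(core), "of", 10 ** 3, "cells kept")         # the jagged core
print(project_to_surface(core[:3]))                   # sample points snapped to the surface
```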

    Parallel octree-based hexahedral mesh generation for Eulerian to Lagrangian conversion

    HexBox: Interactive Box Modeling of Hexahedral Meshes

    We introduce HexBox, an intuitive modeling method and interactive tool for creating and editing hexahedral meshes. HexBox brings the major and widely validated surface modeling paradigm of box modeling into the world of hex meshing. The main idea is to allow the user to box-model a volumetric mesh by primarily modifying its surface through a set of topological and geometric operations. We support, in particular, local and global subdivision, various instantiations of extrusion, removal, and cloning of elements, the creation of non-conformal or conformal grids, as well as shape modifications through vertex positioning, including manual editing, automatic smoothing, or, eventually, projection on an externally provided target surface. At the core of the efficient implementation of the method is the coherent maintenance, at all steps, of two parallel data structures: a hexahedral mesh representing the topology and geometry of the currently modeled shape, and a directed acyclic graph that connects operation nodes to the affected mesh hexahedra. Operations are realized by exploiting recent advancements in grid-based meshing, such as mixing of 3-refinement, 2-refinement, and face-refinement, and by using templated topological bridges to enforce on-the-fly mesh conformity across pairs of adjacent elements. A direct manipulation user interface lets users control all operations. The effectiveness of our tool, released as open source to the community, is demonstrated by modeling several complex shapes that are hard to realize with competing tools and techniques.
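
    The two parallel data structures mentioned above (a hexahedral mesh plus a directed acyclic graph linking operation nodes to the hexahedra they affect) can be pictured with the toy bookkeeping below; class and field names are assumptions for illustration, not the HexBox codebase.

```python
# Toy sketch (not HexBox itself): a current hex mesh kept side by side with a DAG of
# editing operations, each operation node recording which hexahedra it affected.
from dataclasses import dataclass

@dataclass
class Hex:
    hid: int
    corners: list            # 8 vertex ids (left empty in this toy example)

@dataclass
class Operation:
    op_id: int
    kind: str                # e.g. "extrude", "subdivide", "remove"
    parents: list            # earlier operations this one depends on
    affected: list           # ids of hexahedra touched by this operation

class HexBoxState:
    def __init__(self):
        self.hexes = {}      # hid -> Hex (current mesh)
        self.ops = {}        # op_id -> Operation (history DAG)

    def apply(self, kind, affected_hids, new_hexes, parents=()):
        op = Operation(len(self.ops), kind, list(parents), list(affected_hids))
        self.ops[op.op_id] = op
        for h in new_hexes:
            self.hexes[h.hid] = h
        return op

# Usage: record a subdivision that replaces hex 0 with hexes 1..8.
state = HexBoxState()
state.hexes[0] = Hex(0, list(range(8)))
children = [Hex(i, []) for i in range(1, 9)]
del state.hexes[0]
op = state.apply("subdivide", affected_hids=[0], new_hexes=children)
print(op.kind, "affects", op.affected, "- mesh now has", len(state.hexes), "hexes")
```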

    Admittance Method for Estimating Local Field Potentials Generated in a Multi-Scale Neuron Model of the Hippocampus

    Significant progress has been made toward model-based prediction of neural tissue activation in response to extracellular electrical stimulation, but challenges remain in the accurate and efficient estimation of distributed local field potentials (LFPs). Analytical methods of estimating electric fields are a first-order approximation that may be suitable for model validation, but they are computationally expensive and cannot accurately capture boundary conditions in heterogeneous tissue. While there are many appropriate numerical methods of solving electric fields in neural tissue models, there is neither an established standard for mesh geometry nor a well-known rule for handling any mismatch in spatial resolution. Moreover, the challenge of misalignment between current sources and mesh nodes in a finite-element or resistor-network volume conduction model needs to be further investigated. Therefore, using a previously published and validated multi-scale model of the hippocampus, the authors have formulated an algorithm for LFP estimation and, by extension, bidirectional communication between discretized, numerically solved volume conduction models and biologically detailed neural circuit models constructed in NEURON. Development of this algorithm required that we assess (i) meshes of unstructured tetrahedral and grid-based hexahedral geometries as well as (ii) differing approaches for managing the spatial misalignment of current sources and mesh nodes. The resulting algorithm is validated through the comparison of Admittance Method predicted evoked potentials with analytically estimated LFPs. Establishing this method is a critical step toward closed-loop integration of volume conductor and NEURON models that could lead to substantial improvement of the predictive power of multi-scale stimulation models of cortical tissue. These models may be used to deepen our understanding of hippocampal pathologies and to identify efficacious electroceutical treatments.
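
    One common way to cope with the source/node misalignment discussed above is to spread each compartment's point current over the nodes of its enclosing grid cell; the sketch below shows that idea with trilinear weights. It illustrates the coupling problem only, and is not the authors' Admittance Method implementation.

```python
# Hedged sketch: distribute a point current source onto the eight nodes of the
# hexahedral grid cell that contains it, using trilinear weights, so that an
# off-node neuronal compartment can drive a node-based volume conduction model.
import numpy as np

def trilinear_weights(p, origin, h):
    """Weights of the 8 grid nodes of the cell containing point p (uniform spacing h)."""
    local = (np.asarray(p, dtype=float) - origin) / h
    base = np.floor(local).astype(int)
    f = local - base                      # fractional position inside the cell
    weights = {}
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                weights[tuple(base + np.array([dx, dy, dz]))] = w
    return weights

# Usage: a 10 nA source sitting off-node is split across its cell's corner nodes.
w = trilinear_weights(p=[0.3, 0.7, 0.1], origin=np.zeros(3), h=1.0)
node_currents = {node: 10e-9 * wt for node, wt in w.items()}
print(sum(node_currents.values()))        # total injected current is preserved (~1e-8 A)
```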

    Doctor of Philosophy

    One of the fundamental building blocks of many computational sciences is the construction and use of a discretized, geometric representation of a problem domain, often referred to as a mesh. Such a discretization enables an otherwise complex domain to be represented simply, and computation to be performed over that domain with a finite number of basis elements. As mesh generation techniques have become more sophisticated over the years, focus has largely shifted to quality mesh generation techniques that guarantee or empirically generate numerically well-behaved elements. In this dissertation, the two complementary meshing subproblems of vertex placement and element creation are analyzed, both separately and together. First, a dynamic particle system achieves adaptivity over domains by inferring feature size through a new information passing algorithm. Second, a new tetrahedral meshing algorithm is constructed that carefully combines lattice-based stenciling and mesh warping to produce guaranteed-quality meshes on multimaterial volumetric domains. Finally, the ideas of lattice cleaving and dynamic particle systems are merged into a unified framework for guaranteed-quality, unstructured, and adaptive meshing of multimaterial volumetric domains.
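
    A minimal, purely illustrative sketch of the adaptivity idea behind the particle system described above: particles relax until their spacing matches a local sizing field. The sizing function and all parameters here are hypothetical, and the dissertation's information-passing and lattice-cleaving machinery is not reproduced.

```python
# Toy 1D particle relaxation: each pair of particles tries to keep a separation equal
# to the average of a sizing field at their positions, so particles cluster where the
# sizing field (a stand-in for local feature size) is small.
import numpy as np

def sizing(x):
    """Hypothetical sizing field: fine near x = 0, coarser away from it."""
    return 0.05 + 0.2 * np.abs(x)

def relax(points, step=0.5, iters=50):
    pts = np.array(points, dtype=float)
    for _ in range(iters):
        forces = np.zeros_like(pts)
        for i in range(len(pts)):
            for j in range(len(pts)):
                if i == j:
                    continue
                d = pts[i] - pts[j]
                target = 0.5 * (sizing(pts[i]) + sizing(pts[j]))
                if abs(d) < target:                  # too close: push apart
                    forces[i] += np.sign(d) * (target - abs(d))
        pts = np.clip(pts + step * forces, -1.0, 1.0)
    return np.sort(pts)

print(relax(np.linspace(-1, 1, 15)))   # particles end up denser where sizing is small
```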

    Performance Portable Solid Mechanics via Matrix-Free p-Multigrid

    Finite element analysis of solid mechanics is a foundational tool of modern engineering, with low-order finite element methods and assembled sparse matrices representing the industry standard for implicit analysis. We use performance models and numerical experiments to demonstrate that high-order methods greatly reduce the cost of reaching engineering tolerances while enabling effective use of GPUs. We demonstrate the reliability, efficiency, and scalability of matrix-free p-multigrid methods with algebraic multigrid coarse solvers through large-deformation hyperelastic simulations of multiscale structures. We investigate accuracy, cost, and execution time on multi-node CPU and GPU systems for moderate to large models using AMD MI250X (OLCF Crusher), NVIDIA A100 (NERSC Perlmutter), and V100 (LLNL Lassen and OLCF Summit), obtaining order-of-magnitude efficiency improvements over a broad range of model properties and scales. We discuss efficient matrix-free representation of Jacobians and demonstrate how automatic differentiation enables rapid development of nonlinear material models without impacting debuggability or workflows targeting GPUs.
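
    As a small illustration of the matrix-free idea referenced above, the sketch below applies a 1D linear-element stiffness operator element by element and hands it to a Krylov solver as a LinearOperator. The paper's p-multigrid hierarchy, hyperelastic material models, and GPU kernels are well beyond this toy, and everything named here is an assumption.

```python
# Minimal matrix-free operator application: y = K u is computed by an element loop
# instead of an assembled sparse matrix. 1D Poisson with linear elements and
# homogeneous Dirichlet boundary conditions serves as a stand-in problem.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n_elem = 64
h = 1.0 / n_elem
n_dof = n_elem - 1                          # interior nodes only

def apply_stiffness(u):
    """Element-by-element action of the 2x2 element matrix (1/h) [[1, -1], [-1, 1]]."""
    full = np.concatenate(([0.0], u, [0.0]))    # pad with boundary values
    y = np.zeros_like(full)
    for e in range(n_elem):
        ue = full[e:e + 2]
        y[e]     += (ue[0] - ue[1]) / h
        y[e + 1] += (ue[1] - ue[0]) / h
    return y[1:-1]

K = LinearOperator((n_dof, n_dof), matvec=apply_stiffness, dtype=float)
f = h * np.ones(n_dof)                      # consistent load for constant forcing f = 1
u, info = cg(K, f)
print(info, u.max())                        # max of u is ~1/8 for -u'' = 1 on (0, 1)
```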

    Analysis and Generation of Quality Polytopal Meshes with Applications to the Virtual Element Method

    This thesis explores the concept of the quality of a mesh, the latter being intended as the discretization of a two- or three-dimensional domain. The topic is interdisciplinary in nature, as meshes are massively used in several fields from both the geometry processing and the numerical analysis communities. The goal is to produce a mesh with good geometrical properties and the lowest possible number of elements, able to produce results within a target range of accuracy: in other words, a good-quality mesh that is also cheap to handle, overcoming the typical trade-off between quality and computational cost. To reach this goal, we first need to answer the question: "How, and how much, does the accuracy of a numerical simulation or a scientific computation (e.g., rendering, printing, modeling operations) depend on the particular mesh adopted to model the problem? And which geometrical features of the mesh most influence the result?" We present a comparative study of the different mesh types, mesh generation techniques, and mesh quality measures currently available in the literature related to both engineering and computer graphics applications. This analysis leads to a precise definition of the notion of quality for a mesh, in the particular context of numerical simulations of partial differential equations with the virtual element method, and to the consequent construction of criteria to determine and optimize the quality of a given mesh. Our main contribution is a new mesh quality indicator for polytopal meshes, able to predict the performance of the virtual element method on a particular mesh before running the simulation. Closely related to this, we also define a quality agglomeration algorithm that optimizes the quality of a mesh by wisely agglomerating groups of neighboring elements. The accuracy and reliability of both tools are thoroughly verified in a series of tests in different scenarios.
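
    To give a flavor of the kind of per-element geometric quantity a polytopal quality indicator aggregates, here is a small, purely illustrative score for 2D polygons (an isoperimetric-style area-to-diameter ratio). The thesis's VEM-specific indicator and its agglomeration algorithm are not reproduced here.

```python
# Illustrative only: a simple geometric quality score for a polygonal element,
# comparing its area to the area of a disk with the same diameter. Well-shaped
# elements score close to their ideal value; slivers score near zero.
import numpy as np

def polygon_quality(vertices):
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))   # shoelace
    diam = max(np.linalg.norm(a - b) for a in v for b in v)
    return 4.0 * area / (np.pi * diam ** 2) if diam > 0 else 0.0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (1, 0), (1, 0.05), (0, 0.05)]
print(round(polygon_quality(square), 3), round(polygon_quality(sliver), 3))
```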

    6th International Meshing Roundtable '97
