
    GPU-based volume visualization from high-order finite element fields

    This pre-print describes a new volume rendering system for spectral/hp finite-element methods, designed to be both accurate and interactive. Although high-order finite element methods are commonly used by scientists and engineers, few visualization methods display this data directly. Consequently, visualizations of high-order data are generally created by first sampling the high-order field onto a regular grid and then generating the visualization via traditional methods based on linear interpolation. This approach, however, introduces error into the visualization pipeline and requires the user to balance image quality, interactivity, and resource consumption. We first show that evaluation of the volume rendering integral, when applied to the composition of piecewise-smooth transfer functions with the high-order scalar field, typically exhibits second-order convergence for a wide range of high-order quadrature schemes, and has worst-case first-order convergence. This result provides bounds on the ability to achieve high-order convergence to the volume rendering integral. We then develop an algorithm for optimized evaluation of the volume rendering integral, based on the categorization of each ray according to the local behavior of the field and transfer function. We demonstrate the effectiveness of our system by running performance benchmarks on several high-order fluid-flow simulations.
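    The following is a minimal sketch, not the paper's ray-classification algorithm, of how the emission-absorption volume rendering integral can be approximated with composite Gauss-Legendre quadrature along a single ray. The callables field, color_tf, and extinction_tf are hypothetical stand-ins for the high-order scalar field restricted to the ray and a piecewise-smooth transfer function.

```python
import numpy as np

def render_ray(field, color_tf, extinction_tf, ray_length, n_segments=64, order=4):
    """Approximate C = int_0^D c(s(t)) tau(s(t)) exp(-int_0^t tau(s(u)) du) dt
    by splitting the ray into segments and applying Gauss-Legendre quadrature
    on each segment; the inner optical depth is accumulated with the same rule."""
    nodes, weights = np.polynomial.legendre.leggauss(order)   # quadrature rule on [-1, 1]
    edges = np.linspace(0.0, ray_length, n_segments + 1)
    color = 0.0
    depth = 0.0                                               # optical depth accumulated up to segment start
    for a, b in zip(edges[:-1], edges[1:]):
        t = 0.5 * (b - a) * nodes + 0.5 * (a + b)             # quadrature points mapped to [a, b]
        w = 0.5 * (b - a) * weights
        s = field(t)                                          # scalar field sampled along the ray
        tau = extinction_tf(s)                                # extinction from the transfer function
        emit = color_tf(s)                                    # emitted intensity (grayscale for simplicity)
        inner = np.cumsum(w * tau)                            # approximate optical depth within the segment
        color += np.sum(w * emit * tau * np.exp(-(depth + inner)))
        depth += inner[-1]                                    # full segment contribution to the outer depth
    return color
```

    In practice the segment count and quadrature order would be chosen per ray; the paper's contribution is precisely the categorization of rays so that this effort is spent only where the local behavior of the field and transfer function requires it.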

    Direct numerical simulation of complex viscoelastic flows via fast lattice-Boltzmann solution of the Fokker–Planck equation

    Micro–macro simulations of polymeric solutions rely on the coupling between macroscopic conservation equations for the fluid flow and stochastic differential equations for kinetic viscoelastic models at the microscopic scale. In the present work we introduce a novel micro–macro numerical approach, in which the macroscopic equations are solved by a finite-volume method and the microscopic equation by a lattice-Boltzmann one. The kinetic model is given by molecular analogy with a finitely extensible non-linear elastic (FENE) dumbbell and is solved deterministically through an equivalent Fokker–Planck equation. The key features of the proposed approach are: (i) a proper scaling and coupling between the micro lattice-Boltzmann solution and the macro finite-volume one; (ii) a fast microscopic solver, thanks to a Graphics Processing Unit (GPU) implementation and the local adaptivity of the lattice-Boltzmann mesh; (iii) an operator-splitting algorithm for the convection of the macroscopic viscoelastic stresses instead of the whole probability density of the dumbbell configuration. This last feature allows the application of the proposed method to non-homogeneous flow conditions with low memory-storage requirements. The model is optimized through an extensive analysis of the lattice-Boltzmann solution, which provides control over the numerical error and the computational time. The resulting micro–macro model is validated against the benchmark problem of a viscoelastic flow past a confined cylinder, and the results confirm the validity of the approach.
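    As a companion to the kinetic description above, here is a minimal sketch of two ingredients named in the abstract: the FENE spring force and the Kramers expression that turns a discretized configuration distribution into a macroscopic polymer stress. It is not the paper's lattice-Boltzmann Fokker–Planck solver; the parameters H, Q0, n_p, and kT, and the grid of connector vectors Q, are placeholders chosen for illustration.

```python
import numpy as np

def fene_force(Q, H=1.0, Q0=np.sqrt(50.0)):
    """FENE spring force F(Q) = H Q / (1 - |Q|^2 / Q0^2) for connector vectors Q
    of shape (n_points, dim); the force diverges as the dumbbell approaches
    its maximum extension Q0."""
    q2 = np.sum(Q * Q, axis=-1, keepdims=True)
    return H * Q / (1.0 - q2 / Q0**2)

def kramers_stress(psi, Q, dQ, n_p=1.0, kT=1.0):
    """Polymer stress via the Kramers expression
        tau_p = n_p ( <Q F(Q)> - kT I ),
    where <.> is the configuration-space average of the distribution psi(Q)
    discretized on a grid of connector vectors Q with cell volume dQ."""
    F = fene_force(Q)
    weights = psi * dQ                                  # quadrature weights; should sum to ~1
    moment = np.einsum('q,qi,qj->ij', weights, Q, F)    # second moment <Q (x) F(Q)>
    return n_p * (moment - kT * np.eye(Q.shape[-1]))
```

    In a micro–macro coupling, a stress of this kind is the quantity convected by the macroscopic solver, which is exactly what the operator-splitting step in feature (iii) transports instead of the full probability density.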

    Fast, Scalable, and Interactive Software for Landau-de Gennes Numerical Modeling of Nematic Topological Defects

    Numerical modeling of nematic liquid crystals using the tensorial Landau-de Gennes (LdG) theory provides detailed insights into the structure and energetics of the enormous variety of possible topological defect configurations that may arise when the liquid crystal is in contact with colloidal inclusions or structured boundaries. However, these methods can be computationally expensive, making it challenging to predict (meta)stable configurations involving several colloidal particles, and they are often restricted to system sizes well below the experimental scale. Here we present an open-source software package that exploits the embarrassingly parallel structure of the lattice discretization of the LdG approach. Our implementation, combining CUDA/C++ and OpenMPI, allows users to accelerate simulations using both CPU and GPU resources in either single- or multiple-core configurations. We make use of an efficient minimization algorithm, the Fast Inertial Relaxation Engine (FIRE) method, that is well-suited to large-scale parallelization, requiring little additional memory or computational cost while offering performance competitive with other commonly used methods. In multi-core operation we are able to scale simulations up to supra-micron length scales of experimental relevance, and in single-core operation the simulation package includes a user-friendly GUI environment for rapid prototyping of interfacial features and the multifarious defect states they can promote. To demonstrate this software package, we examine in detail the competition between curvilinear disclinations and point-like hedgehog defects as size scale, material properties, and geometric features are varied. We also study the effects of an interface patterned with an array of topological point-defects.
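    For readers unfamiliar with the minimizer mentioned above, the following is a minimal NumPy sketch of the FIRE update rule (damped molecular dynamics with velocity mixing and an adaptive time step). It is not the package's CUDA/MPI implementation; grad is a hypothetical callable returning the gradient of the discretized LdG free energy with respect to the flattened array of Q-tensor degrees of freedom, and the parameters are the commonly quoted FIRE defaults.

```python
import numpy as np

def fire_minimize(grad, x, dt=0.01, dt_max=0.1, n_min=5,
                  f_inc=1.1, f_dec=0.5, alpha_start=0.1, f_alpha=0.99,
                  max_steps=10000, f_tol=1e-8):
    """Minimize an energy whose gradient is given by `grad` using the FIRE scheme."""
    v = np.zeros_like(x)
    alpha, n_pos = alpha_start, 0
    for _ in range(max_steps):
        F = -grad(x)
        if np.max(np.abs(F)) < f_tol:                 # force-based convergence test
            break
        P = np.vdot(F, v)                             # power: are we moving downhill?
        if P > 0:
            n_pos += 1
            if n_pos > n_min:                         # downhill long enough: speed up
                dt = min(dt * f_inc, dt_max)
                alpha *= f_alpha
        else:                                         # uphill: stop, shrink step, reset mixing
            n_pos = 0
            v[:] = 0.0
            dt *= f_dec
            alpha = alpha_start
        # semi-implicit Euler step with velocity mixed toward the force direction
        v += dt * F
        v = (1.0 - alpha) * v + alpha * np.linalg.norm(v) * F / (np.linalg.norm(F) + 1e-30)
        x = x + dt * v
    return x
```

    The appeal noted in the abstract is that each update needs only the current gradient, velocity, and a handful of scalars, which keeps the memory footprint small and the scheme easy to distribute across many lattice sites.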

    Doctor of Philosophy

    High-order finite element methods, using either the continuous or discontinuous Galerkin formulation, are becoming more popular in fields such as fluid mechanics, solid mechanics, and computational electromagnetics. While the use of these methods is increasingly common, there has not been a corresponding increase in the availability and use of visualization methods and software capable of displaying these volumes both accurately and interactively. A fundamental problem with the majority of existing visualization techniques is that they neither understand nor respect the structure of a high-order field, leading to visualization error. Visualizations of high-order fields are generally created by first approximating the field with low-order primitives and then generating the visualization using traditional methods based on linear interpolation. The approximation step introduces error into the visualization pipeline, which requires the user to balance the competing goals of image quality, interactivity, and resource consumption. In practice, visualizations performed this way are often either undersampled, leading to visualization error, or oversampled, leading to unnecessary computational effort and resource consumption. Without an understanding of the sources of error, the simulation scientist cannot determine whether artifacts in the image are due to visualization error, insufficient mesh resolution, or a failure in the underlying simulation. This uncertainty makes it difficult to make judgments based on the visualization, since attributing artifacts to visualization error when they actually stem from a more fundamental problem can lead to poor decisions. This dissertation presents new visualization algorithms that use the high-order data in its native state, exploiting the structure and mathematical properties of these fields to create accurate images interactively while avoiding the error introduced by low-order approximations. First, a new algorithm for cut-surfaces is presented, specifically the accurate depiction of colormaps and contour lines on arbitrarily complex cut-surfaces. Second, a mathematical analysis of the evaluation of the volume rendering integral through a high-order field is presented, together with an algorithm that uses this analysis to create accurate volume renderings. Finally, a new software system, the Element Visualizer (ElVis), is presented, which combines the ideas and algorithms created in this dissertation in a single software package that simulation scientists can use to create accurate visualizations. This system was developed and tested with the assistance of the ProjectX simulation team. The utility of our algorithms and visualization system is then demonstrated with examples from several high-order fluid-flow simulations.
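    To make concrete what using the high-order data in its native state means, here is a minimal sketch of evaluating an element's polynomial expansion directly at visualization sample points (for instance, points on a cut-surface) rather than resampling the field onto a linear grid. A modal Legendre tensor-product basis is assumed purely for illustration; spectral/hp codes such as those targeted by ElVis may use other bases and reference mappings.

```python
import numpy as np

def eval_element_1d(coeffs, xi):
    """Evaluate a 1D modal Legendre expansion u(xi) = sum_k c_k P_k(xi)
    at arbitrary reference coordinates xi in [-1, 1]."""
    return np.polynomial.legendre.legval(xi, coeffs)

def eval_element_2d(coeffs_xy, xi, eta):
    """Tensor-product evaluation u(xi, eta) = sum_{i,j} c_ij P_i(xi) P_j(eta),
    e.g. at cut-surface sample points, with no intermediate low-order grid."""
    return np.polynomial.legendre.legval2d(xi, eta, coeffs_xy)

# Example: a hypothetical degree-4 element field evaluated exactly where the
# cut-surface needs it, instead of on a fixed regular grid.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((5, 5))          # placeholder modal coefficients of one element
xi, eta = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7))
samples = eval_element_2d(coeffs, xi, eta)    # pointwise evaluation of the expansion itself
```

    Because each sample is an evaluation of the expansion itself, remaining artifacts can be attributed to the mesh or the simulation rather than to the visualization, which is the distinction the dissertation emphasizes.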