    Stochastic Volume Rendering of Multi-Phase SPH Data

    In this paper, we present a novel method for the direct volume rendering of large smoothed-particle hydrodynamics (SPH) simulation data without transforming the unstructured data to an intermediate representation. By directly visualizing the unstructured particle data, we avoid long preprocessing times and large storage requirements. This enables the visualization of large, time-dependent, and multivariate data both as a post-process and in situ. To address the computational complexity, we introduce stochastic volume rendering that considers only a subset of particles at each step during ray marching. The probabilities for selecting this subset at each step are determined both in a view-dependent manner and based on the spatial complexity of the data. Our stochastic volume rendering enables us to scale continuously from a fast, interactive preview to a more accurate volume rendering at higher cost. Lastly, we discuss the visualization of free-surface and multi-phase flows by including a multi-material model with volumetric and surface shading into the stochastic volume rendering.
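    The core idea lends itself to a compact illustration. Below is a minimal sketch of stochastic ray marching (the function names, the Gaussian stand-in kernel, and the uniform sample probabilities are illustrative assumptions, not the paper's implementation): each step evaluates only a random subset of particles and importance-corrects the density estimate accordingly.

```python
import numpy as np

def stochastic_ray_march(origin, direction, particles, densities,
                         step=0.1, n_steps=100, subset=32, rng=None):
    """Accumulate radiance along a ray by sampling a random particle
    subset at each step (illustrative sketch, not the paper's code)."""
    rng = rng or np.random.default_rng()
    transmittance, radiance = 1.0, 0.0
    pos = origin.astype(float)
    n = len(particles)
    for _ in range(n_steps):
        # Uniform sample probabilities here; the paper derives them from
        # view dependence and local data complexity instead.
        idx = rng.choice(n, size=min(subset, n), replace=False)
        d = np.linalg.norm(particles[idx] - pos, axis=1)
        w = np.exp(-d**2)                     # stand-in smoothing kernel
        # Importance-correct the density estimate for the subsampling.
        sigma = (densities[idx] * w).sum() * n / len(idx)
        alpha = 1.0 - np.exp(-sigma * step)
        radiance += transmittance * alpha     # white emission for brevity
        transmittance *= 1.0 - alpha
        pos += direction * step
        if transmittance < 1e-3:              # early ray termination
            break
    return radiance
```

    Drawing more samples per step (a larger subset) trades speed for accuracy, which is the continuous preview-to-quality scaling the abstract describes.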

    A Survey of GPU-Based Large-Scale Volume Visualization

    This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera-, and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e., “output-sensitive” algorithms and system designs. This leads to recent output-sensitive approaches that are “ray-guided,” “visualization-driven,” or “display-aware.” In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we discuss in this survey.
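    As a toy illustration of the working-set notion (not code from the survey; the view-cone test and the distance-based level rule are simplifying assumptions), the sketch below selects the bricks a view needs and assigns each a resolution level that coarsens with distance, which is what makes the approach output-sensitive:

```python
import numpy as np

def working_set(brick_centers, cam_pos, cam_dir, fov_cos=0.5, ref_dist=1.0):
    """Toy working-set selection: keep bricks inside a crude view cone
    and pick a mip level that coarsens with every doubling of distance.
    cam_dir is assumed normalized."""
    to_brick = brick_centers - cam_pos
    dist = np.linalg.norm(to_brick, axis=1)
    in_cone = (to_brick @ cam_dir) > fov_cos * dist   # crude visibility
    level = np.floor(np.log2(np.maximum(dist / ref_dist, 1.0))).astype(int)
    # The working set is the minimal (brick, level) list needed for the
    # current view; everything else can stay out of core.
    return [(int(i), int(l))
            for i, l in zip(np.where(in_cone)[0], level[in_cone])]
```

    Ray-guided systems refine this further by determining visibility from the rays themselves and feeding the resulting working set back into an out-of-core brick cache.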

    Efficient Liquid Animation: New Discretizations for Spatially Adaptive Liquid Viscosity and Reduced-Model Two-Phase Bubbles and Inviscid Liquids

    The work presented in this thesis focuses on improving the computational efficiency when simulating viscous liquids and air bubbles immersed in liquids by designing new discretizations to focus computational effort in regions that meaningfully contribute to creating realistic motion. For example, when simulating air bubbles rising through a liquid, the entire bubble volume is traditionally simulated despite the bubble’s interior being visually unimportant. We propose our constraint bubbles model to avoid simulating the interior of the bubble volume by reformulating the usual incompressibility constraint throughout a bubble volume as a constraint over only the bubble’s surface. Our constraint method achieves qualitatively similar results compared to a two-phase simulation ground truth for bubbles with low densities (e.g., air bubbles in water). For bubbles with higher densities, we propose our novel affine regions to model the bubble’s entire velocity field with a single affine vector field. We demonstrate that affine regions can correctly achieve hydrostatic equilibrium for bubble densities that match the surrounding liquid and correctly sink for higher densities. Finally, we introduce a tiled approach to subdivide large-scale affine regions into smaller subregions. Using this strategy, we are able to accelerate single-phase free surface flow simulations, offering a novel approach to adaptively enforce incompressibility in free surface liquids without complex data structures. While pressure forces are often the bottleneck for inviscid fluid simulations, viscosity can impose orders of magnitude greater computational costs. We observed that viscous liquids require high simulation resolution at the surface to capture detailed viscous buckling and rotational motion but, because viscosity dampens relative motion, do not require the same resolution in the liquid’s interior. We therefore propose a novel adaptive method to solve free surface viscosity equations by discretizing the variational finite difference approach of Batty and Bridson (2008) on an octree grid. Our key insight is that the variational method guarantees a symmetric positive definite linear system by construction, allowing the use of fast numerical solvers like the Conjugate Gradients method. By coarsening simulation grid cells inside the liquid volume, we rapidly reduce the degrees of freedom in the viscosity linear system by up to a factor of 7.7x and achieve performance improvements for the linear solve between 3.8x and 9.4x compared to a regular grid equivalent. The results of our adaptive method closely match an equivalent regular grid for common scenarios such as rotation and bending, buckling and folding, and solid-liquid interactions.
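    The key insight about symmetric positive definite (SPD) systems is what makes fast Krylov solvers applicable. A textbook conjugate gradients routine, of the kind such an SPD system permits, looks like this (a generic sketch, not the thesis code):

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive definite A, the property
    the variational viscosity discretization guarantees by construction."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # converged
            break
        p = r + (rs_new / rs) * p  # conjugate next direction
        rs = rs_new
    return x
```

    Coarsening interior cells shrinks the dimension of A, which is why the reported 7.7x reduction in degrees of freedom translates directly into faster solves.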

    Void-and-Cluster Sampling of Large Scattered Data and Trajectories

    We propose a data reduction technique for scattered data based on statistical sampling. Our void-and-cluster sampling technique finds a representative subset that is optimally distributed in the spatial domain with respect to the blue noise property. In addition, it can adapt to a given density function, which we use to sample regions of high complexity in the multivariate value domain more densely. Moreover, our sampling technique implicitly defines an ordering on the samples that enables progressive data loading and a continuous level-of-detail representation. We extend our technique to sample time-dependent trajectories, for example pathlines in a time interval, using an efficient and iterative approach. Furthermore, we introduce a local and continuous error measure to quantify how well a set of samples represents the original dataset. We apply this error measure during sampling to guide the number of samples that are taken. Finally, we use this error measure and other quantities to evaluate the quality, performance, and scalability of our algorithm.
    Comment: To appear in IEEE Transactions on Visualization and Computer Graphics as a special issue from the proceedings of VIS 2019
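    A greedy farthest-point variant conveys the flavor of the approach (a much-simplified sketch, not the paper's algorithm; the density bias and the use of the selection order as a level-of-detail ranking mirror the properties described above):

```python
import numpy as np

def void_and_cluster(points, n_samples, density=None, rng=None):
    """Greedy sketch of blue-noise-like sampling of scattered data:
    repeatedly pick the point lying in the largest 'void' of the
    current sample set, optionally biased by a density function.
    Illustrative O(n * n_samples) version only."""
    rng = rng or np.random.default_rng()
    n = len(points)
    density = np.ones(n) if density is None else density
    chosen = [int(rng.integers(n))]
    # Track each point's distance to its nearest chosen sample.
    d = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(n_samples - 1):
        # Largest void = farthest point, scaled by the desired density.
        nxt = int(np.argmax(d * density))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)   # order doubles as a level-of-detail ranking
```

    Because each prefix of the returned order is itself well distributed, truncating the sample list yields the progressive loading and continuous level of detail the abstract mentions.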

    Visual Analysis of Large Particle Data

    Particle simulations are a well-established and widely used numerical method in research and engineering. For example, particle simulations are used to study fuel atomization in aircraft turbines, and the formation of the universe is likewise investigated by simulating dark matter particles. The amounts of data produced are immense: current simulations contain trillions of particles that move and interact with each other over time. Visualization offers great potential for the exploration, validation, and analysis of scientific datasets and their underlying models. However, the focus usually lies on structured data with a regular topology. Particles, in contrast, move freely through space and time, a point of view known in physics as the Lagrangian frame of reference. Particles can be converted from the Lagrangian into a regular Eulerian frame of reference, such as a uniform grid, but for a large number of particles this incurs considerable effort; moreover, the conversion usually entails a loss of precision along with increased memory consumption. In this dissertation, I explore new visualization techniques based specifically on the Lagrangian view, enabling an efficient and effective visual analysis of large particle data.
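    For contrast, a naive Lagrangian-to-Eulerian conversion of the kind the thesis argues against might look as follows (an illustrative sketch, assuming positions normalized to the unit cube): cost and memory grow with the grid resolution, and any detail below the cell size is lost.

```python
import numpy as np

def particles_to_grid(positions, values, res=64):
    """Bin particle values into the nearest cell of a uniform grid.
    Memory is O(res^3) regardless of how the particles are distributed,
    and sub-cell detail is averaged away; this is the trade-off that
    motivates staying in the Lagrangian view."""
    grid = np.zeros((res, res, res))
    weight = np.zeros_like(grid)
    cells = (np.clip(positions, 0.0, 1.0 - 1e-9) * res).astype(int)
    for (i, j, k), v in zip(cells, values):
        grid[i, j, k] += v
        weight[i, j, k] += 1.0
    # Average where particles landed; empty cells stay zero.
    return np.divide(grid, weight, out=grid, where=weight > 0)
```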

    Peridynamic Modeling of Dynamic Fracture in Bio-Inspired Structures for High Velocity Impacts

    Bio-inspired damage-resistant models have distinct patterns like brick-mortar, Voronoi, helicoidal, etc., which show exceptional damage mitigation against high-velocity impacts. These unique patterns increase damage resistance (in some cases up to 3000 times more than the constituent materials) by effectively dispersing the stress waves produced by the impact. The ability to mimic these structures on a larger scale could be ground-breaking and useful in numerous applications. Advancements in 3D printing have now made it possible to fabricate these patterns easily and at low cost. Research on dynamic fracture in bio-inspired structures is very limited, but it is crucial for the development of such materials with enhanced impact resistance. In this thesis, we investigate damage in some bio-inspired structures through peridynamic modeling. We first print a 3D brick-mortar structure composed of 82% VeroClear plastic (a PMMA substitute in 3D printing; the stiff phase) and 18% TangoBlack rubber (a natural rubber substitute in 3D printing; the soft phase). We investigate damage in this 3D printed sample with a low-velocity drop test under fixed and free boundary conditions. Under free boundary conditions, no damage was observed at this impact speed, while cracks form when the sample rests on a fixed metal table. A 3D peridynamic model for dynamic brittle fracture is first validated against the Kalthoff-Winkler experiment, in which a pre-notched steel plate is impacted at 32 m/s by a cylindrical impactor and brittle cracks grow at a 70-degree angle to the impact direction. A new peridynamic model for a brick-mortar microstructure is created using the properties of PMMA and rubber. Because simulating the supporting table used in the experiments would be too costly, we choose to work with free boundary conditions and a higher impact speed (500 m/s) to observe damage in the peridynamic model of the brick-mortar structure. Under these conditions, the damage is limited to the contacting brick only; the soft phase is able to limit its spread. Other boundary conditions are likely to cause wave reflections and reinforcements, which can damage other bricks far from the impact point, as observed in our experiments.
    Advisor: Florin Bobaru
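    For intuition, bond-based peridynamic damage can be sketched in a few lines (illustrative only, not the thesis model; the critical stretch s0 and the precomputed neighbor lists are assumed inputs): a bond breaks once its stretch exceeds a critical value, and local damage is the fraction of broken bonds.

```python
import numpy as np

def bond_damage(ref_pos, cur_pos, neighbors, s0=0.01):
    """Bond-based peridynamic damage sketch: for each node, compute the
    stretch of every bond within its horizon and report the fraction of
    bonds whose stretch exceeds the critical value s0."""
    damage = np.zeros(len(ref_pos))
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) == 0:
            continue
        L0 = np.linalg.norm(ref_pos[nbrs] - ref_pos[i], axis=1)  # rest length
        L = np.linalg.norm(cur_pos[nbrs] - cur_pos[i], axis=1)   # current length
        stretch = (L - L0) / L0
        damage[i] = np.mean(stretch > s0)
    return damage
```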

    Ray-traced radiative transfer on massively threaded architectures

    In this thesis, I apply techniques from the field of computer graphics to ray tracing in astrophysical simulations, and introduce the grace software library. This is combined with an extant radiative transfer solver to produce a new package, taranis. It allows for fully parallel particle updates via per-particle accumulation of rates, followed by a forward Euler integration step, and is manifestly photon-conserving. To my knowledge, taranis is the first ray-traced radiative transfer code to run on graphics processing units and target cosmological-scale smoothed-particle hydrodynamics (SPH) datasets. A significant optimization effort is undertaken in developing grace. Contrary to typical results in computer graphics, it is found that the bounding volume hierarchies (BVHs) used to accelerate the ray tracing procedure need not be of high quality; as a result, extremely fast BVH construction times are possible (< 0.02 microseconds per particle in an SPH dataset). I show that this exceeds the performance researchers might expect from CPU codes by at least an order of magnitude, and compares favourably to a state-of-the-art ray tracing solution. Similar results are found for the ray tracing itself, where again techniques from computer graphics are examined for effectiveness with SPH datasets, and new optimizations proposed. For high per-source ray counts (≳ 10⁴), grace can reduce ray tracing run times by up to two orders of magnitude compared to extant CPU solutions developed within the astrophysics community, and by a factor of a few compared to a state-of-the-art solution. taranis is shown to produce expected results in a suite of de facto standard cosmological radiative transfer test cases. For some cases, it currently outperforms a serial, CPU-based alternative by a factor of a few. Unfortunately, for the most realistic test its performance is extremely poor, making the current taranis code unsuitable for cosmological radiative transfer. The primary reason for this failing is found to be a small minority of particles which always dominate the timestep criteria. Several plausible routes to mitigate this problem, while retaining parallelism, are put forward.
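    The per-particle update scheme described above can be sketched compactly (an illustrative sketch, not the taranis code; the rate form and the case-B recombination coefficient are assumptions): each particle accumulates a photoionization rate from the traced rays and then takes one forward Euler step.

```python
import numpy as np

def update_ionization(x_hii, n_h, gamma, alpha_B=2.59e-13, dt=1.0):
    """Advance each particle's ionized hydrogen fraction by one forward
    Euler step, balancing photoionization (rate gamma per neutral atom,
    accumulated from traced rays) against case-B recombination.
    alpha_B is the recombination coefficient at ~1e4 K in cm^3/s;
    all arrays are per-particle, so the update is fully parallel."""
    n_e = x_hii * n_h                                   # electrons (H only)
    dxdt = gamma * (1.0 - x_hii) - alpha_B * n_e * x_hii
    return np.clip(x_hii + dt * dxdt, 0.0, 1.0)
```

    A scheme like this makes the timestep problem the abstract mentions concrete: dt must resolve the fastest-evolving particle, so a small minority of rapidly ionizing particles can force tiny global steps.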