Fast Back-Projection for Non-Line of Sight Reconstruction
Recent works have demonstrated non-line of sight (NLOS) reconstruction by
using the time-resolved signal from multiply scattered light. These works
combine ultrafast imaging systems with computation, which back-projects the
recorded space-time signal to build a probabilistic map of the hidden geometry.
Unfortunately, this computation is slow, becoming a bottleneck as the imaging
technology improves. In this work, we propose a new back-projection technique
for NLOS reconstruction, which is up to a thousand times faster than previous
work, with almost no quality loss. Our approach builds on the observation that the hidden
geometry probability map can be built as the intersection of the three-bounce
space-time manifolds defined by the light illuminating the hidden geometry and
the visible point receiving the scattered light from such hidden geometry. This
allows us to pose the reconstruction of the hidden geometry as the voxelization
of these space-time manifolds, which has lower theoretical complexity and is
easily implementable on the GPU. We demonstrate the efficiency and quality of
our technique compared with previous methods on both captured and synthetic
data.
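For context, here is a minimal NumPy sketch of the conventional per-voxel back-projection that such methods build on (and that this work accelerates): every voxel accumulates the transient intensity recorded at the time-of-flight of its three-bounce path. The array shapes, names, and the omission of the laser-to-wall and wall-to-sensor legs are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def backproject(transients, laser_pts, sensor_pts, voxels, c=1.0, dt=1.0):
    """Naive three-bounce back-projection: every voxel accumulates the transient
    intensity recorded at the time-of-flight laser point -> voxel -> sensor point.
    transients: (n_laser, n_sensor, n_bins) histogram; laser_pts/sensor_pts: (m, 3)
    points on the visible wall; voxels: (n_voxels, 3) candidate hidden-geometry points;
    c: speed of light in scene units; dt: width of one time bin."""
    n_bins = transients.shape[-1]
    prob = np.zeros(len(voxels))                       # probabilistic map of hidden geometry
    for i, l in enumerate(laser_pts):
        d_l = np.linalg.norm(voxels - l, axis=1)       # illumination leg
        for j, s in enumerate(sensor_pts):
            d_s = np.linalg.norm(voxels - s, axis=1)   # return leg
            bins = ((d_l + d_s) / (c * dt)).astype(int)
            valid = bins < n_bins
            prob[valid] += transients[i, j, bins[valid]]
    return prob
```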
Automated Digital Machining for Parallel Processors
When a process engineer creates a tool path, a number of fixed decisions are made that inevitably produce sub-optimal results. This is because it is impossible to evaluate all of the tradeoffs before generating the tool path. This research presents a methodology to support a process engineer's attempt to generate optimal tool paths by performing automated digital machining and analysis. The methodology automatically generates and evaluates tool paths based on parallel processing of digital part models and generalized cutting geometry. Digital part models are created by voxelizing STL files, and the resulting digital part surfaces are obtained by casting rays into the part model. Tool paths are generated from a general path template and updated based on generalized tool geometry and part surface information. The material removed by the generalized cutter as it follows the path is used to obtain path metrics. The paths are evaluated based on the path metrics of material removal rate, machining time, and amount of scallop. This methodology is a parallel-processing-accelerated framework suitable for generating tool paths in parallel, enabling the process engineer to rank and select the best tool path for the job.
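A hedged sketch of what such a voxel-based path evaluation could look like; the function, array layout, and metric definitions below are illustrative assumptions rather than the paper's actual framework, scallop estimation is omitted, and a real system would run the sweep in parallel on the GPU.

```python
import numpy as np

def evaluate_path(part, tool_mask, path, feed_rate, voxel_size):
    """Illustrative sketch: sweep a voxelized cutter along a tool path over a
    voxelized part and report simple path metrics (removed volume, time, MRR).
    part       : boolean 3-D NumPy array, True where stock material is present
    tool_mask  : iterable of integer (dx, dy, dz) offsets occupied by the cutter
    path       : list of integer (x, y, z) voxel positions of the tool reference point
    feed_rate  : cutter feed in voxels per unit time
    voxel_size : edge length of one voxel"""
    removed = 0
    nx, ny, nz = part.shape
    for x, y, z in path:
        for dx, dy, dz in tool_mask:
            i, j, k = x + dx, y + dy, z + dz
            if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz and part[i, j, k]:
                part[i, j, k] = False          # the cutter removes this voxel of stock
                removed += 1
    volume = removed * voxel_size ** 3
    path_len = sum(np.linalg.norm(np.subtract(b, a)) for a, b in zip(path, path[1:]))
    time = path_len / feed_rate if feed_rate else float("inf")
    return {"removed_volume": volume,
            "machining_time": time,
            "material_removal_rate": volume / time if time else 0.0}
```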
Binarized-octree generation for Cartesian adaptive mesh refinement around immersed geometries
We revisit the generation of balanced octrees for adaptive mesh refinement (AMR) of Cartesian domains with immersed complex geometries. In a recent short note (Hasbestan and Senocak, 2017) [42], we showed that the data locality of the Z-order curve in a hashed linear-octree generation method may not be perfect because of potential collisions in the hash table. Building on that observation, we propose a binarized-octree generation method that complies with the Z-order curve exactly. Similar to a hashed linear-octree generation method, we use Morton encoding to index the nodes of an octree, but use a red-black tree in place of the hash table. A red-black tree is a special kind of self-balancing binary search tree, which we use for insertion and deletion of elements during mesh adaptation. By strictly working with the bitwise representation of an octree, we remove computer hardware limitations on the depth of adaptation on a single processor. Additionally, we introduce a geometry encoding technique for rapidly tagging a solid geometry for mesh refinement. Our results for several geometries with different levels of adaptation show that the binarized-octree generation method outperforms the linear-octree generation method in terms of runtime performance at the expense of only a slight increase in memory usage. The current AMR capability, rebl-AMR, is available as open-source software.
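As an illustration of the indexing scheme described above, here is a small sketch of 3-D Morton (Z-order) encoding using the standard bit-interleaving constants; the red-black tree that stores these keys is omitted (any textbook balanced-tree implementation would play that role in a prototype).

```python
def split_by_3(v):
    """Spread the low 21 bits of v so that they occupy every third bit position."""
    v &= 0x1FFFFF
    v = (v | v << 32) & 0x1F00000000FFFF
    v = (v | v << 16) & 0x1F0000FF0000FF
    v = (v | v << 8)  & 0x100F00F00F00F00F
    v = (v | v << 4)  & 0x10C30C30C30C30C3
    v = (v | v << 2)  & 0x1249249249249249
    return v

def morton3d(x, y, z):
    """Morton (Z-order) key for integer octree coordinates: interleave the bits of x, y, z."""
    return split_by_3(x) | (split_by_3(y) << 1) | (split_by_3(z) << 2)
```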
Visualization and inspection of the geometry of particle packings
The aim of this dissertation is to find efficient techniques for visualizing and inspecting the geometry of
particle packings. Simulations of such packings are used e.g. in material sciences to predict properties of
granular materials. To better understand and supervise the behavior of these simulations, not only the
particles themselves but also special areas formed by the particles, which can indicate the progress of the
simulation and the spatial distribution of hot spots, should be visualized. This should be possible with a frame
rate that allows interaction even for large-scale packings with millions of particles. Moreover, since the
simulation is conducted on the GPU, the visualization techniques should make full use of the data in GPU
memory.
To improve the performance of granular materials like concrete, considerable attention has been paid to the
particle size distribution, which is the main determinant of the space filling rate and therefore affects two
of the most important properties of the concrete: the structural robustness and the durability. Given the
particle size distribution, the space filling rate can be determined by computer simulations, which are often
superior to analytical approaches due to the irregular shapes of particles and the wide size distributions found in
practice. One of the widely adopted simulation methods is collective rearrangement, in which particles
are first placed at random positions inside a container; overlaps between particles are then resolved by
pushing overlapping particles away from each other to fill the empty space in the container. By cleverly
adjusting the size of the container as the simulation progresses, the collective rearrangement
method can produce a rather dense particle packing in the end. However, it is very hard to fine-tune or debug
the whole simulation process without an interactive visualization tool.
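A minimal CPU sketch of the overlap-resolution step of collective rearrangement as described above; the O(n^2) pair loop, the fixed step factor, and the absence of container resizing are simplifications of the actual GPU simulation.

```python
import numpy as np

def resolve_overlaps(centers, radii, iterations=100, step=0.5):
    """Push overlapping spheres apart along their centre line until no overlaps
    remain or the iteration budget is spent.  centers: (n, 3) array, radii: (n,)."""
    n = len(centers)
    for _ in range(iterations):
        moved = False
        for i in range(n):
            for j in range(i + 1, n):
                d = centers[j] - centers[i]
                dist = np.linalg.norm(d)
                overlap = radii[i] + radii[j] - dist
                if overlap > 0 and dist > 1e-12:
                    push = 0.5 * step * overlap * d / dist
                    centers[i] -= push       # move both particles away from each other
                    centers[j] += push
                    moved = True
        if not moved:
            break
    return centers
```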
Starting from the well-established rasterization-based method to render spheres, this dissertation first
provides new fast and pixel-accurate methods to visualize the overlaps and free spaces between spherical
particles inside a container. The rasterization-based techniques perform well for small-scale particle
packings but deteriorate for large-scale packings due to large memory requirements that are hard to
estimate correctly in advance. To address this problem, new methods based on ray tracing are provided
along with two new kinds of bounding volume hierarchies (BVHs) to accelerate the ray tracing process ---
the first one can reuse the existing data structure for simulation and the second one is more memory efficient.
Both BVHs utilize the idea of a loose octree and are the first of their kind to consider the size of primitives
for interactive ray tracing with frequently updated acceleration structures. Moreover, the visualization
techniques provided in this dissertation can also be adjusted to calculate properties such as the volumes of
specific areas.
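For a single pair of spheres, the overlap region that these techniques visualize even has a closed-form volume, which is handy as a reference value; the dissertation's image-based methods handle the general many-particle case, so the snippet below is only a complementary analytic check.

```python
import math

def sphere_overlap_volume(r1, r2, d):
    """Analytic volume of the lens formed by two overlapping spheres with radii
    r1, r2 and centre distance d (standard closed-form expression)."""
    if d >= r1 + r2:
        return 0.0                                     # spheres do not overlap
    if d <= abs(r1 - r2):
        return 4.0 / 3.0 * math.pi * min(r1, r2) ** 3  # one sphere inside the other
    return (math.pi * (r1 + r2 - d) ** 2 *
            (d * d + 2.0 * d * (r1 + r2) - 3.0 * (r1 - r2) ** 2)) / (12.0 * d)
```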
All these visualization techniques are then extended to non-spherical particles, where a non-spherical
particle is approximated by a rigid system of spheres to reuse the existing simulation. To this end, a new
GPU-based method is presented to fill a non-spherical particle with polydisperse, possibly overlapping
spheres efficiently, so that a particle can be filled with fewer spheres without sacrificing the space filling
rate. This eases both simulation and visualization.
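The dissertation's GPU filling method is not reproduced here; as a loose illustration of the goal, a naive greedy CPU heuristic that reads candidate spheres off a precomputed interior distance field might look like the following (all names and the suppression rule are assumptions).

```python
import numpy as np

def greedy_sphere_fill(inside_dist, spacing, min_radius, overlap=0.5):
    """Naive greedy sketch: repeatedly place a sphere at the interior grid point
    farthest from the surface (its distance gives the largest radius that fits),
    then damp nearby candidates so later spheres overlap it only partially.
    inside_dist: 3-D array of distances to the particle surface (0 outside)."""
    dist = inside_dist.copy()
    coords = np.indices(dist.shape).transpose(1, 2, 3, 0) * spacing  # voxel centres
    spheres = []
    while True:
        idx = np.unravel_index(np.argmax(dist), dist.shape)
        r = dist[idx]
        if r < min_radius:
            break
        center = coords[idx]
        spheres.append((center, r))
        # suppress candidate centres closer than a fraction of r to the new sphere
        too_close = np.linalg.norm(coords - center, axis=-1) < overlap * r
        dist[too_close] = 0.0
    return spheres
```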
Based on approaches presented in this dissertation, more sophisticated algorithms can be developed to
visualize large-scale non-spherical particle mixtures more efficiently. In addition, future work could exploit the
hardware ray tracing of more recent graphics cards instead of the software ray tracing used in this
dissertation. The new techniques can also become the basis for interactively visualizing other particle-based
simulations, where special areas such as free space or overlaps between particles are of interest.
Doctor of Philosophy dissertation
Visualizing surfaces is a fundamental technique in computer science and is frequently used across a wide range of fields such as computer graphics, biology, engineering, and scientific visualization. In many cases, visualizing an interface between boundaries can provide meaningful analysis or simplification of complex data. Some examples include physical simulation for animation, multimaterial mesh extraction in biophysiology, flow on airfoils in aeronautics, and integral surfaces. However, the quest for high-quality visualization, coupled with increasingly complex data, comes with a high computational cost. Therefore, new techniques are needed to solve surface visualization problems within a reasonable amount of time while also providing sophisticated visuals that are meaningful to scientists and engineers. In this dissertation, novel techniques are presented to facilitate surface visualization. First, a particle system for mesh extraction is parallelized on the graphics processing unit (GPU) with a red-black update scheme to achieve an order-of-magnitude speed-up over a central processing unit (CPU) implementation. Next, extending the red-black technique to multiple materials proved inefficient on the GPU. Therefore, we borrow the underlying data structure from the closest point method, the closest point embedding, and switch the particle system solver to a hierarchical octree-based approach on the GPU. Third, to demonstrate that the closest point embedding is a fast, flexible data structure for surface particles, it is adapted to unsteady surface flow visualization at near-interactive speeds. Finally, the closest point embedding is a three-dimensional dense structure that does not scale well. Therefore, we introduce a closest point sparse octree that allows the closest point embedding to scale to higher resolutions. Further, we demonstrate unsteady line integral convolution using the closest point method.
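As a toy illustration of the closest point embedding referred to above: each grid point stores the closest point on the surface, so surface quantities can be evaluated by reading values at those stored points. The sphere example below is an assumption chosen for brevity; the dissertation's embedding handles general surfaces and lives on the GPU.

```python
import numpy as np

def closest_point_embedding(grid_pts, center, radius):
    """For every grid point, return the closest point on an implicit sphere surface
    (the per-point data a closest point embedding would store).
    grid_pts: (n, 3) array, center: (3,) array, radius: float."""
    v = grid_pts - center
    dist = np.linalg.norm(v, axis=1, keepdims=True)
    dist = np.where(dist < 1e-12, 1.0, dist)   # avoid division by zero for a point at the centre
    return center + radius * v / dist          # closest surface point per grid point
```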
Sparse Volumetric Deformation
Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently.
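A rough sketch of the idea of sampling hierarchical branches at progressive resolutions; the node layout, names, and fallback rule below are illustrative assumptions, not the thesis's data structure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    value: float                                        # pre-filtered value for this node's cube
    children: Optional[List[Optional["Node"]]] = None   # 8 octants; None = unrefined or empty

def sample(node, p, lo, size, max_depth, depth=0):
    """Level-of-detail lookup: descend only as deep as the requested detail allows,
    falling back to the coarser pre-filtered value wherever a branch is unrefined,
    so most of the dataset never needs to be touched.
    p: query point, lo: lower corner of the node's cube, size: its edge length."""
    if node.children is None or depth == max_depth:
        return node.value
    half = size / 2.0
    ox, oy, oz = p[0] >= lo[0] + half, p[1] >= lo[1] + half, p[2] >= lo[2] + half
    child = node.children[int(ox) + 2 * int(oy) + 4 * int(oz)]
    if child is None:
        return node.value                               # sparse region: use the parent's value
    child_lo = (lo[0] + half * ox, lo[1] + half * oy, lo[2] + half * oz)
    return sample(child, p, child_lo, half, max_depth, depth + 1)
```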
The problem with this approach is that it currently only supports static scenes. This is because it is difficult to accurately deform massive amounts of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and similarly gaps occur where deformation stretches the elements apart by more than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example, a cage or skeleton) that maps to a set of features to accelerate the deformation process. The problems with this technique are that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution.
This thesis investigates the area of deforming and rendering massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton that is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
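Control-skeleton deformation of this kind is commonly driven by linear blend skinning; a compact sketch of standard LBS applied to volume-element centres is shown below as an assumption of how the skeleton could drive the elements (weight computation and the hierarchical mapping to the volume resolution are omitted).

```python
import numpy as np

def skin_points(points, weights, bone_transforms):
    """Standard linear blend skinning: each point is transformed by a weighted blend
    of its controlling bones' 4x4 matrices.
    points: (n, 3), weights: (n, n_bones), bone_transforms: (n_bones, 4, 4)."""
    homog = np.hstack([points, np.ones((len(points), 1))])          # (n, 4) homogeneous points
    per_bone = np.einsum('bij,nj->nbi', bone_transforms, homog)     # each point under each bone
    blended = np.einsum('nb,nbi->ni', weights, per_bone)            # weighted blend per point
    return blended[:, :3]
```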
An exact general remeshing scheme applied to physically conservative voxelization
We present an exact general remeshing scheme to compute analytic integrals of
polynomial functions over the intersections between convex polyhedral cells of
old and new meshes. In physics applications this allows one to ensure global
mass, momentum, and energy conservation while applying higher-order polynomial
interpolation. We elaborate on applications of our algorithm arising in the
analysis of cosmological N-body data, computer graphics, and continuum
mechanics problems.
We focus on the particular case of remeshing tetrahedral cells onto a
Cartesian grid such that the volume integral of the polynomial density function
given on the input mesh is guaranteed to equal the corresponding integral over
the output mesh. We refer to this as "physically conservative voxelization".
At the core of our method is an algorithm for intersecting two convex
polyhedra by successively clipping one against the faces of the other. This
algorithm is an implementation of the ideas presented abstractly by Sugihara
(1994), who suggests using the planar graph representations of convex polyhedra
to ensure topological consistency of the output. This makes our implementation
robust to geometric degeneracy in the input. We employ a simplicial
decomposition to calculate moment integrals up to quadratic order over the
resulting intersection domain.
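The successive-clipping idea is easiest to see in two dimensions: clipping a convex polygon against one half-plane, and repeating for every face of the other cell, yields the intersection. The sketch below shows that single-plane primitive (Sutherland-Hodgman style); the actual implementation applies the same idea to polyhedra via their planar graph representation, which the snippet does not attempt to reproduce.

```python
def clip_halfplane(poly, n, d, eps=1e-12):
    """Clip a convex polygon (list of (x, y) vertices in order) against the
    half-plane n[0]*x + n[1]*y + d >= 0 and return the clipped vertex list."""
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        da = n[0] * a[0] + n[1] * a[1] + d    # signed distance of a to the plane
        db = n[0] * b[0] + n[1] * b[1] + d    # signed distance of b to the plane
        if da >= -eps:
            out.append(a)                     # a is inside: keep it
        if (da > eps and db < -eps) or (da < -eps and db > eps):
            t = da / (da - db)                # edge crosses the plane: add intersection point
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out
```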
We also address practical issues arising in a software implementation,
including numerical stability in geometric calculations, management of
cancellation errors, and extension to two dimensions. In a comparison to recent
work, we show substantial performance gains. We provide a C implementation
intended to be a fast, accurate, and robust tool for geometric calculations on
polyhedral mesh elements.
Comment: Code implementation available at https://github.com/devonmpowell/r3