An exact general remeshing scheme applied to physically conservative voxelization
We present an exact general remeshing scheme to compute analytic integrals of
polynomial functions over the intersections between convex polyhedral cells of
old and new meshes. In physics applications this allows one to ensure global
mass, momentum, and energy conservation while applying higher-order polynomial
interpolation. We elaborate on applications of our algorithm arising in the
analysis of cosmological N-body data, computer graphics, and continuum
mechanics problems.
We focus on the particular case of remeshing tetrahedral cells onto a
Cartesian grid such that the volume integral of the polynomial density function
given on the input mesh is guaranteed to equal the corresponding integral over
the output mesh. We refer to this as "physically conservative voxelization".
At the core of our method is an algorithm for intersecting two convex
polyhedra by successively clipping one against the faces of the other. This
algorithm is an implementation of the ideas presented abstractly by Sugihara
(1994), who suggests using the planar graph representations of convex polyhedra
to ensure topological consistency of the output. This makes our implementation
robust to geometric degeneracy in the input. We employ a simplicial
decomposition to calculate moment integrals up to quadratic order over the
resulting intersection domain.
We also address practical issues arising in a software implementation,
including numerical stability in geometric calculations, management of
cancellation errors, and extension to two dimensions. In a comparison to recent
work, we show substantial performance gains. We provide a C implementation
intended to be a fast, accurate, and robust tool for geometric calculations on
polyhedral mesh elements.
Comment: Code implementation available at https://github.com/devonmpowell/r3
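The final moment calculation can be pictured with a small sketch. The snippet below is not the authors' C implementation; the function name and the triangulated-face input format are assumptions for illustration. It computes moments up to first order of a convex polyhedron by decomposing it into signed tetrahedra anchored at the origin, the same simplicial-decomposition idea used to integrate polynomial densities over the clipped intersection cells.

```python
# Illustrative sketch (not the authors' C implementation): moment integrals
# up to first order over a convex polyhedron via simplicial decomposition.
# The triangulated-face input format is an assumption for this example.
import numpy as np

def moments_up_to_first_order(vertices, triangles):
    """Return (volume, integral of x dV) for a convex polyhedron.

    vertices  : (n, 3) array of vertex coordinates
    triangles : iterable of (i, j, k) index triples, oriented
                counter-clockwise when viewed from outside the polyhedron
    """
    volume = 0.0
    first_moment = np.zeros(3)
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        # Signed volume of the tetrahedron spanned by the origin and the face.
        v = np.dot(a, np.cross(b, c)) / 6.0
        volume += v
        # Integral of x over that tetrahedron is its volume times its centroid.
        first_moment += v * (a + b + c) / 4.0
    return volume, first_moment
```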
Fast algorithms for spherical harmonic expansions, III
We accelerate the computation of spherical harmonic transforms, using what is
known as the butterfly scheme. This provides a convenient alternative to the
approach taken in the second paper from this series on "Fast algorithms for
spherical harmonic expansions." The requisite precomputations become manageable
when organized as a "depth-first traversal" of the program's control-flow
graph, rather than as the perhaps more natural "breadth-first traversal" that
processes one-by-one each level of the multilevel procedure. We illustrate the
results via several numerical examples.
Comment: 14 pages, 1 figure, 6 tables
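As a rough illustration of the traversal distinction (not the actual butterfly-scheme precomputation, whose details are in the paper), here is a toy Python sketch; the binary tree of index blocks and the `precompute_block` hook are hypothetical stand-ins.

```python
# Toy contrast between breadth-first (level-by-level) and depth-first
# organization of a multilevel precomputation over a binary tree of index
# blocks; 'precompute_block' is a hypothetical stand-in for per-node work.

def precompute_block(level, lo, hi):
    return ("block", level, lo, hi)  # placeholder for the real per-node data

def breadth_first(n, levels):
    """Visit all blocks on one level before descending: every level's
    intermediate data is produced at once, which is the memory-hungry order."""
    out = []
    for level in range(levels):
        width = n >> level
        for lo in range(0, n, width):
            out.append(precompute_block(level, lo, lo + width))
    return out

def depth_first(n, levels, level=0, lo=0, out=None):
    """Descend one branch at a time: only the blocks along the current
    root-to-leaf path need to be held, keeping precomputation manageable."""
    if out is None:
        out = []
    width = n >> level
    out.append(precompute_block(level, lo, lo + width))
    if level + 1 < levels:
        depth_first(n, levels, level + 1, lo, out)
        depth_first(n, levels, level + 1, lo + width // 2, out)
    return out

# Both orders produce the same set of blocks; only the traversal differs.
assert sorted(breadth_first(8, 3)) == sorted(depth_first(8, 3))
```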
Algorithmic patterns for H-matrices on many-core processors
In this work, we consider the reformulation of hierarchical (H) matrix
algorithms for many-core processors, with a model implementation on graphics
processing units (GPUs). H matrices approximate specific dense matrices,
e.g., from discretized integral equations or kernel ridge regression, leading
to log-linear time complexity in dense matrix-vector products. The
parallelization of H matrix operations on many-core processors is difficult
due to the complex nature of the underlying algorithms. While previous
algorithmic advances for many-core hardware focused on accelerating existing
H matrix CPU implementations with many-core processors, here we aim to rely
entirely on that processor type. As our main contribution, we introduce the
parallel algorithmic patterns needed to map the full H matrix construction
and the fast H matrix-vector product to many-core hardware. Crucial
ingredients are space-filling curves, parallel tree traversal, and batching
of linear algebra operations. The resulting model GPU implementation, hmglib,
is, to the best of the authors' knowledge, the first entirely GPU-based
open-source H matrix library of this kind. We conclude this work with an
in-depth performance analysis and a comparative performance study against a
standard H matrix library, highlighting profound speedups of our many-core
parallel approach.
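One of the ingredients named above, space-filling curves, can be sketched briefly. The snippet below is not hmglib code; the helper names and the 10-bit-per-axis grid resolution are illustrative assumptions. It computes a 3-D Morton (Z-order) key, and sorting points by such keys groups spatially nearby points, the kind of ordering used to form the cluster tree behind a hierarchical matrix.

```python
# Minimal sketch (not hmglib code): a 3-D Morton / Z-order key, i.e. a
# space-filling-curve index. Helper names and the 10-bit-per-axis grid
# resolution are illustrative assumptions.

def part1by2(x):
    """Spread the lowest 10 bits of x so two zero bits sit between them."""
    x &= 0x3FF
    x = (x | (x << 16)) & 0x030000FF
    x = (x | (x << 8))  & 0x0300F00F
    x = (x | (x << 4))  & 0x030C30C3
    x = (x | (x << 2))  & 0x09249249
    return x

def morton_key(point, lower, upper):
    """Map a point in the bounding box [lower, upper]^3 to a 30-bit key."""
    ix = [int(1023 * (p - lo) / (hi - lo))
          for p, lo, hi in zip(point, lower, upper)]
    return part1by2(ix[0]) | (part1by2(ix[1]) << 1) | (part1by2(ix[2]) << 2)

# Sorting points by their Morton key clusters spatially nearby points.
points = [(0.1, 0.2, 0.9), (0.8, 0.1, 0.3), (0.12, 0.18, 0.88)]
ordered = sorted(points,
                 key=lambda p: morton_key(p, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))
```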
Fine-grained visualization pipelines and lazy functional languages
The pipeline model in visualization has evolved from a conceptual model of data processing into a widely used architecture for implementing visualization systems. In the process, a number of capabilities have been introduced, including streaming of data in chunks, distributed pipelines, and demand-driven processing. Visualization systems have invariably been built on stateful programming technologies, and these capabilities have had to be implemented explicitly within the lower layers of a complex hierarchy of services. The good news for developers is that applications built on top of this hierarchy can access these capabilities without concern for how they are implemented. The bad news is that by freezing capabilities into low-level services, expressive power and flexibility are lost. In this paper we express visualization systems in a programming language that more naturally supports this kind of processing model. Lazy functional languages support fine-grained demand-driven processing, a natural form of streaming, and pipeline-like function composition for assembling applications. The technology thus appears well suited to visualization applications. Using surface extraction algorithms as illustrative examples, and the lazy functional language Haskell, we argue the benefits of clear and concise expression combined with fine-grained, demand-driven computation. Just as visualization provides insight into data, functional abstraction provides new insight into visualization.
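To make the demand-driven idea concrete in a small way, here is a sketch using Python generators as a stand-in for the Haskell lazy lists the paper actually uses; the chunk size, file name, and downstream render stage are hypothetical.

```python
# Demand-driven pipeline sketch using Python generators as a stand-in for the
# lazy lists the paper builds in Haskell. Nothing is read or computed until a
# consumer pulls a value, so chunked streaming and demand-driven processing
# come from the evaluation model rather than an explicit service layer.

def source(filenames, chunk_size=4096):
    """Lazily yield raw data chunks, one file at a time."""
    for name in filenames:
        with open(name, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk

def threshold(chunks, level):
    """Lazily keep only samples above a threshold (a toy filter stage)."""
    for chunk in chunks:
        yield bytes(b for b in chunk if b > level)

def pipeline(filenames):
    # Assembling the pipeline is plain function composition; no work yet.
    return threshold(source(filenames), level=128)

# Only as many chunks are read from disk as the consumer actually demands:
#   for chunk in pipeline(["volume.raw"]):   # hypothetical data file
#       render(chunk)                        # hypothetical downstream stage
```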
Multiscale approach for the network compression-friendly ordering
We present a fast multiscale approach for the network minimum logarithmic
arrangement problem. This type of arrangement plays an important role in
network compression and fast node/link access operations. The algorithm has
linear complexity and exhibits good scalability, which makes it practical
and attractive for use on large-scale instances. Its effectiveness is
demonstrated on a large set of real-life networks. These networks, with the
corresponding best-known minimization results, are suggested as an open
benchmark for the research community to evaluate new methods for this
problem.
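For readers unfamiliar with the objective, the quantity being minimized can be sketched as follows. This is an illustrative Python snippet, not the authors' multiscale solver, and the exact normalization and log base are assumed here.

```python
# Illustrative sketch (not the authors' multiscale solver): the quantity a
# minimum logarithmic arrangement tries to make small is, roughly, the sum
# over edges of the log of the index gap between the two endpoints. Small
# gaps compress well under gap encoding, hence "compression-friendly".
import math

def log_arrangement_cost(edges, order):
    """edges: iterable of (u, v) pairs; order: dict node -> position 0..n-1."""
    return sum(math.log2(abs(order[u] - order[v])) for u, v in edges)

# Toy example: a path graph in its natural order has cost 0 (all gaps are 1),
# while scrambling the order increases the cost.
edges = [(0, 1), (1, 2), (2, 3)]
natural = {0: 0, 1: 1, 2: 2, 3: 3}
scrambled = {0: 0, 1: 2, 2: 1, 3: 3}
print(log_arrangement_cost(edges, natural))    # 0.0
print(log_arrangement_cost(edges, scrambled))  # 2.0
```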