Load-Balancing for Parallel Delaunay Triangulations
Computing the Delaunay triangulation (DT) of a given point set in
d-dimensional space is one of the fundamental operations in computational geometry.
Recently, Funke and Sanders (2017) presented a divide-and-conquer DT algorithm
that merges two partial triangulations by re-triangulating a small subset of
their vertices - the border vertices - and combining the three triangulations
efficiently via parallel hash table lookups. The input point division should
therefore yield roughly equal-sized partitions for good load-balancing and also
result in a small number of border vertices for fast merging. In this paper, we
present a novel divide-step based on partitioning the triangulation of a small
sample of the input points. In experiments on synthetic and real-world data
sets, we achieve nearly perfectly balanced partitions and small border
triangulations. This almost cuts running time in half compared to
non-data-sensitive division schemes on inputs exhibiting an exploitable
underlying structure.
Comment: Short version submitted to EuroPar 201
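The divide step can be caricatured in 2D as follows. This is a minimal sketch, not the paper's method: it substitutes a plain sorted-run split of the sample and nearest-sample-point assignment for the paper's partitioning of the sample triangulation, and only illustrates how partition balance would be measured.

```python
import random

def sample_based_partition(points, k, sample_size=32, seed=0):
    """Split 2D points into k partitions using a small random sample.

    Hypothetical stand-in for the paper's divide step: instead of
    partitioning the Delaunay triangulation of the sample, we sort the
    sample lexicographically, cut it into k equal runs, and route each
    input point to the partition of its nearest sample point.
    """
    rng = random.Random(seed)
    sample = sorted(rng.sample(points, min(sample_size, len(points))))
    run = max(1, len(sample) // k)
    label = {s: min(i // run, k - 1) for i, s in enumerate(sample)}

    def nearest(p):  # brute-force nearest sample point
        return min(sample, key=lambda s: (s[0] - p[0]) ** 2 + (s[1] - p[1]) ** 2)

    parts = [[] for _ in range(k)]
    for p in points:
        parts[label[nearest(p)]].append(p)
    return parts

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(200)]
parts = sample_based_partition(pts, 4)
sizes = [len(part) for part in parts]
```

On clustered inputs, a data-sensitive split of this kind tracks the point density, which is what keeps both the partition sizes and the border regions small.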
Analysis of and workarounds for element reversal for a finite element-based algorithm for warping triangular and tetrahedral meshes
We consider an algorithm called FEMWARP for warping triangular and
tetrahedral finite element meshes that computes the warping using the finite
element method itself. The algorithm takes as input a two- or three-dimensional
domain defined by a boundary mesh (segments in one dimension or triangles in
two dimensions) that has a volume mesh (triangles in two dimensions or
tetrahedra in three dimensions) in its interior. It also takes as input a
prescribed movement of the boundary mesh. It computes as output updated
positions of the vertices of the volume mesh. The first step of the algorithm
is to determine from the initial mesh a set of local weights for each interior
vertex that describes each interior vertex in terms of the positions of its
neighbors. These weights are computed using a finite element stiffness matrix.
After a boundary transformation is applied, a linear system of equations based
upon the weights is solved to determine the final positions of the interior
vertices. The FEMWARP algorithm has been considered in the previous literature
(e.g., in a 2001 paper by Baker). FEMWARP has been successful in computing
deformed meshes for certain applications. However, sometimes FEMWARP reverses
elements; this is our main concern in this paper. We analyze the causes for
this undesirable behavior and propose several techniques to make the method
more robust against reversals. The most successful of the proposed methods
includes combining FEMWARP with an optimization-based untangler.
Comment: Revision of earlier version of paper. Submitted for publication in
BIT Numerical Mathematics on 27 April 2010. Accepted for publication on 7
September 2010. Published online on 9 October 2010. The final publication is
available at http://www.springerlink.co
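The weighted-average structure of the solve described above can be sketched as follows. This is a minimal illustration, not FEMWARP itself: it uses hand-picked uniform weights in place of the stiffness-matrix weights the algorithm actually computes, on a hypothetical mesh with a single interior vertex, and solves the linear system by Gauss-Seidel iteration.

```python
def femwarp_like(neighbors, weights, boundary_pos, interior0, iters=200):
    """Solve for interior vertex positions given a moved boundary.

    FEMWARP derives per-vertex weights from a finite element stiffness
    matrix; here arbitrary convex weights are accepted (uniform ones are
    a crude stand-in) and the resulting linear system is solved by
    Gauss-Seidel: each interior vertex is repeatedly set to the weighted
    average of its neighbors.
    """
    pos = dict(boundary_pos)
    pos.update(interior0)
    interior = set(interior0)
    for _ in range(iters):
        for v in interior:
            ws = weights[v]
            pos[v] = tuple(
                sum(w * pos[u][c] for u, w in zip(neighbors[v], ws))
                for c in range(2)
            )
    return {v: pos[v] for v in interior}

# Unit square with one interior vertex at the centroid.
corners = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
nbrs = {4: [0, 1, 2, 3]}
wts = {4: [0.25, 0.25, 0.25, 0.25]}
# Translate the whole boundary by (1, 0); the interior vertex follows.
moved = {v: (x + 1.0, y) for v, (x, y) in corners.items()}
out = femwarp_like(nbrs, wts, moved, {4: (0.5, 0.5)})
```

Element reversal arises when such weighted averages push an interior vertex across an element face; uniform weights make this easy to provoke with non-convex boundary motions, which is why the actual method's choice of weights and the proposed untangler matter.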
Streaming Compression of Tetrahedral Volume Meshes
Geometry processing algorithms have traditionally assumed that the input data is entirely in main memory and available for random access. This assumption does not scale to large data sets, as exhausting the physical memory typically leads to IO-inefficient thrashing. Recent works advocate processing geometry in a 'streaming' manner, where computation and output begin as soon as possible. Streaming is suitable for tasks that require only local neighbor information and batch process an entire data set. We describe a streaming compression scheme for tetrahedral volume meshes that encodes vertices and tetrahedra in the order they are written. To keep the memory footprint low, the compressor is informed when vertices are referenced for the last time (i.e. are finalized). The compression achieved depends on how coherent the input order is and how many tetrahedra are buffered for local reordering. For reasonably coherent orderings and a buffer of 10,000 tetrahedra, we achieve compression rates that are only 25 to 40 percent above the state of the art, while requiring drastically less memory and less than half the processing time.
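The finalization contract at the heart of this scheme can be illustrated with a toy stream processor. The record format here is hypothetical, and the actual compressor's entropy coding and buffered reordering are omitted; the sketch only shows how finalization bounds the in-memory working set.

```python
def stream_process(records):
    """Stream a tetrahedral mesh, evicting vertices once finalized.

    Hypothetical record format: ('v', id, xyz) introduces a vertex,
    ('t', (a, b, c, d)) a tetrahedron, and ('f', id) marks a vertex as
    finalized, i.e. referenced for the last time. A streaming compressor
    keeps only unfinalized vertices in memory; here we just track that
    working set and its peak size.
    """
    live, peak, tets = {}, 0, 0
    for rec in records:
        if rec[0] == 'v':
            live[rec[1]] = rec[2]
            peak = max(peak, len(live))
        elif rec[0] == 't':
            assert all(v in live for v in rec[1])  # tets may only cite live vertices
            tets += 1
        else:  # 'f': this vertex will never be referenced again
            del live[rec[1]]
    return peak, tets

stream = [
    ('v', 0, (0, 0, 0)), ('v', 1, (1, 0, 0)), ('v', 2, (0, 1, 0)),
    ('v', 3, (0, 0, 1)), ('t', (0, 1, 2, 3)), ('f', 0),
    ('v', 4, (1, 1, 1)), ('t', (1, 2, 3, 4)),
    ('f', 1), ('f', 2), ('f', 3), ('f', 4),
]
peak, n_tets = stream_process(stream)
```

The peak working-set size, not the total mesh size, is what bounds memory, which is why coherent input orderings (early finalization) help so much.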
Universal Scaling of Optimal Current Distribution in Transportation Networks
Transportation networks are inevitably selected with reference to their
global cost which depends on the strengths and the distribution of the embedded
currents. We prove that optimal current distributions for a uniformly injected
d-dimensional network exhibit robust scale-invariance properties, independently
of the particular cost function considered, as long as it is convex. We find
that, in the limit of large currents, the distribution decays as a power law
with an exponent equal to (2d-1)/(d-1). The current distribution can be exactly
calculated in d=2 for all values of the current. Numerical simulations further
suggest that the scaling properties remain unchanged under both random injections
and randomized convex cost functions.
Comment: 5 pages, 5 figure
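For the quadratic member of the convex-cost family, the optimal flow coincides with the current in a unit-resistor network, so it can be computed directly. The sketch below is a minimal illustration on a small 8 x 8 grid with uniform unit injection and a single corner collector, solved by Gauss-Seidel; the predicted power-law tail with exponent (2d-1)/(d-1) = 3 in d = 2 would only emerge at much larger sizes.

```python
def optimal_currents(n, iters=4000):
    """Optimal (quadratic-cost) currents on an n x n grid.

    Uniform unit injection at every node, collected at the corner (0,0).
    For convex quadratic cost the optimum is the resistor-network flow:
    solve Kirchhoff's equations for the potentials by Gauss-Seidel, then
    read currents off potential differences.
    """
    nodes = [(i, j) for i in range(n) for j in range(n)]

    def nbrs(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < n and 0 <= j + dj < n:
                yield (i + di, j + dj)

    sink = (0, 0)                       # grounded collector, phi = 0
    phi = {v: 0.0 for v in nodes}
    for _ in range(iters):
        for v in nodes:
            if v == sink:
                continue
            ns = list(nbrs(*v))
            phi[v] = (sum(phi[u] for u in ns) + 1.0) / len(ns)  # unit injection

    currents = []
    for v in nodes:
        for u in nbrs(*v):
            if u > v:                   # count each edge once
                currents.append(abs(phi[v] - phi[u]))
    return currents

cur = sorted(optimal_currents(8))
```

At convergence the two edges into the collector jointly carry all 63 injected units, while most edges carry small currents, the aggregation that produces the broad current distribution studied in the paper.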
Evaluating Elevated Convection with the Downdraft Convective Inhibition
A method for evaluating the penetration of a stable layer by an elevated convective downdraft is discussed. Some controversy exists over the community's ability to distinguish truly elevated convection from surface-based convection. By comparing the downdraft convective inhibition (DCIN) to the downdraft convective available potential energy (DCAPE), we determine that downdraft penetration becomes progressively more likely as DCIN falls below DCAPE; conversely, as DCIN increases over DCAPE, so does the likelihood of purely elevated convection. Serial vertical soundings and accompanying analyses are provided to support this finding.
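The comparison rule reduces to a one-line check. The helper below is hypothetical and purely illustrative: it encodes only the DCIN-versus-DCAPE comparison from the abstract, with both quantities assumed to be in J/kg; no thresholds beyond the comparison itself come from the paper.

```python
def convection_regime(dcape, dcin):
    """Illustrative DCIN/DCAPE comparison (both in J/kg, as positive
    magnitudes). Per the abstract's criterion: larger DCIN relative to
    DCAPE favors purely elevated convection; smaller DCIN favors
    downdraft penetration of the stable layer.
    """
    if dcin >= dcape:
        return "elevated"
    return "downdraft penetration possible"

regime = convection_regime(dcape=800.0, dcin=1200.0)
```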
Minimizing the stabbing number of matchings, trees, and triangulations
The (axis-parallel) stabbing number of a given set of line segments is the
maximum number of segments that can be intersected by any one (axis-parallel)
line. This paper deals with finding perfect matchings, spanning trees, or
triangulations of minimum stabbing number for a given set of points. The
complexity of these problems has been a long-standing open question; in fact,
it is one of the original 30 outstanding open problems in computational
geometry on the list by Demaine, Mitchell, and O'Rourke. The answer we provide
is negative for a number of minimum stabbing problems by showing them NP-hard
by means of a general proof technique. It implies non-trivial lower bounds on
the approximability. On the positive side we propose a cut-based integer
programming formulation for minimizing the stabbing number of matchings and
spanning trees. We obtain lower bounds (in polynomial time) from the
corresponding linear programming relaxations, and show that an optimal
fractional solution always contains an edge of at least constant weight. This
result constitutes a crucial step towards a constant-factor approximation via
an iterated rounding scheme. In computational experiments we demonstrate that
our approach allows for actually solving problems with up to several hundred
points optimally or near-optimally.
Comment: 25 pages, 12 figures, Latex. To appear in "Discrete and Computational
Geometry". Previous version (extended abstract) appears in SODA 2004, pp.
430-43
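The objective being minimized can be evaluated by brute force, since only lines through segment endpoints need to be checked. This sketch computes the axis-parallel stabbing number of a fixed segment set; the paper's contributions (the NP-hardness proof and the IP/LP machinery for minimizing it over matchings and trees) are not reproduced here.

```python
def stabbing_number(segments):
    """Axis-parallel stabbing number of a set of 2D segments: the
    maximum number of segments met by any single vertical or horizontal
    line. A maximizing line can be assumed to pass through a segment
    endpoint, so only endpoint coordinates are tried.
    """
    def best(axis):
        coords = {p[axis] for seg in segments for p in seg}
        return max(
            sum(min(a[axis], b[axis]) <= t <= max(a[axis], b[axis])
                for a, b in segments)
            for t in coords
        )
    return max(best(0), best(1))  # vertical lines, then horizontal lines

# Three segments; the vertical line x = 1 stabs all of them.
segs = [((0, 0), (2, 0)), ((1, -1), (1, 1)), ((0, 2), (2, 2))]
```

Evaluating this objective is easy; the hardness shown in the paper lies in choosing the matching, tree, or triangulation whose segments minimize it.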
Variational tetrahedral meshing
In this paper, a novel Delaunay-based variational approach to isotropic tetrahedral meshing is presented. To achieve both robustness and efficiency, we minimize a simple mesh-dependent energy through global updates of both vertex positions and connectivity. As this energy is known to be the L1 distance between an isotropic quadratic function and its linear interpolation on the mesh, our minimization procedure generates well-shaped tetrahedra. Mesh design is controlled through a gradation smoothness parameter and selection of the desired number of vertices. We provide the foundations of our approach by explaining both the underlying variational principle and its geometric interpretation. We demonstrate the quality of the resulting meshes through a series of examples.
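The vertex-position half of such a global update can be sketched in 2D (the tetrahedral case is analogous). The relocation rule below, moving a vertex to the area-weighted average of the circumcenters of its incident triangles, is a commonly used update consistent with this kind of energy; the one-ring is hypothetical, and the connectivity (Delaunay) update is omitted.

```python
def circumcenter(a, b, c):
    """Circumcenter of triangle abc in 2D (standard closed form)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

def tri_area(a, b, c):
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2

def odt_relocate(v, ring):
    """One variational vertex update: move v to the area-weighted
    average of the circumcenters of its incident triangles."""
    tris = [(v, ring[i], ring[(i + 1) % len(ring)]) for i in range(len(ring))]
    w = [tri_area(*t) for t in tris]
    cs = [circumcenter(*t) for t in tris]
    total = sum(w)
    return (sum(wi * c[0] for wi, c in zip(w, cs)) / total,
            sum(wi * c[1] for wi, c in zip(w, cs)) / total)

ring = [(0, 0), (2, 0), (2, 2), (0, 2)]      # hypothetical one-ring (square)
center = odt_relocate((1.0, 1.0), ring)      # (1, 1) is a fixed point here
new = odt_relocate((1.2, 0.9), ring)         # off-center vertex relaxes inward
```

In the full method this relocation alternates with Delaunay connectivity updates, which together drive the global energy down and produce well-shaped elements.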
A radium assay technique using hydrous titanium oxide adsorbent for the Sudbury Neutrino Observatory
As photodisintegration of deuterons mimics the disintegration of deuterons by
neutrinos, the accurate measurement of the radioactivity from thorium and
uranium decay chains in the heavy water in the Sudbury Neutrino Observatory
(SNO) is essential for the determination of the total solar neutrino flux. A
radium assay technique of the required sensitivity is described that uses
hydrous titanium oxide adsorbent on a filtration membrane together with a
beta-alpha delayed coincidence counting system. For a 200 tonne assay the
detection limit for 232Th is a concentration of 3 x 10^(-16) g Th/g water and
for 238U of 3 x 10^(-16) g U/g water. Results of assays of both the heavy and
light water carried out during the first two years of data collection of SNO
are presented.
Comment: 12 pages, 4 figure
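The beta-alpha delayed coincidence idea can be sketched as a time-window pairing over an event list. The record format and window bounds below are purely illustrative assumptions; SNO's actual counting system, window choices, and efficiencies are not reproduced here.

```python
def delayed_coincidences(events, window=(0.0004, 0.04)):
    """Pair each alpha with any beta that precedes it within a time
    window (seconds; bounds here are illustrative, not SNO's). Delayed
    beta-alpha coincidences tag short-lived decay steps in the uranium
    and thorium chains, giving a low-background signature.
    """
    betas = [t for t, kind in events if kind == 'beta']
    pairs = []
    for t, kind in events:
        if kind != 'alpha':
            continue
        for tb in betas:
            if window[0] <= t - tb <= window[1]:
                pairs.append((tb, t))
    return pairs

# (time in seconds, particle kind); only the first beta-alpha pair coincides.
evts = [(0.0, 'beta'), (0.001, 'alpha'), (5.0, 'beta'), (9.0, 'alpha')]
hits = delayed_coincidences(evts)
```

Counting such pairs, rather than raw alpha or beta rates, is what pushes the detection limit down to the 10^(-16) g/g concentrations quoted above.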