Scan registration for autonomous mining vehicles using 3D-NDT
Scan registration is an essential subtask when building maps from range-finder data on mobile robots. The problem is to deduce how the robot has moved between consecutive scans, based on the shape of overlapping portions of the scans. This paper presents a new algorithm for registration of 3D data. The algorithm is a generalization and improvement of the normal distributions transform (NDT) for 2D data developed by Biber and Strasser, which allows for accurate registration using a memory-efficient representation of the scan surface. A detailed quantitative and qualitative comparison of the new algorithm with the 3D version of the popular ICP (iterative closest point) algorithm is presented. Results with actual mine data, some of which were collected with a new prototype 3D laser scanner, show that the presented algorithm is faster and slightly more reliable than the standard ICP algorithm for 3D registration, while using a more memory-efficient scan-surface representation.
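The core NDT idea can be sketched as follows: the reference scan is binned into a grid of cells, each cell is summarized by the mean and covariance of its points, and a candidate alignment is scored by the Gaussian response of the transformed points. This is a minimal 2D illustration with made-up names, not the paper's 3D implementation:

```python
# Minimal sketch of NDT scoring (2D for brevity; illustrative only).
import math
from collections import defaultdict

CELL = 1.0  # side length of an NDT grid cell (assumed value)

def build_ndt(points):
    """Group reference points into cells; store mean and covariance per cell."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(math.floor(x / CELL), math.floor(y / CELL))].append((x, y))
    model = {}
    for key, pts in cells.items():
        if len(pts) < 3:          # too few points for a stable covariance
            continue
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        cxx = sum((p[0] - mx) ** 2 for p in pts) / n + 1e-3  # regularized
        cyy = sum((p[1] - my) ** 2 for p in pts) / n + 1e-3
        cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
        model[key] = (mx, my, cxx, cyy, cxy)
    return model

def ndt_score(model, points):
    """Sum of Gaussian responses: a higher score means better alignment."""
    score = 0.0
    for x, y in points:
        key = (math.floor(x / CELL), math.floor(y / CELL))
        if key not in model:
            continue
        mx, my, cxx, cyy, cxy = model[key]
        det = cxx * cyy - cxy * cxy
        dx, dy = x - mx, y - my
        # Mahalanobis distance using the inverse of the 2x2 covariance
        m = (cyy * dx * dx - 2 * cxy * dx * dy + cxx * dy * dy) / det
        score += math.exp(-0.5 * m)
    return score
```

In a registration loop, this score would be maximized over candidate transforms; here it only illustrates why the per-cell Gaussian model is so compact compared with storing the raw points.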
Using 3D Voronoi grids in radiative transfer simulations
Probing the structure of complex astrophysical objects requires effective
three-dimensional (3D) numerical simulation of the relevant radiative transfer
(RT) processes. As with any numerical simulation code, the choice of an
appropriate discretization is crucial. Adaptive grids with cuboidal cells such
as octrees have proven very popular, however several recently introduced
hydrodynamical and RT codes are based on a Voronoi tessellation of the spatial
domain. Such an unstructured grid poses new challenges in laying down the rays
(straight paths) needed in RT codes. We show that it is straightforward to
implement accurate and efficient RT on 3D Voronoi grids. We present a method
for computing straight paths between two arbitrary points through a 3D Voronoi
grid in the context of a RT code. We implement such a grid in our RT code
SKIRT, using the open source library Voro++ to obtain the relevant properties
of the Voronoi grid cells based solely on the generating points. We compare the
results obtained through the Voronoi grid with those generated by an octree
grid for two synthetic models, and we perform the well-known Pascucci RT
benchmark using the Voronoi grid. The presented algorithm produces correct
results for our test models. Shooting photon packages through the geometrically
much more complex 3D Voronoi grid is only about three times slower than the
equivalent process in an octree grid with the same number of cells, while in
fact the total number of Voronoi grid cells may be lower for an equally good
representation of the density field. We conclude that the benefits of using a
Voronoi grid in RT simulation codes will often outweigh the somewhat slower
performance.
Comment: 9 pages, 7 figures, accepted by A
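The key geometric step in tracing a ray through a Voronoi grid is finding where the ray exits the current cell: each face of a Voronoi cell lies on the bisector plane between the cell's generator and a neighboring generator, so the exit is the nearest forward crossing of such a plane. A sketch of that step (illustrative code, not the SKIRT/Voro++ implementation):

```python
# Exit-face computation for a ray inside one Voronoi cell.
# The bisector between generators g and h is the plane (h - g).x = (|h|^2 - |g|^2)/2.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def next_cell(origin, direction, gen, neighbors):
    """Return (t_exit, neighbor generator) for a ray leaving the cell of `gen`.

    `neighbors` holds the generating points of the adjacent Voronoi cells.
    """
    best_t, best_h = float("inf"), None
    for h in neighbors:
        n = tuple(hi - gi for hi, gi in zip(h, gen))  # plane normal, toward h
        denom = dot(n, direction)
        if denom <= 0.0:
            continue                                  # ray moves away from this face
        rhs = (dot(h, h) - dot(gen, gen)) / 2.0
        t = (rhs - dot(n, origin)) / denom
        if 0.0 < t < best_t:
            best_t, best_h = t, h
    return best_t, best_h
```

Repeating this step cell by cell yields the full straight path; the abstract's point is that only the generating points and neighbor lists are needed, which Voro++ can supply.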
Interactive isosurface ray tracing of large octree volumes
We present a technique for ray tracing isosurfaces of large compressed structured volumes. Data is first converted into a lossless-compression octree representation that occupies a fraction of the original memory footprint. An isosurface is then dynamically rendered by tracing rays through a min/max hierarchy inside interior octree nodes. By embedding the acceleration tree and scalar data in a single structure and employing optimized octree hash schemes, we achieve competitive frame rates on common multicore architectures, and render large time-variant data that could not otherwise be accommodated.
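The min/max hierarchy enables the culling that makes this fast: a subtree is descended only if the isovalue lies within the node's stored scalar range, so empty regions of the volume are skipped wholesale. A minimal sketch of that traversal (illustrative, not the paper's compressed-octree code):

```python
# Min/max culling during isosurface traversal.

class Node:
    def __init__(self, vmin, vmax, children=()):
        self.vmin, self.vmax = vmin, vmax   # scalar range of the subtree
        self.children = children            # empty tuple for a leaf

def visit_isosurface(node, iso, hits):
    """Collect leaf nodes whose scalar range brackets the isovalue."""
    if not (node.vmin <= iso <= node.vmax):
        return                              # whole subtree culled
    if not node.children:
        hits.append(node)                   # a leaf that may contain surface
        return
    for child in node.children:
        visit_isosurface(child, iso, hits)
```

A real renderer would intersect the ray with the surviving leaves in front-to-back order; the sketch only shows why interior min/max values let most of the volume be ignored per ray.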
Highly efficient computer oriented octree data structure and neighbors search in 3D GIS spatial
Three-dimensional (3D) approaches have opened new perspectives in fields such as urban planning, hydrology, infrastructure modeling, and geology, because they can handle real-world objects more realistically than two-dimensional (2D) approaches. However, implementing 3D spatial analysis in practice has proven difficult due to algorithmic complexity, computational cost, and processing time. Existing GIS systems support 2D and two-and-a-half-dimensional (2.5D) spatial datasets, but are less capable of supporting 3D data structures. Recent developments address a weakness of the octree, finding neighboring nodes, by using address-encoding schemes with specific rules that eliminate the need for tree traversal. This paper proposes a new method that speeds up neighbor searching and eliminates the complex operations needed to extract spatial information from an octree, by preserving 3D spatial information directly in the octree data structure. The new method achieves O(1) complexity and uses Bit Manipulation Instruction Set 2 (BMI2) to speed up address encoding, extraction, and voxel search by 700% compared with a generic implementation.
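Address-encoding schemes of this kind typically interleave the bits of the x, y, z coordinates into a single code (a Morton code), so a neighbor lookup is decode, offset, re-encode in constant time; BMI2's PDEP/PEXT instructions perform the interleave in hardware. A bit-by-bit Python emulation (illustrative; the paper's exact encoding rules may differ):

```python
# Morton (bit-interleaved) voxel addresses with O(1) neighbor lookup.
# Hardware BMI2 PDEP/PEXT would do the (de)interleave in one instruction.

def morton_encode(x, y, z, bits=10):
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)       # x bits at positions 0,3,6,...
        code |= ((y >> i) & 1) << (3 * i + 1)   # y bits at positions 1,4,7,...
        code |= ((z >> i) & 1) << (3 * i + 2)   # z bits at positions 2,5,8,...
    return code

def morton_decode(code, bits=10):
    x = y = z = 0
    for i in range(bits):
        x |= ((code >> (3 * i)) & 1) << i
        y |= ((code >> (3 * i + 1)) & 1) << i
        z |= ((code >> (3 * i + 2)) & 1) << i
    return x, y, z

def neighbor(code, dx, dy, dz, bits=10):
    """O(1) neighbor: decode the address, offset each coordinate, re-encode."""
    x, y, z = morton_decode(code, bits)
    return morton_encode(x + dx, y + dy, z + dz, bits)
```

No tree traversal is involved at any point, which is the property the abstract's O(1) claim rests on.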
Adaptive approximation of signed distance fields through piecewise continuous interpolation
In this paper, we present an adaptive structure to represent a signed distance field through trilinear or tricubic interpolation of values and derivatives that allows for fast querying of the field. We also provide a method to decide when to subdivide a node to achieve a given threshold error. Both the numerical error control and the values needed to build the interpolants require evaluation of the input field; still, both are designed to minimize the total number of evaluations. C0 continuity is guaranteed for both the trilinear and tricubic versions of the algorithm. Furthermore, we describe how to preserve C1 continuity between nodes of different levels when using a tricubic interpolant, and provide a proof that this property is maintained. Finally, we illustrate the usage of our approach in several applications, including direct rendering using sphere marching.
This work has been partially funded by Ministeri de Ciència i Innovació (MICIN), Agencia Estatal de Investigación (AEI) and the Fons Europeu de Desenvolupament Regional (FEDER) (project PID2021-122136OB-C21 funded by MCIN/AEI/10.13039/501100011033/FEDER, UE). The first author gratefully acknowledges the Universitat Politècnica de Catalunya and Banco Santander for the financial support of his predoctoral FPI-UPC grant.
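The adaptive idea can be sketched in its simplest (trilinear) form: a cell stores the field values at its eight corners, the interpolant answers queries inside the cell, and the cell is subdivided when the interpolant's error at test points exceeds a threshold. A minimal sketch assuming a single center test point (the paper's criterion and its tricubic/C1 machinery are more elaborate):

```python
# Trilinear interpolation from cell corners, plus a toy subdivision test.

def trilinear(c, u, v, w):
    """c[i][j][k] are corner values at (i,j,k) in {0,1}^3; (u,v,w) in [0,1]^3."""
    val = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                weight = ((u if i else 1 - u) *
                          (v if j else 1 - v) *
                          (w if k else 1 - w))
                val += weight * c[i][j][k]
    return val

def needs_subdivision(field, lo, size, threshold):
    """Compare the interpolant against the true field at the cell center."""
    c = [[[field(lo[0] + i * size, lo[1] + j * size, lo[2] + k * size)
           for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
    mid_true = field(lo[0] + size / 2, lo[1] + size / 2, lo[2] + size / 2)
    mid_interp = trilinear(c, 0.5, 0.5, 0.5)
    return abs(mid_interp - mid_true) > threshold
```

A field that is linear inside the cell is reproduced exactly, so only cells where the field actually curves get refined; that is what keeps the total number of field evaluations low.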
Fast, Sparse Matrix Factorization and Matrix Algebra via Random Sampling for Integral Equation Formulations in Electromagnetics
Many systems designed by electrical & computer engineers rely on electromagnetic (EM) signals to transmit, receive, and extract either information or energy. In many cases, these systems are large and complex. Their accurate, cost-effective design requires high-fidelity computer modeling of the underlying EM field/material interaction problem in order to find a design with acceptable system performance. This modeling is accomplished by projecting the governing Maxwell equations onto finite dimensional subspaces, which results in a large matrix equation representation (Zx = b) of the EM problem. In the case of integral equation-based formulations of EM problems, the M-by-N system matrix, Z, is generally dense. For this reason, when treating large problems, it is necessary to use compression methods to store and manipulate Z. One such sparse representation is provided by so-called H^2 matrices. At low-to-moderate frequencies, H^2 matrices provide a controllably accurate data-sparse representation of Z.
The scale at which problems in EM are considered "large" is continuously being redefined upward. This growth in problem scale is happening not only in EM but across all other sub-fields of computational science as well. The pursuit of increasingly large problems is unwavering in all these sub-fields, and this drive has long outpaced the rate of advancement in processing and storage capabilities in computing. Computational science communities now face the computational limitations of the standard linear algebraic methods that have been relied upon for decades to run quickly and efficiently on modern computing hardware. This common set of algorithms can only produce reliable results quickly and efficiently for small to mid-sized matrices that fit into the memory of the host computer. The drive to pursue larger problems has therefore begun to outpace the reasonable capabilities of these common numerical algorithms; the deterministic numerical linear algebra algorithms that have gotten matrix computation this far have proven inadequate for many problems of current interest. As a result, computational science communities are focusing on improvements in their mathematical and software approaches to push further advancement. Randomized numerical linear algebra (RandNLA) is an emerging area that both academia and industry believe to be a strong candidate for overcoming the limitations faced when solving massive, computationally expensive problems.
This thesis presents results of recent work that uses a random sampling method (RSM) to implement algebraic operations involving multiple H^2 matrices. Significantly, this work is done in a manner that is non-invasive to an existing H^2 code base for filling and factoring H^2 matrices. The work presented thus expands the existing code's capabilities with minimal impact on existing (and well-tested) applications. In addition to this work with randomized H^2 algebra, improvements in sparse factorization methods for the compressed H^2 data structure are also presented. The reported developments in filling and factoring H^2 data structures assist in, and allow for, the further pursuit of large and complex problems in computational EM (CEM) within simulation code bases that utilize the H^2 data structure.
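The RandNLA idea underlying random sampling methods can be shown on a dense matrix: multiplying a (numerically) low-rank matrix by a small random matrix captures its range, and an orthonormal basis Q of that sample gives the factorization A ≈ Q (QᵀA). Plain-Python sketch of this randomized range finder (illustrative only; real H^2 codes apply it blockwise to the compressed structure):

```python
# Randomized low-rank factorization via random sampling (plain Python).
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def orthonormalize(Y):
    """Modified Gram-Schmidt on the columns of Y; drops dependent columns."""
    cols = transpose(Y)
    q = []
    for c in cols:
        for u in q:
            proj = sum(a * b for a, b in zip(u, c))
            c = [a - proj * b for a, b in zip(c, u)]
        norm = sum(a * a for a in c) ** 0.5
        if norm > 1e-12:
            q.append([a / norm for a in c])
    return transpose(q)

def randomized_factor(A, k, oversample=2):
    """Return (Q, B) with A ~ Q @ B, using k + oversample random probes."""
    n = len(A[0])
    omega = [[random.gauss(0, 1) for _ in range(k + oversample)]
             for _ in range(n)]
    Q = orthonormalize(matmul(A, omega))   # basis for the sampled range of A
    B = matmul(transpose(Q), A)
    return Q, B
```

Only matrix-vector products with A are needed, which is exactly what makes the approach non-invasive to an existing code base: the sampling can treat the H^2 fill/factor machinery as a black box.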
Kinetic Solvers with Adaptive Mesh in Phase Space
An Adaptive Mesh in Phase Space (AMPS) methodology has been developed for
solving multi-dimensional kinetic equations by the discrete velocity method. A
Cartesian mesh for both configuration (r) and velocity (v) spaces is produced
using a tree of trees data structure. The mesh in r-space is automatically
generated around embedded boundaries and dynamically adapted to local solution
properties. The mesh in v-space is created on-the-fly for each cell in r-space.
Mappings between neighboring v-space trees are implemented for the advection
operator in configuration space. We have developed new algorithms for solving
the full Boltzmann and linear Boltzmann equations with AMPS. Several recent
innovations were used to calculate the discrete Boltzmann collision integral
with dynamically adaptive mesh in velocity space: importance sampling,
multi-point projection method, and the variance reduction method. We have
developed an efficient algorithm for calculating the linear Boltzmann collision
integral for elastic and inelastic collisions in a Lorentz gas. The new AMPS
technique has been demonstrated for simulations of hypersonic rarefied gas
flows, ion and electron kinetics in weakly ionized plasma, radiation and light
particle transport through thin films, and electron streaming in
semiconductors. We have shown that AMPS minimizes the number of cells in
phase space, reducing computational cost and memory usage when solving
challenging kinetic problems.
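The "tree of trees" layout can be sketched as a configuration-space tree whose leaf cells each own an independently refined velocity-space tree. This is a structural sketch only, with invented names, not the AMPS code:

```python
# A tree of trees: each leaf r-cell carries its own v-space tree.
import itertools

class TreeNode:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi   # axis-aligned cell bounds, one entry per dim
        self.children = []          # empty for a leaf
        self.v_tree = None          # leaf r-cells own a per-cell v-space tree

    def refine(self):
        """Split this cell in half along every axis (2^d children)."""
        mid = [(a + b) / 2 for a, b in zip(self.lo, self.hi)]
        for corner in itertools.product((0, 1), repeat=len(self.lo)):
            lo = [m if c else a for c, a, m in zip(corner, self.lo, mid)]
            hi = [b if c else m for c, b, m in zip(corner, self.hi, mid)]
            self.children.append(TreeNode(lo, hi))

def leaves(node):
    if not node.children:
        yield node
    else:
        for child in node.children:
            yield from leaves(child)
```

Because each leaf's v-tree is created and refined on its own, the velocity mesh can be built on the fly per r-cell, as the abstract describes, instead of sharing one global velocity grid.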
Mining Point Cloud Local Structures by Kernel Correlation and Graph Pooling
Unlike on images, semantic learning on 3D point clouds using a deep network
is challenging due to the naturally unordered data structure. Among existing
works, PointNet has achieved promising results by directly learning on point
sets. However, it does not take full advantage of a point's local neighborhood
that contains fine-grained structural information which turns out to be helpful
towards better semantic learning. In this regard, we present two new operations
to improve PointNet with a more efficient exploitation of local structures. The
first one focuses on local 3D geometric structures. In analogy to a convolution
kernel for images, we define a point-set kernel as a set of learnable 3D points
that jointly respond to a set of neighboring data points according to their
geometric affinities measured by kernel correlation, adapted from a similar
technique for point cloud registration. The second one exploits local
high-dimensional feature structures by recursive feature aggregation on a
nearest-neighbor-graph computed from 3D positions. Experiments show that our
network can efficiently capture local information and robustly achieve better
performance on major datasets. Our code is available at
http://www.merl.com/research/license#KCNet
Comment: Accepted in CVPR'18. * indicates equal contribution.
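Kernel correlation as a geometric affinity measure can be sketched directly: average the Gaussian responses between every pair of (learnable kernel point, neighborhood point). A minimal illustration with an assumed Gaussian kernel and bandwidth, not the paper's CUDA implementation:

```python
# Kernel correlation between a point-set kernel and a local neighborhood.
import math

def kernel_correlation(kernel_pts, neighbor_pts, sigma=0.5):
    """Average Gaussian affinity over every kernel/neighbor point pair."""
    total = 0.0
    for k in kernel_pts:
        for x in neighbor_pts:
            d2 = sum((a - b) ** 2 for a, b in zip(k, x))
            total += math.exp(-d2 / (2 * sigma ** 2))
    return total / (len(kernel_pts) * len(neighbor_pts))
```

The response is high when the neighborhood's shape matches the kernel's point arrangement and decays with geometric mismatch; in the network the kernel points are parameters learned by backpropagating through this smooth measure.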