7,368 research outputs found
Optimized Anisotropic Rotational Invariant Diffusion Scheme on Cone-Beam CT
Cone-beam computed tomography (CBCT) is an important imaging modality for dental surgery planning, providing high-resolution images at a relatively low radiation dose. In these scans, however, the mandibular canal is hardly visible, which is a problem for implant surgery planning. We use anisotropic diffusion filtering to remove noise and enhance the mandibular canal in CBCT scans. For the diffusion tensor we use hybrid diffusion with a continuous switch (HDCS), which is suitable for filtering both tubular and planar image structures. In this paper we focus on the diffusion discretization scheme. The standard scheme shows good isotropic filtering behavior but is not rotationally invariant; the diffusion scheme of Weickert is rotationally invariant but suffers from checkerboard artifacts. We introduce a new scheme in which we numerically optimize the image derivatives. This scheme is rotationally invariant and shows good isotropic filtering properties on both synthetic and real CBCT data.
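The edge-preserving diffusion idea behind the abstract can be illustrated with a minimal explicit step of Perona-Malik style nonlinear diffusion. This is only a simplified sketch: the paper's HDCS tensor and numerically optimized derivative kernels are considerably more involved, and the step size and diffusivity constant below are assumptions.

```python
import numpy as np

def diffusion_step(u, dt=0.1, kappa=0.5):
    # One explicit step of Perona-Malik style nonlinear diffusion on a 2D
    # image. A simplified sketch only; NOT the HDCS tensor scheme or the
    # rotation-invariant discretization described in the paper.
    un = np.pad(u, 1, mode="edge")   # replicate borders
    dn = un[:-2, 1:-1] - u           # difference to north neighbor
    ds = un[2:, 1:-1] - u            # south
    de = un[1:-1, 2:] - u            # east
    dw = un[1:-1, :-2] - u           # west
    # Diffusivity shrinks across strong gradients, so edges are preserved
    # while flat noisy regions are smoothed.
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

noisy = np.random.default_rng(0).normal(0.5, 0.1, (64, 64))
smoothed = diffusion_step(noisy)   # variance of the noise decreases
```

Iterating such steps smooths homogeneous regions while leaving strong structures (like a canal wall) comparatively intact; the discretization of the derivatives is exactly what the paper optimizes.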
A General Framework for Flexible Multi-Cue Photometric Point Cloud Registration
The ability to build maps is a key functionality for the majority of mobile
robots. A central ingredient to most mapping systems is the registration or
alignment of the recorded sensor data. In this paper, we present a general
methodology for photometric registration that can deal with multiple different
cues. We provide examples for registering RGBD as well as 3D LIDAR data. In
contrast to popular point cloud registration approaches such as ICP, our method
does not rely on explicit data association and exploits multiple modalities
such as raw range and image data streams. Color, depth, and normal information
are handled in a uniform manner, and the registration is obtained by minimizing
the pixel-wise difference between two multi-channel images. We developed a
flexible and general framework and implemented our approach inside that
framework. We also released our implementation as open source C++ code. The
experiments show that our approach allows for an accurate registration of the
sensor data without requiring an explicit data association or model-specific
adaptations to datasets or sensors. Our approach exploits the different cues in
a natural and consistent way, and the registration can be done at frame rate for
a typical range or imaging sensor.
Comment: 8 pages
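The core of such direct photometric registration is minimizing a pixel-wise difference between two multi-channel images. The toy sketch below searches only 2D integer translations (the actual framework optimizes a full 6-DoF pose with derivatives); the image sizes and channel stacking are illustrative assumptions.

```python
import numpy as np

def photometric_align(ref, cur, max_shift=3):
    # Toy direct alignment: find the integer 2D translation that minimizes
    # the pixel-wise squared difference between two multi-channel images
    # (channels could be color, depth, and normal cues, as in the paper).
    # A real system would optimize a 6-DoF pose instead of brute-forcing
    # shifts; this only demonstrates the error being minimized.
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(cur, (dy, dx), axis=(0, 1))
            err = np.sum((ref - shifted) ** 2)  # summed over all channels
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(1)
ref = rng.random((32, 32, 3))             # stacked multi-cue "image"
cur = np.roll(ref, (2, -1), axis=(0, 1))  # current frame, shifted
print(photometric_align(ref, cur))        # → (-2, 1), undoing the shift
```

Because every channel enters the same scalar error, adding a new cue (e.g. normals) needs no model-specific data association, which mirrors the flexibility claimed in the abstract.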
A Synergistic Approach for Recovering Occlusion-Free Textured 3D Maps of Urban Facades from Heterogeneous Cartographic Data
In this paper we present a practical approach for generating an
occlusion-free textured 3D map of urban facades by the synergistic use of
terrestrial images, 3D point clouds and area-based information. Particularly in
dense urban environments, the high presence of urban objects in front of the
facades causes significant difficulties for several stages in computational
building modeling. Major challenges lie on the one hand in extracting complete
3D facade quadrilateral delimitations and on the other hand in generating
occlusion-free facade textures. For these reasons, we describe a
straightforward approach for completing and recovering facade geometry and
textures by exploiting the data complementarity of terrestrial multi-source
imagery and area-based information.
Creating Simplified 3D Models with High Quality Textures
This paper presents an extension to the KinectFusion algorithm which allows
creating simplified 3D models with high quality RGB textures. This is achieved
through (i) creating model textures using images from an HD RGB camera that is
calibrated with Kinect depth camera, (ii) using a modified scheme to update
model textures in an asymmetrical colour volume that contains a higher number
of voxels than that of the geometry volume, (iii) simplifying the dense polygon
mesh model using a quadric-based mesh decimation algorithm, and (iv) creating and
mapping 2D textures to every polygon in the output 3D model. The proposed
method is implemented in real-time by means of GPU parallel processing.
Visualization via ray casting of both geometry and colour volumes provides
users with real-time feedback on the currently scanned 3D model. Experimental
results show that the proposed method is capable of keeping the model texture
quality even for a heavily decimated model and that, when reconstructing small
objects, photorealistic RGB textures can still be reconstructed.
Comment: 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Page 1 -
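The "asymmetrical colour volume" idea — storing colour at a finer voxel resolution than geometry so textures survive mesh decimation — can be sketched with a simple index mapping. The resolutions, volume size, and function name below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of an asymmetric volume layout: geometry lives in a coarse volume
# while colour lives in a finer one, so texture detail is kept even after
# the mesh is heavily decimated. All constants here are made up.
GEOM_RES, COLOR_RES = 64, 256   # colour volume: 4x more voxels per axis
VOL_SIZE = 1.0                  # assumed cubic volume, 1 m per side

def world_to_voxel(p, res):
    # Map a world-space point in [0, VOL_SIZE)^3 to integer voxel indices
    # at the given per-axis resolution (clamped to the volume bounds).
    idx = (np.asarray(p) / VOL_SIZE * res).astype(int)
    return tuple(np.minimum(idx, res - 1))

p = (0.30, 0.52, 0.11)
print(world_to_voxel(p, GEOM_RES))   # coarse geometry voxel → (19, 33, 7)
print(world_to_voxel(p, COLOR_RES))  # finer colour voxel   → (76, 133, 28)
```

One geometry voxel thus corresponds to a 4×4×4 block of colour voxels, which is why a polygon surviving decimation can still be textured at near the HD camera's detail.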
DiffComplete: Diffusion-based Generative 3D Shape Completion
We introduce a new diffusion-based approach for shape completion on 3D range
scans. Compared with prior deterministic and probabilistic methods, we strike a
balance between realism, multi-modality, and high fidelity. We propose
DiffComplete by casting shape completion as a generative task conditioned on
the incomplete shape. Our key designs are two-fold. First, we devise a
hierarchical feature aggregation mechanism to inject conditional features in a
spatially-consistent manner. So, we can capture both local details and broader
contexts of the conditional inputs to control the shape completion. Second, we
propose an occupancy-aware fusion strategy in our model to enable the
completion of multiple partial shapes and introduce higher flexibility on the
input conditions. DiffComplete sets a new state-of-the-art performance (e.g., a
40% decrease in l_1 error) on two large-scale 3D shape completion benchmarks.
Our completed shapes not only look more realistic than those of the
deterministic methods but also match the ground truths more closely than the
probabilistic alternatives. Further, DiffComplete has strong generalizability
on objects of entirely unseen classes for both synthetic and real data,
eliminating the need for model re-training in various applications.
Comment: Project Page: https://ruihangchu.com/diffcomplete.htm
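The generative side of such a method rests on a forward diffusion process that gradually noises a complete shape; the network then learns to invert it while conditioned on the incomplete scan. The sketch below shows only that forward process on a voxel occupancy grid, with an assumed linear noise schedule; the conditioning network and the paper's actual schedule are not reproduced here.

```python
import numpy as np

def forward_diffuse(x0, t, T=100):
    # Forward diffusion q(x_t | x_0) on a voxel occupancy grid: the noising
    # process a model like DiffComplete learns to invert, with the partial
    # scan supplied as a separate conditioning input (not shown here).
    # Linear beta schedule; values are illustrative, not the paper's.
    betas = np.linspace(1e-4, 0.02, T)
    alpha_bar = np.cumprod(1.0 - betas)[t]   # cumulative signal retention
    noise = np.random.default_rng(0).standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

grid = np.zeros((16, 16, 16))
grid[4:12, 4:12, 4:12] = 1.0        # toy "complete shape" as occupancy
xt = forward_diffuse(grid, t=50)    # partially noised sample at step 50
```

Because sampling the reverse process is stochastic, the same incomplete input can yield multiple plausible completions, which is the multi-modality the abstract balances against fidelity.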