Data Fusion of Objects Using Techniques Such as Laser Scanning, Structured Light and Photogrammetry for Cultural Heritage Applications
In this paper we present a semi-automatic 2D-3D local registration pipeline
capable of coloring 3D models obtained from 3D scanners by using uncalibrated
images. The proposed pipeline exploits the Structure from Motion (SfM)
technique in order to reconstruct a sparse representation of the 3D object and
obtain the camera parameters from image feature matches. We then coarsely
register the reconstructed 3D model to the scanned one through the Scale
Iterative Closest Point (SICP) algorithm. SICP provides the global scale,
rotation and translation parameters with minimal manual user intervention. In
the final processing stage, a local registration refinement algorithm optimizes
the color projection of the aligned photos onto the 3D object, removing the
blurring/ghosting artefacts introduced by small inaccuracies during the
registration. The proposed pipeline handles real-world cases with a range of
characteristics, from objects with low-level geometric features to complex ones.
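The Scale Iterative Closest Point step can be sketched as alternating nearest-neighbour matching with a closed-form similarity estimate (Umeyama's method). Everything below, including the brute-force matching and the function names, is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def umeyama(P, Q):
    """Closed-form similarity (scale, rotation, translation) mapping
    matched point set P onto Q, each of shape (N, 3). Umeyama (1991)."""
    mu_p, mu_q = P.mean(0), Q.mean(0)
    Pc, Qc = P - mu_p, Q - mu_q
    cov = Qc.T @ Pc / len(P)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0          # reflection guard
    R = U @ S @ Vt
    var_p = (Pc ** 2).sum() / len(P)
    s = np.trace(np.diag(D) @ S) / var_p
    t = mu_q - s * R @ mu_p
    return s, R, t

def scale_icp(src, dst, iters=30):
    """Toy Scale-ICP: alternate nearest-neighbour matching with the
    closed-form similarity fit. Correspondences are brute-force here;
    a real implementation would use a k-d tree."""
    s, R, t = 1.0, np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = s * src @ R.T + t
        # nearest neighbour in dst for every moved source point
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        s, R, t = umeyama(src, dst[d2.argmin(1)])
    return s, R, t
```

With known correspondences the closed-form step alone recovers the global scale, rotation and translation; the iteration is only needed because the correspondences start out unknown.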
Robust Photogeometric Localization over Time for Map-Centric Loop Closure
Map-centric SLAM is emerging as an alternative to conventional graph-based
SLAM because of its accuracy and efficiency in long-term mapping problems. However, in
map-centric SLAM, the process of loop closure differs from that of conventional
SLAM, and an incorrect loop closure is more destructive and not
reversible. In this paper, we present a tightly coupled photogeometric metric
localization for the loop closure problem in map-centric SLAM. In particular,
our method combines complementary constraints from LiDAR and camera sensors,
and validates loop closure candidates with sequential observations. The
proposed method provides a visual evidence-based outlier rejection where
failures caused by either place recognition or localization outliers can be
effectively removed. We demonstrate that the proposed method is not only more
accurate than conventional global ICP methods but also robust to
incorrect initial pose guesses.

Comment: To appear in IEEE Robotics and Automation Letters, accepted January
201
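The sequential-observation validation idea can be illustrated with a toy check that the estimated loop-closure transform stays consistent over a short window of consecutive frames. The pose representation and thresholds here are assumptions for illustration, not the paper's actual criterion:

```python
import numpy as np

def consistent(poses, trans_tol=0.5, rot_tol=0.1):
    """Accept a loop-closure candidate only if the metric-localization
    result is stable across sequential observations. `poses` is a list
    of (R, t) estimates of the same loop transform, one per consecutive
    frame; a place-recognition or localization outlier shows up as a
    pose that disagrees with the rest. Thresholds are illustrative."""
    R0, t0 = poses[0]
    for R, t in poses[1:]:
        # translation disagreement
        if np.linalg.norm(t - t0) > trans_tol:
            return False
        # rotation disagreement: angle of the relative rotation R0^T R
        cos_a = (np.trace(R0.T @ R) - 1.0) / 2.0
        if np.arccos(np.clip(cos_a, -1.0, 1.0)) > rot_tol:
            return False
    return True
```

A candidate that passes only at a single frame, or whose pose jumps between frames, is rejected before it can corrupt the map.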
3D reconstruction of ribcage geometry from biplanar radiographs using a statistical parametric model approach
Rib cage 3D reconstruction is an important prerequisite for thoracic spine modelling, particularly for studies of the deformed thorax in adolescent idiopathic scoliosis. This study proposes a new method for rib cage 3D reconstruction from biplanar radiographs, using a statistical parametric model approach. Simplified parametric models were defined at the hierarchical levels of rib cage surface, rib midline and rib surface, and applied to a database of 86 trunks. The resulting parameter database served for statistical model learning, which was used to quickly provide a first estimate of the reconstruction from identifications on both radiographs. This solution was then refined by manual adjustments to improve the matching between model and image. Accuracy was assessed by comparison with 29 rib cages from CT scans, in terms of geometrical parameter differences and of line-to-line error distance between the rib midlines. Intra- and inter-observer reproducibility were determined for 20 scoliotic patients. The first estimate (mean reconstruction time of 2 min 30 s) was sufficient to extract the main global rib cage parameters with a 95% confidence interval lower than 7%, 8%, 2% and 4° for rib cage volume, antero-posterior and lateral maximal diameters, and maximal rib hump, respectively. The mean error distance was 5.4 mm (max 35 mm), down to 3.6 mm (max 24 mm) after the manual adjustment step (an additional 3 min 30 s). The proposed method will support developments of rib cage finite element modelling and evaluation of clinical outcomes.

This work was funded by the ParisTech BiomecAM chair on subject-specific musculoskeletal modelling, and we express our acknowledgements to the chair founders: Cotrel foundation, Société Générale, Protéor Company and COVEA consortium. We extend our acknowledgements to Alina Badina for medical imaging data, Alexandre Journé for his advice, and Thomas Joubert for his technical support.
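One common way to realise such a statistical first estimate is PCA over the parameter database plus a least-squares regression from a few radiograph identifications to the mode coefficients. The sketch below assumes that simplified setup (synthetic data, linear regression, invented function names) and is not the paper's exact formulation:

```python
import numpy as np

def fit_model(params, landmarks, n_modes=3):
    """Learn a PCA model over a database of per-subject parameter
    vectors, plus a linear map from landmark measurements (the
    quantities identifiable on the radiographs) to mode coefficients."""
    p_mean, l_mean = params.mean(0), landmarks.mean(0)
    _, _, Vt = np.linalg.svd(params - p_mean, full_matrices=False)
    modes = Vt[:n_modes]                         # (n_modes, n_params)
    coeffs = (params - p_mean) @ modes.T         # per-subject coefficients
    # least-squares regression: centered landmarks -> mode coefficients
    W, *_ = np.linalg.lstsq(landmarks - l_mean, coeffs, rcond=None)
    return p_mean, l_mean, modes, W

def predict(model, query_landmarks):
    """First estimate of the full parameter vector for a new subject,
    given only its landmark measurements."""
    p_mean, l_mean, modes, W = model
    return p_mean + ((query_landmarks - l_mean) @ W) @ modes
```

The predicted parameter vector would then drive the simplified parametric models to produce the initial reconstruction, which manual adjustment refines.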
Plane-Based Optimization of Geometry and Texture for RGB-D Reconstruction of Indoor Scenes
We present a novel approach to reconstruct RGB-D indoor scenes with plane
primitives. Our approach takes as input an RGB-D sequence and a dense coarse
mesh reconstructed from the sequence by some 3D reconstruction method, and
generates a lightweight, low-polygon mesh with clear face textures and sharp
features, without losing geometric detail from the original scene. To achieve
this, we first partition the input mesh with plane primitives, then simplify it
into a lightweight mesh, optimize plane parameters, camera poses and
texture colors to maximize the photometric consistency across frames, and
finally optimize the mesh geometry to maximize consistency between geometry and
planes. Compared to existing planar reconstruction methods, which only cover
large planar regions in the scene, our method builds the entire scene from
adaptive planes without losing geometric detail and preserves sharp features in
the final mesh. We demonstrate the effectiveness of our approach by applying it
to several RGB-D scans and comparing it to other state-of-the-art
reconstruction methods.

Comment: In International Conference on 3D Vision 2018; models and code: see
https://github.com/chaowang15/plane-opt-rgbd. arXiv admin note: text overlap
with arXiv:1905.0885
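The plane-primitive and geometry/plane consistency steps can be sketched with a least-squares plane fit followed by projecting cluster vertices onto their plane. This is a minimal illustration of those two ingredients, not the paper's joint optimization:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through a vertex cluster: the centroid plus
    the normal given by the smallest right singular vector."""
    c = pts.mean(0)
    _, _, Vt = np.linalg.svd(pts - c)
    n = Vt[-1]
    return c, n / np.linalg.norm(n)

def project_to_plane(pts, c, n):
    """Snap vertices onto their plane primitive -- a toy version of the
    final geometry/plane consistency step, which in the paper is an
    optimization rather than a hard projection."""
    return pts - ((pts - c) @ n)[:, None] * n
```

In the full pipeline this hard projection is replaced by a term in the geometry optimization, so that sharp features shared between adjacent planes are preserved rather than flattened away.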