1,685 research outputs found
Data Fusion of Objects Using Techniques Such as Laser Scanning, Structured Light and Photogrammetry for Cultural Heritage Applications
In this paper we present a semi-automatic 2D-3D local registration pipeline
capable of coloring 3D models obtained from 3D scanners by using uncalibrated
images. The proposed pipeline exploits the Structure from Motion (SfM)
technique in order to reconstruct a sparse representation of the 3D object and
obtain the camera parameters from image feature matches. We then coarsely
register the reconstructed 3D model to the scanned one through the Scale
Iterative Closest Point (SICP) algorithm. SICP provides the global scale,
rotation and translation parameters, using minimal manual user intervention. In
the final processing stage, a local registration refinement algorithm optimizes
the color projection of the aligned photos on the 3D object, removing the
blurring/ghosting artefacts introduced by small inaccuracies during
registration. The proposed pipeline is capable of handling real-world cases
with a range of characteristics, from objects with few geometric features
to complex ones.
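The coarse SICP stage above must recover a global scale in addition to the rigid pose. As an illustration of that step (a minimal numpy sketch of the closed-form Umeyama similarity fit on already-matched points, not the paper's SICP implementation, which iterates this as closest-point correspondences change):

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form Umeyama fit: scale s, rotation R, translation t
    such that dst ~= s * (src @ R.T) + t, for matched 3D point sets."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    cov = D.T @ S / n                          # cross-covariance (dst rows, src cols)
    U, d, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    E = np.diag([1.0, 1.0, sign])              # guard against reflections
    R = U @ E @ Vt
    s = np.trace(np.diag(d) @ E) / ((S ** 2).sum() / n)
    t = mu_d - s * R @ mu_s
    return s, R, t
```

SICP in effect alternates such a fit with closest-point matching until scale and pose stabilize.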
A Comparative Study of Registration Methods for RGB-D Video of Static Scenes
The use of RGB-D sensors for mapping and recognition tasks in robotics or, in general, for virtual reconstruction has increased in recent years. The key aspect of these sensors is that they provide both depth and color information from the same device. In this paper, we present a comparative analysis of the most important methods used in the literature for the registration of subsequent RGB-D video frames in static scenarios. The analysis begins by explaining the characteristics of the registration problem, dividing it into two representative applications: scene modeling and object reconstruction. Then, detailed experiments are carried out to determine the behavior of the different methods depending on the application. For both applications, we used standard datasets and a new one built for object reconstruction. This work has been supported by a grant from the Spanish Government (DPI2013-40534-R), University of Alicante project GRE11-01, and a grant from the Valencian Government (GV/2013/005).
Comparing ICP variants on real-world data sets: Open-source library and experimental protocol
Many modern sensors used for mapping produce 3D point clouds, which are typically registered together using the iterative closest point (ICP) algorithm. Because ICP has many variants whose performance depends on the environment and the sensor, hundreds of variations have been published. However, no comparison frameworks are available, leading to an arduous selection of an appropriate variant for particular experimental conditions. The first contribution of this paper is a protocol that allows comparison between ICP variants, taking into account a broad range of inputs. The second contribution is an open-source ICP library, which is fast enough to be usable in multiple real-world applications, while being modular enough to ease comparison of multiple solutions. This paper presents two examples of these field applications. The last contribution is the comparison of two baseline ICP variants using data sets that cover a rich variety of environments. Besides demonstrating the need for improved ICP methods for natural, unstructured and information-deprived environments, these baseline variants also provide a solid basis to which novel solutions can be compared. The combination of our protocol, software, and baseline results demonstrates convincingly how open-source software can push forward research in mapping and navigation.
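For reference, the point-to-point baseline that such variants extend can be sketched in a few lines of numpy (brute-force nearest-neighbour matching plus a closed-form Kabsch fit; production libraries add kd-trees, outlier filters, and robust error metrics):

```python
import numpy as np

def best_rigid(src, dst):
    """Closed-form (Kabsch) rigid transform minimizing ||R @ src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (dst - mu_d).T @ (src - mu_s)
    U, _, Vt = np.linalg.svd(H)
    E = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ E @ Vt
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching and Kabsch fits."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid(cur, matches)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t  # compose with running estimate
    return R_tot, t_tot
```

Like all local ICP variants, this sketch only converges from a sufficiently close initial alignment, which is precisely why the choice of variant matters.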
Constructing a 3D Model Using the Kinect v2 Camera
Kinect is an easy-to-use and affordable RGB-D acquisition device that provides both spatial and color information for captured pixels. That makes it an attractive alternative to regular 3D scanning devices, which usually cost significantly more and do not provide color information. The second generation of Kinect (v2) provides even better quality depth and color images to the user.
This thesis describes and implements a method for 3D reconstruction using Kinect v2. The method's suitability for various objects is tested and analyzed. In most circumstances the method provided satisfactory reconstructions unless very high resolution is desired. However, some limitations were observed: reflective and transparent surfaces cause failures due to the depth-capturing technology in Kinect v2, and symmetric objects cause problems for the described frame registration algorithm. For better understanding, the Kinect v2 depth measuring process is also described.
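The first step of any such reconstruction is back-projecting the depth image into a point cloud through the pinhole camera model. A minimal sketch (the intrinsics `fx`, `fy`, `cx`, `cy` are placeholders here, not calibrated Kinect v2 values):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into an (H*W, 3) point cloud using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # drop invalid (zero-depth) pixels
```

Consecutive clouds produced this way are what the frame registration algorithm then aligns.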
Point Cloud Processing Algorithms for Environment Understanding in Intelligent Vehicle Applications
Understanding the surrounding environment, including both still and moving objects, is crucial to the design and optimization of intelligent vehicles. In particular, acquiring knowledge about the vehicle environment could facilitate reliable detection of moving objects for the purpose of avoiding collisions. In this thesis, we focus on developing point cloud processing algorithms to support intelligent vehicle applications. The contributions of this thesis are threefold.

First, inspired by the analogy between point cloud and video data, we formulate the problem of reconstructing the vehicle environment (e.g., terrains and buildings) from a sequence of point cloud sets. Built upon existing point cloud registration tools such as the iterative closest point (ICP) algorithm, we have developed an expectation-maximization (EM)-like technique that can automatically mosaic multiple point cloud sets into a larger one characterizing the still environment surrounding the vehicle.

Second, we propose to utilize the color information (from color images captured by the RGB camera) as a supplementary source to the three-dimensional point cloud data. Such a joint color and depth representation has the potential of better characterizing the surrounding environment of a vehicle. Based on this joint RGB-D representation, we propose training a convolutional neural network on color images and depth maps generated from the point cloud data.

Finally, we explore a sensor fusion method that combines the results given by a Lidar-based detection algorithm and vehicle-to-everything (V2X) communicated data. Since Lidar and V2X characterize environmental information from complementary sources, we propose to obtain a better localization of surrounding vehicles through a linear sensor fusion method. The effectiveness of the proposed sensor fusion method is verified by comparing detection error profiles.
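The mosaicking step in the first contribution reduces to composing pairwise registration results into global poses and merging the clouds. A hedged numpy sketch, assuming pairwise ICP has already produced transforms (R_i, t_i) that map frame i into frame i-1:

```python
import numpy as np

def mosaic(clouds, pairwise):
    """Merge a sequence of point clouds into frame 0 by chaining pairwise
    rigid transforms (R_i, t_i) that map frame i into frame i-1."""
    R_g, t_g = np.eye(3), np.zeros(3)
    merged = [clouds[0]]
    for cloud, (R, t) in zip(clouds[1:], pairwise):
        # compose the running global pose with the next pairwise transform
        R_g, t_g = R_g @ R, R_g @ t + t_g
        merged.append(cloud @ R_g.T + t_g)
    return np.vstack(merged)
```

Chaining like this accumulates drift, which is why the thesis wraps registration in an EM-like refinement rather than composing raw pairwise estimates.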
DeepICP: An End-to-End Deep Neural Network for 3D Point Cloud Registration
We present DeepICP - a novel end-to-end learning-based 3D point cloud
registration framework that achieves comparable registration accuracy to prior
state-of-the-art geometric methods. Unlike other keypoint-based methods,
which usually require a RANSAC procedure, we use various deep neural network
structures to establish an end-to-end trainable network.
Our keypoint detector is trained through this end-to-end structure and enables
the system to avoid the interference of dynamic objects, leverage sufficiently
salient features on stationary objects, and, as a result, achieve high
robustness. Rather than searching for corresponding points among existing
points, the key contribution is that we innovatively generate them based on
learned matching probabilities among a group of candidates, which can boost the
registration accuracy. Our loss function incorporates both the local similarity
and the global geometric constraints to ensure all above network designs can
converge towards the right direction. We comprehensively validate the
effectiveness of our approach using both the KITTI dataset and the
Apollo-SouthBay dataset. Results demonstrate that our method achieves
comparable or better performance than the state-of-the-art geometry-based
methods. Detailed ablation and visualization analysis are included to further
illustrate the behavior and insights of our network. The low registration error
and high robustness of our method make it attractive for many applications that
rely on point cloud registration.
Comment: 10 pages, 6 figures, 3 tables, typos corrected, experimental results
updated, accepted by ICCV 2019
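The corresponding-point generation described in the abstract, a virtual target point built from learned matching probabilities over a group of candidates, can be illustrated with a softmax-weighted average (a toy numpy version of the idea, not the authors' network):

```python
import numpy as np

def soft_correspondence(scores, candidates):
    """Generate virtual corresponding points as probability-weighted averages
    of candidate target points (softmax over per-candidate matching scores).
    scores: (N, K) learned similarities; candidates: (N, K, 3) target points."""
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(scores)
    p /= p.sum(axis=1, keepdims=True)                    # softmax weights, (N, K)
    return (p[..., None] * candidates).sum(axis=1)       # virtual points, (N, 3)
```

Because the weighted average is differentiable in the scores, the correspondence step can sit inside an end-to-end trainable pipeline, unlike a hard nearest-neighbour search.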
FF-LOGO: Cross-Modality Point Cloud Registration with Feature Filtering and Local to Global Optimization
Cross-modality point cloud registration is confronted with significant
challenges due to inherent differences in modalities between different sensors.
We propose FF-LOGO, a cross-modality point cloud registration framework with
feature filtering and local-to-global optimization. The cross-modality feature
correlation filtering
module extracts geometric transformation-invariant features from cross-modality
point clouds and achieves point selection by feature matching. We also
introduce a cross-modality optimization process, including a local adaptive key
region aggregation module and a global modality consistency fusion optimization
module. Experimental results demonstrate that our two-stage optimization
significantly improves the registration accuracy of the feature association and
selection module. Our method achieves a substantial increase in recall rate
compared to the current state-of-the-art methods on the 3DCSR dataset,
improving from 40.59% to 75.74%. Our code will be available at
https://github.com/wangmohan17/FFLOGO.
Comment: 7 pages, 2 figures
Morphometric Analysis through 3D Modelling of Bronze Age Stone Moulds from Central Sardinia
Stone moulds were basic elements of Bronze Age metallurgy, and their analysis and characterization are very important for improving knowledge of these artefacts and supporting typological characterization. The stone moulds investigated in this study were found during an archaeological field survey in several Nuragic (Bronze Age) settlements in Central Sardinia. Recent studies have shown that photogrammetry can be used effectively for the 3D reconstruction of small and medium-sized archaeological finds, although there are still many challenges in producing high-quality digital replicas of ancient artefacts due to their surface complexity and consistency. In this paper, we propose a multidisciplinary approach using mineralogical (X-ray powder diffraction) and petrographic (thin section) analysis of the stone materials, as well as an experimental photogrammetric method for 3D reconstruction from multi-view images performed with recent software based on the CMPMVS algorithm. The photogrammetric image dataset was acquired using an experimental rig equipped with a 26.2 Mpix full-frame digital camera. We also assessed the accuracy of the reconstruction models in order to verify their precision and readability according to archaeological goals. This allowed us to provide an effective tool for more detailed study of the geometric-dimensional aspects of the moulds. Furthermore, this paper demonstrates the potential of an integrated minero-petrographic and photogrammetric approach for the characterization of small artefacts, providing an effective tool for more in-depth future typological comparisons and provenance studies.