
    Incremental Dense Reconstruction from Monocular Video with Guided Sparse Feature Volume Fusion

    Incrementally recovering dense 3D structure from monocular video is of paramount importance, since it enables various robotics and AR applications. Feature volumes have recently been shown to enable efficient and accurate incremental dense reconstruction without the need to first estimate depth, but they cannot reach as high a resolution as depth-based methods because of the large memory consumption of high-resolution feature volumes. This letter proposes a real-time feature volume-based dense reconstruction method that predicts TSDF (Truncated Signed Distance Function) values from a novel sparsified deep feature volume. It achieves higher resolutions than previous feature volume-based methods and is favorable in large-scale outdoor scenarios where the majority of voxels are empty. An uncertainty-aware multi-view stereo (MVS) network is leveraged to infer initial voxel locations of the physical surface in a sparse feature volume. Then, to refine the recovered 3D geometry, deep features from multi-view images are attentively aggregated at potential surface locations and temporally fused. Besides achieving higher resolutions than before, our method is shown to produce more complete reconstructions with finer detail in many cases. Extensive evaluations on both public and self-collected datasets demonstrate highly competitive real-time reconstruction results compared to state-of-the-art reconstruction methods in both indoor and outdoor settings. Comment: 8 pages, 5 figures, RA-L 202
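As an illustrative aside, the TSDF values this abstract refers to can be sketched in a few lines. This is a minimal numpy sketch of how a truncated signed distance is computed for voxels along a single camera ray given an observed depth; the function name and truncation band are assumptions for illustration, not the paper's network.

```python
import numpy as np

def tsdf_from_depth(voxel_z, observed_depth, trunc=0.1):
    """Truncated signed distance for voxels along one camera ray.

    voxel_z:        distances of voxel centres from the camera (metres)
    observed_depth: depth of the surface observed along this ray
    trunc:          truncation band; values are clipped to [-1, 1]
    Positive values lie in front of the surface, negative behind it.
    """
    sdf = (observed_depth - voxel_z) / trunc
    return np.clip(sdf, -1.0, 1.0)

# Voxels sampled every 5 cm along a ray; surface observed at 1.0 m.
z = np.arange(0.8, 1.2, 0.05)
tsdf = tsdf_from_depth(z, observed_depth=1.0, trunc=0.1)
# The zero crossing of the TSDF marks the surface location.
```

Storing such values only near the zero crossing is what makes a sparsified volume attractive when most voxels are empty.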

    CVRecon: Rethinking 3D Geometric Feature Learning For Neural Reconstruction

    Neural reconstruction from posed image sequences has made remarkable progress in recent years. However, due to the lack of depth information, existing volumetric techniques simply duplicate 2D image features of the object surface along the entire camera ray. We contend this duplication introduces noise in empty and occluded spaces, posing challenges for producing high-quality 3D geometry. Drawing inspiration from traditional multi-view stereo methods, we propose an end-to-end 3D neural reconstruction framework, CVRecon, designed to exploit the rich geometric embedding in cost volumes to facilitate 3D geometric feature learning. Furthermore, we present the Ray-contextual Compensated Cost Volume (RCCV), a novel 3D geometric feature representation that encodes view-dependent information with improved integrity and robustness. Through comprehensive experiments, we demonstrate that our approach significantly improves reconstruction quality across various metrics and recovers clear fine details of the 3D geometries. Our extensive ablation studies provide insights into the development of effective 3D geometric feature learning schemes. Project page: https://cvrecon.ziyue.cool
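The cost volumes this abstract builds on can be sketched in their simplest form: score the agreement between a reference view and a source view under a set of depth (here, disparity) hypotheses. The following is a toy numpy sketch, assuming integer horizontal shifts as a crude stand-in for the homography warps a real plane-sweep would use; all names are illustrative, not CVRecon's API.

```python
import numpy as np

def plane_sweep_cost_volume(ref_feat, src_feat, disparities):
    """Build a cost volume by shifting the source feature map across
    disparity hypotheses and scoring agreement with the reference.

    ref_feat, src_feat: (H, W, C) feature maps from two views
    disparities:        iterable of integer horizontal shifts
    Returns a (D, H, W) volume of L1 matching costs (lower = better).
    """
    H, W, _ = ref_feat.shape
    volume = np.empty((len(disparities), H, W))
    for i, d in enumerate(disparities):
        shifted = np.roll(src_feat, d, axis=1)  # crude warp stand-in
        volume[i] = np.abs(ref_feat - shifted).sum(axis=2)
    return volume

# Toy features: the source view is the reference shifted left by 2 px,
# so the hypothesis d = 2 should score the lowest total cost.
rng = np.random.default_rng(0)
ref = rng.random((8, 16, 4))
src = np.roll(ref, -2, axis=1)
cost = plane_sweep_cost_volume(ref, src, disparities=[0, 1, 2, 3])
best = cost.sum(axis=(1, 2)).argmin()
```

The per-hypothesis cost slices are exactly the "rich geometric embedding" a network can consume instead of duplicated 2D features.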

    3D reconstruction of particle agglomerates using multiple scanning electron microscope stereo-pair images

    Scanning electron microscopes (SEM) allow a detailed surface analysis of a wide variety of specimens. However, SEM image data does not provide depth information about the captured scene. This limitation can be overcome by recovering the hidden third dimension of the acquired SEM micrographs, for instance to fully characterize a particle agglomerate's morphology. In this paper, we present a method for three-dimensional (3D) reconstruction of particle agglomerates using an uncalibrated stereo vision approach applied to multiple stereo-pair images. The reconstruction scheme starts with feature detection and matching in each pair of stereo images. Based on these correspondences, a robust estimate of the epipolar geometry is determined. A subsequent rectification reduces the dense correspondence problem to a one-dimensional search along conjugate epipolar lines, so disparity maps can be obtained with a dense stereo matching algorithm. To remove outliers while preserving edges and individual structures, the disparities are refined using suitable image filtering techniques. Qualitative depth information for the investigated specimen can be calculated directly from the disparity maps. In a final step the resulting point clouds are registered. State-of-the-art algorithms for 3D reconstruction of SEM micrographs mainly focus on structures whose image pairs contain few or no occluded areas. Acquiring multiple stereo-pair images from different perspectives makes it possible to combine the obtained point clouds and thus overcome occlusions, enabling a 3D visualization of the investigated particle agglomerates. © 2018 SPIE
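The core step this abstract describes, reducing dense correspondence to a one-dimensional search along conjugate epipolar lines after rectification, can be sketched with a brute-force sum-of-squared-differences search in numpy. This is a minimal illustrative sketch assuming an already rectified pair; it is not the paper's pipeline, and a practical system would use an optimized matcher.

```python
import numpy as np

def disparity_map(left, right, max_disp=8, win=3):
    """Dense 1-D correspondence search along conjugate epipolar lines.

    Assumes the pair is already rectified, so a pixel in `left` matches
    a pixel on the same image row in `right`, displaced horizontally by
    the disparity. Uses an SSD cost over a square window.
    """
    H, W = left.shape
    half = win // 2
    disp = np.zeros((H, W), dtype=int)
    for y in range(half, H - half):
        for x in range(half, W - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_cost = 0, np.inf
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = ((patch - cand) ** 2).sum()
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp

# Toy rectified pair: the right image is the left shifted by 3 pixels,
# so interior pixels should recover a disparity of 3.
rng = np.random.default_rng(1)
left = rng.random((12, 24))
right = np.roll(left, -3, axis=1)
d = disparity_map(left, right, max_disp=6)
```

The disparity-refinement and point-cloud-registration steps of the paper would then operate on maps like `d`.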

    Analysis of mobile laser scanning data and multi-view image reconstruction

    The combination of laser scanning (LS; active, direct 3D measurement of the object surface) and photogrammetry (high geometric and radiometric resolution) is widely applied for object reconstruction (e.g. architecture, topography, monitoring, archaeology). Usually the results are a coloured point cloud or a textured mesh: the geometry is typically generated from the laser scanning point cloud, while the radiometric information comes from image acquisition. In recent years, alongside significant developments in static (terrestrial LS) and kinematic LS (airborne and mobile LS) hardware and software, research in computer vision and photogrammetry has led to highly automated procedures for image orientation and image matching. These methods allow a largely automated generation of 3D geometry from image data alone. Built on advanced feature detectors such as SIFT (Scale Invariant Feature Transform), very robust techniques for image orientation have been established (cf. Bundler). In a subsequent step, dense multi-view stereo reconstruction algorithms generate very dense 3D point clouds representing the scene geometry (cf. Patch-based Multi-View Stereo (PMVS2)). Within this paper the use of mobile laser scanning (MLS) and simultaneously acquired image data for an advanced integrated scene reconstruction is studied. For the analysis, the geometry of a scene is generated by both techniques independently. The paper then focuses on the quality assessment of both techniques, including a quality analysis of the individual surface models and a comparison of direct georeferencing of the images (using positional and orientation data of the on-board GNSS/INS system) with indirect georeferencing of the imagery by automatic image orientation. For the practical evaluation a dataset from an archaeological monument is utilised.
Based on the gained knowledge, the results are discussed and a future strategy for the integration of both techniques is proposed
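A common starting point for the kind of quality assessment this abstract describes is a cloud-to-cloud discrepancy measure between the laser-scanned and image-derived geometry. The following is a small numpy sketch under assumed toy data; a real comparison would use a spatial index (k-d tree) rather than this brute-force search, and variable names are illustrative.

```python
import numpy as np

def cloud_to_cloud_distance(cloud_a, cloud_b):
    """For each point in cloud_a, the distance to its nearest neighbour
    in cloud_b -- a per-point discrepancy between, e.g., a laser scan
    and an image-based reconstruction of the same surface.

    cloud_a: (N, 3), cloud_b: (M, 3). Brute force; fine for small N, M.
    """
    diff = cloud_a[:, None, :] - cloud_b[None, :, :]   # (N, M, 3)
    dists = np.linalg.norm(diff, axis=2)               # (N, M)
    return dists.min(axis=1)

# Toy example: the image-based cloud is the MLS cloud plus ~1 cm noise.
rng = np.random.default_rng(2)
mls_cloud = rng.random((100, 3))
image_cloud = mls_cloud + rng.normal(scale=0.01, size=(100, 3))
errors = cloud_to_cloud_distance(image_cloud, mls_cloud)
rmse = float(np.sqrt((errors ** 2).mean()))
```

Summary statistics such as `rmse` (and the spatial pattern of `errors`) are what a surface-model quality analysis would report.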

    Structured Light-Based 3D Reconstruction System for Plants.

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but a completely robust system for plants is still lacking. This paper presents a full 3D reconstruction system that incorporates both hardware (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance
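Metric measurements from stereo pairs like those the abstract describes ultimately rest on triangulation from disparity. As a minimal sketch (the camera parameters below are invented examples, not the paper's hardware), depth from a rectified pair follows Z = f·B/d:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a matched point from a rectified stereo pair:
    Z = f * B / d, with focal length f in pixels, baseline B in metres
    and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A leaf feature seen with 40 px disparity by a camera pair with a
# 1200 px focal length and a 0.1 m baseline:
z = depth_from_disparity(40, focal_px=1200, baseline_m=0.1)  # 3.0 m
```

Millimetre-level errors in plant height or internode distance thus translate directly into requirements on disparity accuracy for a given focal length and baseline.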

    Calibration and Sensitivity Analysis of a Stereo Vision-Based Driver Assistance System

    At http://intechweb.org/, under the "Books" tab, search for the title "Stereo Vision" and see Chapter 1

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions