Structured Light-Based 3D Reconstruction System for Plants.
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
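The geometric core of a stereo pipeline like the one above is triangulating depth from disparity between an image pair. A minimal sketch of that relation, with illustrative parameter names (focal length and baseline are not taken from the paper):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_mm):
    """Convert a stereo disparity map to per-pixel depth.

    Uses the standard pinhole-stereo relation Z = f * B / d.
    Parameter names and units are illustrative, not from the paper.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)   # zero disparity -> point at infinity
    valid = disparity > 0
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth

# Example: 1000 px focal length, 60 mm baseline, 12 px disparity -> 5000 mm depth
print(depth_from_disparity([[12.0]], 1000.0, 60.0))
```

The structured-light projection described in the abstract serves exactly this step: it adds texture so that disparities can be matched reliably on otherwise uniform leaf surfaces.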
Joint Reconstruction of Multi-view Compressed Images
The distributed representation of correlated multi-view images is an
important problem that arises in vision sensor networks. This paper concentrates
on the joint reconstruction problem where the distributively compressed
correlated images are jointly decoded in order to improve the reconstruction
quality of all the compressed images. We consider a scenario where the images
captured at different viewpoints are encoded independently using common coding
solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among
different cameras. A central decoder first estimates the underlying correlation
model from the independently compressed images which will be used for the joint
signal recovery. The joint reconstruction is then cast as a constrained convex
optimization problem that reconstructs total-variation (TV) smooth images that
comply with the estimated correlation model. At the same time, we add
constraints that force the reconstructed images to be consistent with their
compressed versions. We show by experiments that the proposed joint
reconstruction scheme outperforms independent reconstruction in terms of image
quality, for a given target bit rate. In addition, the decoding performance of
our proposed algorithm compares advantageously to state-of-the-art distributed
coding schemes based on disparity learning and on the DISCOVER
SD-MVS: Segmentation-Driven Deformation Multi-View Stereo with Spherical Refinement and EM optimization
In this paper, we introduce Segmentation-Driven Deformation Multi-View Stereo
(SD-MVS), a method that can effectively tackle challenges in 3D reconstruction
of textureless areas. We are the first to adopt the Segment Anything Model
(SAM) to distinguish semantic instances in scenes and further leverage these
constraints for pixelwise patch deformation on both matching cost and
propagation. Concurrently, we propose a unique refinement strategy that
combines spherical coordinates and gradient descent on normals and pixelwise
search interval on depths, significantly improving the completeness of the
reconstructed 3D model. Furthermore, we adopt the Expectation-Maximization (EM)
algorithm to alternately optimize the aggregate matching cost and
hyperparameters, effectively mitigating the problem of parameters being
excessively dependent on empirical tuning. Evaluations on the ETH3D
high-resolution multi-view stereo benchmark and the Tanks and Temples dataset
demonstrate that our method can achieve state-of-the-art results with less time
consumption. Comment: 10 pages, 9 figures, published to AAAI202
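The spherical-coordinate refinement the abstract mentions rests on parameterizing a unit surface normal by two angles instead of three constrained components, so gradient steps stay on the unit sphere. A minimal sketch of that conversion (function names are illustrative, not from the paper):

```python
import numpy as np

def normal_to_spherical(n):
    """Unit normal -> (theta, phi): polar angle from +z, azimuth in x-y plane.

    Two free angles replace three coupled components, which is the point of
    refining normals in spherical coordinates.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    theta = np.arccos(np.clip(n[2], -1.0, 1.0))
    phi = np.arctan2(n[1], n[0])
    return theta, phi

def spherical_to_normal(theta, phi):
    """(theta, phi) -> unit normal; always returns a vector of norm 1."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

# Round trip: recover the original unit normal
n = np.array([0.0, 0.6, 0.8])
assert np.allclose(spherical_to_normal(*normal_to_spherical(n)), n)
```

Any pair of angles maps back to a valid unit normal, so gradient descent on (theta, phi) never needs a renormalization step.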