23,002 research outputs found
Structured Light-Based 3D Reconstruction System for Plants.
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends of recent years. Many systems have been built to model different real-world subjects, but a completely robust system for plants is still lacking. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). The system produces 3D models of whole plants from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants with a range of leaf sizes and inter-leaf distances appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection, and less than 13 mm of error for plant size, leaf size and internode distance.
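The geometric core of reconstruction from rectified stereo pairs, as used in systems like the one above, is back-projecting pixels with known disparity into 3D points. A minimal sketch follows; all camera parameters (focal length, baseline, principal point) are hypothetical values chosen for illustration, not taken from the paper.

```python
# Minimal sketch of depth recovery from a rectified stereo pair.
# All numeric camera parameters here are hypothetical.

def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Depth (mm) from disparity via Z = f * B / d for a rectified pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_mm / disparity_px

def pixel_to_point(u, v, disparity_px, focal_px, baseline_mm, cx, cy):
    """Back-project pixel (u, v) with known disparity into a 3D point (mm)."""
    z = disparity_to_depth(disparity_px, focal_px, baseline_mm)
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return (x, y, z)

# Example: hypothetical camera, 1200 px focal length, 60 mm baseline.
point = pixel_to_point(u=700, v=420, disparity_px=48.0,
                       focal_px=1200.0, baseline_mm=60.0, cx=640.0, cy=360.0)
```

A full pipeline would repeat this for every matched pixel across many stereo pairs and then register the resulting point clouds, which is the role of the registration algorithm the abstract mentions.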
Joint segmentation and classification of retinal arteries/veins from fundus images
Objective Automatic artery/vein (A/V) segmentation from fundus images is
required to track blood vessel changes occurring with many pathologies
including retinopathy and cardiovascular pathologies. One of the clinical
measures that quantifies vessel changes is the arterio-venous ratio (AVR) which
represents the ratio between artery and vein diameters. This measure
significantly depends on the accuracy of vessel segmentation and classification
into arteries and veins. This paper proposes a fast, novel method for semantic
A/V segmentation combining deep learning and graph propagation.
Methods A convolutional neural network (CNN) is proposed to jointly segment
and classify vessels into arteries and veins. The initial CNN labeling is
propagated through a graph representation of the retinal vasculature, whose
nodes are defined as the vessel branches and edges are weighted by the cost of
linking pairs of branches. To efficiently propagate the labels, the graph is
simplified into its minimum spanning tree.
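The two-step idea described above (reduce the branch graph to its minimum spanning tree, then spread the initial CNN labels along tree edges) can be sketched as follows. The branch names, edge costs, and seed labels are invented for illustration; this is not the paper's implementation.

```python
# Sketch of label propagation over a minimum spanning tree.
# Nodes stand in for vessel branches; edge costs for branch-linking costs.

def minimum_spanning_tree(nodes, edges):
    """Kruskal's algorithm; edges are (cost, u, v) tuples."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path compression
            n = parent[n]
        return n

    tree = []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def propagate_labels(tree, labels):
    """Spread artery/vein labels from seeded branches along tree edges."""
    labels = dict(labels)
    changed = True
    while changed:
        changed = False
        for u, v in tree:
            if u in labels and v not in labels:
                labels[v] = labels[u]; changed = True
            elif v in labels and u not in labels:
                labels[u] = labels[v]; changed = True
    return labels

# Hypothetical vasculature: four branches, two seeded by the CNN.
tree = minimum_spanning_tree(
    ["a", "b", "c", "d"],
    [(1, "a", "b"), (5, "a", "c"), (2, "b", "c"), (3, "c", "d")])
final = propagate_labels(tree, {"a": "artery", "d": "vein"})
```

Restricting propagation to the spanning tree removes cycles, so each unlabeled branch inherits its label along a single cheapest path rather than from conflicting routes.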
Results The method achieves an accuracy of 94.8% for vessel segmentation.
The A/V classification achieves a specificity of 92.9% with a sensitivity of
93.7% on the CT-DRIVE database, compared to the state-of-the-art specificity
and sensitivity, both of 91.7%.
Conclusion The results show that our method outperforms the leading previous
works on a public dataset for A/V classification and is by far the fastest.
Significance The proposed global AVR calculated on the whole fundus image
using our automatic A/V segmentation method can better track vessel changes
associated with diabetic retinopathy than the standard local AVR calculated
only around the optic disc. Comment: Preprint accepted in Artificial Intelligence in Medicine
J-MOD: Joint Monocular Obstacle Detection and Depth Estimation
In this work, we propose an end-to-end deep architecture that jointly learns
to detect obstacles and estimate their depth for MAV flight applications. Most
of the existing approaches either rely on Visual SLAM systems or on depth
estimation models to build 3D maps and detect obstacles. However, for the task
of avoiding obstacles, this level of complexity is not required. Recent works
have proposed multi-task architectures that perform both scene understanding and
depth estimation. We follow this line of work and propose a specific architecture to
jointly estimate depth and obstacles, without the need to compute a global map,
but maintaining compatibility with a global SLAM system if needed. The network
architecture is devised to exploit the joint information of the obstacle
detection task, which produces more reliable bounding boxes, and the depth
estimation task, increasing the robustness of both to scenario changes. We call
this architecture J-MOD. We test the effectiveness of our approach with
experiments on sequences with different appearance and focal lengths and
compare it to state-of-the-art multi-task methods that jointly perform semantic
segmentation and depth estimation. In addition, we show the integration in a
full system using a set of simulated navigation experiments where a MAV
explores an unknown scenario and plans safe trajectories by using our detection
model.
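Joint architectures of this kind are typically trained by minimizing a weighted sum of the per-task losses so that both heads drive the shared encoder. The sketch below illustrates that objective in plain Python; the loss forms and weights are illustrative assumptions, not J-MOD's actual training losses.

```python
# Toy sketch of a weighted multi-task objective for a joint
# depth-estimation + obstacle-detection network. The loss forms and
# weights below are hypothetical, for illustration only.

def depth_loss(pred, target):
    """Mean absolute depth error over a flat list of per-pixel depths."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(target)

def detection_loss(pred_conf, target_conf):
    """Squared-error surrogate for the obstacle-confidence head."""
    return sum((p - t) ** 2 for p, t in zip(pred_conf, target_conf)) / len(target_conf)

def joint_loss(depth_pred, depth_gt, conf_pred, conf_gt,
               w_depth=1.0, w_det=0.5):
    """Weighted sum: both heads share the encoder, so both gradients flow."""
    return (w_depth * depth_loss(depth_pred, depth_gt)
            + w_det * detection_loss(conf_pred, conf_gt))

# Tiny example with two depth pixels and one obstacle confidence.
loss = joint_loss([2.0, 3.0], [1.0, 3.0], [0.8], [1.0])
```

Tuning the weights trades off depth accuracy against detection reliability; the abstract's claim is that optimizing both jointly makes each task more robust to scenario changes than training either alone.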