Structured Light-Based 3D Reconstruction System for Plants.
Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
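As an illustration of the reported leaf-detection scores (recall 0.97, precision 0.89), the following minimal sketch shows how such figures follow from raw detection counts; the counts themselves are hypothetical, chosen only to reproduce the reported values.

```python
def precision_recall(tp, fp, fn):
    """Compute detection precision and recall from raw counts."""
    precision = tp / (tp + fp)   # fraction of detections that are real leaves
    recall = tp / (tp + fn)      # fraction of real leaves that were detected
    return precision, recall

# Hypothetical counts: 97 leaves found, 12 false detections, 3 leaves missed.
p, r = precision_recall(tp=97, fp=12, fn=3)
print(round(p, 2), round(r, 2))  # 0.89 0.97
```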
Deep Depth From Focus
Depth from focus (DFF) is one of the classical ill-posed inverse problems in
computer vision. Most approaches recover the depth at each pixel based on the
focal setting which exhibits maximal sharpness. Yet, it is not obvious how to
reliably estimate the sharpness level, particularly in low-textured areas. In
this paper, we propose `Deep Depth From Focus (DDFF)' as the first end-to-end
learning approach to this problem. One of the main challenges we face is the
data hunger of deep neural networks. In order to obtain a significant
number of focal stacks with corresponding ground-truth depth, we propose to
leverage a light-field camera with a co-calibrated RGB-D sensor. This allows us
to digitally create focal stacks of varying sizes. Compared to existing
benchmarks our dataset is 25 times larger, enabling the use of machine learning
for this inverse problem. We compare our results with state-of-the-art DFF
methods and we also analyze the effect of several key deep architectural
components. These experiments show that our proposed method `DDFFNet' achieves
state-of-the-art performance in all scenes, reducing depth error by more than
75% compared to the classical DFF methods.
Comment: accepted to Asian Conference on Computer Vision (ACCV) 201
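The classical per-pixel sharpness-maximization baseline that the abstract contrasts with can be sketched as follows; this is a generic textbook formulation with a Laplacian-energy focus measure, not the authors' method or code.

```python
import numpy as np

def depth_from_focus(stack, focal_depths):
    """Classical DFF: pick, per pixel, the focal slice with maximal sharpness.

    stack: (S, H, W) focal stack; focal_depths: (S,) depth of each focal setting.
    Uses periodic boundary handling (np.roll) for simplicity.
    """
    # Sharpness measure: squared response of a discrete Laplacian,
    # a common focus operator.
    lap = (np.roll(stack, 1, 1) + np.roll(stack, -1, 1)
           + np.roll(stack, 1, 2) + np.roll(stack, -1, 2) - 4 * stack)
    sharpness = lap ** 2
    best = np.argmax(sharpness, axis=0)  # (H, W) index of sharpest slice
    return focal_depths[best]
```

In low-textured areas the Laplacian response is near zero in every slice, so the argmax is unreliable; that is exactly the failure mode the learned approach targets.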
SurfelWarp: Efficient Non-Volumetric Single View Dynamic Reconstruction
We contribute a dense SLAM system that takes a live stream of depth images as
input and reconstructs non-rigid deforming scenes in real time, without
templates or prior models. In contrast to existing approaches, we do not
maintain any volumetric data structures, such as truncated signed distance
function (TSDF) fields or deformation fields, which are performance and memory
intensive. Our system works with a flat point (surfel) based representation of
geometry, which can be directly acquired from commodity depth sensors. Standard
graphics pipelines and general purpose GPU (GPGPU) computing are leveraged for
all central operations: i.e., nearest neighbor maintenance, non-rigid
deformation field estimation and fusion of depth measurements. Our pipeline
inherently avoids expensive volumetric operations such as marching cubes,
volumetric fusion and dense deformation field update, leading to significantly
improved performance. Furthermore, the explicit and flexible surfel based
geometry representation enables efficient tackling of topology changes and
tracking failures, which makes our reconstructions consistent with updated
depth observations. Our system allows robots to maintain a scene description
with non-rigidly deformed objects that potentially enables interactions with
dynamic working environments.
Comment: RSS 2018. The video and source code are available at
https://sites.google.com/view/surfelwarp/hom
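The flat surfel representation can be illustrated with a minimal sketch; the field layout and the confidence-weighted fusion rule below are common surfel-fusion conventions assumed for illustration, not SurfelWarp's actual data structures.

```python
import numpy as np

class SurfelMap:
    """A minimal flat (non-volumetric) surfel map: parallel arrays of
    positions, normals and confidences, one row per surfel."""

    def __init__(self):
        self.positions = np.empty((0, 3))
        self.normals = np.empty((0, 3))
        self.confidences = np.empty((0,))

    def add(self, point, normal):
        """Insert a new surfel observed for the first time."""
        self.positions = np.vstack([self.positions, point])
        self.normals = np.vstack([self.normals, normal])
        self.confidences = np.append(self.confidences, 1.0)

    def fuse(self, idx, point, normal, weight=1.0):
        """Fold a new depth measurement into surfel idx by
        confidence-weighted averaging."""
        c = self.confidences[idx]
        self.positions[idx] = (c * self.positions[idx] + weight * point) / (c + weight)
        n = c * self.normals[idx] + weight * normal
        self.normals[idx] = n / np.linalg.norm(n)
        self.confidences[idx] = c + weight
```

Because the map is just flat arrays, there is no grid to allocate or ray-march: no TSDF fusion and no marching cubes, which is the performance argument the abstract makes.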
FastDepth: Fast Monocular Depth Estimation on Embedded Systems
Depth sensing is a critical function for robotic tasks such as localization,
mapping and obstacle detection. There has been a significant and growing
interest in depth estimation from a single RGB image, due to the relatively low
cost and size of monocular cameras. However, state-of-the-art single-view depth
estimation algorithms are based on fairly complex deep neural networks that are
too slow for real-time inference on an embedded platform, for instance, mounted
on a micro aerial vehicle. In this paper, we address the problem of fast depth
estimation on embedded systems. We propose an efficient and lightweight
encoder-decoder network architecture and apply network pruning to further
reduce computational complexity and latency. In particular, we focus on the
design of a low-latency decoder. Our methodology demonstrates that it is
possible to achieve similar accuracy as prior work on depth estimation, but at
inference speeds that are an order of magnitude faster. Our proposed network,
FastDepth, runs at 178 fps on an NVIDIA Jetson TX2 GPU and at 27 fps when using
only the TX2 CPU, with active power consumption under 10 W. FastDepth achieves
close to state-of-the-art accuracy on the NYU Depth v2 dataset. To the best of
the authors' knowledge, this paper demonstrates real-time monocular depth
estimation using a deep neural network with the lowest latency and highest
throughput on an embedded platform that can be carried by a micro aerial
vehicle.
Comment: Accepted for presentation at ICRA 2019. 8 pages, 6 figures, 7 tables
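The effect of channel pruning on a convolutional layer's cost can be illustrated with simple arithmetic; the layer sizes below are made up for illustration, not FastDepth's actual dimensions.

```python
def conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulate count of a standard k x k convolution over an
    h x w feature map (stride 1, 'same' padding)."""
    return h * w * c_in * c_out * k * k

# Illustrative numbers only: pruning 50% of the channels on both sides of a
# layer cuts that layer's MACs by a factor of 4.
full = conv_macs(56, 56, 128, 128, 3)
pruned = conv_macs(56, 56, 64, 64, 3)
print(full // pruned)  # 4
```

Since MACs scale with the product of input and output channels, modest per-layer channel reductions compound into the large latency reductions the abstract reports.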
Indoor assistance for visually impaired people using an RGB-D camera
In this paper a navigational aid for visually impaired people is presented. The system uses an RGB-D camera to perceive the environment and implements self-localization, obstacle detection and obstacle classification. The novelty of this work is threefold. First, self-localization is performed by means of a novel camera tracking approach that uses both depth and color information. Second, to provide the user with semantic information, obstacles are classified as walls, doors, steps and a residual class that covers isolated objects and bumpy parts on the floor. Third, in order to guarantee real-time performance, the system is accelerated by offloading parallel operations to the GPU. Experiments demonstrate that the whole system runs at 9 Hz.
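The semantic obstacle-labelling idea (floor, steps, wall-like structures) can be sketched with height thresholds over floor-aligned 3D points; the thresholds and classes here are illustrative assumptions, not the paper's actual classifier (which also distinguishes doors).

```python
import numpy as np

def classify_points(points, floor_tol=0.02, step_max=0.25):
    """Label 3D points in a floor-aligned frame (z = height above the
    estimated floor plane, in metres) as floor, step or wall-like.

    Thresholds are illustrative: near-zero height -> floor, low positive
    height -> step, everything else -> wall/residual.
    """
    z = points[:, 2]
    labels = np.full(len(points), "wall", dtype=object)
    labels[(z >= floor_tol) & (z < step_max)] = "step"
    labels[np.abs(z) < floor_tol] = "floor"
    return labels
```

A real system would first estimate the floor plane (e.g. via RANSAC on the depth point cloud) to obtain the floor-aligned frame this sketch assumes.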
Depth Estimation Through a Generative Model of Light Field Synthesis
Light field photography captures rich structural information that may
facilitate a number of traditional image processing and computer vision tasks.
A crucial ingredient in such endeavors is accurate depth recovery. We present a
novel framework that allows the recovery of a high quality continuous depth map
from light field data. To this end we propose a generative model of a light
field that is fully parametrized by its corresponding depth map. The model
allows for the integration of powerful regularization techniques such as a
non-local means prior, facilitating accurate depth map estimation.
Comment: German Conference on Pattern Recognition (GCPR) 201
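The core idea of a light field fully parametrized by its depth map can be illustrated with a toy 1-D forward model that synthesizes a neighbouring sub-aperture view by disparity warping; this sketch assumes a simple nearest-neighbour warp and is not the paper's generative model.

```python
import numpy as np

def synthesize_view(center, disparity, baseline=1.0):
    """Warp a 1-D center view to a neighbouring viewpoint.

    Each output pixel x samples center[x - baseline * disparity[x]]
    (nearest neighbour, clamped at the borders). Disparity is inversely
    related to depth, so depth alone determines the synthesized view.
    """
    x = np.arange(len(center))
    src = np.clip(np.round(x - baseline * disparity).astype(int),
                  0, len(center) - 1)
    return center[src]
```

Inverting this forward model, i.e. finding the depth map whose synthesized views best match the recorded light field, is what makes strong regularizers such as a non-local means prior easy to integrate.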
A Primal-Dual Framework for Real-Time Dense RGB-D Scene Flow
This paper presents the first method to compute dense scene flow in real time for RGB-D cameras. It is based on a variational formulation where brightness constancy and geometric consistency are imposed. Accounting for the depth data provided by RGB-D cameras, regularization of the flow field is imposed on the 3D surface (or set of surfaces) of the observed scene instead of on the image plane, leading to more geometrically consistent results. The minimization problem is efficiently solved by a primal-dual algorithm implemented on a GPU, achieving previously unseen temporal performance. Several tests have been conducted to compare our approach with a state-of-the-art method (RGB-D flow), with quantitative and qualitative results evaluated. Moreover, an additional set of experiments has been carried out to show the applicability of our work to estimating motion in real time. Results demonstrate the accuracy of our approach, which outperforms RGB-D flow and is able to estimate heterogeneous and non-rigid motions at a high frame rate.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Research supported by the Spanish Government under project DPI1011-25483 and the Spanish grant program FPI-MICINN 2012.
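The family of primal-dual solvers used for such variational problems can be illustrated on the much simpler total-variation (ROF) denoising model; this is a generic Chambolle-Pock-style iteration shown for exposition only, not the paper's scene-flow energy or GPU implementation.

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundaries."""
    return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

def div(px, py):
    """Backward-difference divergence (negative adjoint of grad)."""
    return (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))

def tv_primal_dual(f, lam=8.0, tau=0.25, sigma=0.25, iters=100):
    """Primal-dual iteration for min_u |grad u|_1 + (lam/2) |u - f|^2.

    Step sizes satisfy tau * sigma * L^2 <= 1 with L^2 = 8 for this grad.
    """
    u = f.copy()
    u_bar = f.copy()
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(iters):
        # Dual ascent, then projection onto the unit ball |p| <= 1.
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px, py = px / norm, py / norm
        # Primal descent step (closed-form proximal of the quadratic term),
        # followed by over-relaxation.
        u_new = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2 * u_new - u
        u = u_new
    return u
```

Each iteration is a handful of pointwise operations and shifts, which is why this solver class maps so well onto a GPU.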