Probabilistic ToF and Stereo Data Fusion Based on Mixed Pixel Measurement Models
This paper proposes a method for fusing data acquired by a ToF camera and a stereo pair, based on a model for depth measurement by ToF cameras that also accounts for depth discontinuity artifacts due to the mixed pixel effect. This model is exploited within both ML and MAP-MRF frameworks for ToF and stereo data fusion. The proposed MAP-MRF framework is characterized by site-dependent range values, an important feature since it can be used both to improve accuracy and to decrease the computational complexity of standard MAP-MRF approaches. In order to optimize the site-dependent global cost function characteristic of the proposed MAP-MRF approach, this paper also introduces an extension to Loopy Belief Propagation that can be used in other contexts. Experimental data validate the proposed ToF measurement model and the effectiveness of the proposed fusion techniques.
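The MAP-MRF fusion idea can be illustrated with a toy sketch: run min-sum belief propagation over a chain of sites, where each site's data term combines Gaussian ToF and stereo depth likelihoods and neighboring sites are coupled by a truncated-linear smoothness cost. The function names, noise parameters, and shared label set below are illustrative assumptions; they do not reproduce the paper's site-dependent label sets or its extended Loopy Belief Propagation.

```python
import numpy as np

def truncated_linear(labels, weight=1.0, trunc=2.0):
    """Pairwise smoothness cost between every pair of depth labels."""
    d = np.abs(labels[:, None] - labels[None, :])
    return weight * np.minimum(d, trunc)

def fuse_depths(tof, stereo, labels, sigma_tof=0.1, sigma_stereo=0.3, iters=20):
    """Fuse per-site ToF and stereo depth evidence with min-sum BP on a chain."""
    n, L = len(tof), len(labels)
    # Data term: sum of negative-log Gaussian likelihoods from each sensor.
    data = ((labels[None, :] - tof[:, None]) ** 2 / sigma_tof**2
            + (labels[None, :] - stereo[:, None]) ** 2 / sigma_stereo**2)
    smooth = truncated_linear(labels)
    fwd = np.zeros((n, L))  # messages passed left-to-right
    bwd = np.zeros((n, L))  # messages passed right-to-left
    for _ in range(iters):
        for i in range(1, n):
            m = data[i - 1] + fwd[i - 1]
            fwd[i] = (m[:, None] + smooth).min(axis=0)
        for i in range(n - 2, -1, -1):
            m = data[i + 1] + bwd[i + 1]
            bwd[i] = (m[:, None] + smooth).min(axis=0)
    belief = data + fwd + bwd
    # MAP estimate per site: label minimizing the combined belief.
    return labels[belief.argmin(axis=1)]
```

On a chain the messages converge after one forward and one backward sweep; on a loopy grid the same updates are simply iterated, which is where the paper's extension applies.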
Fast Multi-frame Stereo Scene Flow with Motion Segmentation
We propose a new multi-frame method for efficiently computing scene flow
(dense depth and optical flow) and camera ego-motion for a dynamic scene
observed from a moving stereo camera rig. Our technique also segments out
moving objects from the rigid scene. In our method, we first estimate the
disparity map and the 6-DOF camera motion using stereo matching and visual
odometry. We then identify regions inconsistent with the estimated camera
motion and compute per-pixel optical flow only at these regions. This flow
proposal is fused with the camera motion-based flow proposal using fusion moves
to obtain the final optical flow and motion segmentation. This unified
framework benefits all four tasks - stereo, optical flow, visual odometry and
motion segmentation, leading to overall higher accuracy and efficiency. Our
method is currently ranked third on the KITTI 2015 scene flow benchmark.
Furthermore, our CPU implementation runs in 2-3 seconds per frame, which is 1-3
orders of magnitude faster than the top six methods. We also report a thorough
evaluation on challenging Sintel sequences with fast camera and object motion,
where our method consistently outperforms OSF [Menze and Geiger, 2015], which
is currently ranked second on the KITTI benchmark.
Comment: 15 pages. To appear at IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). Our results were submitted to the KITTI 2015 Stereo Scene Flow Benchmark in November 201
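The rigid-consistency test at the heart of the pipeline above can be sketched as follows: back-project each pixel using its depth, apply the estimated ego-motion, re-project into the next frame, and flag pixels whose measured flow deviates from this camera-motion flow. The names, intrinsics, and threshold below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Optical flow induced by camera ego-motion (R, t) on a static scene."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ pix * depth.reshape(-1)  # back-project to 3D
    pts2 = R @ pts + t[:, None]                       # apply ego-motion
    proj = K @ pts2
    proj = proj[:2] / proj[2]                         # re-project to pixels
    return (proj - pix[:2]).T.reshape(h, w, 2)

def motion_mask(flow_measured, flow_rigid, thresh=1.0):
    """Pixels whose measured flow disagrees with the rigid-scene hypothesis."""
    res = np.linalg.norm(flow_measured - flow_rigid, axis=-1)
    return res > thresh
```

In the paper's pipeline, per-pixel optical flow is then computed only inside this mask, and the two flow proposals are combined with fusion moves.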
High-Precision Fruit Localization Using Active Laser-Camera Scanning: Robust Laser Line Extraction for 2D-3D Transformation
Recent advancements in deep learning-based approaches have led to remarkable
progress in fruit detection, enabling robust fruit identification in complex
environments. However, much less progress has been made on fruit 3D
localization, which is equally crucial for robotic harvesting. Complex fruit
shape/orientation, fruit clustering, varying lighting conditions, and
occlusions by leaves and branches have greatly restricted existing sensors from
achieving accurate fruit localization in the natural orchard environment. In
this paper, we report on the design of a novel localization technique, called
Active Laser-Camera Scanning (ALACS), to achieve accurate and robust fruit 3D
localization. The ALACS hardware setup comprises a red line laser, an RGB color
camera, a linear motion slide, and an external RGB-D camera. Leveraging the
principles of dynamic-targeting laser-triangulation, ALACS enables precise
transformation of the projected 2D laser line from the surface of apples to the
3D positions. To facilitate laser pattern acquisitions, a Laser Line Extraction
(LLE) method is proposed for robust and high-precision feature extraction on
apples. Comprehensive evaluations of LLE demonstrated its ability to extract
precise patterns under variable lighting and occlusion conditions. The ALACS
system achieved average apple localization accuracies of 6.9 to 11.2 mm at
distances ranging from 1.0 m to 1.6 m, compared to 21.5 mm by a commercial
RealSense RGB-D camera, in an indoor experiment. Orchard evaluations
demonstrated that ALACS has achieved a 95% fruit detachment rate versus a 71%
rate by the RealSense camera. By overcoming the challenges of apple 3D
localization, this research contributes to the advancement of robotic fruit
harvesting technology.
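The 2D-to-3D transformation in laser triangulation amounts to intersecting the camera rays through the extracted laser-line pixels with the known laser plane. A minimal sketch, with hypothetical intrinsics and plane parameters rather than the ALACS calibration:

```python
import numpy as np

def laser_line_to_3d(pixels, K, plane_n, plane_d):
    """Intersect camera rays through laser-line pixels with the laser plane.

    The plane is the set of points X with plane_n . X + plane_d = 0,
    expressed in camera coordinates.
    """
    pix_h = np.column_stack([pixels, np.ones(len(pixels))]).T  # homogeneous
    rays = np.linalg.inv(K) @ pix_h                            # ray directions
    s = -plane_d / (plane_n @ rays)                            # scale along ray
    return (rays * s).T                                        # (N, 3) points
```

Robust laser line extraction (the LLE step) supplies the `pixels` input; its precision directly bounds the accuracy of the recovered 3D points.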
High-quality dense stereo vision for whole body imaging and obesity assessment
The prevalence of obesity has necessitated developing safe and convenient tools for timely assessment and monitoring of this condition across a broad range of the population. Three-dimensional (3D) body imaging has become a new means for obesity assessment. Moreover, it generates body shape information that is meaningful for fitness, ergonomics, and personalized clothing. In previous work in our lab, we developed a prototype active stereo vision system that demonstrated the potential to fulfill this goal. However, the prototype required four computer projectors to cast artificial textures on the body, which facilitate stereo matching on texture-deficient surfaces (e.g., skin). This decreases the mobility of the system when used to collect data from a large population. In addition, the resolution of the generated 3D images is limited by the cameras and projectors available during the project. The study reported in this dissertation highlights our continued effort to improve the capability of 3D body imaging through simplified hardware for passive stereo and advanced computation techniques.
The system utilizes high-resolution single-lens reflex (SLR) cameras, which have become widely available, and is configured in a two-stance design to image the front and back surfaces of a person. A total of eight cameras are used to form four stereo units, each covering a quarter of the body surface. The stereo units are individually calibrated with a specific pattern to determine the cameras' intrinsic and extrinsic parameters for stereo matching. The global orientation and position of each stereo unit within a common world coordinate system are calculated through a 3D registration step. The stereo calibration and 3D registration procedures do not need to be repeated for a deployed system if the cameras' relative positions have not changed. This property contributes to the portability of the system and greatly alleviates the maintenance task. The image acquisition time is around two seconds for a whole-body capture. The system works in an indoor environment with moderate ambient light.
Advanced stereo computation algorithms are developed by taking advantage of high-resolution images and by tackling the ambiguity problem in stereo matching. A multi-scale, coarse-to-fine matching framework is proposed to match large-scale textures at a low resolution and refine the matched results over higher resolutions. This matching strategy reduces the complexity of the computation and avoids ambiguous matching at the native resolution. The pixel-to-pixel stereo matching algorithm follows a classic, four-step strategy which consists of matching cost computation, cost aggregation, disparity computation and disparity refinement.
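The coarse-to-fine strategy can be sketched as: match over the full disparity range at a reduced resolution, then use the upsampled result to narrow the per-pixel search at the native resolution. The toy block below uses single-pixel absolute-difference costs and a two-level pyramid; the actual system follows the four-step pipeline described above, so this only illustrates the search-narrowing idea.

```python
import numpy as np

def sad_disparity(left, right, max_disp, init=None, radius=2):
    """Winner-take-all matching; `init` narrows the per-pixel search range."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            lo, hi = 0, max_disp
            if init is not None:
                lo = max(0, init[y, x] - radius)
                hi = min(max_disp, init[y, x] + radius)
            best, best_d = np.inf, 0
            for d in range(lo, hi + 1):
                if x - d < 0:
                    break
                cost = abs(float(left[y, x]) - float(right[y, x - d]))
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def coarse_to_fine(left, right, max_disp):
    """Match at half resolution, then refine at full resolution."""
    l2, r2 = left[::2, ::2], right[::2, ::2]
    d2 = sad_disparity(l2, r2, max_disp // 2)
    init = np.repeat(np.repeat(d2, 2, axis=0), 2, axis=1)
    init = init[:left.shape[0], :left.shape[1]] * 2  # rescale disparities
    return sad_disparity(left, right, max_disp, init=init)
```

Restricting the fine-level search to a small window around the coarse estimate is what reduces both the computation and the chance of ambiguous matches at the native resolution.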
The system performance has been evaluated on mannequins and human subjects in comparison with other measurement methods. It was found that the geometrical measurements from reconstructed 3D body models, including body circumferences and whole-body volume, are highly repeatable and consistent with manual and other instrumental measurements (CV 0.99). The agreement of percent body fat (%BF) estimation on human subjects between stereo and dual-energy X-ray absorptiometry (DEXA) was found to be improved over the previous active stereo system, and the limits of agreement with 95% confidence were reduced by half. Our achieved %BF estimation agreement is among the lowest reported in comparative studies with commercialized air displacement plethysmography (ADP) and DEXA. In practice, %BF estimation through a two-component model is sensitive to body volume measurement, and the estimation of lung volume can be a source of variation. Protocols for this type of measurement should be created with an awareness of this factor.
Going beyond semantic image segmentation, towards holistic scene understanding, with associative hierarchical random fields
In this thesis we exploit the generality and expressive power of the Associative Hierarchical
Random Field (AHRF) graphical model to take its use beyond that of semantic image segmentation,
into object-classes, towards a framework for holistic scene understanding. We provide a
working definition for the holistic approach to scene understanding, which allows for the integration
of existing, disparate applications into a unifying ensemble. We believe that modelling
such an ensemble as an AHRF is both a principled and pragmatic solution. We present a hierarchy
that shows several methods for fusing applications together with the AHRF graphical model.
Each of the three layers (feature, potential, and energy) subsumes its predecessor in generality,
and together they give rise to many options for integration. With applications on street scenes we
demonstrate an implementation of each layer. The first layer application joins appearance and
geometric features. For our second layer we implement a things and stuff co-junction using
higher order AHRF potentials for object detectors, with the goal of answering the classic questions:
What? Where? and How many? A holistic approach to recognition-and-reconstruction
is realised within our third layer by linking two energy based formulations of both applications.
Each application is evaluated qualitatively and quantitatively. In all cases our holistic approach
shows improvement over baseline methods.
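The associative flavour of the model can be illustrated with a minimal two-layer energy: per-pixel unary costs plus a penalty for every pixel whose label disagrees with the label of the segment containing it. The function below is a hypothetical sketch under that simplification, not the AHRF formulation used in the thesis, which has more layers and richer potentials.

```python
import numpy as np

def ahrf_energy(pixel_labels, segment_labels, segments, unary, gamma=1.0):
    """Energy of a toy two-layer associative hierarchy.

    pixel_labels:   (n,) label per pixel (base layer)
    segment_labels: (m,) label per segment (auxiliary layer)
    segments:       (n,) segment index per pixel
    unary:          (n, L) per-pixel label costs
    """
    e = unary[np.arange(len(pixel_labels)), pixel_labels].sum()
    # Associative inter-layer potential: pay gamma for every pixel
    # that disagrees with its segment's label.
    e += gamma * np.sum(pixel_labels != segment_labels[segments])
    return float(e)
```

A consistent labelling (pixels agreeing with their segments) incurs only the unary costs, which is the associative property that makes hierarchical inference tractable.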