
    Evaluation of low-cost depth cameras for agricultural applications

    Low-cost depth cameras have been used in many agricultural applications, with reported advantages of low cost, reliability, and speed of measurement. However, some problems were also reported and appear to be technology-related, so understanding the limitations of each type of depth camera technology could provide a basis for technology selection and for the development of research involving their use. The cameras use one, or a combination of two, of the three available technologies: structured light, time-of-flight (ToF), and stereoscopy. The objectives were to evaluate these technologies for depth sensing, including the accuracy and repeatability of distance measurements at different positions within the image, and the cameras' usefulness in indoor and outdoor settings. The cameras were then tested in a swine facility and in a corn field. Five different cameras were used: (1) Microsoft Kinect v.1, (2) Microsoft Kinect v.2, (3) Intel® RealSense™ Depth Camera D435, (4) ZED Stereo Camera (StereoLabs), and (5) CamBoard Pico Flexx (PMD Technologies). Results indicate that there were significant camera-to-camera differences for the ZED Stereo Camera and the Kinect v.1 (p < 0.05). All cameras showed an increase in standard deviation as the distance between camera and object increased; however, the Intel RealSense camera had a larger increase. Time-of-flight cameras had the smallest error across objects of different sizes, but had non-readable zones in the corners of the images. The results indicate that ToF is the best technology for indoor applications and stereoscopy the best for outdoor applications.
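
    The accuracy and repeatability figures reported above reduce to simple per-distance statistics over repeated frames. Below is a minimal sketch, not drawn from the paper, of how such bias (accuracy) and standard deviation (repeatability) numbers can be computed; all array names, shapes, and the simulated data are illustrative assumptions.

```python
# Hypothetical repeatability analysis for a depth camera (not the paper's code):
# given repeated depth frames of a flat target at a known distance, report the
# mean error (bias) and the frame-to-frame standard deviation.
import numpy as np

def repeatability_stats(frames_mm: np.ndarray, true_mm: float) -> dict:
    """frames_mm: (n_frames, h, w) depth frames of a flat target at true_mm."""
    per_frame = frames_mm.reshape(len(frames_mm), -1).mean(axis=1)
    return {
        "bias_mm": per_frame.mean() - true_mm,  # accuracy
        "std_mm": per_frame.std(ddof=1),        # repeatability
    }

# Example: 30 simulated frames of a 2 m target with 5 mm measurement noise.
rng = np.random.default_rng(0)
frames = 2000.0 + rng.normal(0.0, 5.0, size=(30, 120, 160))
print(repeatability_stats(frames, true_mm=2000.0))
```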

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both to enhance the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and to provide intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Resolving Multi-path Interference in Time-of-Flight Imaging via Modulation Frequency Diversity and Sparse Regularization

    Time-of-flight (ToF) cameras calculate depth maps by reconstructing the phase shifts of amplitude-modulated signals. Under broad illumination or with transparent objects, reflections from multiple scene points can illuminate a given pixel, giving rise to an erroneous depth map. We report here a sparsity-regularized solution that separates K interfering components using measurements at multiple modulation frequencies. The method maps ToF imaging to the general framework of spectral estimation theory and has applications in improving depth profiles and exploiting multiple scattering. (Comment: 11 pages, 4 figures; appeared with minor changes in Optics Letters.)
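
    The measurement model behind this approach is compact enough to sketch. The following is a hedged illustration, not the authors' code: at modulation frequency f, a pixel seeing K returns at depths d_k with amplitudes a_k records one complex sample of a sum of depth-dependent phasors, and a sparse fit over a discretized depth grid separates the components. Plain orthogonal matching pursuit stands in here for the paper's sparse regularization, and all frequencies, grids, and amplitudes are invented for the example.

```python
# Hypothetical multi-frequency ToF multipath separation (a sketch, not the
# paper's method): m(f) = sum_k a_k * exp(-1j * 4*pi*f*d_k / c), so stacking
# many modulation frequencies gives a linear system that is sparse over a
# grid of candidate depths.
import numpy as np

C = 3e8  # speed of light, m/s

def tof_dictionary(freqs_hz, depth_grid_m):
    """Columns are the complex ToF responses of each candidate depth."""
    return np.exp(-1j * 4 * np.pi * np.outer(freqs_hz, depth_grid_m) / C)

def omp(A, y, k):
    """Greedy orthogonal matching pursuit for a k-sparse fit of y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x

freqs = np.linspace(10e6, 120e6, 24)   # modulation frequency sweep
grid = np.linspace(0.5, 6.0, 551)      # candidate depths, 1 cm spacing
A = tof_dictionary(freqs, grid)
y = A[:, 150] * 1.0 + A[:, 400] * 0.4  # two interfering returns: 2.0 m, 4.5 m
x = omp(A, y, k=2)
# Recovered depths are accurate to within roughly the grid spacing.
print("recovered depths (m):", grid[np.abs(x) > 0.1])
```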

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One was released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and so that they can also be applied to any other range sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of each device. (Comment: 58 pages, 23 figures; accepted for publication in Computer Vision and Image Understanding (CVIU).)

    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) imaging are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from its own limitations, preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages, such as synthetic aperture refocusing, with TOF imaging advantages, such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single-frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function. (Comment: 9 pages, 8 figures; accepted to 3DV 2015.)
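
    One ingredient mentioned above, single-frequency TOF phase unwrapping, is easy to illustrate. The sketch below is a stand-in under stated assumptions, not the paper's pipeline: a wrapped phase admits a ladder of candidate depths spaced by the unambiguous range, and a coarse depth cue (here playing the role of a light-field correspondence estimate) selects the wrapping integer per pixel.

```python
# Hypothetical single-frequency ToF phase unwrapping (illustrative only):
# d = c * (phi + 2*pi*n) / (4*pi*f) for integer n; pick the n whose depth
# best matches a coarse per-pixel estimate.
import numpy as np

C = 3e8  # speed of light, m/s

def unwrap_depth(phase_wrapped, coarse_depth_m, f_hz, n_max=8):
    """Per pixel, choose the candidate depth closest to the coarse estimate."""
    ambiguity = C / (2 * f_hz)                     # unambiguous range
    base = C * phase_wrapped / (4 * np.pi * f_hz)  # depth for n = 0
    candidates = base[..., None] + ambiguity * np.arange(n_max)
    best = np.argmin(np.abs(candidates - coarse_depth_m[..., None]), axis=-1)
    return np.take_along_axis(candidates, best[..., None], axis=-1)[..., 0]

# Example: a true depth of 7.3 m wraps at f = 50 MHz (3 m unambiguous range),
# and a rough 7.0 m cue recovers the correct candidate.
f = 50e6
true_d = np.full((2, 2), 7.3)
phase = (4 * np.pi * f * true_d / C) % (2 * np.pi)
print(unwrap_depth(phase, coarse_depth_m=np.full((2, 2), 7.0), f_hz=f))
```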

    3D data fusion from multiple sensors and its applications

    The introduction of depth cameras to the mass market helped make computer vision applicable to many real-world problems, such as human interaction in virtual environments, autonomous driving, robotics, and 3D reconstruction. All of these problems were originally tackled with standard cameras, but the intrinsic ambiguity of two-dimensional images led to the development of depth camera technologies. Stereo vision was introduced first, to provide an estimate of the 3D geometry of the scene. Structured-light depth cameras were developed to use the same concepts as stereo vision while overcoming some of the problems of passive technologies. Finally, Time-of-Flight (ToF) depth cameras solve the same depth estimation problem with a different technology. This thesis focuses on the acquisition of depth data from multiple sensors and presents techniques to efficiently combine the information from different acquisition systems. The three main technologies developed for depth estimation are first reviewed, presenting the operating principles and practical issues of each family of sensors. The use of multiple sensors is then investigated, providing practical solutions to the problems of 3D reconstruction and gesture recognition. Data from stereo vision systems and ToF depth cameras are combined to produce a higher-quality depth map, with a confidence measure for each system's depth data guiding the fusion. The lack of datasets with data from multiple sensors is addressed by proposing a system for the collection of data and ground-truth depth, and a tool to generate synthetic data from standard cameras and ToF depth cameras. For gesture recognition, a depth camera is paired with a Leap Motion device to boost recognition performance: a set of features from the two devices feeds a classification framework based on Support Vector Machines and Random Forests.
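
    The confidence-guided fusion step described above can be sketched in a few lines. The snippet below is a minimal illustration under assumed inputs, far simpler than the thesis' actual confidence measures: each sensor contributes a per-pixel confidence in [0, 1], and the fused depth is the confidence-weighted average of the two maps.

```python
# Hypothetical confidence-weighted fusion of stereo and ToF depth maps
# (a sketch of the idea, not the thesis' implementation).
import numpy as np

def fuse_depth(d_stereo, c_stereo, d_tof, c_tof, eps=1e-6):
    """Per-pixel confidence-weighted fusion of two depth maps (meters)."""
    w = c_stereo + c_tof
    fused = (c_stereo * d_stereo + c_tof * d_tof) / np.maximum(w, eps)
    return np.where(w > eps, fused, 0.0)  # 0 marks pixels neither sensor saw

# Example: stereo is confident on textured regions, ToF on flat nearby ones.
d_s = np.array([[2.0, 0.0], [3.1, 4.0]]); c_s = np.array([[0.9, 0.0], [0.5, 0.3]])
d_t = np.array([[2.1, 1.5], [3.0, 0.0]]); c_t = np.array([[0.4, 0.8], [0.9, 0.0]])
print(fuse_depth(d_s, c_s, d_t, c_t))
```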

    Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras

    Color-depth cameras (RGB-D cameras) have become primary sensors in many robotics systems, from service robotics to industrial applications. Typical consumer-grade RGB-D cameras come with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements of many robotics applications (e.g., highly accurate 3D environment reconstruction and mapping, or high-precision object recognition and localization). In this paper, we propose a human-friendly, reliable, and accurate calibration framework that makes it easy to estimate both the intrinsic and extrinsic parameters of a general color-depth sensor pair. Our approach is based on a novel two-component error model that unifies the error sources of RGB-D pairs based on different technologies, such as structured-light 3D cameras and time-of-flight cameras. Our method provides several important advantages over other state-of-the-art systems: it is general (i.e., well suited to different types of sensors), based on an easy and stable calibration protocol, provides greater calibration accuracy, and has been implemented within the ROS robotics framework. We report detailed experimental validations and performance comparisons to support our statements.
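
    As a rough illustration of depth-error calibration in the same spirit, though much simpler than the paper's two-component model, the sketch below fits a per-pixel linear correction d_corrected = a·d_measured + b from observations of a flat target at known distances; all function names, shapes, and the simulated error are assumptions.

```python
# Hypothetical per-pixel linear depth correction (not the paper's model):
# fit d_corrected = a * d_measured + b by least squares from a few
# observations at known target distances, then apply it to new frames.
import numpy as np

def fit_depth_correction(measured, truth):
    """measured: (n_obs, h, w) raw depths; truth: (n_obs,) known distances.
    Returns per-pixel coefficient maps a, b."""
    n, h, w = measured.shape
    m = measured.reshape(n, -1)                  # (n_obs, h*w)
    x = np.stack([m, np.ones_like(m)], axis=-1)  # (n_obs, h*w, 2)
    t = np.broadcast_to(truth[:, None], m.shape)
    # Solve the 2x2 normal equations independently for every pixel.
    coeffs = np.linalg.solve(
        np.einsum('npi,npj->pij', x, x),
        np.einsum('npi,np->pi', x, t),
    )
    return coeffs[:, 0].reshape(h, w), coeffs[:, 1].reshape(h, w)

def correct(frame, a, b):
    return a * frame + b

# Example: three target distances, simulated 2% scale error plus an offset.
rng = np.random.default_rng(1)
truth = np.array([1000.0, 2000.0, 3000.0])  # mm
meas = 1.02 * truth[:, None, None] + 15 + rng.normal(0, 2, (3, 4, 4))
a, b = fit_depth_correction(meas, truth)
print(correct(meas[1], a, b).mean())  # ~2000 mm after correction
```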