6,044 research outputs found

    Advantages of 3D time-of-flight range imaging cameras in machine vision applications

    Machine vision using image processing of traditional intensity images is in widespread use. In many situations, environmental conditions or object colours and shades cannot be controlled, leading to difficulties in correctly processing the images and requiring complicated processing algorithms. Many of these complications can be avoided by using range image data instead of intensity data, because range data represent the physical properties of object location and shape practically independently of object colour or shading. The advantages of range image processing are presented, along with three example applications that show how robust machine vision results can be obtained with relatively simple range image processing in real-time applications.
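
    As a rough illustration of why range data simplifies such processing, here is a minimal Python sketch (hypothetical segment_by_range helper, synthetic range map) of colour-independent segmentation by simple depth thresholding:

        import numpy as np

        def segment_by_range(range_image, near_mm, far_mm):
            """Return a boolean mask of pixels whose range lies inside a
            distance band.  The result depends only on geometry, not on
            object colour, shading or illumination."""
            return (range_image > near_mm) & (range_image < far_mm)

        # Example: pick out everything between 0.5 m and 1.2 m from the camera,
        # using a synthetic range map in millimetres as a stand-in for real data.
        rng = np.random.uniform(300, 3000, size=(240, 320))
        foreground = segment_by_range(rng, 500, 1200)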

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion of technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
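
    As a hedged illustration of one of the optical techniques covered by such reviews, a minimal sketch of passive-stereo triangulation for a rectified stereo laparoscope (all values illustrative, not taken from the paper):

        def stereo_depth_mm(disparity_px, focal_px, baseline_mm):
            """Classic triangulation: depth Z = f * B / d, with focal length f
            in pixels, baseline B between the two optical centres in mm, and
            disparity d of a matched surface point in pixels."""
            if disparity_px <= 0:
                raise ValueError("disparity must be positive for a finite depth")
            return focal_px * baseline_mm / disparity_px

        # A tissue point matched with 25 px disparity, f = 800 px, B = 4 mm:
        z = stereo_depth_mm(25, 800, 4.0)  # about 128 mm from the camera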

    Dial It In: Rotating RF Sensors to Enhance Radio Tomography

    A radio tomographic imaging (RTI) system uses the received signal strength (RSS) measured by RF sensors in a static wireless network to localize people in the deployment area without requiring them to carry or wear an electronic device. This paper addresses the fact that small-scale changes in the position and orientation of each RF sensor's antenna can dramatically affect the imaging and localization performance of an RTI system; however, the best placement for a sensor is unknown at the time of deployment. Improving performance in a deployed RTI system requires the deployer to iteratively "guess and retest", i.e., pick a sensor to move and then re-run a calibration experiment to determine whether localization performance has improved or degraded. We present an RTI system of servo-nodes, RF sensors equipped with servo motors which autonomously "dial it in", i.e., change position and orientation to optimize the RSS on links of the network. By doing so, the localization accuracy of the RTI system is quickly improved without requiring any calibration experiment from the deployer. Experiments conducted in three indoor environments demonstrate that the servo-node system reduces localization error on average by 32% compared to a standard RTI system composed of static RF sensors.
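
    For context, RTI is usually posed as a linear inverse problem: the vector of link RSS changes y is modelled as y ≈ W x, where x is the attenuation image and W a link/voxel weight matrix. A minimal, hedged sketch of a standard Tikhonov-regularised reconstruction (names and values are placeholders, not the paper's implementation):

        import numpy as np

        def rti_image(rss_change, weights, alpha=1.0):
            """Least-squares estimate of the attenuation image x from link RSS
            changes y under the linear model y ~ W x, with Tikhonov
            regularisation of strength alpha."""
            W = np.asarray(weights, dtype=float)   # shape (links, voxels)
            y = np.asarray(rss_change, dtype=float)
            A = W.T @ W + alpha * np.eye(W.shape[1])
            return np.linalg.solve(A, W.T @ y)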

    Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty

    This work proposes a robust visual odometry method for structured environments that combines point features with line and plane segments extracted from an RGB-D camera. Noisy depth maps are processed by a probabilistic depth fusion framework based on Mixtures of Gaussians to denoise them and derive the depth uncertainty, which is then propagated throughout the visual odometry pipeline. Probabilistic 3D plane and line fitting solutions are used to model the uncertainties of the feature parameters, and pose is estimated by combining the three types of primitives based on their uncertainties. Performance evaluation on RGB-D sequences collected in this work and on two public RGB-D datasets (TUM and ICL-NUIM) shows the benefit of using the proposed depth fusion framework and of combining the three feature types, particularly in scenes with low-textured surfaces, dynamic objects and missing depth measurements.
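
    As a simplified stand-in for the paper's Mixture-of-Gaussians depth filter, the sketch below shows inverse-variance (Gaussian) fusion of several noisy depth measurements of one pixel, yielding both a fused depth and a variance that could be propagated into plane/line fitting and pose estimation (illustrative only):

        import numpy as np

        def fuse_depth(depths, variances):
            """Inverse-variance fusion of noisy depth observations of the same
            pixel; returns the fused depth and its (smaller) variance."""
            depths = np.asarray(depths, dtype=float)
            w = 1.0 / np.asarray(variances, dtype=float)
            var_fused = 1.0 / w.sum()
            return var_fused * (w * depths).sum(), var_fused

        # Three noisy observations of the same pixel, in metres:
        d, v = fuse_depth([2.02, 1.97, 2.05], [0.01, 0.02, 0.04])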

    Exploitation of time-of-flight (ToF) cameras

    This technical report reviews the state of the art in the field of ToF cameras: their advantages, their limitations, and their present-day applications, sometimes in combination with other sensors. Even though ToF cameras provide neither higher resolution nor a larger ambiguity-free range than other range map estimation systems, advantages such as registered depth and intensity data at a high frame rate, compact design, low weight and reduced power consumption have motivated their use in numerous areas of research. In robotics, these areas range from mobile robot navigation and map building to vision-based human motion capture and gesture recognition, showing particularly great potential in object modeling and recognition.
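
    The ambiguity-free range mentioned above follows directly from the continuous-wave ToF measurement principle; a minimal sketch (illustrative values, not tied to any particular camera):

        import math

        def tof_depth_m(phase_rad, mod_freq_hz):
            """Continuous-wave ToF: distance d = c * phi / (4 * pi * f_mod).
            The unambiguous range is c / (2 * f_mod), which is why ToF cameras
            trade maximum range against modulation frequency."""
            c = 299_792_458.0  # speed of light, m/s
            return c * phase_rad / (4.0 * math.pi * mod_freq_hz)

        # A 20 MHz modulation gives an unambiguous range of about 7.5 m;
        # a measured phase shift of pi/2 then corresponds to roughly 1.87 m.
        d = tof_depth_m(math.pi / 2, 20e6)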

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One was issued by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which forms a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can also be applied to any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device.
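
    One of the simpler measurements such an evaluation framework might include is per-pixel temporal noise on a static scene; a hedged sketch (setup and names are illustrative, not the paper's seven experiments):

        import numpy as np

        def per_pixel_precision(depth_frames):
            """Given a (T, H, W) stack of depth maps of a static scene, return
            the per-pixel standard deviation (temporal noise) and its
            scene-wide median as a single summary figure."""
            stack = np.asarray(depth_frames, dtype=float)
            sigma = stack.std(axis=0)
            return sigma, float(np.median(sigma))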