
    Microdeflectometry - a novel tool to acquire 3D microtopography with nanometer height resolution

    We introduce "microdeflectometry", a novel technique for measuring the microtopography of specular surfaces. The primary data are the local slopes of the surface under test. Measuring slope instead of height implies high information efficiency and extreme sensitivity to local shape irregularities. The lateral resolution can be better than one micron, while the resulting height resolution is in the range of one nanometer. Microdeflectometry can be supplemented by methods that expand the depth of field, with the potential to provide quantitative 3D imaging with SEM-like features.
    Comment: 3 pages, 11 figures, LaTeX, zip file; accepted for publication in Optics Letters.
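    The core idea above — measuring slope and integrating to recover height — can be illustrated with a minimal synthetic sketch (not the authors' implementation; the surface, dimensions, and integration scheme are illustrative assumptions):

```python
import numpy as np

# Hypothetical illustration: deflectometry measures surface slope s(x);
# the height profile h(x) is recovered (up to an offset) by integrating
# the slope. A synthetic nanometer-scale ripple checks the round trip.
x = np.linspace(0.0, 10e-6, 1000)               # 10 um lateral span
h_true = 5e-9 * np.sin(2 * np.pi * x / 2e-6)    # 5 nm amplitude ripple

slope = np.gradient(h_true, x)                  # what the instrument measures

# Trapezoidal integration of the slope recovers height up to a constant.
h_rec = np.concatenate(
    ([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x)))
)
h_rec += h_true[0] - h_rec[0]                   # fix the integration constant

print(np.max(np.abs(h_rec - h_true)))           # sub-nanometer error
```

    Because the integration constant is unobservable, deflectometric height maps are defined only up to a global offset, which is why the method excels at local shape irregularities rather than absolute height.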

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, Microsoft released the new Kinect One, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison of both devices. To conduct the comparison, we propose a framework of seven different experimental setups, which serves as a generic basis for evaluating range cameras such as the Kinect. The experiments have been designed to capture the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can also be adapted to any other range sensing device. The overall goal of this paper is to provide solid insight into the pros and cons of either device, so that scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of each device.
    Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).

    Range Camera Self-Calibration Based on Integrated Bundle Adjustment via Joint Setup with a 2D Digital Camera

    Time-of-flight cameras based on Photonic Mixer Device (PMD) technology are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range-camera self-calibration method via a joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and exterior orientation parameters of the camera. The calibration approach is based on a photogrammetric bundle adjustment of observation equations originating from the collinearity condition and a range error model. The addition of a digital camera to the calibration process overcomes the limitations of the range camera's small field of view and low pixel resolution. The tests are performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high-contrast targets. An average improvement of 83% in the RMS of range error and 72% in the RMS of coordinate residuals, over that achieved with basic calibration, was realized in an independent accuracy assessment. Our proposed calibration method also achieved 25% and 36% improvements in the RMS of range error and coordinate residuals, respectively, over those obtained by integrated calibration of the single PMD camera.
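    The systematic range errors mentioned above are commonly modeled as an offset, a linear scale term, and cyclic ("wiggling") terms periodic in the modulation wavelength. The following sketch is not the paper's estimator (which embeds the model in a full bundle adjustment); it only shows, under assumed parameter values and a hypothetical 7.5 m modulation wavelength, how such model parameters could be recovered from reference distances by linear least squares:

```python
import numpy as np

LAMBDA = 7.5  # assumed modulation wavelength in metres

def range_error(d, b0, b1, a1, a2):
    """Illustrative ToF range-error model: offset + scale + cyclic terms."""
    u = 2 * np.pi * d / LAMBDA
    return b0 + b1 * d + a1 * np.sin(u) + a2 * np.cos(u)

# Simulate distorted measurements at known reference distances,
# then recover the error parameters with linear least squares.
rng = np.random.default_rng(0)
d_true = np.linspace(0.5, 7.0, 200)
params_true = (0.05, 0.01, 0.02, -0.015)        # assumed, for illustration
d_meas = d_true + range_error(d_true, *params_true) \
       + rng.normal(0.0, 1e-4, d_true.size)     # small measurement noise

u = 2 * np.pi * d_true / LAMBDA
A = np.column_stack([np.ones_like(d_true), d_true, np.sin(u), np.cos(u)])
params_est, *_ = np.linalg.lstsq(A, d_meas - d_true, rcond=None)
print(params_est)  # close to params_true
```

    In the paper's approach these range-error parameters are estimated jointly with the interior and exterior orientation inside the bundle adjustment, rather than against independently known reference distances as in this simplified sketch.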

    Real-time processing of depth and color video streams to improve the reliability of depth maps

    Depth is useful information in vision for understanding the geometrical properties of an environment. Depth is traditionally computed as a disparity map acquired by a stereoscopic system but, over the last few years, several manufacturers have released single-lens cameras that directly capture depth information (also called range). This is an important technological breakthrough, although range signals remain difficult to handle in practice for many reasons (low resolution, noise, low frame rate, etc.), and practitioners still struggle to use range data in their applications. The purpose of this paper is to give a brief introduction to range data captured with a camera, discuss common limitations, and propose techniques to cope with the difficulties typically encountered with range cameras. These techniques are based on simultaneous views of the scene by a color camera and a depth camera, which are combined to improve their interpretation in real time.
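    One standard way to combine a registered color image with a noisy depth map — a sketch in the spirit of the abstract, not necessarily the paper's method — is a joint bilateral filter: depth is smoothed with weights taken from the (cleaner) color guidance image, so depth discontinuities stay aligned with color edges. All parameter values below are illustrative assumptions:

```python
import numpy as np

def joint_bilateral(depth, gray, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Smooth a noisy float depth map using a grayscale guidance image.

    Weights combine spatial proximity with similarity in the guidance
    image, so smoothing does not cross color edges.
    """
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    pad_d = np.pad(depth, radius, mode="edge")
    pad_g = np.pad(gray, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            patch_g = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            w_range = np.exp(-((patch_g - gray[y, x]) ** 2)
                             / (2.0 * sigma_r**2))
            weights = spatial * w_range
            out[y, x] = np.sum(weights * patch_d) / np.sum(weights)
    return out
```

    The per-pixel Python loop is for clarity only; a real-time implementation would use a vectorized or GPU formulation of the same weighting.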

    Understanding and ameliorating non-linear phase and amplitude responses in AMCW Lidar

    Amplitude modulated continuous wave (AMCW) lidar systems commonly suffer from non-linear phase and amplitude responses due to a number of known factors such as aliasing and multipath interference. In order to produce useful range and intensity information it is necessary to remove these perturbations from the measurements. We review the known causes of non-linearity, namely aliasing, temporal variation in correlation waveform shape, and mixed pixels/multipath interference. We also introduce other sources of non-linearity, including crosstalk, modulation waveform envelope decay, and non-circularly symmetric noise statistics, that have been ignored in the literature. An experimental study is conducted to evaluate techniques for the mitigation of non-linearity, and it is found that harmonic cancellation provides a significant improvement in phase and amplitude linearity.
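    For context on where the phase and amplitude responses discussed above come from, here is a minimal sketch of ideal four-bucket AMCW demodulation (a textbook scheme, not this paper's experimental setup): with correlation samples at phase offsets of 0°, 90°, 180°, and 270°, phase and amplitude follow from a single-bin DFT, and range from the phase. The non-linearities studied in the paper are precisely the deviations of real hardware from this idealized model.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def demodulate(c0, c90, c180, c270, f_mod):
    """Ideal four-bucket AMCW demodulation (no aliasing, no multipath)."""
    phase = np.arctan2(c90 - c270, c0 - c180) % (2.0 * np.pi)
    amplitude = 0.5 * np.hypot(c90 - c270, c0 - c180)
    rng = C * phase / (4.0 * np.pi * f_mod)  # unambiguous up to C / (2 f_mod)
    return phase, amplitude, rng

# Synthetic check: a target at 3 m, 30 MHz modulation, ideal sinusoidal
# correlation c(tau) = B + A * cos(phi - tau) with B = 1.0, A = 0.5.
f = 30e6
phi = 4.0 * np.pi * f * 3.0 / C
samples = [1.0 + 0.5 * np.cos(phi - t)
           for t in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
print(demodulate(*samples, f)[2])  # approximately 3.0
```

    When the correlation waveform is not a pure sinusoid, its higher harmonics leak into this phase estimate — the effect that harmonic cancellation, found most effective in the paper's experiments, is designed to suppress.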

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.