1,299 research outputs found

    Joint Kinect and multiple external cameras simultaneous calibration


    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) are commonly used for depth acquisition in consumer imaging, robotics and many other applications. Unfortunately, each technique suffers from its individual limitations preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages such as synthetic aperture refocusing with TOF imaging advantages such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
    Comment: 9 pages, 8 figures, Accepted to 3DV 201
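
    The virtual-aperture synthesis mentioned in this abstract amounts to the usual light-field shift-and-sum operation applied to a grid of sub-aperture TOF images. A minimal NumPy sketch is given below; the (U, V, H, W) layout and the `slope` parameter that selects the refocus depth are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def refocus_depth_field(views, slope):
    """Shift-and-sum refocusing of a (U, V, H, W) stack of sub-aperture
    TOF images. `slope` maps a view's angular offset to a pixel shift and
    thus selects the synthetic focal plane (assumed convention)."""
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # pixel shift proportional to the view's offset from the central view
            du = int(round(slope * (u - cu)))
            dv = int(round(slope * (v - cv)))
            acc += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return acc / (U * V)

# usage: a synthetic 5x5 grid of 64x64 TOF amplitude (or depth) images
views = np.random.rand(5, 5, 64, 64)
refocused = refocus_depth_field(views, slope=1.5)
```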

    Creating Simplified 3D Models with High Quality Textures

    This paper presents an extension to the KinectFusion algorithm which allows creating simplified 3D models with high quality RGB textures. This is achieved through (i) creating model textures using images from an HD RGB camera that is calibrated with the Kinect depth camera, (ii) using a modified scheme to update model textures in an asymmetrical colour volume that contains a higher number of voxels than the geometry volume, (iii) simplifying the dense polygon mesh model using a quadric-based mesh decimation algorithm, and (iv) creating and mapping 2D textures to every polygon in the output 3D model. The proposed method is implemented in real-time by means of GPU parallel processing. Visualization via ray casting of both the geometry and colour volumes provides users with real-time feedback on the currently scanned 3D model. Experimental results show that the proposed method is capable of preserving the model texture quality even for a heavily decimated model and that, when reconstructing small objects, photorealistic RGB textures can still be reconstructed.
    Comment: 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Page 1 -
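
    The asymmetric colour volume in step (ii) can be pictured as a weighted running average kept per colour voxel, on a grid finer than the geometry (TSDF) grid. The sketch below is a minimal illustration under assumed volume sizes and a generic KinectFusion-style update rule; it is not the paper's exact scheme.

```python
import numpy as np

# Assumed volume sizes illustrating the asymmetric layout: the colour volume
# holds more voxels than the geometry (TSDF) volume. Values are placeholders.
GEOM_RES, COLOUR_RES = 64, 128
tsdf_vol   = np.ones((GEOM_RES,) * 3, dtype=np.float32)            # truncated signed distances
colour_vol = np.zeros((COLOUR_RES,) * 3 + (3,), dtype=np.float32)  # RGB per colour voxel
colour_wgt = np.zeros((COLOUR_RES,) * 3, dtype=np.float32)         # accumulation weights

def integrate_colour(voxel_idx, rgb, weight=1.0, max_weight=64.0):
    """Weighted running-average update of one colour voxel, in the spirit of
    KinectFusion-style integration (a sketch, not the paper's exact scheme)."""
    i, j, k = voxel_idx
    w_old = colour_wgt[i, j, k]
    colour_vol[i, j, k] = (colour_vol[i, j, k] * w_old +
                           np.asarray(rgb, np.float32) * weight) / (w_old + weight)
    colour_wgt[i, j, k] = min(w_old + weight, max_weight)

# usage: fuse one HD-camera RGB sample (normalised to [0, 1]) into the colour volume
integrate_colour((10, 20, 30), rgb=(0.8, 0.6, 0.4))
```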

    Calibration of Kinect-type RGB-D sensors for robotic applications

    The paper presents a calibration model suitable for software-based calibration of Kinect-type RGB-D sensors. Additionally, it describes a two-step calibration procedure that requires only a simple checkerboard pattern. Finally, the paper presents a calibration case study showing that calibration may improve sensor accuracy by a factor of 3 to 5, depending on the anticipated use of the sensor. The results obtained in this study using calibration models of different levels of complexity reveal that depth measurement correction is an important component of calibration, as it may reduce the errors in sensor readings by 50%.
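
    As an illustration of the depth-measurement correction this abstract highlights, the sketch below fits the simplest possible model, a single global linear term, against reference distances such as those derived from checkerboard poses. All numbers are made up, and the paper itself evaluates calibration models of several levels of complexity, so this shows only the most basic case.

```python
import numpy as np

# Hypothetical readings: raw Kinect depths vs. reference depths (e.g. from
# checkerboard poses). All values are made up for illustration only.
raw_depth = np.array([812.0, 1015.0, 1210.0, 1498.0, 1790.0, 2110.0])  # mm
ref_depth = np.array([800.0, 1000.0, 1200.0, 1500.0, 1800.0, 2150.0])  # mm

# Fit a global linear correction  d_ref ~ a * d_raw + b  by least squares.
a, b = np.polyfit(raw_depth, ref_depth, deg=1)

def correct_depth(d_raw):
    """Apply the fitted linear correction to a raw depth reading (mm)."""
    return a * d_raw + b

print(correct_depth(1600.0))
```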

    Can shoulder range of movement be measured accurately using the Microsoft Kinect sensor plus Medical Interactive Recovery Assistant (MIRA) software?

    Background: This study compared the accuracy of measuring shoulder range of movement (ROM) with a simple laptop-sensor combination vs. trained observers (shoulder physiotherapists and shoulder surgeons), using motion capture (MoCap) laboratory equipment as the gold standard. Methods: The Microsoft Kinect sensor (Microsoft Corp., Redmond, WA, USA) tracks 3-dimensional human motion. Ordinarily used with an Xbox (Microsoft Corp.) video game console, Medical Interactive Recovery Assistant (MIRA) software (MIRA Rehab Ltd., London, UK) allows this small sensor to measure shoulder movement with a standard computer. Shoulder movements of 49 healthy volunteers were simultaneously measured by trained observers (TOs), MoCap, and the MIRA device. Internal rotation was assessed with the shoulder abducted 90° and external rotation with the shoulder adducted. Visual estimation and MIRA measurements were compared with gold standard MoCap measurements for agreement using Bland-Altman methods. Results: There were 1670 measurements analyzed. The MIRA evaluations of all 4 cardinal shoulder movements were significantly more precise, with narrower limits of agreement, than the measurements of trained observers. MIRA achieved ±11° (95% confidence interval [CI], 8.7°-12.6°) for forward flexion vs. ±16° (95% CI, 14.6°-17.6°) by trained observers. For abduction, MIRA showed ±11° (95% CI, 8.7°-12.8°) against ±15° (95% CI, 13.4°-16.2°) for trained observers. MIRA attained ±10° (95% CI, 8.1°-11.9°) during external rotation measurement, whereas trained observers only reached ±21° (95% CI, 18.7°-22.6°). For internal rotation, MIRA achieved ±9° (95% CI, 7.2°-10.4°), which was again better than TOs at ±18° (95% CI, 16.0°-19.3°). Conclusions: A laptop combined with a Microsoft Kinect sensor and the MIRA software can measure shoulder movements with acceptable levels of accuracy. This technology, which can be easily set up, may also allow precise shoulder ROM measurement outside the clinic setting.
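
    The agreement analysis reported above follows standard Bland-Altman methodology: the mean difference (bias) between two measurement methods and the 95% limits of agreement around it (bias ± 1.96 SD). The sketch below shows that computation on made-up angle values; it does not use the study's data.

```python
import numpy as np

def bland_altman_limits(device, gold):
    """Bland-Altman agreement between a device and a gold standard:
    returns the mean bias and the 95% limits of agreement (bias ± 1.96 SD)."""
    device, gold = np.asarray(device, float), np.asarray(gold, float)
    diff = device - gold
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# usage with made-up angles (degrees): device vs. MoCap for one movement
device_deg = [168, 172, 175, 160, 165, 170]
mocap_deg  = [170, 175, 173, 158, 168, 172]
bias, lo, hi = bland_altman_limits(device_deg, mocap_deg)
print(f"bias {bias:.1f} deg, limits of agreement [{lo:.1f}, {hi:.1f}] deg")
```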
