555 research outputs found

    Non-rigid Reconstruction with a Single Moving RGB-D Camera

    We present a novel non-rigid reconstruction method using a moving RGB-D camera. Current approaches use only the non-rigid part of the scene and completely ignore the rigid background, yet non-rigid parts often lack sufficient geometric and photometric information for tracking large frame-to-frame motion. Our approach uses the camera pose estimated from the rigid background for foreground tracking, which enables robust foreground tracking in situations where large frame-to-frame motion occurs. Moreover, we propose a multi-scale deformation graph which improves non-rigid tracking without compromising the quality of the reconstruction. We also contribute a synthetic dataset, made publicly available for evaluating non-rigid reconstruction methods, which provides frame-by-frame ground-truth geometry of the scene, the camera trajectory, and background/foreground masks. Experimental results show that our approach is more robust in handling larger frame-to-frame motions and provides better reconstruction than state-of-the-art approaches.
    Comment: Accepted in International Conference on Pattern Recognition (ICPR 2018)
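The core idea of this entry, using the camera pose recovered from the rigid background to initialize non-rigid foreground tracking, can be sketched as follows. This is a minimal illustration, not the paper's code; the 4x4 camera-to-world pose matrices and the helper name are assumptions.

```python
import numpy as np

def init_foreground_points(points_prev, T_cam_prev, T_cam_curr):
    """Warp last-frame foreground points (N,3) into the current camera
    frame using the rigid-background camera poses (4x4, camera-to-world),
    giving non-rigid tracking a strong initial guess under large motion."""
    # Relative camera motion mapping previous-frame coordinates to current
    T_rel = np.linalg.inv(T_cam_curr) @ T_cam_prev
    pts_h = np.hstack([points_prev, np.ones((len(points_prev), 1))])
    return (T_rel @ pts_h.T).T[:, :3]
```

With this initialization, the subsequent non-rigid solver only has to account for the residual deformation, not the full camera motion.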

    Skeleton Driven Non-rigid Motion Tracking and 3D Reconstruction

    This paper presents a method which can track and 3D reconstruct the non-rigid surface motion of human performance using a moving RGB-D camera. 3D reconstruction of marker-less human performance is a challenging problem due to the large range of articulated motions and considerable non-rigid deformations. Current approaches use local optimization for tracking; these methods need many iterations to converge and may get stuck in local minima during sudden articulated movements. We propose a puppet model-based tracking approach using a skeleton prior, which provides a better initialization for tracking articulated movements. The proposed approach uses an aligned puppet model to estimate correct correspondences for human performance capture. We also contribute a synthetic dataset which provides frame-by-frame ground-truth geometry and skeleton joint locations for human subjects. Experimental results show that our approach is more robust when faced with sudden articulated motions, and provides better 3D reconstruction than existing state-of-the-art approaches.
    Comment: Accepted in DICTA 201
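A skeleton-driven puppet model of the kind described here is typically posed by weighting each surface vertex over the joint transforms (linear blend skinning). The sketch below shows that generic mechanism; the function name and array layouts are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def skin_vertices(verts, weights, joint_transforms):
    """Linear blend skinning: pose template vertices (V,3) using per-joint
    4x4 transforms, blended per vertex by the (V,J) weight matrix. This is
    the usual way a skeleton prior drives a puppet mesh for tracking."""
    verts_h = np.hstack([verts, np.ones((len(verts), 1))])        # (V,4)
    # Weighted sum of joint transforms, one blended 4x4 per vertex
    blended = np.einsum('vj,jab->vab', weights,
                        np.asarray(joint_transforms))             # (V,4,4)
    posed = np.einsum('vab,vb->va', blended, verts_h)             # (V,4)
    return posed[:, :3]
```

Aligning such a posed puppet to the depth data gives the correspondence initialization that local non-rigid optimization alone can miss during sudden articulated movements.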

    Software tool for the extrinsic calibration of infrared and RGBD cameras applied to thermographic inspection

    Context: Thermographic inspections are currently used to assess energy efficiency in electrical equipment and civil structures, or to detect failures in cooling systems and electrical or electronic devices. However, thermal images lack texture details, which prevents precise identification of the geometry of the scene or the objects in it. Method: In this work, the development of the software tool called DepTherm is described. This tool performs intrinsic and extrinsic calibration between infrared and RGBD cameras in order to fuse thermal, RGB, and RGBD images, as well as to record thermal and depth data. Additional features include user management, a visualization GUI for all three types of images, database storage, and report generation. Results: In addition to the integration tests performed to validate the functionality of DepTherm, two quantitative tests were conducted to evaluate its accuracy. A maximum re-projection error of 1.47±0.64 pixels was found, and the maximum mean error in registering an 11 cm side cube was 4.15 mm. Conclusions: The features of the DepTherm software tool are focused on facilitating thermographic inspections by capturing 3D scene models with thermal data.
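The re-projection error that DepTherm reports for its calibration can be computed from a standard pinhole model: project each known 3D point through the estimated intrinsics and extrinsics and measure the pixel distance to its detected 2D location. The sketch below is a generic version of that metric, with illustrative parameter names rather than DepTherm's actual API.

```python
import numpy as np

def reprojection_error(K, R, t, points_3d, points_2d):
    """RMS re-projection error in pixels for a pinhole camera.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation;
    points_3d: (N,3) calibration targets; points_2d: (N,2) detections."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
    proj = K @ cam                            # apply intrinsics
    uv = (proj[:2] / proj[2]).T               # perspective divide -> (N,2)
    err = np.linalg.norm(uv - points_2d, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```

A value around 1.47 pixels, as reported above, means the calibrated model predicts target locations to within roughly a pixel and a half on average.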

    Thermal-Kinect Fusion Scanning System for Bodyshape Inpainting and Estimation under Clothing

    In today's interactive world, 3D body scanning is necessary for making virtual avatars, in the apparel industry, for physical health assessment, and so on. The 3D scanners used in this process are very costly and also require the subject to be nearly naked or to wear special tight-fitting clothes. A cost-effective 3D body scanning system which can estimate body parameters under clothing would be the best solution in this regard. In our experiment we build such a body scanning system by fusing a Kinect depth sensor and a thermal camera. The Kinect senses the depth of the subject and creates a 3D point cloud from it, while the thermal camera senses the body heat of a person under clothing. Fusing these two sensors' images produces a thermally mapped 3D point cloud of the subject, from which body parameters can be estimated even under various clothes. Moreover, this fusion system is also a cost-effective one. We introduce a new pipeline for working with our fusion scanning system, and estimate and recover body shape under clothing. We capture Thermal-Kinect fusion images of the subjects in different clothing and produce both full and partial 3D point clouds. To recover the missing parts from our low-resolution scan, we fit a parametric human model to our images and perform Boolean operations with our scan data. Finally, we measure our final 3D point cloud scan to estimate the body parameters and compare them with the ground truth. We achieve a minimum average error rate of 0.75 cm compared to other approaches.
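The fusion step this entry describes, attaching a temperature to each Kinect point, amounts to projecting every 3D point into the calibrated thermal camera and sampling the thermal image there. The following is a minimal sketch under that reading; the function name, the 4x4 extrinsic `T_kinect_to_thermal`, and the intrinsics `K_th` are assumed inputs from a prior calibration, not the authors' code.

```python
import numpy as np

def thermal_map_cloud(points, K_th, T_kinect_to_thermal, thermal_img):
    """Return an (N,4) array [x, y, z, temperature] by projecting Kinect
    points (N,3) into the thermal image via calibrated extrinsics and
    intrinsics, then sampling the nearest thermal pixel."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_kinect_to_thermal @ pts_h.T)[:3]          # thermal-camera frame
    uv = K_th @ cam
    uv = (uv[:2] / uv[2]).T                            # pixel coordinates
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, thermal_img.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, thermal_img.shape[0] - 1)
    return np.hstack([points, thermal_img[v, u][:, None]])
```

Nearest-pixel sampling keeps the sketch short; bilinear interpolation would give smoother temperatures at little extra cost.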

    Assessing thermal imagery integration into object detection methods on ground-based and air-based collection platforms

    Object detection models commonly deployed on uncrewed aerial systems (UAS) focus on identifying objects in the visible spectrum using Red-Green-Blue (RGB) imagery. However, there is growing interest in fusing RGB with thermal long-wave infrared (LWIR) images to increase the performance of object detection machine learning (ML) models. Currently, LWIR ML models have received less research attention, especially for both ground- and air-based platforms, leading to a lack of baseline performance metrics evaluating LWIR, RGB, and LWIR-RGB fused object detection models. Therefore, this research contributes such quantitative metrics to the literature. The results found that the ground-based blended RGB-LWIR model exhibited superior performance compared to the RGB or LWIR approaches, achieving a mAP of 98.4%. Additionally, the blended RGB-LWIR model was also the only object detection model to work in both day and night conditions, providing superior operational capabilities. This research additionally contributes a novel labelled training dataset of 12,600 images for RGB, LWIR, and RGB-LWIR fused imagery, collected from ground-based and air-based platforms, enabling further multispectral machine-driven object detection research.
    Comment: 18 pages, 12 figures, 2 tables
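A "blended RGB-LWIR" input of the kind evaluated here can be produced, in its simplest form, by alpha-blending co-registered RGB and LWIR frames before they reach the detector. The sketch below shows that simple scheme as one plausible reading; the paper does not specify its blending method, and the function name and default alpha are assumptions.

```python
import numpy as np

def blend_rgb_lwir(rgb, lwir, alpha=0.5):
    """Pixel-wise blend of a co-registered RGB frame (H,W,3, uint8) with a
    single-channel LWIR frame (H,W, uint8). alpha weights the RGB term."""
    lwir3 = np.repeat(lwir[..., None], 3, axis=2)   # grayscale -> 3 channels
    out = alpha * rgb.astype(float) + (1.0 - alpha) * lwir3.astype(float)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the thermal channel still carries signal at night, a detector trained on such blended frames can keep working when the RGB content goes dark, consistent with the day-and-night capability reported above.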