
    Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction

    Interactive real-time scene acquisition from hand-held depth cameras has recently gained much momentum, enabling applications in ad-hoc object acquisition, augmented reality and other fields. A key challenge to online reconstruction remains error accumulation in the reconstructed camera trajectory, due to drift-inducing instabilities in the range scan alignments of the underlying iterative-closest-point (ICP) algorithm. Various strategies have been proposed to mitigate that drift, including SIFT-based pre-alignment, color-based weighting of ICP pairs, stronger weighting of edge features, and so on. In our work, we focus on surface curvature as a feature that is detectable on range scans alone and hence does not depend on accurate multi-sensor alignment. In contrast to previous work that took curvature into consideration, however, we treat curvature as an independent quantity that we consistently incorporate into every stage of the real-time reconstruction pipeline, including densely curvature-weighted ICP, range image fusion, local surface reconstruction, and rendering. Using multiple benchmark sequences, and in direct comparison to other state-of-the-art online acquisition systems, we show that our approach significantly reduces drift, both when analyzing individual pipeline stages in isolation and across the online reconstruction pipeline as a whole.
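    To make the idea of curvature-weighted ICP concrete, the following minimal sketch (not the authors' implementation) shows one linearized point-to-plane ICP step in which each correspondence is down-weighted when the curvature estimated at the source point disagrees with the curvature at its matched target point. The Gaussian weighting kernel, the sigma value, and the function name are assumptions for illustration; the inputs are matched (N,3) point arrays, (N,3) target normals, and (N,) curvature estimates.

        import numpy as np

        def curvature_weighted_icp_step(src_pts, src_curv, tgt_pts, tgt_nrm, tgt_curv, sigma=0.05):
            """One curvature-weighted point-to-plane ICP update (hypothetical sketch)."""
            # Curvature-similarity weight per correspondence (assumed Gaussian kernel).
            w = np.exp(-((src_curv - tgt_curv) ** 2) / (2.0 * sigma ** 2))

            # Linearized point-to-plane residuals: r_i = n_i . ((I + [omega]_x) p_i + t - q_i).
            A = np.zeros((len(src_pts), 6))
            b = np.zeros(len(src_pts))
            for i, (p, q, n) in enumerate(zip(src_pts, tgt_pts, tgt_nrm)):
                A[i, :3] = np.cross(p, n)   # coefficients of the rotation increment omega
                A[i, 3:] = n                # coefficients of the translation increment t
                b[i] = np.dot(n, q - p)     # signed point-to-plane distance

            # Weighted least squares: scale rows by sqrt(w) and solve for x = (omega, t).
            sw = np.sqrt(w)
            x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
            return x  # small-angle rotation and translation increment

    The same curvature weights could in principle be reused in later pipeline stages (fusion, surface reconstruction), which is the consistency the abstract emphasizes.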

    SPIRA: an automatic system to support lower limb injury assessment

    Lower limb injuries, especially those related to the knee joint, are some of the most common and severe injuries among sport practitioners. Consequently, a growing interest in the identification of subjects at high risk of injury has emerged in recent years. One of the most commonly used injury risk factors is the measurement of joint angles during the execution of dynamic movements. To that end, techniques such as human motion capture and video analysis have been widely used. However, traditional procedures to measure joint angles present certain limitations that make them impractical in common clinical settings. This work presents SPIRA, a novel 2D video analysis system aimed at supporting practitioners during the evaluation of joint angles in functional tests. The system employs an infrared camera to track retro-reflective markers attached to the patient’s body joints and provides a real-time measurement of the joint angles in a cost- and time-effective way. The information gathered by the sensor is processed and managed through a computer application that guides the expert during the execution of the tests and expedites the analysis of the results. In order to show the potential of the SPIRA system, a case study has been conducted, performing the analysis with both the proposed system and a gold standard in 2D offline video analysis. The results (ICC(ρ) = 0.996) reveal a good agreement between both tools and support the reliability of SPIRA.
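    As a rough illustration of the kind of measurement involved (not the SPIRA code), a 2D joint angle can be computed from three tracked marker positions as the angle at the middle joint; the marker coordinates below are made-up pixel values.

        import numpy as np

        def joint_angle_deg(proximal, joint, distal):
            """Angle (degrees) at `joint` between the segments joint->proximal and joint->distal."""
            u = np.asarray(proximal, float) - np.asarray(joint, float)
            v = np.asarray(distal, float) - np.asarray(joint, float)
            cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

        # Example: hip, knee and ankle marker positions from one video frame (made-up values).
        hip, knee, ankle = (320, 110), (330, 240), (300, 380)
        print(joint_angle_deg(hip, knee, ankle))  # knee flexion/extension angle in this frame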

    RGB-D Sensors Data Quality Assessment and Improvement for Advanced Applications

    Since the advent of the first Kinect as a motion control device for the Microsoft XBOX platform (November 2010), several similar active and low-cost range sensing devices, capable of capturing a digital RGB image and the corresponding depth map (RGBD), have been introduced in the market. Although initially designed for the video gaming market with the scope of capturing an approximated 3D image of a human body in order to create gesture-based interfaces, RGBD sensors’ low cost and their ability to gather streams of 3D data in real-time with a frame rate of 15 to 30 fps, boosted their popularity for several other purposes, including 3D multimedia interaction, robot navigation, 3D body scanning for garment design and proximity sensors for automotive design. However, data quality is not the RGBD sensors’ strong point, and additional considerations are needed for maximizing the amount of information that can be extracted by the raw data, together with proper criteria for data validation and verification. The present chapter provides an overview of RGBD sensors technology and an analysis of how random and systematic 3D measurement errors affect the global 3D data quality in the various technological implementations. Typical applications are also reported, with the aim of providing readers with the basic knowledge and understanding of the potentialities and challenges of this technology
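    As a hedged illustration of the random error behaviour discussed for structured-light RGB-D sensors, the depth noise is often modelled as growing roughly quadratically with distance (sigma_z ~ k * z^2). The intrinsics and the coefficient below are placeholder values, not figures from the chapter; the sketch also shows the standard pinhole back-projection from a depth map to a point cloud.

        import numpy as np

        FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics (pixels)
        K_NOISE = 2.85e-3                             # assumed quadratic noise coefficient (1/m)

        def backproject(depth_m):
            """Convert an HxW depth map in metres into an HxWx3 point cloud (camera frame)."""
            h, w = depth_m.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            x = (u - CX) * depth_m / FX
            y = (v - CY) * depth_m / FY
            return np.dstack((x, y, depth_m))

        def depth_sigma(depth_m):
            """Approximate per-pixel random depth error in metres (sigma_z ~ k * z^2)."""
            return K_NOISE * depth_m ** 2

        # With this placeholder coefficient: a few millimetres at 1 m, a few centimetres at 4 m.
        print(depth_sigma(np.array([1.0, 4.0])))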

    Dealing with Missing Depth: Recent Advances in Depth Image Completion and Estimation

    Even though obtaining 3D information has received significant attention in scene capture systems in recent years, numerous challenges remain within scene depth estimation, which is one of the fundamental parts of any 3D vision system focusing on RGB-D images. This has led to the creation of an area of research whose goal is to complete the missing 3D information post capture. In many downstream applications, incomplete scene depth is of limited value, and thus techniques are required to fill the holes that exist in terms of both missing depth and colour scene information. An analogous problem exists within the scope of scene filling post object removal in the same context. Although considerable research has resulted in notable progress in the synthetic expansion or reconstruction of missing colour scene information in both statistical and structural forms, work on the plausible completion of missing scene depth is contrastingly limited. Furthermore, recent advances in machine learning using deep neural networks have enabled complete depth estimation in a monocular or stereo framework, circumventing the need for any completion post-processing and hence increasing both efficiency and functionality. In this chapter, a brief overview of the advances in the state-of-the-art approaches within RGB-D completion is presented, while noting related solutions in the space of traditional texture synthesis and colour image completion for hole filling. Recent advances in employing learning-based techniques for this and related depth estimation tasks are also explored and presented.
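    To make the problem setting concrete, the toy sketch below (not one of the surveyed methods) fills every invalid, zero-valued depth pixel with the value of its nearest valid neighbour via a Euclidean distance transform; real completion and learning-based estimation approaches are far more sophisticated, but operate on the same kind of hole mask.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def fill_depth_nearest(depth):
            """Fill zero-valued (missing) depth pixels with the nearest valid depth value."""
            missing = depth == 0
            if not missing.any():
                return depth
            # Indices of the nearest valid pixel for every location in the image.
            idx = distance_transform_edt(missing, return_distances=False, return_indices=True)
            return depth[tuple(idx)]

        # Example: a small depth map (metres) with a single missing value in the middle.
        d = np.array([[1.0, 1.0, 1.2],
                      [1.1, 0.0, 1.3],
                      [1.1, 1.2, 1.3]])
        print(fill_depth_nearest(d))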