
    Dense RGB-D SLAM and object localisation for robotics and industrial applications

    Dense reconstruction and object localisation are two critical steps in robotic and industrial applications. The former entails the joint estimation of camera egomotion and the structure of the surrounding environment, also known as Simultaneous Localisation and Mapping (SLAM), while the latter aims to locate objects in the reconstructed scenes. This thesis addresses the challenges of dense SLAM with RGB-D cameras and of object localisation for robotic and industrial applications.

    Camera drift is a central issue in camera egomotion estimation: because errors in camera pose estimation accumulate, the estimated camera trajectory becomes inaccurate and the reconstruction of the environment inconsistent. This thesis analyses camera drift in SLAM under a probabilistic inference framework and proposes an online map fusion strategy with standard deviation estimation based on frame-to-model camera tracking. The camera pose is estimated by aligning the input image with the global map model, and the global map merges the information from the images by weighted fusion with standard deviation modelling. In addition, a pre-screening step is applied before map fusion to preclude the adverse effect of accumulated errors and noise on camera egomotion estimation. Experimental results indicated that the proposed method mitigates camera drift and improves the global consistency of camera trajectories.

    Another critical challenge for dense RGB-D SLAM in industrial scenarios is handling mechanical and plastic components, which usually have reflective and shiny surfaces. Photometric alignment in frame-to-model camera tracking tends to fail on such objects because of inconsistencies between the intensity patterns of the images and the global map model. This thesis addresses this problem and proposes RSO-SLAM, a SLAM approach for reflective and shiny object reconstruction. RSO-SLAM adopts frame-to-model camera tracking and combines local photometric alignment with global geometric registration. This study revealed the effectiveness and excellent performance of the proposed RSO-SLAM on both plastic and metallic objects. In addition, a case study involving the cover of an electric vehicle battery with a metallic surface demonstrated the superior performance of the RSO-SLAM approach in the reconstruction of a common industrial product.

    With a reconstructed point cloud model of the object, the problem of object localisation is tackled as point cloud registration. Iterative Closest Point (ICP) is arguably the best-known method for point cloud registration, but it is susceptible to sub-optimal convergence because of the multimodal solution space. This thesis proposes the Bees Algorithm (BA) enhanced with a Singular Value Decomposition (SVD) procedure for point cloud registration. SVD accelerates the local search of the BA, helping the algorithm to rapidly identify local optima, and also enhances the precision of the obtained solutions, while the global outlook of the BA ensures adequate exploration of the whole solution space. Experimental results demonstrated the remarkable performance of the SVD-enhanced BA in terms of consistency and precision, and additional tests on noisy datasets demonstrated the robustness of the proposed procedure to imprecision in the models.
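    The SVD step referred to above is, in essence, the closed-form least-squares solution for the rigid transform between two sets of corresponding points (the Kabsch/Umeyama procedure). Below is a minimal sketch of that building block, assuming correspondences are already given; the function name and structure are illustrative, not the thesis' implementation:

```python
import numpy as np

def svd_rigid_align(src, dst):
    """Closed-form rigid alignment: find R, t minimising
    ||R @ src_i + t - dst_i|| over corresponding points.
    src, dst: (N, 3) arrays of matched 3D points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)   # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

    Within the BA, each candidate solution's local search could be refined by one such closed-form step, which would account for the reported gains in speed and precision; the exact coupling used in the thesis is not reproduced here.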

    Pushing the envelope for estimating poses and actions via full 3D reconstruction

    Estimating the poses and actions of human bodies and hands is an important task in the computer vision community because of its vast range of applications, including human–computer interaction, virtual and augmented reality, and medical image analysis. Challenges: there are many in-the-wild challenges in this task (see chapter 1). Among them, this thesis focuses on two challenges that can be relieved by incorporating 3D geometry: (1) the inherent 2D-to-3D ambiguity driven by the non-linear 2D projection process when capturing 3D objects, and (2) the lack of sufficient, high-quality annotated datasets, due to the high dimensionality of the subjects' attribute space and the inherent difficulty of annotating 3D coordinate values. Contributions: we first jointly tackle the 2D-to-3D ambiguity and the insufficient-data issue by (1) explicitly reconstructing 2.5D and 3D samples and using them as new training data to train a pose estimator. Next, we (2) encode 3D geometry in the training process of the action recognizer to reduce the 2D-to-3D ambiguity. In the appendix, we propose (3) a new synthetic hand pose dataset that can support more complete attribute variation and multi-modal experiments in the future. Experiments: throughout the experiments, we found that (1) 2.5D depth map reconstruction and data augmentation improve the accuracy of a depth-based hand pose estimation algorithm, (2) 3D mesh reconstruction can be used to generate new RGB data that improve the accuracy of an RGB-based dense hand pose estimation algorithm, and (3) 3D geometry from 3D poses and scene layouts can be successfully exploited to reduce the 2D-to-3D ambiguity in the action recognition problem.
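    The 2D-to-3D ambiguity in challenge (1) follows directly from pinhole projection: every 3D point along a viewing ray maps to the same pixel, so depth cannot be recovered from a single 2D observation. A tiny numerical illustration of this (generic symbols, not taken from the thesis):

```python
import numpy as np

def project(X, f=1.0):
    """Pinhole projection: (X, Y, Z) -> (f*X/Z, f*Y/Z)."""
    return f * X[:2] / X[2]

P = np.array([0.3, -0.2, 2.0])
for s in (1.0, 1.5, 3.0):     # slide the point along its viewing ray
    print(project(s * P))     # same 2D point every time: depth is lost
```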

    Egocentric Perception of Hands and Its Applications


    Computational Learning for Hand Pose Estimation

    Rapid advances in human–computer interaction interfaces have promised a realistic environment for gaming and entertainment over the last few years. However, traditional input devices such as trackballs, keyboards, and joysticks remain a bottleneck for natural interaction between human and computer, as the two degrees of freedom of these devices cannot suitably emulate interactions in three-dimensional space. Consequently, comprehensive hand tracking technology is expected to be a smart and intuitive alternative to these input devices, enhancing virtual and augmented reality experiences. In addition, the recent emergence of low-cost depth-sensing cameras has led to the broad use of RGB-D data in computer vision, raising expectations of a full 3D interpretation of hand movements for human–computer interaction interfaces. Although the use of hand gestures or hand postures has become essential for a wide range of applications in computer games and augmented/virtual reality, 3D hand pose estimation is still an open and challenging problem for the following reasons: (i) the hand pose lies in a high-dimensional space because each finger and the palm is associated with several degrees of freedom, (ii) the fingers exhibit self-similarity and often occlude each other, (iii) global 3D rotations make pose estimation more difficult, and (iv) hands occupy only a few pixels in an image, and the noise in the acquired data, coupled with fast finger movement, confounds continuous hand tracking. The success of hand tracking naturally depends on synthesizing our knowledge of the hand (i.e., geometric shape and constraints on pose configurations) with latent features about hand poses extracted from the RGB-D data stream (i.e., the region of interest, key feature points such as fingertips and joints, and temporal continuity).

    In this thesis, we propose novel methods that leverage the paradigm of analysis by synthesis and create a prediction model using a population of realistic 3D hand poses. The overall goal of this work is to design a concrete framework so that computers can learn and understand the perceptual attributes of human hands (i.e., self-occlusions and self-similarities of the fingers), and to develop a pragmatic solution to the real-time hand pose estimation problem implementable on a standard computer. This thesis can be broadly divided into four parts: learning the hand (i) from recommendations of similar hand poses, (ii) from low-dimensional visual representations, (iii) by hallucinating geometric representations, and (iv) from a manipulating object. Each part covers our algorithmic contributions to solving the 3D hand pose estimation problem. Additionally, the research work in the appendix proposes a pragmatic technique for applying our ideas to mobile devices with low computational power.

    Following the given structure, we first review the most relevant works on depth sensor-based 3D hand pose estimation in the literature, both with and without a manipulated object. The two approaches prevalent for categorizing hand pose estimation, model-based methods and appearance-based methods, are discussed in detail. In this chapter, we also introduce works relevant to deep learning and attempts to achieve efficient compression of network structures. Next, we describe a synthetic 3D hand model and its motion constraints for simulating realistic human hand movements. The primary research work starts in the following chapter.
We discuss our attempts to produce a better estimation model for 3D hand pose estimation by learning hand articulations from recommendations of similar poses. Specifically, the unknown pose parameters for input depth data are estimated by collaboratively learning from the known parameters of all neighborhood poses (see the sketch below). Subsequently, we discuss deep-learned, discriminative, low-dimensional features and a hierarchical solution to the stated problem based on the matrix completion framework. This work is further extended by incorporating a function of geometric properties on the surface of the hand, described by heat diffusion, which robustly captures both the local geometry of the hand and global structural representations. The problem of the hand's interactions with a physical object is considered in the following chapter. The main insight is that the interacting object can be a source of constraints on hand poses. In this view, we exploit the dependency of the pose on the shape of the object to learn discriminative features of the hand–object interaction, rather than losing hand information to partial or full object occlusions. Subsequently, we present a compressive learning technique in the appendix. Our approach is flexible, enabling us to add more layers and go deeper in the deep learning architecture while keeping the number of parameters the same. Finally, we conclude this thesis by summarizing the presented approaches to hand pose estimation and proposing future directions to achieve further performance improvements through (i) realistically rendered synthetic hand images, (ii) incorporating RGB images as input, (iii) hand personalization, (iv) the use of unstructured point clouds, and (v) embedding sensing techniques.
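    At its simplest, the "recommendations of similar poses" idea can be read as a distance-weighted blend of the known joint parameters of neighboring database poses. The sketch below follows that simplified reading; the thesis' actual matrix-completion formulation is not reproduced, and all names are illustrative:

```python
import numpy as np

def knn_pose_estimate(query_feat, db_feats, db_poses, k=5):
    """Blend the joint parameters of the k most similar database
    poses, weighted by feature-space proximity.
    query_feat: (D,), db_feats: (N, D), db_poses: (N, P)."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    nn = np.argsort(dists)[:k]             # k nearest neighbor poses
    w = 1.0 / (dists[nn] + 1e-8)           # closer poses count for more
    return (w[:, None] * db_poses[nn]).sum(axis=0) / w.sum()
```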

    Real-Time, Multiple Pan/Tilt/Zoom Computer Vision Tracking and 3D Positioning System for Unmanned Aerial System Metrology

    The study of the structural characteristics of Unmanned Aerial Systems (UASs) continues to be an important field of research for developing state-of-the-art nano/micro systems. A metrology system using computer vision (CV) tracking and 3D point extraction would provide an avenue for advancing these theoretical developments. This work provides a portable, scalable system capable of real-time tracking, zooming, and 3D position estimation of a UAS using multiple cameras. Current state-of-the-art photogrammetry systems use retro-reflective markers or single-point lasers to obtain object poses and/or positions over time; a CV pan/tilt/zoom (PTZ) system has the potential to circumvent their limitations. The system developed in this paper exploits parallel processing and the GPU for CV tracking, using optical flow and known camera motion, in order to capture a moving object with two PTU cameras. The parallel-processing technique developed in this work is versatile, allowing other CV methods to be tested with a PTZ system using known camera motion. Utilizing the known camera poses, the object's 3D position is estimated, and focal lengths are estimated to fill the image by a desired amount. The system is tested against truth data obtained using an industrial system.
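    With the camera poses known, the 3D position step amounts to triangulating the viewing rays from the two cameras; a common choice is the least-squares point closest to all rays. A minimal sketch under that assumption (the paper's exact formulation is not reproduced here):

```python
import numpy as np

def triangulate_rays(centers, dirs):
    """Least-squares 3D point closest to a set of camera rays.
    centers: (M, 3) camera positions; dirs: (M, 3) unit directions."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for c, d in zip(centers, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)         # solves (sum P_i) X = sum P_i c_i
```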

    Model-based human upper body tracking using interest points in real-time video

    Vision-based human motion analysis has received huge attention from researchers because of its many applications, such as automated surveillance, video indexing, human–machine interaction, traffic monitoring, and vehicle navigation. However, the field still contains several open problems, and to date, despite very promising approaches, no definitive solution has been found that solves them efficiently. In this regard, this thesis presents a model-based human upper body pose estimation and tracking system using interest points (IPs) in real-time video. In the first stage, we propose a novel IP-based background-subtraction algorithm to segment the foreground IPs of each frame from the background ones. Afterwards, the foreground IPs of any two consecutive frames are matched to each other using a dynamic hybrid local-spatial IP matching algorithm proposed in this research. The IP matching algorithm starts by using the local feature descriptors of the IPs to find an initial set of possible matches. Two filtering steps are then applied to the results to increase precision by deleting mismatched pairs, and to improve recall, a spatial matching process is applied to the remaining unmatched points. Finally, a two-stage hierarchical-global model-based pose estimation and tracking algorithm based on Particle Swarm Optimisation (PSO) is proposed to track the human upper body through consecutive frames. Given the pose and the foreground IPs in the previous frame and the matched points in the current frame, the proposed PSO-based algorithm estimates the current pose hierarchically in the first stage by minimizing the discrepancy between the hypothesized pose and the actual matched observed points. A global PSO is then applied to the pose estimated in the first stage to perform a consistency check and refine the pose.
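    Both stages rest on the standard particle swarm update. A minimal generic PSO sketch follows; the two-stage hierarchical/global scheme and the interest-point discrepancy measure are the thesis' own, so `cost` here is only a placeholder for that discrepancy:

```python
import numpy as np

def pso_minimise(cost, dim, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Generic particle swarm optimisation over a `dim`-dimensional
    pose vector; `cost` maps a pose hypothesis to its discrepancy."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                                # velocities
    pbest = x.copy()                                    # per-particle bests
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()                # swarm best
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, 1)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest
```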