
    Improving 3D Keypoint Detection from Noisy Data Using Growing Neural Gas

    3D sensors provide valuable information for mobile robotic tasks such as scene classification or object recognition, but they often produce noisy data that makes it impossible to apply classical keypoint detection and feature extraction techniques. Noise removal and downsampling have therefore become essential steps in 3D data processing. In this work, we propose a 3D filtering and downsampling technique based on a Growing Neural Gas (GNG) network. The GNG method is able to deal with outliers present in the input data and represents the 3D space by an induced Delaunay triangulation of the input. Experiments show that state-of-the-art keypoint detectors improve their performance when the GNG output representation is used as input data. Descriptors extracted at the improved keypoints achieve better matching in robotics applications such as 3D scene registration.
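
    The pipeline above hinges on fitting a GNG network to the raw cloud and then running keypoint detection on the GNG nodes. Below is a minimal sketch of a standard (Fritzke-style) GNG used for filtering and downsampling a noisy cloud; all parameter values are illustrative defaults rather than the paper's configuration, and removal of isolated nodes is omitted for brevity.

```python
import numpy as np

def gng_downsample(points, max_nodes=200, n_iter=20000, eps_b=0.05, eps_n=0.006,
                   a_max=50, lam=100, alpha=0.5, d=0.995, seed=0):
    """Fit a GNG network to a (noisy) Nx3 cloud; returns node positions and edges."""
    rng = np.random.default_rng(seed)
    nodes = points[rng.choice(len(points), 2, replace=False)].astype(float)
    error = np.zeros(2)
    edges = {}                                   # (i, j) with i < j  ->  age

    def neighbors(i):
        return [b if a == i else a for (a, b) in edges if i in (a, b)]

    for t in range(1, n_iter + 1):
        x = points[rng.integers(len(points))]
        dist = np.linalg.norm(nodes - x, axis=1)
        s1, s2 = np.argsort(dist)[:2]            # winner and runner-up

        for e in list(edges):                    # age edges touching the winner
            if s1 in e:
                edges[e] += 1
        error[s1] += dist[s1] ** 2               # accumulate local error

        nodes[s1] += eps_b * (x - nodes[s1])     # pull winner toward the sample
        for n in neighbors(s1):
            nodes[n] += eps_n * (x - nodes[n])   # and its topological neighbors

        edges[tuple(sorted((s1, s2)))] = 0       # create/refresh winner-runner edge
        edges = {e: a for e, a in edges.items() if a <= a_max}   # drop stale edges
        # (a full GNG would also delete nodes left without edges; omitted here)

        if t % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))            # node with largest accumulated error
            nbrs = neighbors(q)
            if nbrs:
                f = max(nbrs, key=lambda n: error[n])
                nodes = np.vstack([nodes, 0.5 * (nodes[q] + nodes[f])])
                r = len(nodes) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
                error[q] *= alpha
                error[f] *= alpha
                error = np.append(error, error[q])
        error *= d                               # global error decay

    return nodes, list(edges)

# usage: nodes, edges = gng_downsample(noisy_cloud)  # feed `nodes` to a keypoint detector
```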

    Geometric 3D point cloud compression

    The use of 3D data in mobile robotics applications provides valuable information about the robot’s environment, but the huge amount of 3D information is usually unmanageable given the robot’s storage and computing capabilities. Data compression is necessary to store and manage this information while preserving as much of it as possible. In this paper, we propose a 3D lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation together with a set of point/area information. The compression system can be customized to achieve different compression or accuracy ratios. It also supports a color segmentation stage that preserves the original scene color and provides a realistic scene reconstruction. The design of the method allows fast scene reconstruction, useful for further visualization or processing tasks. This work has been supported by the Spanish Government grant DPI2013-40534-R.
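
    A hedged sketch of the plane-based representation described above: a simple RANSAC (written from scratch here, not the paper's exact extractor) finds one dominant plane, its inliers are projected into a 2D plane frame, subsampled by a tunable ratio (the compression/accuracy knob), and stored as plane parameters plus a Delaunay triangulation via scipy.spatial.Delaunay. Function names, thresholds, and the subsampling rule are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def ransac_plane(pts, n_iter=200, thresh=0.02, seed=0):
    """Fit one dominant plane to an Nx3 cloud; returns (normal, point) and inlier indices."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                  # degenerate sample
        n = n / np.linalg.norm(n)
        inliers = np.flatnonzero(np.abs((pts - p0) @ n) < thresh)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (n, p0)
    return best_model, best_inliers

def compress_plane(pts, keep_ratio=0.05, seed=0):
    """Compress the dominant plane as plane frame + 2D vertices + triangle indices."""
    (n, p0), inliers = ransac_plane(pts)
    plane_pts = pts[inliers]
    # orthonormal basis (u, v) spanning the plane, then project inliers to 2D
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    uv = (plane_pts - p0) @ np.column_stack([u, v])
    # subsampling ratio is the compression vs. accuracy knob
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(uv), max(4, int(keep_ratio * len(uv))), replace=False)
    tri = Delaunay(uv[keep])
    return {"origin": p0, "normal": n, "u": u, "v": v,
            "vertices": uv[keep], "triangles": tri.simplices}

# reconstruction maps each stored 2D vertex (x, y) back to 3D as origin + x*u + y*v
```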

    3D reconstruction of medical images from slices automatically landmarked with growing neural models

    In this study, we utilise a novel approach to segment the ventricular system in a series of high-resolution T1-weighted MR images and present a fast method for reconstructing the brain ventricles. The method processes brain sections and establishes a fixed number of landmarks on each section to reconstruct the 3D surface of the ventricles. Automated landmark extraction is accomplished with a self-organising network, the Growing Neural Gas (GNG), which topographically maps the low dimensionality of the network onto the high dimensionality of the contour manifold without requiring a priori knowledge of the structure of the input space. Moreover, our GNG landmark method is tolerant to noise and eliminates outliers. The method accelerates the classical surface reconstruction and filtering processes and offers higher accuracy than methods of similar efficiency, such as Voxel Grid.
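
    The reconstruction step can be pictured as placing the same fixed number of landmarks on every section contour and stitching corresponding landmarks on adjacent slices into triangles. The sketch below uses a plain arc-length resampler as a stand-in for the GNG landmarking described in the abstract; the function names, slice spacing, and triangulation pattern are illustrative.

```python
import numpy as np

def resample_contour(contour_xy, k=64):
    """Place k evenly spaced landmarks along a closed 2D contour (stand-in for GNG)."""
    closed = np.vstack([contour_xy, contour_xy[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    targets = np.linspace(0.0, s[-1], k, endpoint=False)
    x = np.interp(targets, s, closed[:, 0])
    y = np.interp(targets, s, closed[:, 1])
    return np.column_stack([x, y])

def stack_slices(contours, z_spacing=1.0, k=64):
    """Return (vertices, triangles) stitching fixed landmarks across adjacent slices."""
    verts, tris = [], []
    for i, c in enumerate(contours):
        lm = resample_contour(np.asarray(c, float), k)
        verts.append(np.column_stack([lm, np.full(k, i * z_spacing)]))
    verts = np.vstack(verts)
    for i in range(len(contours) - 1):
        a, b = i * k, (i + 1) * k                        # index offsets of two rings
        for j in range(k):
            jn = (j + 1) % k
            tris.append([a + j, a + jn, b + j])          # lower triangle of the quad
            tris.append([a + jn, b + jn, b + j])         # upper triangle of the quad
    return verts, np.array(tris)
```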

    Integrating Multiple Uncertain Views of a Static Scene Acquired by an Agile Camera System

    This paper addresses the problem of merging multiple views of a static scene into a common coordinate frame, explicitly considering uncertainty. It assumes that a static world is observed by an agile vision system, whose movements are known with a limited precision, and whose observations are inaccurate and incomplete. It concentrates on acquiring uncertain three-dimensional information from multiple views, rather than on modeling or representing the information at higher levels of abstraction. Two particular problems receive attention: identifying the transformation between two viewing positions; and understanding how errors and uncertainties propagate as a result of applying the transformation. The first is solved by identifying the forward kinematics of the agile camera system. The second is solved by first treating a measurement of camera position and orientation as a uniformly distributed random vector whose component variances are related to the resolution of the encoding potentiometers, then treating an object position measurement as a normally distributed random vector whose component variances are experimentally derived, and finally determining the uncertainty of the merged points as functions of these variances
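
    A first-order sketch of the propagation described above: the encoder readings are treated as uniformly distributed errors with variance resolution²/12, the point measurement as a Gaussian, and both are pushed through the rigid transform via Jacobians. The two-angle pan/tilt model used here is a simplification of the paper's full kinematic chain, chosen only to keep the example short; all numbers are illustrative.

```python
import numpy as np

def rot(pan, tilt):
    """Rotation of a hypothetical pan/tilt head: pan about z, then tilt about x."""
    cp, sp, ct, st = np.cos(pan), np.sin(pan), np.cos(tilt), np.sin(tilt)
    Rz = np.array([[cp, -sp, 0.0], [sp, cp, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, ct, -st], [0.0, st, ct]])
    return Rz @ Rx

def propagate(p_cam, cov_p, pan, tilt, enc_res, t):
    """Mean and covariance of a camera-frame point expressed in the world frame."""
    R = rot(pan, tilt)
    f0 = R @ p_cam                      # rotation part only; t carries no angle error
    p_world = f0 + t

    # encoder quantization -> uniform error with variance (resolution^2)/12
    var_angle = enc_res ** 2 / 12.0
    cov_angles = np.diag([var_angle, var_angle])

    # numerical Jacobian of R(pan, tilt) @ p_cam with respect to the two angles
    eps = 1e-6
    J = np.column_stack([
        (rot(pan + eps, tilt) @ p_cam - f0) / eps,
        (rot(pan, tilt + eps) @ p_cam - f0) / eps,
    ])

    # first-order combination of angle uncertainty and measurement uncertainty
    cov_world = J @ cov_angles @ J.T + R @ cov_p @ R.T
    return p_world, cov_world

# usage (all numbers illustrative):
# p, C = propagate(np.array([0.2, 0.1, 1.5]), np.diag([1e-4, 1e-4, 4e-4]),
#                  pan=0.3, tilt=-0.1, enc_res=np.deg2rad(0.1), t=np.zeros(3))
```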

    Collaborative Robotic Path Planning for Industrial Spraying Operations on Complex Geometries

    Implementation of automated robotic solutions for complex tasks currently faces a few major hurdles. For instance, lack of effective sensing and task variability – especially in high-mix/low-volume processes – creates too much uncertainty to reliably hard-code a robotic work cell. Current collaborative frameworks generally focus on integrating the sensing required for a physically collaborative implementation. While this paradigm has proven effective for mitigating uncertainty by mixing human cognitive function and fine motor skills with robotic strength and repeatability, there are many instances where physical interaction is impractical but human reasoning and task knowledge are still needed. The proposed framework consists of key modules such as a path planner, a path simulator, and a result simulator. An integrated user interface allows the operator to interact with these modules and edit the path plan before approving the task for automatic execution by a manipulator that need not be collaborative. Application of the collaborative framework is illustrated for a pressure-washing task in a remanufacturing environment that requires one-off path planning for each part. The framework can also be applied to various other tasks, such as spray painting, sandblasting, deburring, grinding, and shot peening. Specifically, automated path planning for industrial spraying operations offers the potential to automate surface preparation and coating in such environments. Autonomous spray path planners in the literature have been limited to generally continuous and convex surfaces, which is not true of most real parts. There is a need for planners that consistently handle concavities and discontinuities, such as sharp corners, holes, protrusions, or other surface abnormalities, when building a path. The path planner uses a slicing-based method to generate path trajectories. It identifies and quantifies the importance of concavities and surface abnormalities, and decides whether they should be considered in the path plan, by comparing the true part geometry to the convex-hull path. If necessary, the path is then adapted by adjusting the movement speed or offset distance at individual points along the path. The development of the path planner also considers which adaptive method is more effective and the trade-offs associated with adapting the path.
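
    One way to picture the concavity test on a single slice: measure each cross-section point's depth relative to the convex-hull path and, where that depth exceeds a threshold, adjust the standoff offset (or, alternatively, the travel speed). The sketch below is illustrative only; the thresholds, the adjustment rule, and the use of scipy.spatial.ConvexHull are assumptions, not the planner's actual implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def point_to_segment(p, a, b):
    """Distance from point p to the segment a-b (all 2D)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def adapt_slice_path(section_xy, standoff=0.15, depth_thresh=0.02, min_standoff=0.05):
    """Return a per-point standoff offset for one cross-section polyline."""
    pts = np.asarray(section_xy, float)
    hull = ConvexHull(pts)
    hv = pts[hull.vertices]                               # hull polygon vertices (ordered)
    edges = list(zip(hv, np.roll(hv, -1, axis=0)))        # closed hull edges

    offsets = np.full(len(pts), standoff)
    for i, p in enumerate(pts):
        depth = min(point_to_segment(p, a, b) for a, b in edges)   # concavity depth
        if depth > depth_thresh:
            # pull the nozzle closer inside concavities so coverage is preserved;
            # slowing the traverse speed at these points is the alternative knob
            offsets[i] = max(min_standoff, standoff - depth)
    return offsets
```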

    Enhanced life-size holographic telepresence framework with real-time three-dimensional reconstruction for dynamic scene

    Three-dimensional (3D) reconstruction captures and reproduces a 3D representation of a real object or scene, and 3D telepresence allows a user to feel the presence of a remote user transferred as a digital representation. Holographic displays are one alternative for removing the restriction of wearable hardware; they use light diffraction to present 3D images to viewers. However, capturing a life-size or full-body human in real time is still challenging because it involves a dynamic scene: the object to be reconstructed is always moving, changes shape, and requires multiple capturing views. The volume of life-size data multiplies with each additional depth camera, which leads to high computation times, especially for dynamic scenes, and transferring high volumes of 3D images over a network in real time can also cause lag and latency. Hence, the aim of this research is to enhance a life-size holographic telepresence framework with real-time 3D reconstruction for dynamic scenes. Three stages were carried out. In the first stage, real-time 3D reconstruction with the Marching Squares algorithm is combined with data acquisition of dynamic scenes captured by a life-size setup of multiple Red Green Blue-Depth (RGB-D) cameras. The second stage transmits the data acquired from the multiple RGB-D cameras in real time and performs a double compression for the life-size holographic telepresence. The third stage evaluates the life-size holographic telepresence framework integrated with the real-time 3D reconstruction of dynamic scenes. The findings show that enhancing the life-size holographic telepresence framework with real-time 3D reconstruction reduces computation time and improves the 3D representation of the remote user in a dynamic scene. With the double compression, the life-size 3D representation remains smooth, and the delay or latency during frame synchronization in remote communication is minimized.
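
    The "double compression" stage can be illustrated by a generic two-stage scheme for a single depth frame: a lossy quantization to 16-bit millimetres followed by lossless zlib deflate. The codecs actually used in the thesis are not specified here; this sketch only shows how two stages compound to shrink the per-frame payload before network transmission, and all names and parameters are assumptions.

```python
import numpy as np
import zlib

def compress_depth(depth_m, max_range_m=5.0):
    """float32 HxW depth map in metres -> (compressed bytes, original shape)."""
    mm = np.clip(depth_m, 0.0, max_range_m) * 1000.0        # lossy stage: clamp and
    q = mm.astype(np.uint16)                                # quantize to 1 mm steps
    return zlib.compress(q.tobytes(), level=6), q.shape     # lossless stage: deflate

def decompress_depth(blob, shape):
    """Inverse of compress_depth: bytes -> float32 depth map in metres."""
    q = np.frombuffer(zlib.decompress(blob), dtype=np.uint16).reshape(shape)
    return q.astype(np.float32) / 1000.0

# usage: blob, shape = compress_depth(frame); frame_hat = decompress_depth(blob, shape)
```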