4 research outputs found

    Recognition of Planar Segments in Point Cloud Based on Wavelet Transform

    Get PDF
    Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback for the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online use. The bottleneck, however, is the lack of real-time algorithms for recognizing geometric primitives (e.g., planes and natural quadrics) in a scanned point cloud. One of the most important and most frequently occurring geometric primitives in engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for the recognition (segmentation and fitting) of planar segments in a point cloud. To segment planar regions effectively, we exploit the orthogonality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmenting the planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point cloud structuring, a z-buffer algorithm with mesh triangles represented in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
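
    The segmentation idea lends itself to a short illustration. Below is a minimal sketch (not the authors' code) of how wavelet detail coefficients can flag planar spans along one row of a structured depth map: a wavelet with two vanishing moments (here 'db2' via the PyWavelets library) annihilates the linear depth profile of a plane, so near-zero detail coefficients indicate planar regions, while large ones mark abrupt changes such as edges. The wavelet choice and the threshold are illustrative assumptions.

        import numpy as np
        import pywt

        def planar_mask_row(depth_row, wavelet="db2", thresh=1e-3):
            """Mark samples of one depth-map row that lie on a locally planar span."""
            _, detail = pywt.dwt(depth_row, wavelet)            # single-level DWT
            detail_up = np.repeat(detail, 2)[: len(depth_row)]  # coeffs back to sample grid
            return np.abs(detail_up) < thresh                   # small detail => planar

        # Example: a tilted plane with a step discontinuity in the middle.
        row = np.concatenate([0.01 * np.arange(64), 0.5 + 0.01 * np.arange(64)])
        print(planar_mask_row(row)[60:68])  # False near the step, True elsewhere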

    3-D Integration of Robot Vision and Laser Data With Semiautomatic Calibration in Augmented Reality Stereoscopic Visual Interface

    No full text
    This paper proposes an augmented reality visualization interface that simultaneously presents information from visual and laser sensors, further enhanced by stereoscopic viewing and 3-D graphics. Graphic elements are used to represent laser measurements that are aligned with video information in 3-D space. This methodology enables an operator to intuitively comprehend scene layout and proximity information, and thus to respond accurately and in a timely manner. The use of graphic elements to assist teleoperation, sometimes discussed in the literature, is here developed into an innovative approach that aligns virtual and real objects in 3-D space and colors them suitably to facilitate comprehension of object proximity during navigation. The work builds on the authors' previous experience with stereoscopic teleoperation. The approach is tested on a real telerobotic system in which a user operates a mobile robot located several kilometers away. The results showed the simplicity and effectiveness of the proposed approach.
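
    As a concrete illustration of the alignment step, the following minimal sketch (not the paper's implementation) projects laser range points into the camera image with a standard pinhole model and tags them with a proximity color, in the spirit of the interface's colored graphic elements. The intrinsic matrix K, the extrinsics (R, t) standing in for the paper's semiautomatic calibration, and the distance bands are all assumed values.

        import numpy as np

        K = np.array([[525.0, 0.0, 320.0],   # assumed pinhole intrinsics
                      [0.0, 525.0, 240.0],
                      [0.0, 0.0, 1.0]])

        def project_and_color(points_laser, R, t, near=0.5, far=3.0):
            """Map Nx3 laser points to pixel coordinates plus a proximity tag."""
            pts_cam = points_laser @ R.T + t                 # laser frame -> camera frame
            pts_cam = pts_cam[pts_cam[:, 2] > 0]             # keep points in front of camera
            uv = (pts_cam @ K.T)[:, :2] / pts_cam[:, 2:3]    # perspective projection
            dist = np.linalg.norm(pts_cam, axis=1)
            color = np.where(dist < near, "red",             # red = close obstacle
                     np.where(dist < far, "yellow", "green"))
            return uv, color

        # Example: identity extrinsics, two points ahead of the camera.
        uv, color = project_and_color(np.array([[0.0, 0.0, 0.4], [0.5, 0.0, 4.0]]),
                                      np.eye(3), np.zeros(3))
        print(uv, color)  # pixel coordinates with tags ['red', 'green']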

    Intuitive Robot Teleoperation Based on Haptic Feedback and 3D Visualization

    Get PDF
    Robots are required in many jobs. Jobs related to tele-operation can be very challenging, often requiring a destination to be reached quickly and with minimal collisions. To succeed in these jobs, human operators tele-operate a robot manually through a user interface. The design of the user interface, and of the information presented in it, therefore becomes a critical element for the successful completion of robot tele-operation tasks. Effective and timely robot tele-navigation relies mainly on the intuitiveness of the interface and on the richness and presentation of the feedback it provides. This project investigated the use of both haptic and visual feedback in a user interface for robot tele-navigation. The aim was to overcome some of the limitations observed in state-of-the-art works, turning what is sometimes described as a conflict between modalities into added value that improves tele-navigation performance. The key issue is to combine different human sensory modalities coherently and to benefit from 3-D vision as well. The proposed approach was inspired by how visually impaired people use walking sticks to navigate. Haptic feedback can help a user comprehend distances to surrounding obstacles and the distribution of those obstacles. This is achieved by relying entirely on on-board range sensors and by processing their readings through a simple scheme that regulates the magnitude and direction of the environmental force feedback provided to the haptic device. A specific algorithm is also used to render the distribution of very close objects and provide appropriate touch sensations. Scene visualization is provided by the system and shown to the user coherently with the haptic sensation. Different visualization configurations, from multi-viewpoint observation to 3-D visualization, were proposed and rigorously assessed through experiments to understand the advantages of the proposed approach and the performance variations among different 3-D display technologies. Over twenty users participated in a usability study composed of two major experiments. The first experiment compared the proposed haptic-feedback strategy with a typical state-of-the-art approach, including testing with multi-viewpoint visual observation. The second experiment investigated the performance of the proposed haptic-feedback strategy when combined with three different stereoscopic-3D visualization technologies. The results were encouraging, showing good performance with the proposed approach and an improvement over literature approaches to haptic feedback in robot tele-operation. It was also demonstrated that 3-D visualization can be beneficial for robot tele-navigation and does not conflict with haptic feedback when properly aligned with it. Performance may vary across 3-D visualization technologies, which is also discussed in the presented work.
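
    To make the force-feedback scheme concrete, here is a minimal sketch (not the thesis code, and with an assumed linear inverse-distance weighting) of how planar range readings could be combined into a single repulsive force vector for the haptic device, capped for safe rendering. The parameters d_max and f_max are illustrative choices.

        import numpy as np

        def repulsive_force(ranges, angles, d_max=2.0, f_max=1.0):
            """Combine range readings (m, rad) into one 2-D force away from obstacles.

            Readings beyond d_max contribute nothing; the magnitude grows as
            obstacles get closer and is capped at f_max for the haptic device.
            """
            ranges = np.asarray(ranges, dtype=float)
            angles = np.asarray(angles, dtype=float)
            near = ranges < d_max
            # Unit vectors pointing from each detected obstacle toward the robot.
            dirs = -np.stack([np.cos(angles[near]), np.sin(angles[near])], axis=1)
            weights = (d_max - ranges[near]) / d_max   # 0 at d_max, 1 at contact
            force = (dirs * weights[:, None]).sum(axis=0) if near.any() else np.zeros(2)
            norm = np.linalg.norm(force)
            return force if norm <= f_max else force * (f_max / norm)

        # Example: an obstacle 0.4 m dead ahead pushes the stylus backward.
        print(repulsive_force([0.4, 1.5, 3.0], [0.0, 0.8, -0.8]))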