8,680 research outputs found

    A Depth Space Approach for Evaluating Distance to Objects -- with Application to Human-Robot Collision Avoidance

    We present a novel approach to estimating the distance between a generic point in Cartesian space and objects detected with a depth sensor. This information is crucial in many robotic applications, e.g., collision avoidance, contact point identification, and augmented reality. The key idea is to perform all distance evaluations directly in the depth space. This allows distance estimation to also consider the frustum generated by a pixel of the depth image, which accounts for both the pixel size and the occluded points. Different techniques for aggregating distance data coming from multiple object points are proposed. We compare the depth space approach with the commonly used Cartesian space and configuration space approaches, showing that the presented method provides better results and faster execution times. An application to human-robot collision avoidance using a KUKA LWR IV robot and a Microsoft Kinect sensor illustrates the effectiveness of the approach.
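    The occlusion-aware distance test described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pinhole intrinsics (`FX`, `FY`, `CX`, `CY`) are assumed values, and the occluded case is handled with a simple conservative rule (ignore the unknown depth axis) rather than the paper's full frustum treatment.

```python
import math

# Assumed pinhole intrinsics for illustration (not from the paper).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def pixel_to_cartesian(u, v, d):
    """Back-project a depth pixel (u, v) with depth d (meters) to a 3D point."""
    x = (u - CX) * d / FX
    y = (v - CY) * d / FY
    return (x, y, d)

def depth_space_distance(point, u, v, d):
    """Distance from a Cartesian point to the obstacle observed at pixel (u, v).

    If the point lies deeper than the measured depth, it may sit in the
    region occluded by the observed surface; since the obstacle's extent
    along the ray is unknown, the conservative choice is to drop the
    depth axis and keep only the image-plane distance.
    """
    px, py, pz = point
    ox, oy, oz = pixel_to_cartesian(u, v, d)
    if pz <= oz:
        # Point is in front of the observed surface: full 3D distance.
        return math.sqrt((px - ox)**2 + (py - oy)**2 + (pz - oz)**2)
    # Point is possibly occluded: ignore the depth axis.
    return math.sqrt((px - ox)**2 + (py - oy)**2)
```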

    Real-time computation of distance to dynamic obstacles with multiple depth sensors

    We present an efficient method to evaluate distances between dynamic obstacles and a number of points of interest (e.g., placed on the links of a robot) when using multiple depth cameras. A depth-space-oriented discretization of the Cartesian space is introduced that best represents the workspace monitored by a depth camera, including occluded points. A depth grid map can be initialized offline from the arrangement of the multiple depth cameras, and its peculiar search characteristics allow fusing online the information given by the multiple sensors in a very simple and fast way. The real-time performance of the proposed approach is shown by means of collision avoidance experiments in which two Kinect sensors monitor a human-robot coexistence task.
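    The multi-sensor fusion idea can be sketched with a toy grid. This assumes a shared workspace discretization where each sensor contributes a per-cell distance-to-obstacle value (infinite where the cell is unobserved or occluded for that sensor); the grids and values below are hypothetical, and the fusion rule shown (per-cell minimum) is only one simple way to combine such maps.

```python
import numpy as np

# Hypothetical per-sensor distance grids over the same workspace
# discretization: each cell holds the distance to the nearest obstacle
# as seen by that sensor (inf where the cell is unobserved/occluded).
grid_a = np.full((4, 4), np.inf)
grid_b = np.full((4, 4), np.inf)
grid_a[1, 2] = 0.3          # sensor A sees an obstacle near this cell
grid_b[1, 2] = 0.5          # sensor B sees it from farther away
grid_b[3, 0] = 0.2          # only sensor B observes this region

# Fusion: per-cell minimum, so a cell occluded for one sensor can
# still be covered by another sensor's measurement.
fused = np.minimum(grid_a, grid_b)
```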

    Collision avoidance in human-robot interaction using kinect vision system combined with robot’s model and data

    Human-Robot Interaction (HRI) is a widely addressed subject today. Collision avoidance is one of the main strategies that allow space sharing and interaction without contact between human and robot. It is thus usual to use a 3D depth camera sensor, which may involve issues related to the robot being occluded in the camera view. While several works overcame this issue by applying the infinite depth principle or increasing the number of cameras, in the current work we developed a new and original approach based on the combination of a 3D depth sensor (Microsoft Kinect V2) and the proprioceptive robot position sensors. This method uses the principle of a limited safety contour around the obstacle to dynamically estimate the robot-obstacle distance, and then generates the repulsive force that controls the robot. For validation, our approach is applied in real time to avoid collisions between dynamic obstacles (humans or objects) and the end-effector of a real 7-DOF KUKA LBR iiwa collaborative robot. Several strategies based on distancing and its combination with dodging were tested. Results have shown reactive and efficient collision avoidance, ensuring a minimum obstacle-robot distance (of ≈ 240 mm), even when the robot is in a zone occluded from the Kinect camera view.
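    The "repulsive force inside a safety contour" idea common to this abstract and the next can be sketched with a classic artificial-potential-field repulsion. This is a generic textbook formulation, not the paper's controller; the safety radius mirrors the ≈ 240 mm figure from the abstract, while the gain and function names are illustrative assumptions.

```python
import numpy as np

D_SAFE = 0.24   # safety contour radius in meters (≈ 240 mm, as in the abstract)
K_REP = 1.0     # repulsive gain (illustrative value)

def repulsive_force(robot_p, obstacle_p):
    """Artificial-potential repulsion, active only inside the safety contour.

    Outside the contour (or at exactly zero distance, where the direction
    is undefined) the force is zero; inside, its magnitude grows as the
    robot penetrates the contour, pushing it away from the obstacle.
    """
    diff = np.asarray(robot_p, float) - np.asarray(obstacle_p, float)
    d = np.linalg.norm(diff)
    if d >= D_SAFE or d == 0.0:
        return np.zeros(3)
    mag = K_REP * (1.0 / d - 1.0 / D_SAFE) / d**2
    return mag * diff / d
```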

    Collision avoidance interaction between human and a hidden robot based on kinect and robot data fusion

    Human-Robot Interaction (HRI) is a widely addressed subject today. In order to ensure coexistence and space sharing between human and robot, collision avoidance is one of the main strategies for interaction between them without contact. It is thus usual to use a 3D depth camera sensor (Microsoft Kinect V2), which may involve issues related to the robot being occluded in the camera view. While several works overcame this issue by applying the infinite depth principle or increasing the number of cameras, in the current work we developed and applied an original new approach that combines the data of one 3D depth sensor (Kinect) and proprioceptive robot sensors. This method uses the principle of a limited safety contour around the obstacle to dynamically estimate the robot-obstacle distance, and then generates the repulsive force that controls the robot. For validation, our approach is applied in real time to avoid collisions between dynamic obstacles (humans or objects) and the end-effector of a real 7-DOF KUKA LBR iiwa collaborative robot. Our method is experimentally compared with existing methods based on the infinite depth principle when the robot is hidden from the camera view by the obstacle. Results showed smoother behavior and more stability of the robot using our method. Extensive experiments of our method, using several strategies based on distancing and its combination with dodging, were conducted. Results have shown reactive and efficient collision avoidance, ensuring a minimum obstacle-robot distance (of ≈ 240 mm), even when the robot is in a zone occluded from the Kinect camera view.

    Motion Planning for the On-orbit Grasping of a Non-cooperative Target Satellite with Collision Avoidance

    A method for grasping a tumbling non-cooperative target is presented, based on nonlinear optimization and collision avoidance. Motion constraints on the robot joints as well as on the end-effector forces are considered. The cost functions of interest address the robustness of the planned solutions during the tracking phase as well as the actuation energy. The method is applied in simulation to different operational scenarios.
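    The general pattern of constrained motion planning described here (minimize a cost subject to limits and a clearance constraint) can be sketched with a toy nonlinear program. Everything below is an assumed, simplified stand-in for the paper's formulation: the decision variable is a single end-effector position, the obstacle is a sphere, and the cost, bounds, and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup (illustrative values, not from the paper): reach a grasp
# point while keeping clearance from a spherical obstacle.
p_target = np.array([0.5, -0.3, 0.8])   # desired end-effector position
obstacle = np.array([0.2, 0.1, 0.4])    # obstacle center
clearance = 0.15                        # required minimum distance

def cost(p):
    # Energy-style cost: penalize deviation from the grasp target.
    return np.sum((p - p_target) ** 2)

# Inequality constraint: distance to the obstacle must exceed the clearance.
cons = [{"type": "ineq",
         "fun": lambda p: np.linalg.norm(p - obstacle) - clearance}]
bounds = [(-1.0, 1.0)] * 3              # workspace limits per axis

res = minimize(cost, x0=np.zeros(3), bounds=bounds, constraints=cons)
```

    In this toy instance the unconstrained optimum already satisfies the clearance constraint, so the solver simply converges to the target; the interesting cases arise when the obstacle blocks the direct solution and the constraint deforms it.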