3 research outputs found

    3D Orientation Estimation of Industrial Parts from 2D Images using Neural Networks

    In this paper we propose a pose regression method employing a convolutional neural network (CNN) fed with single 2D images to estimate the 3D orientation of a specific industrial part. The network training dataset is generated by rendering pose views from a textured CAD model to compensate for the lack of real images and their associated pose labels. Using several lighting conditions and material reflectances increases the robustness of the prediction and makes it possible to anticipate challenging industrial situations. We show that with a geodesic loss function the network estimates the pose of a rendered view to within 5 degrees, while inference on real images gives visually convincing results suitable for any pose refinement process.
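    The abstract does not give the loss formula explicitly; as a minimal sketch, one common geodesic loss on SO(3) measures the rotation angle of the relative rotation between the predicted and ground-truth matrices, written here for PyTorch with rotation-matrix network outputs (the function name geodesic_loss and the epsilon clamp are illustrative assumptions, not taken from the paper):

        import torch

        def geodesic_loss(R_pred, R_true, eps=1e-7):
            """Mean geodesic angle (radians) between batches of 3x3 rotation matrices."""
            # Relative rotation; its rotation angle is the geodesic distance on SO(3).
            R_rel = torch.matmul(R_pred.transpose(1, 2), R_true)
            trace = R_rel[:, 0, 0] + R_rel[:, 1, 1] + R_rel[:, 2, 2]
            # Clamp so numerical drift outside [-1, 1] does not make acos return NaN.
            cos_angle = torch.clamp((trace - 1.0) / 2.0, -1.0 + eps, 1.0 - eps)
            return torch.acos(cos_angle).mean()

    Under this formulation, the reported 5-degree accuracy corresponds to a mean geodesic angle of roughly 0.087 radians.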

    Visual Servo Based Space Robotic Docking for Active Space Debris Removal

    This thesis developed a 6DOF pose detection algorithm, based on machine learning, that provides the orientation and location of an object under various lighting conditions and viewing angles, for the purpose of space robotic rendezvous and docking control. The computer vision algorithm was paired with a virtual robotic simulation to test the feasibility of using the proposed algorithm for visual servoing. The thesis also developed a method for generating virtual training images and corresponding ground-truth data that include both location and orientation information. Traditional computer vision techniques struggle to determine the 6DOF pose of an object when certain colors or edges are not found; training a network is therefore a better choice. The 6DOF pose detection algorithm was implemented in MATLAB and Python, the robotic simulation in Simulink and ROS Gazebo, and the training data generation in Python and Blender.
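    The abstract states that the training data were generated with Python and Blender; a minimal sketch of such a pipeline, assuming Blender's bpy scripting API and execution inside Blender (the object name "part", light name "light", sample count, and output paths are hypothetical):

        import json
        import math
        import random
        import bpy

        obj = bpy.data.objects["part"]      # imported CAD model (hypothetical name)
        scene = bpy.context.scene
        labels = []

        for i in range(100):
            # Sample a random orientation (Euler angles, for brevity).
            rx, ry, rz = (random.uniform(0.0, 2.0 * math.pi) for _ in range(3))
            obj.rotation_euler = (rx, ry, rz)

            # Vary the lighting, echoing the varied conditions mentioned in the abstract.
            bpy.data.lights["light"].energy = random.uniform(100.0, 1000.0)

            # Render the view to disk.
            scene.render.filepath = f"//renders/view_{i:04d}.png"
            bpy.ops.render.render(write_still=True)

            # Record the ground-truth pose (orientation and location) for this image.
            labels.append({"image": f"view_{i:04d}.png",
                           "rotation_euler": [rx, ry, rz],
                           "location": list(obj.location)})

        with open(bpy.path.abspath("//labels.json"), "w") as f:
            json.dump(labels, f, indent=2)

    The JSON label file pairs each rendered image with its orientation and location, which is the kind of ground-truth record the thesis describes generating for training.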