    Comparison of Three Machine Vision Pose Estimation Systems Based on Corner, Line, and Ellipse Extraction for Satellite Grasping

    The primary objective of this research was to use three different types of features (corners, lines, and ellipses) for satellite grasping with a machine vision-based pose estimation system. The corner system tracks sharp corners or small features (holes or bolts) on the satellite; the line system tracks sharp edges; and the ellipse system tracks circular features on the satellite. The corner and line systems provided the 6-degree-of-freedom (DOF) pose (rotation matrix and translation vector) of the satellite with respect to the camera frame, while the ellipse system provided the 5-DOF pose (normal vector and center position) of the circular feature with respect to the camera frame. Satellite grasping is required for on-orbit satellite servicing and refueling. Three machine vision estimation systems (based on corner, line, and ellipse extraction) were studied and compared using a simulation environment. The corner extraction system was based on the Shi-Tomasi method; the line extraction system was based on the Hough transform; and the ellipse system was based on the fast ellipse extractor. Each system tracks its corresponding most prominent feature of the satellite. To evaluate the performance of each position estimation system, six maneuvers, three in translation (x, y, z) and three in rotation (roll, pitch, yaw), three different initial positions, and three different levels of Gaussian noise were considered in the virtual environment. In addition, virtual and real approach sequences using a robotic manipulator were performed to predict how each system would perform in a real application. Each system was compared using the mean and variance of the translational and rotational position estimation error. The virtual environment features a CAD model of a satellite created in SolidWorks containing three common satellite features: a square plate, a Marman ring, and a thruster.
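    The Shi-Tomasi method mentioned above scores a pixel by the smaller eigenvalue of the local gradient structure tensor; corners score high, while edges and flat regions score near zero. The following pure-Python sketch illustrates the idea only; the window size, gradient scheme, and synthetic image are illustrative assumptions, not details taken from this work:

```python
import math

def shi_tomasi_score(img, x, y, win=1):
    """Smaller eigenvalue of the 2x2 structure tensor in a window around (x, y)."""
    sxx = sxy = syy = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            gx = (img[j][i + 1] - img[j][i - 1]) / 2.0  # central-difference gradients
            gy = (img[j + 1][i] - img[j - 1][i]) / 2.0
            sxx += gx * gx
            sxy += gx * gy
            syy += gy * gy
    # Smaller eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    return tr / 2.0 - math.sqrt(max(tr * tr / 4.0 - det, 0.0))

# Tiny synthetic image: a bright square whose corner should outscore
# both a straight edge and a flat region.
img = [[1.0 if (i >= 4 and j >= 4) else 0.0 for i in range(9)] for j in range(9)]
corner = shi_tomasi_score(img, 4, 4)  # at the square's corner
edge = shi_tomasi_score(img, 6, 4)    # on the square's top edge
flat = shi_tomasi_score(img, 2, 2)    # in the dark background
```

Thresholding this score over the image and keeping local maxima yields corner candidates to track.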
    The corner and line pose estimation systems increased in accuracy and precision as the distance decreased, achieving accuracy of up to 2 centimeters in translation. However, under heavy noise the corner position estimation system lost tracking and could not recover, while the line position estimation system did not lose track. The ellipse position estimation system was more robust, automatically recovering whenever tracking was lost, with accuracy of up to 4 centimeters. During both approach sequences the ellipse system was the most robust, tracking the satellite consistently. The corner system could not track the satellite throughout either the real or the virtual approach, while the line system could track the satellite during the virtual approach sequence.
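    The Hough transform underlying the line extraction system described above can be illustrated with a minimal voting accumulator: each edge pixel votes for every line passing through it in (rho, theta) space, and the accumulator peak identifies the dominant line. This pure-Python sketch uses illustrative bin sizes (1-pixel rho, 1-degree theta) and is a simplification, not the system's actual implementation:

```python
import math

def hough_peak(points, n_theta=180):
    """Return the (rho, theta-bin) cell with the most votes."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # Line through (x, y): rho = x*cos(theta) + y*sin(theta)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            key = (rho, t)
            acc[key] = acc.get(key, 0) + 1
    return max(acc.items(), key=lambda kv: kv[1])[0]

# Edge pixels along the diagonal line y = x, plus one outlier that the
# voting scheme should ignore:
pts = [(i, i) for i in range(5, 25)] + [(2, 9)]
rho, t = hough_peak(pts)
# y = x passes through the origin, so rho = 0 with the normal at 135 deg
```

Because every collinear point reinforces the same cell, the peak survives isolated outliers, which is consistent with the line system's reported robustness to noise.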

    Machine-Vision-Based Pose Estimation System Using Sensor Fusion for Autonomous Satellite Grappling

    When capturing a non-cooperative satellite during an on-orbit satellite servicing mission, the position and orientation (pose) of the satellite with respect to the servicing vessel is required in order to guide the robotic arm of the vessel towards the satellite. The main objective of this research is the development of a machine vision-based pose estimation system for capturing a non-cooperative satellite. The proposed system finds the satellite pose using three types of natural geometric features: circles, lines, and points, and it merges data from two monocular cameras and three different algorithms (one for each type of geometric feature) to increase the robustness of the pose estimation. It is assumed that the satellite has an interface ring (used to attach the satellite to the launch vehicle) and that the cameras are mounted on the robot end effector, which carries the capture tool used to grapple the satellite. The three algorithms are based on a feature extraction and detection scheme that identifies the geometric features in the camera images belonging to the satellite, whose geometry is assumed to be known. Since the projection of a circle onto the image plane is an ellipse, an ellipse detection system is used to find the 3D coordinates of the center of the interface ring and its normal vector from the corresponding detected ellipse on the image plane. The sensor and data fusion is performed in two steps. In the first step, a pose solver finds the pose using the conjugate gradient method to optimize a cost function that reduces the re-projection error of the detected features, thereby reducing the pose estimation error. In the second step, an extended Kalman filter merges data from the pose solver and the ellipse detection system and gives the final estimated pose.
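    The second fusion step can be illustrated with a scalar Kalman measurement update, in which each measurement is weighted by its variance. The actual system runs an extended Kalman filter over the full pose; this 1-D sketch, with purely illustrative noise values not taken from this work, only shows how two independent measurements (e.g. from a pose solver and an ellipse detector) combine into one estimate:

```python
def kalman_update(x, p, z, r):
    """Fuse state estimate (mean x, variance p) with measurement (z, variance r)."""
    k = p / (p + r)                # Kalman gain: trust measurement more when p >> r
    return x + k * (z - x), (1.0 - k) * p

x, p = 0.0, 1e6                    # vague prior on one pose coordinate
x, p = kalman_update(x, p, 0.72, 0.04)  # noisier measurement (e.g. pose solver)
x, p = kalman_update(x, p, 0.70, 0.01)  # less noisy measurement (e.g. ellipse system)
# The fused estimate lies between the two, pulled toward the less noisy
# source, and the posterior variance is smaller than either input's.
```

Sequentially applying the update per measurement source is equivalent to a batch fusion under the independence assumption, which is what makes combining the two monocular cameras and three algorithms tractable.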
    The inputs of the pose estimation system are the camera images, and the outputs are the position and orientation of the satellite with respect to the end effector where the cameras are mounted. Virtual simulations and real experiments using a full-scale realistic satellite mock-up and a 7-DOF robotic manipulator were performed to evaluate the system's performance. Two different lighting conditions and three scenarios, each with a different set of features, were used. Tracking of the satellite was performed successfully. The total translation error is between 25 mm and 50 mm, and the total rotation error is between 2 deg and 3 deg, when the target is at 0.7 m from the end effector.