
    Machine-Vision-Based Pose Estimation System Using Sensor Fusion for Autonomous Satellite Grappling

    When capturing a non-cooperative satellite during an on-orbit satellite servicing mission, the position and orientation (pose) of the satellite with respect to the servicing vessel is required in order to guide the robotic arm of the vessel towards the satellite. The main objective of this research is the development of a machine-vision-based pose estimation system for capturing a non-cooperative satellite. The proposed system finds the satellite pose using three types of natural geometric features: circles, lines, and points, and it merges data from two monocular cameras and three different algorithms (one for each type of geometric feature) to increase the robustness of the pose estimation. It is assumed that the satellite has an interface ring (which is used to attach the satellite to the launch vehicle) and that the cameras are mounted on the robot end effector, which carries the capture tool used to grapple the satellite. The three algorithms are based on a feature extraction and detection scheme that identifies the geometric features in the camera images belonging to the satellite, whose geometry is assumed to be known. Since the projection of a circle onto the image plane is an ellipse, an ellipse detection system is used to find the 3D coordinates of the center of the interface ring and its normal vector from the corresponding detected ellipse on the image plane. The sensor and data fusion is performed in two steps. In the first step, a pose solver finds the pose using the conjugate gradient method to optimize a cost function that reduces the re-projection error of the detected features, which in turn reduces the pose estimation error. In the second step, an extended Kalman filter merges data from the pose solver and the ellipse detection system and gives the final estimated pose. The inputs of the pose estimation system are the camera images, and the outputs are the position and orientation of the satellite with respect to the end effector where the cameras are mounted. Virtual and real experiments using a full-scale realistic satellite mockup and a 7-DOF robotic manipulator were performed to evaluate the system performance. Two different lighting conditions and three scenarios, each with a different set of features, were used. Tracking of the satellite was performed successfully. The total translation error is between 25 mm and 50 mm and the total rotation error is between 2 deg and 3 deg when the target is at 0.7 m from the end effector.
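    As a rough illustration of the pose-solver step described above, the sketch below minimizes the re-projection error of point features with the conjugate gradient method. It is a minimal, self-contained example: the camera intrinsics, the satellite feature coordinates, and the detections are hypothetical stand-ins, not values from the thesis.

```python
# Minimal sketch of a pose solver: minimize the re-projection error of
# detected point features with the conjugate gradient method. The intrinsics,
# 3D feature coordinates, and detections below are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
model_pts = np.array([[0.4, 0.0, 0.0], [0.0, 0.4, 0.0],
                      [-0.4, 0.0, 0.0], [0.0, -0.4, 0.0]])  # known features (m)

def project(pose, pts):
    """Project 3D model points with pose = [rotation vector (3), translation (3)]."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = pts @ R.T + pose[3:]
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def reprojection_cost(pose, detections):
    """Sum of squared pixel errors between projected and detected features."""
    return np.sum((project(pose, model_pts) - detections) ** 2)

# Synthetic detections from a "true" pose, then recover it from a rough guess.
true_pose = np.array([0.05, -0.02, 0.1, 0.02, 0.01, 0.7])
detections = project(true_pose, model_pts)
result = minimize(reprojection_cost, x0=np.array([0, 0, 0, 0, 0, 1.0]),
                  args=(detections,), method='CG')
print(result.x)  # should approach true_pose
```

    In the system described above, the resulting pose would then be fused with the ellipse-detection output in an extended Kalman filter; that second fusion step is omitted from this sketch.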

    Comparison of Three Machine Vision Pose Estimation Systems Based on Corner, Line, and Ellipse Extraction for Satellite Grasping

    The primary objective of this research was to use three different types of features (corners, lines, and ellipses) for the purpose of satellite grasping with a machine-vision-based pose estimation system. The corner system is used to track sharp corners or small features (holes or bolts) on the satellite; the line system tracks sharp edges, while the ellipse system tracks circular features on the satellite. The corner and line systems provided the 6-degree-of-freedom (DOF) pose (rotation matrix and translation vector) of the satellite with respect to the camera frame, while the ellipse system provided a 5-DOF pose (normal vector and center position) of the circular feature with respect to the camera frame. Satellite grasping is required for on-orbit satellite servicing and refueling. Three machine vision estimation systems (based on line, corner, and ellipse extraction) were studied and compared using a simulation environment. The corner extraction system was based on the Shi-Tomasi method; the line extraction system was based on the Hough transform; and the ellipse system was based on the fast ellipse extractor. Each system tracks its corresponding most prominent feature of the satellite. In order to evaluate the performance of each pose estimation system, six maneuvers, three in translation (x, y, z) and three in rotation (roll, pitch, yaw), three different initial positions, and three different levels of Gaussian noise were considered in the virtual environment. In addition, virtual and real approach sequences using a robotic manipulator were performed in order to predict how each system would perform in a real application. Each system was compared using the mean and variance of the translational and rotational pose estimation error. The virtual environment features a CAD model of a satellite created using SolidWorks which contains three common satellite features: a square plate, a Marman ring, and a thruster. The corner and line pose estimation systems increased in accuracy and precision as the distance decreased, allowing for up to 2 centimeters of accuracy in translation. However, under heavy noise the corner pose estimation system lost tracking and could not recover, while the line pose estimation system did not lose track. The ellipse pose estimation system was more robust, automatically recovering if tracking was lost, with accuracy up to 4 centimeters. During both approach sequences the ellipse system was the most robust, being able to track the satellite consistently. The corner system could not track the satellite throughout the approach in either the real or the virtual sequence, while the line system could track the satellite during the virtual approach sequence.
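    For reference, the three extraction stages map onto standard OpenCV primitives. The sketch below is a minimal illustration under that assumption: goodFeaturesToTrack implements the Shi-Tomasi detector and HoughLinesP the Hough transform, while cv2.fitEllipse on edge contours stands in for the fast ellipse extractor used in the paper; 'satellite.png' is a placeholder image.

```python
# Minimal sketch of the three feature extractors being compared.
import cv2
import numpy as np

img = cv2.imread('satellite.png')  # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Corners: Shi-Tomasi "good features to track"
corners = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                  minDistance=10)

# Lines: probabilistic Hough transform on a Canny edge map
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=40, maxLineGap=5)

# Ellipses: least-squares fit to sufficiently large contours
# (stand-in for the fast ellipse extractor named in the abstract)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
ellipses = [cv2.fitEllipse(c) for c in contours if len(c) >= 20]

for name, feats in (('corners', corners), ('lines', lines), ('ellipses', ellipses)):
    print(name, 0 if feats is None else len(feats))
```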

    High Accuracy Tracking of Space-Borne Non-Cooperative Targets


    Visual Servo Based Space Robotic Docking for Active Space Debris Removal

    This thesis developed a 6-DOF pose detection algorithm using machine learning capable of providing the orientation and location of an object under various lighting conditions and at different angles, for the purposes of space robotic rendezvous and docking control. The computer vision algorithm was paired with a virtual robotic simulation to test the feasibility of using the proposed algorithm for visual servoing. This thesis also developed a method for generating virtual training images and corresponding ground-truth data including both location and orientation information. Traditional computer vision techniques struggle to determine the 6-DOF pose of an object when certain colors or edges cannot be found; therefore, training a network is an optimal choice. The 6-DOF pose detection algorithm was implemented in MATLAB and Python. The robotic simulation was implemented in Simulink and ROS Gazebo. Finally, the generation of training data was done with Python and Blender.
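    A minimal sketch of how virtual training images with pose ground truth can be generated through Blender's Python API, in the spirit of the method described above, is given below; the object name, pose ranges, and output paths are assumptions, not details from the thesis.

```python
# Minimal sketch: render randomized views of a target model in Blender and
# record the 6DOF ground truth for each image. Names and ranges are assumed.
import json
import random
import bpy

target = bpy.data.objects['Satellite']   # hypothetical model name
scene = bpy.context.scene
labels = []

for i in range(100):
    # Randomize orientation and position for each training sample.
    target.rotation_euler = [random.uniform(0, 6.283) for _ in range(3)]
    target.location = (random.uniform(-0.5, 0.5),
                       random.uniform(-0.5, 0.5),
                       random.uniform(2.0, 5.0))
    scene.render.filepath = f'/tmp/train/img_{i:04d}.png'
    bpy.ops.render.render(write_still=True)
    # Record the 6DOF ground truth alongside the rendered image.
    labels.append({'image': scene.render.filepath,
                   'location': list(target.location),
                   'rotation_euler': list(target.rotation_euler)})

with open('/tmp/train/labels.json', 'w') as f:
    json.dump(labels, f)
```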

    NASA Automated Rendezvous and Capture Review. A compilation of the abstracts

    This document presents a compilation of abstracts of papers solicited for presentation at the NASA Automated Rendezvous and Capture Review held in Williamsburg, VA, on November 19-21, 1991. Due to limitations on time and other considerations, not all abstracts could be presented during the review. The organizing committee determined, however, that all abstracts merited availability to all participants and represented data and information reflecting the state of the art of this technology, which should be captured in one document for future use and reference. The organizing committee appreciates the interest shown in the review and the response by the authors in submitting these abstracts.

    Robotic Manipulation and Capture in Space: A Survey

    Space exploration and exploitation depend on the development of on-orbit robotic capabilities for tasks such as servicing of satellites, removal of orbital debris, and construction and maintenance of orbital assets. Manipulation and capture of objects on-orbit are key enablers for these capabilities. This survey addresses fundamental aspects of manipulation and capture, such as the dynamics of space manipulator systems (SMS), i.e., satellites equipped with manipulators, the contact dynamics between manipulator grippers/payloads and targets, and the methods for identifying properties of SMSs and their targets. It also presents recent work on sensing pose and system states, on motion planning for capturing a target, and on feedback control methods for SMSs during motion or interaction tasks. Finally, the paper reviews major ground testbeds for capture operations, and several notable missions and technologies developed for capture of targets on-orbit.

    Reliable localization methods for intelligent vehicles based on environment perception

    In the recent past, autonomous vehicles and Intelligent Transport Systems (ITS) were seen as a potential future of transportation. Today, thanks to all the technological advances of recent years, the feasibility of such systems is no longer in question. Some of these autonomous driving technologies are already sharing our roads, and even commercial vehicles include more Advanced Driver-Assistance Systems (ADAS) every year. As a result, transportation is becoming more efficient and the roads are considerably safer. One of the fundamental pillars of an autonomous system is self-localization. An accurate and reliable estimate of the vehicle's pose in the world is essential to navigation. In the context of outdoor vehicles, the Global Navigation Satellite System (GNSS) is the predominant localization system. However, these systems are far from perfect, and their performance is degraded in environments with limited satellite visibility. Additionally, their dependence on the environment can make them unreliable if it were to change. Accordingly, the goal of this thesis is to exploit the perception of the environment to enhance localization systems in intelligent vehicles, with special attention to their reliability. To this end, this thesis presents several contributions: First, a study on exploiting 3D semantic information in LiDAR odometry is presented, providing interesting insights regarding the contribution of each type of element in the scene to the odometry output. The experimental results have been obtained using a public dataset and validated on a real-world platform. Second, a method to estimate the localization error using landmark detections is proposed, which is later exploited by a landmark placement optimization algorithm. This method, which has been validated in a simulation environment, is able to determine a set of landmarks such that the localization error never exceeds a predefined limit. Finally, a cooperative localization algorithm based on a Genetic Particle Filter is proposed that utilizes vehicle detections in order to enhance the estimation provided by GNSS systems. Multiple experiments are carried out in different simulation environments to validate the proposed method.
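    As a rough sketch of the cooperative-localization idea in the third contribution, the example below weights pose particles by their agreement with both a (degraded) GNSS fix and a range measurement to a detected neighbouring vehicle of known position. The genetic operators of the thesis's Genetic Particle Filter are replaced here by plain multinomial resampling, and all measurements are synthetic.

```python
# Minimal sketch of a cooperative particle-filter update fusing GNSS with a
# vehicle detection; genetic operators are simplified to plain resampling.
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.normal([10.0, 5.0], 3.0, size=(N, 2))  # initial (x, y) spread

gnss_fix = np.array([12.0, 4.0])        # degraded GNSS position (sigma ~ 5 m)
neighbour_pos = np.array([20.0, 5.0])   # detected vehicle, position shared via V2V
measured_range = 8.5                    # perceived range to neighbour (sigma ~ 0.5 m)

# Likelihood of each particle under both measurements (independent Gaussians).
gnss_err = np.linalg.norm(particles - gnss_fix, axis=1)
range_err = np.abs(np.linalg.norm(particles - neighbour_pos, axis=1) - measured_range)
weights = np.exp(-0.5 * (gnss_err / 5.0) ** 2) * np.exp(-0.5 * (range_err / 0.5) ** 2)
weights /= weights.sum()

# Resample and report the fused estimate.
particles = particles[rng.choice(N, size=N, p=weights)]
print(particles.mean(axis=0))  # pulled toward positions consistent with both cues
```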

    Intent-Recognition-Based Traded Control for Telerobotic Assembly over High-Latency Telemetry

    As we deploy robotic manipulation systems into unstructured real-world environments, the tasks those robots are expected to perform grow quickly in complexity. These tasks require a greater number of possible actions, more variable environmental conditions, and larger varieties of objects and materials which need to be manipulated. This in turn leads to a greater number of ways in which elements of a task can fail. When the cost of task failure is high, such as in the case of surgery or on-orbit robotic interventions, effective and efficient task recovery is essential. Despite ever-advancing capabilities, however, the current and near-future state of the art in fully autonomous robotic manipulation is still insufficient for many tasks in these critical applications. Thus, successful application of robotic manipulation in many domains still necessitates a human operator to directly teleoperate the robots over some communications infrastructure. However, any such infrastructure incurs some unavoidable round-trip telemetry latency depending on the distances involved and the type of remote environment. While direct teleoperation is appropriate when a human operator is physically close to the robots being controlled, there are still many applications in which such proximity is infeasible. In applications which require a robot to be far from its human operator, this latency can approach the timescale of the relevant task dynamics, and performing the task with direct telemanipulation can become increasingly difficult, if not impossible. For example, round-trip delays for ground-controlled on-orbit robotic manipulation can reach multiple seconds depending on the infrastructure used and the location of the remote robot. The goal of this thesis is to advance the state of the art in semi-autonomous telemanipulation under multi-second round-trip communications latency between a human operator and a remote robot in order to enable more telerobotic applications. We propose a new intent-recognition-based traded control (IRTC) approach which automatically infers operator intent and executes task elements which the human operator would otherwise be unable to perform. What makes our approach more powerful than current approaches is that we prioritize preserving the operator's direct manual interaction with the remote environment, trading control over to an autonomous subsystem only when the operator-local intent recognition system automatically determines what the operator is trying to accomplish. This enables operators to perform unstructured and a priori unplanned actions in order to quickly recover from critical task failures. Furthermore, this thesis also describes a methodology for introducing and improving semi-autonomous control in critical applications. Specifically, this thesis reports (1) the demonstration of a prototype system for IRTC-based grasp assistance in the context of transatlantic telemetry delays, (2) the development of a systems framework for IRTC in semi-autonomous telemanipulation, and (3) an evaluation of the usability and efficacy of that framework with an increasingly complex assembly task. The results from our human subjects experiments show that, when incorporated with sufficient lower-level capabilities, IRTC is a promising approach to extend the reach and capabilities of on-orbit telerobotics and future in-space operations.
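    A minimal sketch of the traded-control decision described above: operator commands pass through until the intent recognizer is sufficiently confident about the operator's goal, at which point control is traded to an autonomous primitive running at the remote site, out of the latency loop. The class names and threshold are hypothetical.

```python
# Minimal sketch of intent-recognition-based traded control (IRTC).
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # assumed hand-over threshold

@dataclass
class Intent:
    name: str        # e.g. "grasp_handle"
    confidence: float

def traded_control_step(operator_cmd, recognized: Intent, autonomy):
    """Return the command actually sent to the remote robot this cycle."""
    if recognized.confidence >= CONFIDENCE_THRESHOLD:
        # Trade control: the autonomous subsystem completes the inferred task
        # element at the remote site, immune to multi-second telemetry delay.
        return autonomy.plan_step(recognized.name)
    # Otherwise preserve the operator's direct manual interaction.
    return operator_cmd
```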

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating in indoor and GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, for example a high-end IMU, were also avoided. Given these requirements, a camera-based localization solution was selected for the sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; thus the amount of data and the amount of time spent collecting data are reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution to the task of fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments. An assessment was performed on tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single-point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation proposes to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved, while the number of tracking losses throughout the image sequence was reduced. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure to improve runtime performance.
    Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP. The novelty of this algorithm is the implementation of an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result using ViSP to 2 cm positional accuracies in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame, and initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method benefits from vertical-line matching to accomplish a registration of the reference model views with a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Further, the number of incorrect matches was reduced by 80%.
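    As an illustration of the geometric-hashing registration used in the third contribution, the sketch below stores model features in a hash table indexed by their coordinates in basis frames built from feature pairs, and then lets scene features vote for the basis they agree with. 2D points stand in for the vertical-line features, and all coordinates are synthetic.

```python
# Minimal sketch of 2D geometric hashing: offline table construction from a
# model, then online voting from a perturbed scene.
import itertools
from collections import defaultdict
import numpy as np

def basis_coords(p, a, b):
    """Express point p in the frame defined by the ordered basis pair (a, b)."""
    u = b - a
    v = np.array([-u[1], u[0]])          # perpendicular to u
    M = np.stack([u, v], axis=1)
    return tuple(np.round(np.linalg.solve(M, p - a), 1))

model = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.3, 0.8]])

# Offline: hash every model point against every ordered basis pair.
table = defaultdict(list)
for i, j in itertools.permutations(range(len(model)), 2):
    for k in range(len(model)):
        if k not in (i, j):
            table[basis_coords(model[k], model[i], model[j])].append((i, j))

# Online: pick one scene basis pair and let the remaining points vote.
scene = model + 0.01                     # same shape, slightly perturbed
votes = defaultdict(int)
for k in range(2, len(scene)):
    for basis in table.get(basis_coords(scene[k], scene[0], scene[1]), []):
        votes[basis] += 1
print(max(votes, key=votes.get))         # expected correspondence: (0, 1)
```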

    Marshall Space Flight Center Research and Technology Report 2018

    Many of NASA's missions would not be possible without the investments made in research advancements and technology development efforts. The technologies developed at Marshall Space Flight Center contribute to NASA's strategic array of missions through technology development and accomplishments. The scientists, researchers, and technologists of Marshall Space Flight Center who are working on these enabling technology efforts are facilitating NASA's ability to fulfill the ambitious goals of innovation, exploration, and discovery.