37 research outputs found

    Visual pose estimation system for autonomous rendezvous of spacecraft

    In this work, a tracker spacecraft equipped with a short-range vision system is tasked with visually identifying a target spacecraft and determining its relative angular velocity and relative linear velocity using only visual information from onboard cameras. Focusing on methods that are feasible for implementation on relatively simple spacecraft hardware, we locate and track objects in three-dimensional space using conventional high-resolution cameras, saving cost and power compared to laser or infrared ranging systems. Identification of the target is done by means of visual feature detection and tracking across rapid, successive frames, taking the perspective matrix of the camera system into account, and building feature maps in three dimensions over time. Features detected in two-dimensional images are matched and triangulated to provide three-dimensional feature maps using structure-from-motion techniques. This methodology allows one, two, or more cameras with known baselines to be used for triangulation, with more images resulting in higher accuracy. Triangulated points are organized by means of orientation histogram descriptors and used to identify and track parts of the target spacecraft over time. This allows some estimation of the target spacecraft's motion even if parts of the spacecraft are obscured or in shadow. The state variables with respect to the camera system are extracted as a relative rotation quaternion and relative translation vector for the target. Robust tracking of the state variables for the target spacecraft is accomplished by an embedded adaptive unscented Kalman filter. In addition to estimation of the target quaternion from visual information, the adaptive filter can also identify when tracking errors have occurred by measurement of the residual.
Significant variations in lighting can be tolerated as long as the movement of the satellite is consistent with the system model and illumination changes slowly enough for the state variables to be estimated periodically. Inertial measurements over short periods of time can then be used to determine the movement of both the tracker and target spacecraft. In addition, with a sufficient number of features tracked, the center of mass of the target can be located. The method is tested using laboratory images of spacecraft movement driven by a simulated spacecraft motion model. Varying conditions are applied to demonstrate the effectiveness and limitations of the system for online estimation of the movement of a target spacecraft at close range.
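
The multi-view triangulation step described in this abstract can be sketched with a standard linear (DLT) triangulation, shown here for two cameras with a known baseline. This is an illustrative reconstruction, not the authors' implementation; the projection matrices and test geometry below are assumed for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature seen in two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image coordinates.
    The null vector of A is the homogeneous 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Assumed test geometry: two cameras with a 0.5 m baseline along x,
# identity intrinsics for simplicity.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With more than two views the same construction simply stacks two rows per camera, which is consistent with the abstract's remark that additional images raise accuracy.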

    Visual Tracking and Motion Estimation for an On-orbit Servicing of a Satellite

    This thesis addresses visual tracking of a non-cooperative as well as a partially cooperative satellite, to enable close-range rendezvous between a servicer and a target satellite. Visual tracking and estimation of relative motion between a servicer and a target satellite are critical abilities for rendezvous and proximity operations such as repairing and deorbiting. For this purpose, Lidar has been widely employed in cooperative rendezvous and docking missions. Despite its robustness to harsh space illumination, Lidar is heavy, contains rotating parts, and consumes considerable power, which conflicts with the stringent requirements of satellite design. On the other hand, inexpensive on-board cameras can provide an effective solution, working at a wide range of distances. However, conditions of space lighting are particularly challenging for image-based tracking algorithms because of direct sunlight exposure and the glossy surface of the satellite, which creates strong reflections and image saturation and thus hampers tracking. To address these difficulties, the relevant literature is examined in the fields of computer vision and satellite rendezvous and docking. Two classes of problems are identified, and solutions implemented on a standard computer are provided. Firstly, in the absence of a geometric model of the satellite, the thesis presents a robust feature-based method with prediction capability in case of insufficient features, relying on a point-wise motion model. Secondly, we employ a robust model-based hierarchical position localization method to handle the change of image features along a range of distances and localize an attitude-controlled (partially cooperative) satellite. Moreover, the thesis presents a pose tracking method addressing ambiguities in edge-matching, and a pose detection algorithm based on appearance model learning.
For validation of the methods, real camera images and ground-truth data, generated with a laboratory test bed that reproduces space-like conditions, are used. The experimental results indicate that camera-based methods provide robust and accurate tracking for the approach of malfunctioning satellites in spite of the difficulties associated with specularities and direct sunlight. Exceptional lighting conditions associated with the sun angle are also discussed, with the aim of achieving a fully reliable localization system for a given mission.
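
The "prediction capability in case of insufficient features, relying on a point-wise motion model" might be sketched as follows with a constant-velocity predictor per feature. The `FeatureTrack` class and the constant-velocity assumption are illustrative stand-ins, not the thesis's actual formulation.

```python
class FeatureTrack:
    """Point-wise constant-velocity motion model for one image feature.
    When detection fails (e.g. the feature is washed out by direct
    sunlight or specular saturation), the track coasts on its last
    estimated per-frame displacement instead of a measurement."""

    def __init__(self, x, y):
        self.pos = (x, y)
        self.vel = (0.0, 0.0)    # displacement per frame

    def update(self, meas):
        if meas is None:
            # Detection failed: predict from the motion model.
            self.pos = (self.pos[0] + self.vel[0],
                        self.pos[1] + self.vel[1])
        else:
            # Detection succeeded: refresh velocity, then position.
            self.vel = (meas[0] - self.pos[0], meas[1] - self.pos[1])
            self.pos = meas
        return self.pos
```

A real tracker would bound how many frames a track may coast before being dropped; that threshold is omitted here for brevity.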

    Robust On-Manifold Optimization for Uncooperative Space Relative Navigation with a Single Camera

    Optical cameras are gaining popularity as a suitable sensor for relative navigation in space due to their attractive sizing, power, and cost properties when compared with conventional flight hardware or costly laser-based systems. However, a camera cannot infer depth information on its own, which is often solved by introducing complementary sensors or a second camera. In this paper, an innovative model-based approach is demonstrated to estimate the six-dimensional pose of a target relative to the chaser spacecraft using solely a monocular setup. The observed facet of the target is tackled as a classification problem, where the three-dimensional shape is learned offline using Gaussian mixture modeling. The estimate is refined by minimizing two different robust loss functions based on local feature correspondences. The resulting pseudomeasurements are processed and fused with an extended Kalman filter. The entire optimization framework is designed to operate directly on the SE(3) manifold, uncoupling the process and measurement models from the global attitude state representation. It is validated on realistic synthetic and laboratory datasets of a rendezvous trajectory with the complex spacecraft Envisat, demonstrating estimation of the relative pose with high accuracy over full tumbling motion. Further evaluation is performed on the open-source SPEED dataset.
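
Optimizing directly on the SE(3) manifold typically relies on the exponential map to apply twist increments to a pose without touching a global attitude parameterization. A minimal sketch is given below; the twist ordering `[rho, phi]` (translation first, rotation second) is an assumption of this example, not taken from the paper.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (the so(3) hat operator)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Exponential map from an se(3) twist xi = [rho, phi] to a 4x4
    homogeneous pose in SE(3), via Rodrigues' formula."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    W = hat(phi)
    if theta < 1e-9:
        R, V = np.eye(3) + W, np.eye(3)          # small-angle limit
    else:
        a = np.sin(theta) / theta
        b = (1.0 - np.cos(theta)) / theta**2
        c = (theta - np.sin(theta)) / theta**3
        R = np.eye(3) + a * W + b * W @ W        # Rodrigues rotation
        V = np.eye(3) + b * W + c * W @ W        # left Jacobian of SO(3)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T
```

In a filter or optimizer of this kind, the state update is `T <- se3_exp(delta) @ T`, keeping the estimate on the manifold at every step.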

    Fault-tolerant feature-based estimation of space debris motion and inertial properties

    The exponential increase of the needs of people in modern society and the contextual development of space technologies have led to significant use of the lower Earth orbits for placing artificial satellites. The current overpopulation of these orbits has also increased the interest of the major space agencies in technologies for the removal of at least the biggest spacecraft that have reached their end of life or have failed their mission. One of the key functionalities required in a mission for removing a non-cooperative spacecraft is the assessment of its kinematics and inertial properties. In a few cases, this information can be approximated by ground observations. However, a re-assessment after the rendezvous phase is of critical importance for refining the capture strategies and preventing accidents. The CADET program (CApture and DE-orbiting Technologies), funded by Regione Piemonte and led by Aviospace s.r.l., involved Politecnico di Torino in the research for solutions to the above issue. This dissertation proposes methods and algorithms for estimating the location of the center of mass, the angular rate, and the moments of inertia of a passive object. These methods require that the chaser spacecraft be capable of tracking several features of the target through passive vision sensors. Because of harsh lighting conditions in the space environment, feature-based methods should tolerate temporary failures in detecting features. The principal works on this topic do not consider this important aspect, making it a characteristic trait of the proposed methods. Compared to typical treatments of the estimation problem, the proposed techniques do not depend solely on state observers. Instead, methods for recovering missing information, such as compressive sampling techniques, are used for preprocessing input data to support the efficient usage of state observers.
Simulation results showed accuracy properties that are comparable to those of the best-known methods already proposed in the literature. The developed algorithms were tested in the CADETLab laboratory set up by Aviospace s.r.l. The results of the experimental tests suggest the practical applicability of such algorithms for supporting a real active removal mission.
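
One simple way to see how tracked features can reveal the center of mass of a passive spinning object: for torque-free rotation, a feature on the body traces a circle about the rotation axis, so a least-squares (Kasa) circle fit to its track estimates the center. This planar sketch is only an illustration of the geometric idea, not the CADET algorithms themselves.

```python
import numpy as np

def fit_rotation_center(pts):
    """Kasa circle fit: linear least-squares estimate of the center of
    points lying on (or near) a circle. For a feature on a torque-free
    spinning body, the fitted center approximates the projection of the
    center of mass onto the feature plane.
    Solves 2*cx*x + 2*cy*y + c = x^2 + y^2 for (cx, cy, c)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, _), *rest = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy])

# Assumed test data: a feature circling a center at (1, -2), radius 3.
ang = np.linspace(0.0, 2 * np.pi, 20, endpoint=False)
track = np.column_stack([1.0 + 3.0 * np.cos(ang),
                         -2.0 + 3.0 * np.sin(ang)])
center = fit_rotation_center(track)
```

The fit is linear, so it degrades gracefully when some samples of the track are missing, which is in the spirit of the fault tolerance the dissertation emphasizes.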

    Satellite Articulation Sensing using Computer Vision

    Autonomous on-orbit satellite servicing benefits from an inspector satellite that can gain as much information as possible about the primary satellite. This includes the performance of articulated components such as solar arrays, antennas, and sensors. A method for building an articulated model from monocular imagery using tracked feature points and the known relative inspection route is developed. Two methods are also developed for tracking the articulation of a satellite in real time given an articulated model, using both tracked feature points and image silhouettes. Performance is evaluated for multiple inspection routes, and the effect of inspection route noise is assessed. Additionally, a satellite model is built and used to collect stop-motion images simulating articulated motion over an inspection route under simulated space illumination. The images are used in the silhouette articulation tracking method, and successful tracking is demonstrated qualitatively. Finally, a human pose tracking algorithm is modified for tracking the satellite articulation, demonstrating the applicability of human tracking methods to satellite articulation tracking when an articulated model is available.
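
As a toy illustration of articulation sensing from tracked feature points, a single-hinge model of a solar array can be inverted from one tracked tip feature. The planar geometry and function names below are assumptions for the sketch and are far simpler than the silhouette-based method described above.

```python
import math

def panel_tip(base, length, theta):
    """Forward kinematics of a single-hinge panel in the image plane:
    the tip feature lies at the hinge base plus a rotated arm."""
    return (base[0] + length * math.cos(theta),
            base[1] + length * math.sin(theta))

def estimate_hinge_angle(base, tip):
    """Invert the hinge model from a tracked tip feature: the joint
    angle is the bearing of the tip relative to the hinge base."""
    return math.atan2(tip[1] - base[1], tip[0] - base[0])
```

Real articulated tracking chains many such joints and fits them jointly to feature or silhouette observations, but the per-joint inversion above is the basic building block.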

    Computer vision-based localization and mapping of an unknown, uncooperative and spinning target for spacecraft proximity operations

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2013. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version. Includes bibliographical references (pages 399-410). Prior studies have estimated that there are over 100 potential target objects near the Geostationary Orbit belt that are spinning at rates of over 20 rotations per minute. For a number of reasons, it may be desirable to operate in close proximity to these objects for the purposes of inspection, docking, and repair. Many of them have an unknown geometric appearance and are uncooperative and non-communicative. These characteristics are also shared by a number of asteroid rendezvous missions. In order to safely operate in close proximity to an object in space, it is important to know the target object's position and orientation relative to the inspector satellite, as well as to build a three-dimensional geometric map of the object for relative navigation in future stages of the mission. This type of problem can be solved with many of the typical Simultaneous Localization and Mapping (SLAM) algorithms found in the literature. However, if the target object is spinning with significant angular velocity, it is also important to know the linear and angular velocity of the target object as well as its center of mass, principal axes of inertia, and its inertia matrix. This information is essential for propagating the state of the target object to a future time, which is a key capability for any type of proximity operations mission. Most of the typical SLAM algorithms cannot easily provide these types of estimates for high-speed spinning objects. This thesis describes a new approach to solving a SLAM problem for unknown and uncooperative objects that are spinning about an arbitrary axis.
It is capable of estimating a geometric map of the target object, as well as its position, orientation, linear velocity, angular velocity, center of mass, principal axes, and ratios of inertia. This allows the state of the target object to be propagated to a future time step using Newton's Second Law and Euler's Equation of Rotational Motion, thereby allowing this future state to be used by the planning and control algorithms for the target spacecraft. In order to properly evaluate this new approach, it is necessary to gather experimental data. By Brent Edward Tweddle, Ph.D.
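
The propagation step named above, Euler's Equation of Rotational Motion for a torque-free tumbling body, can be sketched with a classical RK4 integrator over the body-frame angular velocity. The principal moments of inertia and initial rate used in the test are arbitrary assumed values, not data from the thesis.

```python
import numpy as np

def euler_rates(w, I):
    """Torque-free Euler's equations in principal axes:
    I * dw/dt = -(w x (I w)), solved componentwise for dw/dt."""
    return -np.cross(w, I * w) / I

def propagate(w, I, dt, steps):
    """Propagate body angular velocity forward with classical RK4."""
    for _ in range(steps):
        k1 = euler_rates(w, I)
        k2 = euler_rates(w + 0.5 * dt * k1, I)
        k3 = euler_rates(w + 0.5 * dt * k2, I)
        k4 = euler_rates(w + dt * k3, I)
        w = w + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return w
```

A useful sanity check on such a propagator is that the magnitude of the body-frame angular momentum, |I w|, is conserved for torque-free motion; this is exactly the property exploited when fitting inertia ratios to observed tumbling.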

    Shape, motion, and inertial parameter estimation of space objects using teams of cooperative vision sensors

    Thesis: Ph.D., Massachusetts Institute of Technology, Dept. of Mechanical Engineering, February 2005. Includes bibliographical references (leaves 133-140). Future space missions are expected to use autonomous robotic systems to carry out a growing number of tasks. These tasks may include the assembly, inspection, and maintenance of large space structures; the capture and servicing of satellites; and the redirection of space debris that threatens valuable spacecraft. Autonomous robotic systems will require substantial information about the targets with which they interact, including their motions, dynamic model parameters, and shape. However, this information is often not available a priori, and therefore must be estimated in orbit. This thesis develops a method for simultaneously estimating the dynamic state, model parameters, and geometric shape of arbitrary space targets, using information gathered from range imaging sensors. The method exploits two key features of this application: (1) the dynamics of targets in space are highly deterministic and can be accurately modeled; and (2) several sensors will be available to provide information from multiple viewpoints. These features enable an estimator design that is not reliant on feature detection, model matching, optical flow, or other computation-intensive pixel-level calculations. It is therefore robust to the harsh lighting and sensing conditions found in space. Further, these features enable an estimator design that can be implemented in real time on space-qualified hardware. The general solution approach consists of three parts that effectively decouple spatial- and time-domain estimation. The first part, referred to as kinematic data fusion, condenses detailed range images into coarse estimates of the target's high-level kinematics (position, attitude, etc.).
A Kalman filter uses the high-fidelity dynamic model to refine these estimates and extract the full dynamic state and model parameters of the target. With an accurate understanding of target motions, shape estimation reduces to the stochastic mapping of a static scene. This thesis develops the estimation architecture in the context of both rigid and flexible space targets. Simulations and experiments demonstrate the potential of the approach and its feasibility in practical systems. By Matthew D. Lichter, Ph.D.
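
The second stage described here, a Kalman filter refining coarse kinematic fixes through a dynamic model, can be illustrated with a scalar constant-velocity example. The state dimension, noise values, and function name are assumptions of this sketch; the actual estimator works on full pose states and also extracts model parameters.

```python
import numpy as np

def refine_track(zs, dt, q=1e-6, r=0.04):
    """Constant-velocity Kalman filter over a sequence of coarse scalar
    position fixes zs (e.g. centroids condensed from range images).
    State x = [position, velocity]; returns refined position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # dynamic (process) model
    H = np.array([[1.0, 0.0]])               # we measure position only
    Q = q * np.eye(2)                        # process noise covariance
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    est = []
    for z in zs[1:]:
        # Predict with the dynamic model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the coarse measurement.
        S = H @ P @ H.T + r                  # innovation covariance
        K = (P @ H.T) / S                    # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return est
```

Because the process model is trusted (space dynamics are highly deterministic, per the abstract), the process noise `q` is kept small relative to the measurement noise `r`, so the filter leans on the model to smooth the coarse fixes.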

    Advanced LIDAR-based techniques for autonomous navigation of spaceborne and airborne platforms

    The main goal of this PhD thesis is the development and performance assessment of innovative techniques for the autonomous navigation of aerospace platforms by exploiting data acquired by electro-optical sensors. Specifically, the attention is focused on active LIDAR systems, since they provide a globally higher degree of autonomy than passive sensors. Two different areas of research are addressed, namely the autonomous relative navigation of multi-satellite systems and the autonomous navigation of Unmanned Aerial Vehicles. The overall aim is to provide solutions able to improve estimation accuracy, computational load, and overall robustness and reliability with respect to the techniques available in the literature. In the space field, missions like on-orbit servicing and active debris removal require a chaser satellite to perform autonomous orbital maneuvers in close proximity of an uncooperative space target. In this context, a complete pose determination architecture is proposed which relies exclusively on three-dimensional measurements (point clouds) provided by a LIDAR system, as well as on knowledge of the target geometry. Customized solutions are envisaged at each step of the pose determination process (acquisition, tracking, refinement) to ensure an adequate accuracy level while simultaneously limiting the computational load with respect to other approaches available in the literature. Specific strategies are also foreseen to ensure process robustness by autonomously detecting algorithm failures. Performance analysis is carried out by means of a simulation environment conceived to realistically reproduce LIDAR operation, target geometry, and multi-satellite relative dynamics in close proximity. An innovative method is also presented to design trajectories for target monitoring which are suitable for on-orbit servicing and active debris removal applications, since they satisfy both safety and observation requirements.
On the other hand, the problem of localization and mapping for Unmanned Aerial Vehicles is also tackled, since it is of utmost importance to provide autonomous safe navigation capabilities in mission scenarios that foresee flights in complex environments, such as GPS-denied or otherwise challenging areas. Specifically, original solutions are proposed for the localization and mapping steps based on the integration of LIDAR and inertial data. Also in this case, particular attention is focused on computational load and robustness issues. Algorithm performance is evaluated through off-line simulations carried out on the basis of experimental data gathered by means of a purposely conceived setup within an indoor test scenario.
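
At the core of LIDAR-based pose tracking from point clouds is a rigid alignment step; with known correspondences this is the closed-form Kabsch/Procrustes solution used inside each ICP iteration. The sketch below is a generic illustration of that step, with assumed test geometry, not the thesis's customized pipeline.

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rigid transform (R, t) mapping src points onto dst points
    with known correspondences (the inner alignment step of ICP).
    src, dst: (N, 3) arrays of matched 3D points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Assumed test geometry: a random cloud rotated 0.3 rad about z and shifted.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R_est, t_est = kabsch(src, dst)
```

A full ICP tracker alternates this solve with nearest-neighbor correspondence search; the acquisition/tracking/refinement split in the abstract governs how the initial guess for that loop is obtained.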