    Pose and Shape Reconstruction of a Noncooperative Spacecraft Using Camera and Range Measurements

    Recent interest in on-orbit proximity operations has driven the development of autonomous GNC strategies. Optical navigation, in particular, opens up a wide range of possibilities, as it can provide information not only about the kinematic state but also about the shape of the observed object. Various mission architectures have been either tested in space or studied on Earth. The present study deals with on-orbit relative pose and shape estimation using a monocular camera and a distance sensor. The goal is to develop a filter that estimates an observed satellite's relative position, velocity, attitude, and angular velocity, along with its shape, from measurements obtained by a camera and a distance sensor mounted on board a chaser flying a relative trajectory around the target. The filter's effectiveness is demonstrated through a simulation on a virtual target object. The simulation results, although relevant to a simplified scenario, show that the estimation process is successful and can be considered a promising strategy for a correct and safe docking maneuver.
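
    As an illustration of how the two sensors complement each other, the sketch below shows a minimal measurement model: the camera pixel fixes the line of sight to a target point, while the range reading fixes the depth along it. The pinhole intrinsics and numeric values are illustrative assumptions, not parameters from the study, and the paper's actual filter (which also estimates attitude, angular velocity, and shape) is not reproduced here.

```python
import numpy as np

# Assumed pinhole intrinsics (pixels); illustrative values, not from the study.
FX, FY, CX, CY = 1000.0, 1000.0, 512.0, 512.0

def measure(rel_pos):
    """Predict the camera pixel coordinates and the range reading for a target
    point at relative position rel_pos in the camera frame (z = boresight)."""
    x, y, z = rel_pos
    u = FX * x / z + CX               # pinhole projection, horizontal pixel
    v = FY * y / z + CY               # pinhole projection, vertical pixel
    rho = np.linalg.norm(rel_pos)     # distance-sensor reading
    return np.array([u, v, rho])

def invert(u, v, rho):
    """Recover the relative position from one pixel and one range measurement:
    the pixel fixes the line of sight, the range fixes the depth along it."""
    d = np.array([(u - CX) / FX, (v - CY) / FY, 1.0])  # line-of-sight direction
    return rho * d / np.linalg.norm(d)

# Round-trip check with an assumed relative position, ~50 m down the boresight.
p_true = np.array([2.0, -1.5, 50.0])
print(invert(*measure(p_true)))   # ~ [2.0, -1.5, 50.0]
```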

    Monocular-Based Pose Determination of Uncooperative Known and Unknown Space Objects

    In order to support spacecraft proximity operations, such as on-orbit servicing and spacecraft formation flying, several vision-based techniques exist to determine the relative pose of an uncooperative orbiting object with respect to the spacecraft. Depending on whether the object is known or unknown, a shape model of the orbiting target may have to be constructed autonomously using only optical measurements. In this paper, we investigate two vision-based approaches for pose estimation of uncooperative orbiting targets: one that is general and versatile in that it requires no a priori knowledge of the target, and another that requires knowledge of the target's geometry. The former uses an estimation algorithm for the translational and rotational dynamics to sequentially perform simultaneous pose determination and 3D shape reconstruction of the unknown target, while the latter relies on a known 3D model of the target's geometry to provide a point-by-point pose solution. The architecture and implementation of both methods are presented, and their achievable performance is evaluated through numerical simulations. In addition, a computer-vision processing strategy for feature detection and matching and the Structure from Motion (SfM) algorithm for on-board 3D reconstruction are discussed and validated using a dataset of images synthetically generated according to a chaser/target relative motion in Geosynchronous Orbit (GEO).
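
    A minimal sketch of the model-based branch is given below, assuming the target geometry is known and that 2D-3D correspondences have already been established; a Perspective-n-Point solver then returns a point-by-point pose. The 3D points, camera matrix, and ground-truth pose used to synthesise the detections are illustrative assumptions, not values from the paper's GEO dataset.

```python
import numpy as np
import cv2

# Known 3D feature points of the target in the target body frame (metres).
# These six points are hypothetical, chosen only to be non-coplanar.
object_points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 0.5],
    [0.5, 0.2, 1.2],
], dtype=np.float64)

# Assumed pinhole camera intrinsics (pixels) and no lens distortion.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Ground-truth relative pose, used here only to synthesise consistent detections.
rvec_true = np.array([[0.10], [0.20], [0.30]])   # axis-angle rotation (rad)
tvec_true = np.array([[0.20], [-0.10], [5.00]])  # translation (m), target ~5 m ahead
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true,
                                    camera_matrix, dist_coeffs)

# Point-by-point pose solution from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # relative attitude (target body frame -> camera frame)
    print("estimated rotation:\n", R)
    print("estimated translation (m):", tvec.ravel())  # ~ [0.2, -0.1, 5.0]
```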

    Monocular-Based Pose Determination of Uncooperative Space Objects

    Vision-based methods to determine the relative pose of an uncooperative orbiting object are investigated for applications to spacecraft proximity operations, such as on-orbit servicing, spacecraft formation flying, and small-body exploration. Depending on whether the object is known or unknown, a shape model of the orbiting target may have to be constructed autonomously in real time using only optical measurements. A Simultaneous Estimation of Pose and Shape (SEPS) algorithm that does not require a priori knowledge of the target's pose and shape is presented. It makes use of a novel measurement equation and filter that can efficiently use optical flow information, along with a star tracker, to estimate the target's relative rotational and translational velocity as well as its center of gravity. Depending on the mission constraints, SEPS can be augmented by a more accurate offline, on-board 3D reconstruction of the target shape, which allows the pose to be estimated as for a known target. The use of Structure from Motion (SfM) for this purpose is discussed. A model-based approach for pose estimation of known targets is also presented. The architecture and implementation of both proposed approaches are described, and their performance is evaluated through numerical simulations using a dataset of images synthetically generated according to a chaser/target relative motion in Geosynchronous Orbit (GEO).
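
    The sketch below illustrates the kind of relationship such a measurement equation exploits, under assumed details that are not the paper's exact formulation: the classical rigid-body motion-field model links the optical flow of a tracked feature to the relative translational velocity, the relative angular velocity, and the feature depth for a normalized pinhole camera.

```python
import numpy as np

def predicted_flow(x, y, Z, t, w):
    """Image-plane velocity (xdot, ydot) of a feature at normalized image
    coordinates (x, y) and depth Z along the boresight, for a relative
    translational velocity t = (tx, ty, tz) and relative angular velocity
    w = (wx, wy, wz), both expressed in the camera frame."""
    tx, ty, tz = t
    wx, wy, wz = w
    xdot = (tz * x - tx) / Z + wx * x * y - wy * (1.0 + x**2) + wz * y
    ydot = (tz * y - ty) / Z + wx * (1.0 + y**2) - wy * x * y - wz * x
    return np.array([xdot, ydot])

# Example with illustrative numbers: a feature 20 m away, the chaser closing
# at 0.1 m/s along the boresight, and a slow tumble about the camera x-axis.
print(predicted_flow(x=0.05, y=-0.02, Z=20.0,
                     t=(0.0, 0.0, -0.1), w=(0.02, 0.0, 0.0)))
```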

    Metrological characterization of a vision-based system for relative pose measurements with fiducial marker mapping for spacecrafts

    An improved approach for the measurement of the relative pose between a target and a chaser spacecraft is presented. The selected method is based on a single camera, which can be mounted on the chaser, and a set of fiducial markers, which can be mounted on the external surface of the target. The measurement procedure comprises a closed-form solution of the Perspective-n-Point (PnP) problem, a RANdom SAmple Consensus (RANSAC) procedure, a non-linear local optimization, and a global Bundle Adjustment refinement of the marker map and relative poses. A metrological characterization of the measurement system is performed using an experimental set-up that can impose rotations combined with a linear translation and measure them. The rotation and position measurement errors are calculated against reference instrumentation, and their uncertainties are evaluated by the Monte Carlo method. The experimental laboratory tests highlight the significant improvement provided by the Bundle Adjustment refinement. Moreover, a set of physical parameters that may influence the results is defined, and their correlations with the rotation and position errors and uncertainties are analyzed. Using both quantitative correlation coefficients and qualitative graphical representations, the parameters most significant for the final measurement errors and uncertainties are determined. The obtained results give clear indications and guidance for the design of future measurement systems and for the selection of marker positioning on a satellite surface.
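
    A minimal sketch of the PnP-plus-RANSAC step of such a pipeline is shown below (the non-linear local optimization and Bundle Adjustment stages are omitted): known 3D marker-corner coordinates on the target surface are matched to detected image corners, and a RANSAC wrapper around the PnP solution rejects outlier detections. All numeric values, including the simulated mismatched corner, are illustrative assumptions rather than data from the experimental set-up.

```python
import numpy as np
import cv2

# Hypothetical 3D corner coordinates of two square fiducial markers mounted on
# two different faces of the target (metres, target body frame).
marker_corners_3d = np.array([
    [0.00, 0.00, 0.00], [0.10, 0.00, 0.00], [0.10, 0.10, 0.00], [0.00, 0.10, 0.00],
    [0.30, 0.00, 0.05], [0.30, 0.00, 0.15], [0.30, 0.10, 0.15], [0.30, 0.10, 0.05],
], dtype=np.float64)

# Assumed camera intrinsics (pixels) and no lens distortion.
camera_matrix = np.array([[900.0, 0.0, 640.0],
                          [0.0, 900.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Synthesise corner detections from an assumed ground-truth pose, then corrupt
# one of them to mimic a gross marker-corner mismatch.
rvec_true = np.array([[0.05], [-0.10], [0.02]])
tvec_true = np.array([[0.10], [0.05], [1.50]])
corners_2d, _ = cv2.projectPoints(marker_corners_3d, rvec_true, tvec_true,
                                  camera_matrix, dist_coeffs)
corners_2d = corners_2d.reshape(-1, 2)
corners_2d[5] += np.array([40.0, -35.0])  # simulated outlier detection

# RANSAC around the PnP solution: inlier corners define the pose, and the
# corrupted corner is rejected by the reprojection-error threshold.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    marker_corners_3d, corners_2d, camera_matrix, dist_coeffs,
    iterationsCount=200, reprojectionError=3.0)
if ok:
    print("inlier corner indices:", inliers.ravel())   # index 5 should be absent
    print("estimated translation (m):", tvec.ravel())  # ~ [0.10, 0.05, 1.50]
```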

    Infrared based monocular relative navigation for active debris removal

    In space, vision-based relative navigation systems suffer from the harsh illumination conditions of the target (e.g., eclipse conditions, solar glare). In current Rendezvous and Docking (RvD) missions, most of these issues are addressed by advanced mission-planning techniques (e.g., strict manoeuvre timings). However, such planning is not always feasible for Active Debris Removal (ADR) missions, which involve more unknowns. Thermal infrared technology, by contrast, can operate under any lighting conditions and therefore has the potential to be exploited in the ADR scenario. In this context, this study investigates the benefits and the challenges of infrared-based relative navigation. The infrared environment of ADR is very different from that of terrestrial applications. This study proposes a methodology for modelling this environment in a computationally cost-effective way, creating a simulation environment in which the navigation solution can be tested. Through an intelligent classification of possible target surface coatings, the study is generalised to simulate the thermal environment of space debris in different orbit profiles. By modelling various scenarios, the study also discusses the possible challenges of infrared technology. These theoretical findings were replicated in laboratory conditions reproducing the thermal-vacuum environment of ADR. Using this novel space-debris set-up, the study investigates the behaviour of infrared cues extracted by different techniques and identifies the issue of short-lifespan features in ADR scenarios. Based on these findings, the study proposes two relative navigation methods, distinguished by the degree of target cooperativeness: one for partially cooperative targets and one for uncooperative targets. Both algorithms provide the navigation solution with respect to an online reconstruction of the target. The method for partially cooperative targets handles smooth trajectories by exploiting the subsequent image tracks of features extracted from the first frame. The second algorithm, for uncooperative targets, exploits the target motion (e.g., tumbling) by formulating the problem in terms of a static target and a moving map (i.e., the target structure) within a filtering framework. The optical flow information is related to the target motion derivatives and the target structure. A novel technique that uses the quality of the infrared cues to improve algorithm performance is introduced, and the problem of short measurement durations due to the target's tumbling motion is addressed by an innovative smart initialisation procedure. Both navigation solutions were tested in a number of scenarios using computer simulations and a dedicated laboratory set-up with a real infrared camera. It is shown that these methods perform well as infrared-based navigation solutions using monocular cameras when knowledge of the infrared appearance of the target is limited.
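
    As a hedged illustration of one idea from this abstract, the sketch below weights each tracked infrared feature by its track quality (here, simply the Lucas-Kanade residual error) so that poor or short-lived cues contribute less to a downstream measurement update. The error-to-weight mapping and the toy frames are illustrative choices, not the thesis's actual method.

```python
import numpy as np
import cv2

def weighted_flow(ir_prev, ir_next, max_features=150):
    """Track corner features between two 8-bit infrared frames and return
    their image-plane flow together with a per-feature weight in (0, 1]."""
    pts0 = cv2.goodFeaturesToTrack(ir_prev, maxCorners=max_features,
                                   qualityLevel=0.01, minDistance=5)
    pts1, status, err = cv2.calcOpticalFlowPyrLK(ir_prev, ir_next, pts0, None)
    ok = status.ravel() == 1                      # keep successful tracks only
    flow = (pts1 - pts0).reshape(-1, 2)[ok]       # pixels/frame per feature
    residual = err.ravel()[ok]                    # Lucas-Kanade track error
    weights = 1.0 / (1.0 + residual)              # low residual -> weight near 1
    return flow, weights

# Toy "infrared" frames: a warm blob drifting by (2, 1) pixels between frames.
ir_prev = np.zeros((200, 200), dtype=np.uint8)
ir_prev[80:120, 90:130] = 180
ir_next = np.roll(np.roll(ir_prev, 1, axis=0), 2, axis=1)

flow, w = weighted_flow(ir_prev, ir_next)
print("weighted mean flow (px):", np.average(flow, axis=0, weights=w))  # ~ [2, 1]
```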