6 research outputs found

    Incorporating delayed and multi-rate measurements in navigation filter for autonomous space rendezvous

    Get PDF
    In the scope of space missions involving rendezvous between a chaser and a target, vision-based navigation relies on optical sensors coupled with image processing and computer vision algorithms to obtain a measurement of the target's relative pose. These algorithms usually have high latency, implying that the chaser navigation filter has to fuse delayed and multi-rate measurements. This article has two main contributions: it provides a detailed model of the relative dynamics within the estimation filter, and it proposes a comparison of two delay management techniques suitable for this application. The selected methods are the Filter Recalculation method, which always provides an optimal estimate at the expense of a high computational load, and Larsen's method, which provides a faster solution whose optimality holds only under stronger requirements. The application of these techniques to the space rendezvous problem is discussed and formalized. Finally, the article compares the methods in a Monte-Carlo campaign aimed at assessing whether, despite the performance loss due to its sub-optimality, Larsen's method still enables robust tracking of the target state.
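    As an illustration of the Filter Recalculation idea named above (not code from the article): the filter buffers its past priors and measurements, and when a pose measurement time-stamped in the past arrives, it rewinds to that instant, fuses the late measurement, and replays every subsequent step. The sketch below assumes a plain linear Kalman filter with fixed F, Q, H, R matrices; all names are illustrative.

import numpy as np


class DelayedMeasurementKF:
    """Linear Kalman filter able to absorb a late measurement by re-filtering (sketch)."""

    def __init__(self, x0, P0, F, Q, H, R):
        self.x, self.P = x0.copy(), P0.copy()
        self.F, self.Q, self.H, self.R = F, Q, H, R
        self.log = []  # one record per step: [prior state, prior covariance, measurements]

    def _predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def _update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(self.x.size) - K @ self.H) @ self.P

    def step(self, measurements=()):
        """Regular cycle: predict, store the prior for later replay, fuse on-time data."""
        self._predict()
        self.log.append([self.x.copy(), self.P.copy(), list(measurements)])
        for z in measurements:
            self._update(z)

    def fuse_delayed(self, step_index, z_late):
        """Filter Recalculation: rewind to the step the late measurement refers to,
        fuse it there, then replay every later step with its stored measurements."""
        self.log[step_index][2].append(z_late)
        self.x, self.P = self.log[step_index][0].copy(), self.log[step_index][1].copy()
        for i in range(step_index, len(self.log)):
            if i > step_index:
                self._predict()
                self.log[i][0], self.log[i][1] = self.x.copy(), self.P.copy()
            for z in self.log[i][2]:
                self._update(z)

    Because every step stores a prior, memory grows with the buffer length; in practice only the steps spanning the maximum expected vision latency would need to be retained.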

    Vision-based navigation for autonomous space rendezvous with non-cooperative targets

    Get PDF
    This study addresses the issue of vision-based navigation for space rendezvous with non-cooperative targets. After a brief description of the scenario and its peculiarities, the theory underlying monocular edge-based tracking for pose estimation is recalled, and an innovative tracking algorithm is formally developed and implemented. This algorithm is coupled with a dynamic Kalman filter propagating the relative dynamics that underlie a space rendezvous. The navigation filter increases the robustness of target position and attitude estimation, and allows the estimation of the target's translational velocity and rotation rate using only pose measurements. Moreover, the filter implements a computationally efficient delay management technique that allows merging the delayed and infrequent measurements typical of vision-based navigation. The performance of the algorithm is tested in different scenarios with high-fidelity synthetic images.
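    For context on the dynamics propagated by such a filter (the exact model used in the study is not detailed in this abstract): close-range relative translation is commonly described by the Clohessy-Wiltshire (Hill) equations, whose closed-form state transition matrix could serve as the prediction model. A minimal sketch, assuming a circular target orbit and a Hill frame with x radial, y along-track and z cross-track; all numbers are illustrative.

import numpy as np


def cw_state_transition(n, dt):
    """6x6 transition matrix for the state [x, y, z, vx, vy, vz] in the Hill frame,
    given the target mean motion n [rad/s] and the propagation step dt [s]."""
    s, c = np.sin(n * dt), np.cos(n * dt)
    return np.array([
        [4 - 3 * c,        0, 0,      s / n,            2 * (1 - c) / n,          0],
        [6 * (s - n * dt), 1, 0,     -2 * (1 - c) / n,  (4 * s - 3 * n * dt) / n, 0],
        [0,                0, c,      0,                0,                        s / n],
        [3 * n * s,        0, 0,      c,                2 * s,                    0],
        [-6 * n * (1 - c), 0, 0,     -2 * s,            4 * c - 3,                0],
        [0,                0, -n * s, 0,                0,                        c],
    ])


# Example: propagate a relative state over one 1 s filter step in LEO.
n_leo = 0.00113                                         # rad/s, ~92 min orbit (illustrative)
F = cw_state_transition(n_leo, 1.0)
x0 = np.array([100.0, -500.0, 20.0, 0.0, 0.1, 0.0])     # m and m/s (illustrative)
x1 = F @ x0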

    Rendez-vous autonomes basés vision avec cibles non coopératives (Vision-based autonomous rendezvous with non-cooperative targets)

    No full text
    The aim of this thesis is to propose a full vision-based solution to enable autonomous navigation of a chaser spacecraft (S/C) during close-proximity operations in space rendezvous (RDV) with a non-cooperative target, using a visible monocular camera. Autonomous rendezvous is a key capability to answer major challenges in space engineering, such as Active Debris Removal (ADR) and On-Orbit Servicing (OOS). ADR aims at removing the space debris, in the low-Earth-orbit protected region, that are most likely to lead to future collisions and feed the Kessler syndrome, thus increasing the risk for operational spacecraft. OOS includes inspection, maintenance, repair, assembly, refuelling and life-extension services for orbiting S/C or structures. During an autonomous RDV with a non-cooperative target, i.e., a target that does not assist the chaser in acquisition, tracking and rendezvous operations, the chaser must estimate the target's state on board autonomously. Autonomous RDV operations require accurate, up-to-date measurements of the relative pose (i.e., position and attitude) of the target, and the combination of camera sensors with tracking algorithms can provide a cost-effective solution.
The research has been divided into three main studies: the development of an algorithm enabling the initial pose acquisition (i.e., the determination of the pose without any prior knowledge of the pose of the target at previous instants), the development of a recursive tracking algorithm (i.e., an algorithm which exploits the information about the state of the target at the previous instant to compute the pose update at the current instant), and the development of a navigation filter integrating the measurements coming from different sensors and/or algorithms, with different rates and delays.
Concerning the pose acquisition phase, a novel detection algorithm has been developed to enable fast pose initialization. An approach is proposed to fully retrieve the object's pose using a set of invariants and geometric moments (i.e., global features) computed from the silhouette images of the target. Global features synthesize the content of the image into a vector of a few descriptors whose values change as a function of the target's relative pose. A database of global features is pre-computed offline using the geometric model of the target in order to cover the whole solution space. At run time, global features are computed on the currently acquired image and compared with the database. Different sets of global features have been compared in order to select the best-performing ones, resulting in a robust detection algorithm with a low computational load.
Once an initial estimate of the pose is acquired, a recursive tracking algorithm is initialized. The algorithm relies on the detection and matching of the observed silhouette contours with the 3D geometric model of the target, which is projected into the image frame using the pose estimated at the previous instant. The sum of the distances between each projected model point and its matched image point is then written as a non-linear function of the unknown pose parameters, and the minimization of this cost function yields the pose estimate at the current instant. This algorithm provides fast and very accurate measurements of the relative pose of the target. However, like other recursive trackers, it is prone to divergence; the detection algorithm is therefore run in parallel with the tracker in order to provide corrected measurements in case of tracker divergence. The measurements are then integrated into the chaser navigation filter to provide an optimal and robust estimate. Vision-based navigation algorithms provide only pose measurements; the filter nevertheless allows the estimation of the target's translational velocity and rotation rate, and merges the delayed and infrequent measurements typical of vision-based navigation through a computationally efficient delay management technique.
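    A minimal sketch of the pose-refinement step described above, assuming the contour matching has already produced pairs of 3D model points and 2D image points (the edge search, outlier handling and robust weighting of the thesis are omitted, and all names and camera parameters are illustrative):

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_3d, pose, fx, fy, cx, cy):
    """Pinhole projection of Nx3 target-frame points; pose = [rotation vector, translation]."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    pc = points_3d @ R.T + pose[3:]                       # points in the camera frame
    return np.column_stack((fx * pc[:, 0] / pc[:, 2] + cx,
                            fy * pc[:, 1] / pc[:, 2] + cy))


def refine_pose(pose_prev, model_pts, image_pts, fx, fy, cx, cy):
    """Start from the previous-instant pose and minimize the summed distances
    between the projected model points and their matched image points."""
    def residuals(pose):
        return (project(model_pts, pose, fx, fy, cx, cy) - image_pts).ravel()
    return least_squares(residuals, pose_prev, method="lm").x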

    Incorporating delayed and multi-rate measurements in navigation filter for autonomous space rendezvous

    No full text
    In the scope of space missions involving rendezvous between a chaser and a target, vision-based navigation relies on optical sensors coupled with image processing and computer vision algorithms to obtain a measurement of the target's relative pose. These algorithms usually have high latency, implying that the chaser navigation filter has to fuse delayed and multi-rate measurements. This article has two main contributions: it provides a detailed model of the relative dynamics within the estimation filter, and it proposes a comparison of two delay management techniques suitable for this application. The selected methods are the Filter Recalculation method, which always provides an optimal estimate at the expense of a high computational load, and Larsen's method, which provides a faster solution whose optimality holds only under stronger requirements. The application of these techniques to the space rendezvous problem is discussed and formalized. Finally, the article compares the methods in a Monte-Carlo campaign aimed at assessing whether, despite the performance loss due to its sub-optimality, Larsen's method still enables robust tracking of the target state.
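    For the second technique named in this abstract, a simplified sketch of a Larsen-style correction, restricted to the special case in which no other measurement is fused during the delay interval (the situation in which the method remains exact): the late measurement is given the gain it would have received at its own time, and the resulting correction is propagated to the present through the state transition matrix. Matrices and names are illustrative, not the article's implementation.

import numpy as np


def larsen_correction(x_now, P_now, x_prior_at_s, P_prior_at_s, z_s, F, H, R, n_steps):
    """Fuse a measurement z_s taken n_steps in the past into the current estimate,
    assuming no other measurement was fused during those n_steps."""
    # Gain the measurement would have received at its own time s
    S = H @ P_prior_at_s @ H.T + R
    K_s = P_prior_at_s @ H.T @ np.linalg.inv(S)
    innovation = z_s - H @ x_prior_at_s
    # Propagate the correction from time s to the present
    M = np.linalg.matrix_power(F, n_steps)
    x_corr = x_now + M @ K_s @ innovation
    P_corr = P_now - M @ K_s @ H @ P_prior_at_s @ M.T
    return x_corr, P_corr

    In the general case, Larsen's formulation also carries the correction through the intermediate filter updates; because those gains were computed without the delayed information, the result only approximates the recalculated estimate, which is the source of the stronger optimality requirements mentioned above.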

    Global Descriptors for Visual Pose Estimation of a Non-Cooperative Target in Space Rendezvous

    No full text
    This paper proposes methods based on global descriptors to estimate the pose of a known object using a monocular camera, in the context of space rendezvous between an autonomous spacecraft and a non-cooperative target. These methods estimate the pose by detection, i.e., they require no prior information about the pose of the observed object, making them suitable for initial pose acquisition and for monitoring faults in other on-board estimators. An approach is presented to fully retrieve the object's pose using a pre-computed set of invariants and geometric moments. Three classes of global invariant features are analyzed, based on complex moments, Zernike moments and Fourier descriptors. The robustness of the different invariants is tested under various conditions, and their performance is discussed and compared. The method offers a fast and robust solution for pose estimation by detection, with a low computational complexity that is compatible with space-qualified processors.
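    A sketch of the detection-by-lookup scheme described above: descriptors are pre-computed for silhouettes rendered over a grid of candidate poses, and the descriptor of the observed silhouette is matched to its nearest neighbour. Hu moment invariants from OpenCV are used here as a simple stand-in (the paper evaluates complex moments, Zernike moments and Fourier descriptors), render_silhouette is a hypothetical helper that rasterizes the target model at a given pose, and the lookup alone recovers only the viewpoint; the paper additionally uses non-invariant geometric moments to pin down the remaining pose components.

import cv2
import numpy as np


def silhouette_descriptor(mask):
    """Scale/translation/rotation-invariant descriptor of a binary silhouette image."""
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).ravel()
    # Log-scale the invariants so that they have comparable magnitudes
    return np.sign(hu) * np.log10(np.abs(hu) + 1e-30)


def build_database(candidate_poses, render_silhouette):
    """Offline step: one descriptor per candidate pose of the target model."""
    return np.array([silhouette_descriptor(render_silhouette(p)) for p in candidate_poses])


def estimate_pose_by_detection(mask, database, candidate_poses):
    """Online step: nearest-neighbour lookup of the observed silhouette descriptor."""
    d = silhouette_descriptor(mask)
    best = np.argmin(np.linalg.norm(database - d, axis=1))
    return candidate_poses[best]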

    Robust Navigation Solution for Vision-Based Autonomous Rendezvous

    Get PDF
    This paper proposes the Thales Alenia Space vision-based navigation solution for close-proximity operations in autonomous space rendezvous with non-cooperative targets. The proposed solution covers all the phases of the navigation. First, a neural network robustly extracts the target silhouette from a complex background. Then, the binary silhouette is used to retrieve the initial relative pose using a detection algorithm: we propose an innovative approach to retrieve the object's pose using a pre-computed set of invariants and geometric moments. The observation is extended over a set of consecutive frames in order to allow the rejection of outlying measurements and to obtain a robust pose initialization. Once an initial estimate of the pose is acquired, a recursive tracking algorithm based on the extraction and matching of the observed silhouette contours with the 3D geometric model of the target is initialized. The detection algorithm is run in parallel with the tracker in order to correct the tracking in case of diverging measurements. The measurements are then integrated into a dynamic filter, which increases the robustness of target pose estimation, allows the estimation of the target's translational velocity and rotation rate, and implements a computationally efficient delay management technique for merging delayed and infrequent measurements. The overall navigation solution has a low computational load, which makes it compatible with space-qualified microprocessors. The solution is tested and validated in different close-proximity scenarios using synthetic images generated with the Thales Alenia Space rendering engine SpiCam.
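    A schematic sketch of the supervision logic in this pipeline: the tracker runs every frame, the detector re-acquires the pose whenever the track looks divergent, and the accepted pose is pushed to the navigation filter. Every component and the divergence threshold below are illustrative placeholders rather than the Thales Alenia Space design.

def navigation_step(frame, tracker, detector, nav_filter, max_residual_px=5.0):
    """One frame of the vision-based navigation loop (placeholder components)."""
    pose, residual_px = tracker.track(frame)          # recursive contour tracking
    if residual_px > max_residual_px:                 # crude divergence test
        pose = detector.detect(frame)                 # re-acquire the pose by detection
        tracker.reset(pose)                           # re-initialize the tracker
    nav_filter.fuse_pose(pose, frame.timestamp)       # delayed/multi-rate fusion
    return nav_filter.state()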