A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations
Automated satellite proximity operations are an increasingly relevant area of mission operations for the US Air Force, with the potential to significantly enhance space situational awareness (SSA). Simultaneous localization and mapping (SLAM) is a computer vision method of constructing and updating a 3D map while keeping track of the location and orientation of the imaging agent inside the map. The main objective of this research effort is to design a monocular SLAM method customized for the space environment. The method developed in this research will be implemented in an indoor proximity operations simulation laboratory. A run-time analysis is performed, showing near real-time operation. The method is verified by comparing SLAM results to truth vertical-rotation data from a CubeSat air-bearing testbed. This work enables control and testing of simulated proximity operations hardware in a laboratory environment. Additionally, this research lays the foundation for autonomous satellite proximity operations with unknown targets and minimal additional size, weight, and power requirements, creating opportunities for numerous mission concepts not previously available.
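The abstract does not detail how the SLAM output is compared against the testbed's truth rotation data, but the kind of check it describes can be illustrated with a standard Kabsch/SVD fit of the rotation between two sets of matched map points, from which the vertical-axis angle is read off. Everything below (the point cloud, the 10-degree test angle) is synthetic, not data from the paper:

```python
import numpy as np

def kabsch_rotation(P, Q):
    """Least-squares rotation aligning point set P to Q (both N x 3).
    Classic Kabsch/SVD solution; assumes the points are already matched."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def yaw_deg(R):
    """Rotation angle about the vertical (z) axis, in degrees."""
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))

# Synthetic stand-in for the testbed check: rotate random map points
# by 10 deg about the vertical axis, then recover that angle.
rng = np.random.default_rng(0)
P = rng.standard_normal((50, 3))
a = np.radians(10.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T
R_est = kabsch_rotation(P, Q)
print(round(yaw_deg(R_est), 3))  # 10.0
```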
Satellite Articulation Sensing using Computer Vision
Autonomous on-orbit satellite servicing benefits from an inspector satellite that can gain as much information as possible about the primary satellite. This includes the performance of articulated objects such as solar arrays, antennas, and sensors. A method for building an articulated model from monocular imagery using tracked feature points and the known relative inspection route is developed. Two methods are also developed for tracking the articulation of a satellite in real time given an articulated model, using tracked feature points and image silhouettes, respectively. Performance is evaluated for multiple inspection routes, and the effect of inspection route noise is assessed. Additionally, a satellite model is built and used to collect stop-motion images simulating articulated motion over an inspection route under simulated space illumination. The images are used in the silhouette articulation tracking method, and successful tracking is demonstrated qualitatively. Finally, a human pose tracking algorithm is modified to track satellite articulation, demonstrating that human tracking methods are applicable to satellite articulation tracking when an articulated model is available.
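The silhouette-based tracking idea can be sketched as a search over candidate articulation angles, each scored by how well a rendered silhouette overlaps the observed one. The renderer below is a toy stand-in (a thin rectangle rotated about the image centre), not the authors' satellite model, and the IoU scoring is a common choice rather than the paper's stated metric:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two binary silhouette masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def render_panel(angle_deg, size=64):
    """Toy stand-in for a model renderer: a thin 'solar array' rotated
    about the image centre. A real system would render the articulated
    satellite model instead."""
    ys, xs = np.mgrid[:size, :size] - size // 2
    a = np.radians(angle_deg)
    u = xs * np.cos(a) + ys * np.sin(a)   # along-panel coordinate
    v = -xs * np.sin(a) + ys * np.cos(a)  # across-panel coordinate
    return (np.abs(u) < 25) & (np.abs(v) < 5)

def track_angle(observed, candidates):
    """Pick the articulation angle whose rendered silhouette best
    matches the observed silhouette."""
    scores = [iou(render_panel(c), observed) for c in candidates]
    return candidates[int(np.argmax(scores))]

observed = render_panel(20.0)   # pretend this came from the camera
best = track_angle(observed, np.arange(0.0, 90.0, 5.0))
print(best)  # 20.0
```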
Infrared based monocular relative navigation for active debris removal
In space, vision-based relative navigation systems suffer from the harsh illumination conditions of the target (e.g., eclipse conditions, solar glare). In current Rendezvous and Docking (RvD) missions, most of these issues are addressed by advanced mission planning techniques (e.g., strict manoeuvre timings). However, such planning is not always feasible for Active Debris Removal (ADR) missions, which involve more unknowns. Fortunately, thermal infrared technology can operate under any lighting conditions and therefore has the potential to be exploited in the ADR scenario. In this context, this study investigates the benefits and challenges of infrared-based relative navigation. The infrared environment of ADR is very different from that of terrestrial applications. This study proposes a computationally cost-effective methodology for modelling this environment, yielding a simulation environment in which the navigation solution can be tested. Through an intelligent classification of possible target surface coatings, the study is generalised to simulate the thermal environment of space debris in different orbit profiles. By modelling various scenarios, the study also discusses the possible challenges of infrared technology. These theoretical findings were replicated in laboratory conditions reproducing the thermal-vacuum environment of ADR. Using this novel space debris set-up, the study investigates the behaviour of infrared cues extracted by different techniques and identifies the issue of short-lifespan features in ADR scenarios. Based on these findings, the study proposes two relative navigation methods, chosen according to the degree of target cooperativeness: one for partially cooperative targets and one for uncooperative targets. Both algorithms provide the navigation solution with respect to an online reconstruction of the target.
The method for partially cooperative targets provides a solution for smooth trajectories by exploiting the subsequent image tracks of features extracted from the first frame. The second algorithm, for uncooperative targets, exploits the target motion (e.g. tumbling) by formulating the problem in terms of a static target and a moving map (i.e. the target structure) within a filtering framework. The optical flow information is related to the target motion derivatives and the target structure. A novel technique that uses the quality of the infrared cues to improve the algorithm's performance is introduced. The problem of short measurement durations due to the target's tumbling motion is addressed by an innovative smart initialisation procedure. Both navigation solutions were tested in a number of different scenarios using computer simulations and a dedicated laboratory set-up with a real infrared camera. It is shown that these methods perform well as infrared-based navigation solutions using monocular cameras when knowledge of the infrared appearance of the target is limited.
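The classification of target surface coatings rests on standard grey-body radiometry: a coating's emissivity scales the Planck black-body radiance in the sensor's infrared band, so coatings at the same temperature can appear very different to an infrared camera. A minimal sketch of that physics, assuming the 8-14 um LWIR band and two invented coating emissivities (the study's actual coating classes and band are not given in the abstract):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck(lam, T):
    """Black-body spectral radiance [W sr^-1 m^-3] at wavelength lam [m]."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def band_radiance(T, emissivity=1.0, lam_lo=8e-6, lam_hi=14e-6, n=2000):
    """Grey-body radiance integrated over an infrared band (midpoint rule),
    for a coating of the given emissivity."""
    dlam = (lam_hi - lam_lo) / n
    lam = lam_lo + (np.arange(n) + 0.5) * dlam
    return emissivity * planck(lam, T).sum() * dlam

# Two hypothetical debris coatings: polished metal (low emissivity) at
# 300 K vs. a painted surface (high emissivity) at a colder 250 K. The
# painted surface is still the brighter infrared cue despite being colder.
print(band_radiance(300.0, emissivity=0.05))
print(band_radiance(250.0, emissivity=0.90))
```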
Relative Navigation Strategy About Unknown and Uncooperative Targets
In recent years, space debris has become a threat to satellites operating in low Earth orbit. Even if mitigation guidelines are applied, its amount will still increase over the course of the century. As a consequence, active debris removal missions and on-orbit servicing missions have gained momentum at both the academic and industrial levels. The crucial step in both scenarios is the capability of navigating in the neighborhood of a target resident space object. This problem has been tackled many times in the literature, with varying levels of target cooperativeness required. While several techniques are available when the target is cooperative or its shape is known, no approach is mature enough to deal with uncooperative and unknown targets. This paper proposes a hybrid method, called Coarse Model-Based Relative Navigation (CoMBiNa), to tackle this issue. The main idea of this algorithm is to split the mission into two phases. During the first phase, the algorithm constructs a coarse model of the target. In the second phase, this coarse model is used as a reference for a relative navigation technique, effectively shifting the focus toward state and inertia estimation. In addition, this paper proposes a strategy that leverages the structure of the selected navigation method to detect and reject outliers. To conclude, CoMBiNa is tested in a simulated environment to highlight its benefits and shortcomings, while also assessing its applicability on a resource-limited single-board computer.
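The abstract does not specify the outlier-rejection mechanism, but a common filter-based approach of the kind it alludes to is chi-square gating: a measurement is rejected when the squared Mahalanobis distance of its innovation exceeds a chi-square quantile. A minimal sketch with an invented innovation covariance and invented measurements:

```python
import numpy as np

def gate_measurements(innovations, S, threshold=9.21):
    """Flag which measurement innovations pass a chi-square gate.
    `innovations` is N x 2, S the 2x2 innovation covariance;
    9.21 is roughly the 99% chi-square quantile for 2 DoF."""
    Sinv = np.linalg.inv(S)
    d2 = np.einsum('ni,ij,nj->n', innovations, Sinv, innovations)
    return d2 <= threshold

S = np.diag([0.04, 0.04])        # assumed innovation covariance
nu = np.array([[0.1, -0.05],     # consistent measurement
               [0.0,  0.02],     # consistent measurement
               [1.5, -1.2]])     # gross outlier
print(gate_measurements(nu, S))  # [ True  True False]
```

In a navigation filter the gate runs once per measurement update, so rejected outliers never corrupt the state estimate.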
Monocular-Based Pose Determination of Uncooperative Space Objects
Vision-based methods to determine the relative pose of an uncooperative orbiting object are investigated for spacecraft proximity operations, such as on-orbit servicing, spacecraft formation flying, and small-body exploration. Depending on whether the object is known or unknown, a shape model of the orbiting target may have to be constructed autonomously in real time using only optical measurements. The Simultaneous Estimation of Pose and Shape (SEPS) algorithm, which requires no a priori knowledge of the pose and shape of the target, is presented. It makes use of a novel measurement equation and filter that efficiently combine optical flow information with star tracker measurements to estimate the target's relative rotational and translational velocity as well as its center of gravity. Depending on the mission constraints, SEPS can be augmented by a more accurate offline, on-board 3D reconstruction of the target shape, which allows the pose to be estimated as for a known target. The use of Structure from Motion (SfM) for this purpose is discussed. A model-based approach for pose estimation of known targets is also presented. The architecture and implementation of both proposed approaches are described, and their performance is evaluated through numerical simulations using a dataset of images synthetically generated according to a chaser/target relative motion in Geosynchronous Orbit (GEO).
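The coupling between optical flow and the target's angular velocity and center of gravity can be illustrated with a simplified planar model (this is not the SEPS measurement equation, which the abstract does not give): for rigid rotation at rate w about a centre c, each flow vector satisfies v = w * zhat x (p - c), which is linear in (w, w*cx, w*cy) and can be solved by least squares:

```python
import numpy as np

def flow_to_spin(points, flows):
    """Recover in-plane angular rate w and centre of rotation c from
    optical-flow vectors of a rigidly rotating target.
    Model: vx = -w*(py - cy), vy = w*(px - cx); linear least squares
    in the parameters (w, w*cx, w*cy)."""
    n = len(points)
    A = np.zeros((2 * n, 3))
    b = np.zeros(2 * n)
    for i, ((px, py), (vx, vy)) in enumerate(zip(points, flows)):
        A[2 * i] = [-py, 0.0, 1.0]
        b[2 * i] = vx
        A[2 * i + 1] = [px, -1.0, 0.0]
        b[2 * i + 1] = vy
    (w, bx, by), *_ = np.linalg.lstsq(A, b, rcond=None)
    return w, np.array([bx, by]) / w

# Synthetic check: features spinning at 0.3 rad/s about c = (2, 1).
rng = np.random.default_rng(1)
pts = rng.uniform(-5, 5, size=(30, 2))
c_true, w_true = np.array([2.0, 1.0]), 0.3
rel = pts - c_true
flo = w_true * np.column_stack([-rel[:, 1], rel[:, 0]])
w_est, c_est = flow_to_spin(pts, flo)
print(w_est, c_est)  # ~0.3 and ~(2, 1)
```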
Survey of computer vision algorithms and applications for unmanned aerial vehicles
This paper presents a complete review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, increased computational capabilities, and advances in computer vision techniques, has enabled important progress in UAV technologies and applications. In particular, computer vision technologies integrated in UAVs make it possible to develop cutting-edge solutions to aerial perception difficulties, such as visual navigation, obstacle detection and avoidance, and aerial decision-making. These expert technologies have opened a wide spectrum of UAV applications beyond classic military and defense purposes. Unmanned Aerial Vehicles and Computer Vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications have been developed to enhance autonomous UAV positioning or to avoid aerial collisions automatically, among others. The presented survey therefore focuses on artificial perception applications that represent important recent advances in the expert system field related to Unmanned Aerial Vehicles. The most significant advances in this field, able to address fundamental technical limitations such as visual odometry, obstacle detection, and mapping and localization, are presented and analyzed based on their capabilities and potential utility. Moreover, the applications and UAVs are divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).
FPGA-based multi-sensor relative navigation in space: Preliminary analysis in the framework of the I3DS H2020 project
The Horizon 2020 Integrated 3D Sensors (I3DS) project brings together the following entities from across Europe: THALES ALENIA SPACE - France / Italy / UK / Spain, SINTEF (Norway), TERMA (Denmark), COSINE (Netherlands), PIAP Space (Poland), HERTZ Systems (Poland), and Cranfield University (UK). I3DS is co-funded under the Horizon 2020 EU research and development programme and is part of the Strategic Research Cluster on Space Robotics Technologies. The ambition of I3DS is to produce a standardised modular Inspector Sensor Suite (INSES) for autonomous orbital and planetary applications in future space missions. Orbital applications encompass activities such as on-orbit servicing and repair, space rendezvous and docking, collision avoidance, and active debris removal (ADR). Planetary applications include simultaneous localisation and mapping (SLAM) for planetary exploration and general navigation in unknown environments for scientific purposes. These envisaged space applications can be tackled by exploiting the flexibility, high performance, and long product life of FPGAs. Conventional FPGAs are, however, subject to Single Event Upsets (SEUs) caused by space radiation, which can lead to failure. Therefore, space-graded FPGAs, such as those developed by Xilinx, are targeted within the I3DS project. Currently, the main use of the FPGA within the development of this robust end-to-end multi-sensor suite is for navigation and data pre-processing. The aim of this paper is to assess the capabilities of FPGAs to carry out complex operations, such as running navigation algorithms for space applications. The motivation for the development of the on-board software architecture is as follows: raw data acquired from the various sensors – including, among others, a high-resolution camera, a stereo camera and a LiDAR – is pre-processed to ensure the provision of robust and optimised inputs to 3D navigation algorithms.
Noise reduction and conversion into formats suitable for the navigation algorithms are therefore the main aims of the data pre-processing. Techniques adopted in this phase include outlier rejection and data dimensionality reduction for large point clouds, e.g. from LiDAR, and geometric and radiometric correction of the camera images. The pre-processed data then feeds state-of-the-art relative navigation algorithms, including Generalised Iterative Closest Point (GICP) for dense 3D point clouds, relative positioning with fiducial markers, and visual odometry. The system environment for the preliminary operation is a test-bench setup formed by a standard desktop computer and a non-space-graded FPGA (Xilinx UltraZed-EG). This FPGA was chosen for its similarity to space-graded boards also provided by Xilinx. Experimental tests on the algorithms are being performed in the framework of the validation campaign for the I3DS project. Preliminary results indicate that the data pre-processing can be efficiently carried out on the FPGA board.
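One of the pre-processing steps mentioned, data dimensionality reduction for large point clouds, is commonly implemented as voxel-grid downsampling: all points falling in the same cubic cell are replaced by their centroid before the cloud is passed to an algorithm such as GICP. A minimal NumPy sketch (the voxel size and the synthetic cloud are arbitrary, not I3DS parameters):

```python
import numpy as np

def voxel_downsample(points, voxel=0.1):
    """Reduce a point cloud by averaging all points that fall in the
    same cubic voxel of side `voxel`."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    n_vox = inv.max() + 1
    sums = np.zeros((n_vox, points.shape[1]))
    np.add.at(sums, inv, points)
    counts = np.bincount(inv, minlength=n_vox).reshape(-1, 1)
    return sums / counts

rng = np.random.default_rng(2)
cloud = rng.uniform(0.0, 1.0, size=(10000, 3))   # dense synthetic scan
small = voxel_downsample(cloud, voxel=0.25)
print(cloud.shape, '->', small.shape)  # (10000, 3) -> (64, 3)
```

Averaging within each voxel (rather than keeping one representative point) also suppresses sensor noise, which complements the outlier rejection described above.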
Advanced LIDAR-based techniques for autonomous navigation of spaceborne and airborne platforms
The main goal of this PhD thesis is the development and performance assessment of innovative techniques for the autonomous navigation of aerospace platforms that exploit data acquired by electro-optical sensors. The attention is focused on active LIDAR systems, since they generally provide a higher degree of autonomy than passive sensors. Two areas of research are addressed: the autonomous relative navigation of multi-satellite systems and the autonomous navigation of Unmanned Aerial Vehicles. The overall aim is to provide solutions that improve estimation accuracy, computational load, and overall robustness and reliability with respect to the techniques available in the literature.
In the space field, missions such as on-orbit servicing and active debris removal require a chaser satellite to perform autonomous orbital maneuvers in close proximity to an uncooperative space target. In this context, a complete pose determination architecture is proposed that relies exclusively on three-dimensional measurements (point clouds) provided by a LIDAR system and on knowledge of the target geometry. Customized solutions are envisaged at each step of the pose determination process (acquisition, tracking, refinement) to ensure an adequate accuracy level while limiting the computational load with respect to other approaches available in the literature. Specific strategies are also foreseen to ensure process robustness by autonomously detecting algorithm failures. Performance analysis is carried out by means of a simulation environment conceived to realistically reproduce LIDAR operation, target geometry, and multi-satellite relative dynamics in close proximity. An innovative method to design trajectories for target monitoring is also presented; these trajectories are well suited to on-orbit servicing and active debris removal applications since they satisfy both safety and observation requirements.
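Point-cloud pose tracking of the kind described is typically built around iterated alignment steps such as ICP. The sketch below is one point-to-point ICP iteration (not the thesis's customized tracking algorithm): match each source point to its nearest destination point, then solve the best-fit rigid transform by the Kabsch/SVD method; real pipelines iterate this to convergence with outlier rejection.

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration returning (R, t) that moves
    `src` toward `dst` (both N x 3)."""
    # Nearest-neighbour association (brute force for clarity).
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[np.argmin(d, axis=1)]
    # Best-fit rotation and translation for the matched pairs (Kabsch).
    ms, md = src.mean(0), matched.mean(0)
    U, _, Vt = np.linalg.svd((src - ms).T @ (matched - md))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = md - R @ ms
    return R, t

def nn_err(a, b):
    """Mean squared nearest-neighbour distance from a to b."""
    return (np.linalg.norm(a[:, None] - b[None], axis=2).min(1) ** 2).mean()

# Synthetic check: a scan offset from its reference by a small motion.
rng = np.random.default_rng(3)
src = rng.uniform(0.0, 5.0, size=(40, 3))
a = np.radians(4.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.05, -0.02, 0.1])
R, t = icp_step(src, dst)
moved = src @ R.T + t
print(nn_err(moved, dst) < nn_err(src, dst))  # True: one step reduces the error
```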
On the other hand, the problem of localization and mapping for Unmanned Aerial Vehicles is also tackled, since it is of utmost importance for providing autonomous safe navigation capabilities in mission scenarios that foresee flights in complex environments, such as GPS-denied or otherwise challenging ones. Original solutions are proposed for the localization and mapping steps based on the integration of LIDAR and inertial data. In this case too, particular attention is paid to computational load and robustness. Algorithm performance is evaluated through offline simulations carried out on experimental data gathered with a purpose-built setup in an indoor test scenario.
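A minimal illustration of why LIDAR and inertial data complement each other, reduced to a single heading angle: a complementary filter blends drift-prone gyro integration (smooth, high rate) with drift-free but noisy scan-matching headings. This is a textbook fusion scheme, not the thesis's algorithm, and all rates, noise levels, and the gyro bias below are invented:

```python
import numpy as np

def complementary_filter(gyro_rates, lidar_headings, dt=0.01, alpha=0.98):
    """Fuse gyro rate measurements with absolute heading measurements.
    `alpha` weights the inertial prediction; (1 - alpha) pulls the
    estimate toward the drift-free LIDAR heading."""
    theta = lidar_headings[0]
    out = []
    for w, z in zip(gyro_rates, lidar_headings):
        theta = alpha * (theta + w * dt) + (1.0 - alpha) * z
        out.append(theta)
    return np.array(out)

# Constant 0.5 rad/s turn; the gyro carries a 0.05 rad/s bias, the
# LIDAR heading is unbiased but noisy.
rng = np.random.default_rng(4)
t = np.arange(0.0, 10.0, 0.01)
truth = 0.5 * t
gyro = 0.5 + 0.05 + 0.002 * rng.standard_normal(t.size)
lidar = truth + 0.05 * rng.standard_normal(t.size)
est = complementary_filter(gyro, lidar)
gyro_only = (gyro * 0.01).cumsum()
# The fused estimate ends far closer to truth than pure integration.
print(abs(est[-1] - truth[-1]), abs(gyro_only[-1] - truth[-1]))
```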