High-speed multi-dimensional relative navigation for uncooperative space objects
This work proposes a high-speed Light Detection and Ranging (LIDAR) based navigation architecture suited to uncooperative relative space navigation applications. In contrast to current solutions that exploit 3D LIDAR data, our architecture transforms the odometry problem from 3D space into multiple 2.5D ones and completes it with a recursive filtering scheme. Trials evaluate several current state-of-the-art 2D keypoint detection and local feature description methods, as well as recursive filtering techniques, on a number of simulated but credible scenarios involving a satellite model developed by Thales Alenia Space (France). The best performance is attained by the 2D keypoint detector Good Features to Track (GFTT) combined with the feature descriptor KAZE, paired with either the H∞ or the Kalman recursive filter. Experimental results demonstrate that, compared with current algorithms, the GFTT/KAZE combination is highly appealing, affording one order of magnitude more accurate odometry at a very low processing burden which, depending on the competitor method, may exceed one order of magnitude faster computation.
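As a rough illustration of the recursive filtering stage described in this abstract (not the authors' implementation), a minimal scalar Kalman filter smoothing noisy per-frame odometry values might look like the sketch below; the process and measurement noise values q and r are illustrative assumptions:

```python
# Minimal 1-D Kalman filter sketch: smoothing a noisy sequence of per-frame
# pose estimates, analogous in spirit to the recursive filtering stage above.
# The noise parameters q (process) and r (measurement) are assumed values.

def kalman_smooth(measurements, q=1e-3, r=1e-1):
    """Filter a sequence of noisy scalar pose measurements."""
    x, p = measurements[0], 1.0   # state estimate and its variance
    out = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: state modeled as a random walk
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement z
        p = (1.0 - k) * p
        out.append(x)
    return out

smoothed = kalman_smooth([0.0, 0.12, 0.08, 0.21, 0.18, 0.30])
```

The H∞ filter mentioned in the abstract has the same predict/update shape but bounds worst-case rather than mean-square error.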
An Effective Multi-Cue Positioning System for Agricultural Robotics
The self-localization capability is a crucial component for Unmanned Ground
Vehicles (UGV) in farming applications. Approaches based solely on visual cues
or on low-cost GPS are easily prone to fail in such scenarios. In this paper,
we present a robust and accurate 3D global pose estimation framework, designed
to take full advantage of heterogeneous sensory data. By modeling the pose
estimation problem as a pose graph optimization, our approach simultaneously
mitigates the cumulative drift introduced by motion estimation systems (wheel
odometry, visual odometry, ...), and the noise introduced by raw GPS readings.
Along with a suitable motion model, our system also integrates two additional
types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random
Field assumption. We demonstrate how using these additional cues substantially
reduces the error along the altitude axis and, moreover, how this benefit
spreads to the other components of the state. We report exhaustive experiments
combining several sensor setups, showing accuracy improvements ranging from 37%
to 76% with respect to the exclusive use of a GPS sensor. We show that our
approach provides accurate results even if the GPS unexpectedly changes
positioning mode. The code of our system, along with the acquired datasets, is
released with this paper. (Accepted for publication in IEEE Robotics and
Automation Letters.)
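To make the pose-graph idea concrete, here is a toy 1-D sketch (not the paper's implementation): drifting odometry increments and noisy GPS-like fixes are blended as weighted least-squares constraints over the trajectory; the weights, measurements, and gradient-descent solver are all illustrative assumptions.

```python
# Toy 1-D pose-graph fusion: odometry edges constrain consecutive pose
# differences, GPS priors anchor absolute positions. Minimizing the weighted
# sum of squared residuals by gradient descent mitigates odometry drift.

def fuse(odo_increments, gps, w_odo=10.0, w_gps=1.0, iters=2000, lr=0.02):
    n = len(gps)
    x = list(gps)                                # initialize at GPS readings
    for _ in range(iters):
        grad = [0.0] * n
        for i, u in enumerate(odo_increments):   # odometry edges
            e = x[i + 1] - x[i] - u
            grad[i]     -= 2 * w_odo * e
            grad[i + 1] += 2 * w_odo * e
        for i, g in enumerate(gps):              # GPS priors
            grad[i] += 2 * w_gps * (x[i] - g)
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

# biased odometry claims every step is 1.1 m; GPS roughly says 0, 1, 2, 3
est = fuse([1.1, 1.1, 1.1], [0.0, 0.9, 2.1, 3.0])
```

Real systems solve the same kind of objective over 3D poses with a sparse nonlinear solver, and can add further constraint types (elevation model, smoothness priors) as extra residual terms.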
Pose and Shape Reconstruction of a Noncooperative Spacecraft Using Camera and Range Measurements
Recent interest in on-orbit proximity operations has pushed towards the development of autonomous GNC strategies. In this sense, optical navigation enables a wide variety of possibilities, as it can provide information not only about the kinematic state but also about the shape of the observed object. Various mission architectures have been either tested in space or studied on Earth. The present study deals with on-orbit relative pose and shape estimation using a monocular camera and a distance sensor. The goal is to develop a filter which estimates an observed satellite's relative position, velocity, attitude, and angular velocity, along with its shape, from the measurements obtained by a camera and a distance sensor mounted on board a chaser on a relative trajectory around the target. The filter's performance is demonstrated in a simulation on a virtual target object. The results of the simulation, even though relevant to a simplified scenario, show that the estimation process is successful and can be considered a promising strategy for a correct and safe docking maneuver.
A new method to determine multi-angular reflectance factor from lightweight multispectral cameras with sky sensor in a target-less workflow applicable to UAV
A new physically based method to estimate hemispheric-directional reflectance
factor (HDRF) from lightweight multispectral cameras that have a downwelling
irradiance sensor is presented. It combines radiometry with photogrammetric
computer vision to derive geometrically and radiometrically accurate data
purely from the images, without requiring reflectance targets or any other
additional information apart from the imagery. The sky sensor orientation is
initially computed using photogrammetric computer vision and revised with a
non-linear regression comprising radiometric and photogrammetry-derived
information. It works for both clear sky and overcast conditions. A
ground-based test acquisition of a Spectralon target observed from different
viewing directions and with different sun positions using a typical
multispectral sensor configuration for clear sky and overcast showed that both
the overall value and the directionality of the reflectance factor as reported
in the literature were well retrieved. An RMSE of 3% for clear sky and up to 5%
for overcast sky was observed.
A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations
Automated satellite proximity operations are an increasingly relevant area of mission operations for the US Air Force, with the potential to significantly enhance space situational awareness (SSA). Simultaneous localization and mapping (SLAM) is a computer vision method of constructing and updating a 3D map while keeping track of the location and orientation of the imaging agent inside the map. The main objective of this research effort is to design a monocular SLAM method customized for the space environment. The method developed in this research will be implemented in an indoor proximity-operations simulation laboratory. A run-time analysis is performed, showing near real-time operation. The method is verified by comparing SLAM results to ground-truth vertical rotation data from a CubeSat air-bearing testbed. This work enables control and testing of simulated proximity operations hardware in a laboratory environment. Additionally, this research lays the foundation for autonomous satellite proximity operations with unknown targets and minimal additional size, weight, and power requirements, creating opportunities for numerous mission concepts not previously available.
Pose Performance of LIDAR-Based Relative Navigation for Non-Cooperative Objects
Flash LIDAR is an important new sensing technology for relative navigation; these sensors have shown promising results during rendezvous and docking applications involving a cooperative vehicle. An area of recent interest is the application of this technology to pose estimation with non-cooperative client vehicles, in support of on-orbit satellite-servicing activities and asteroid redirect missions. The capability for autonomous rendezvous with non-cooperative satellites will enable refueling and servicing of satellites (particularly those designed without servicing in mind), allowing these vehicles to continue operating rather than being retired. Rendezvous with an asteroid will give further insight into the origin of individual asteroids. This research investigates numerous issues surrounding pose performance using LIDAR. To begin analyzing the characteristics of the data produced by Flash LIDAR, simulated and laboratory testing have been completed. Observations of common asteroid materials were made with a surrogate LIDAR, characterizing the reflectivity of the materials. A custom Iterative Closest Point (ICP) algorithm was created to estimate the position and orientation of the LIDAR relative to the observed object. The performance of standardized pose estimation techniques (including ICP) has been examined using non-cooperative data, as well as the characteristics of the materials that will potentially be observed during missions. For the hardware tests, a SwissRanger ToF camera was used as a surrogate Flash LIDAR.
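The core alignment step inside an ICP iteration is the closed-form rigid fit (Kabsch/Umeyama): given matched 3D point pairs, recover the rotation R and translation t that best map source onto target in the least-squares sense. The sketch below shows that step in isolation; in full ICP it alternates with nearest-neighbour matching, whereas here the correspondences and test points are assumed.

```python
import numpy as np

# Closed-form rigid fit used inside each ICP iteration: centre both clouds,
# build the cross-covariance, take its SVD, and assemble a proper rotation.

def rigid_fit(src, dst):
    src, dst = np.asarray(src), np.asarray(dst)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# recover a known 90-degree yaw plus a translation (illustrative test data)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
moved = pts @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_fit(pts, moved)
```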
Relative pose determination algorithm for space on-orbit close range autonomous operation using LiDAR
Non-cooperative on-orbit operations, such as rendezvous, docking, or berthing, have become more relevant, mainly due to the need to extend mission lifetimes, the growth of space debris, and the drive to reduce dependence on human operators. In order to automate these operations, the relative pose between the target and the chaser must be determined autonomously. In recent years, LiDAR sensors have been introduced for this problem, achieving good accuracies. The critical part of the operation is the first relative pose calculation, since there is no prior information about the attitude of the target. In this work, a methodology to carry out this first relative pose calculation using LiDAR sensors is presented. A template matching algorithm has been developed, which uses the 3D model of the target to calculate the relative pose of the target with respect to the LiDAR sensor. Three study cases, with different distances and rotations, have been simulated in order to validate the algorithm, reaching an average error of 0.0383 m.
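A toy version of the template-matching idea for the initial pose, under heavy assumptions: sweep a grid of candidate yaw angles, rotate the model's point template, and keep the angle whose points land closest to the sensed points. A real system searches the full 6-DoF pose against the target's 3D model; this 2D sweep with made-up points only illustrates the scoring loop.

```python
import numpy as np

# Coarse initial-pose search by template matching over a 1-DoF angle grid.
# Cost: sum of each rotated template point's distance to its nearest scene
# point. The template and scene points here are hypothetical.

def match_yaw(template, scene, angles):
    best_angle, best_cost = None, float("inf")
    for a in angles:
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s], [s, c]])
        rotated = template @ R.T
        d = np.linalg.norm(rotated[:, None, :] - scene[None, :, :], axis=2)
        cost = d.min(axis=1).sum()
        if cost < best_cost:
            best_angle, best_cost = a, cost
    return best_angle

template = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [2.0, 0.5]])
true = np.deg2rad(30.0)
Rt = np.array([[np.cos(true), -np.sin(true)], [np.sin(true), np.cos(true)]])
scene = template @ Rt.T                      # scene = template yawed by 30 deg
angles = np.deg2rad(np.arange(0.0, 360.0, 5.0))
est = match_yaw(template, scene, angles)
```

A coarse match like this would typically be refined afterwards with ICP-style iterations.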