
    Lunar Terrain Relative Navigation Using a Convolutional Neural Network for Visual Crater Detection

    Terrain relative navigation can improve the precision of a spacecraft's position estimate by detecting global features that act as supplementary measurements to correct for drift in the inertial navigation system. This paper presents a system that uses a convolutional neural network (CNN) and image processing methods to track the location of a simulated spacecraft with an extended Kalman filter (EKF). The CNN, called LunaNet, visually detects craters in the simulated camera frame, and those detections are matched to known lunar craters in the region of the current estimated spacecraft position. These matched craters are treated as features that are tracked using the EKF. LunaNet enables more reliable position tracking over a simulated trajectory thanks to its greater robustness to changes in image brightness and its more repeatable crater detections from frame to frame. Compared to an EKF using an image-processing-based crater detection method, LunaNet combined with an EKF reduces the average final position estimation error by 60% and the average final velocity estimation error by 25% when tested on trajectories using images of standard brightness. Comment: 6 pages, 4 figures. This work was accepted by the 2020 American Control Conference.
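
    The abstract describes the core loop: craters detected by the CNN are matched against a catalogue near the estimated position, and each match corrects the filter. Below is a minimal, hedged sketch of that measurement step in NumPy; the state layout, the relative-position measurement model, and all names (ekf_crater_update, detections, catalog) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ekf_crater_update(x, P, detections, catalog, R):
    """Sketch of the crater-based EKF update described in the abstract.

    x : (6,) state [position, velocity]; P : (6,6) covariance
    detections : dict crater_id -> measured crater position relative to the
                 spacecraft (3,), expressed in the world frame (assumption)
    catalog    : dict crater_id -> known crater position (3,) from the lunar map
    R          : (3,3) measurement noise covariance
    """
    H = np.hstack([-np.eye(3), np.zeros((3, 3))])    # z = p_crater - p_spacecraft
    for crater_id, z in detections.items():
        if crater_id not in catalog:
            continue                                 # unmatched detection, skip
        z_pred = catalog[crater_id] - x[:3]
        S = H @ P @ H.T + R                          # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        x = x + K @ (z - z_pred)                     # state correction
        P = (np.eye(6) - K @ H) @ P                  # covariance update
    return x, P
```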

    Progress on GPS-Denied, Multi-Vehicle, Fixed-Wing Cooperative Localization

    This paper first summarizes recent results of a proposed method for multiple small, fixed-wing aircraft cooperatively localizing in GPS-denied environments. It then provides an extensive future-work discussion that lays out a vision for cooperative navigation. The goal of this work is to show that many small, potentially lower-cost vehicles can collaboratively localize better than a single, more accurate, higher-cost GPS-denied system. This work is guided by a novel methodology called relative navigation, developed in prior work. Initial work focused on the development and testing of a monocular visual-inertial odometry front end for fixed-wing aircraft that accounts for fixed-wing flight characteristics and sensing requirements. The front end publishes information that is communicated between vehicles and feeds a back end, where the odometry from multiple vehicles is combined with inter-vehicle measurements. Each vehicle is able to create a global, graph-based back-end map and optimize it as new information is gained and measurements between vehicles over-constrain the graph. These inter-vehicle measurements allow the optimization to remove accumulated drift, yielding more accurate estimates.
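
    The back end described above is a pose graph in which each vehicle's odometry forms a drifting chain and inter-vehicle measurements tie the chains together. The toy sketch below illustrates only that idea in one dimension with made-up numbers (solve_pose_graph, the edge list, and the weights are all hypothetical); a real system would optimize SE(3) pose graphs with proper covariances.

```python
import numpy as np

def solve_pose_graph(edges, n_poses):
    """Least-squares solve of a 1-D pose graph.
    edges: list of (i, j, measured_offset, weight) encoding x_j - x_i ~ measured_offset."""
    rows, rhs = [], []
    gauge = np.zeros(n_poses); gauge[0] = 1.0
    rows.append(gauge); rhs.append(0.0)              # fix x_0 = 0 (gauge freedom)
    for i, j, meas, w in edges:
        row = np.zeros(n_poses)
        row[j], row[i] = w, -w
        rows.append(row); rhs.append(w * meas)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x

# Vehicle A owns poses 0-2, vehicle B owns poses 3-5 (hypothetical values).
edges = [
    (0, 1, 1.0, 1.0), (1, 2, 1.1, 1.0),   # vehicle A odometry (slightly drifting)
    (3, 4, 1.0, 1.0), (4, 5, 0.9, 1.0),   # vehicle B odometry
    (0, 3, 5.0, 1.0),                     # known initial offset between the vehicles
    (2, 5, 5.0, 10.0),                    # inter-vehicle measurement over-constraining the graph
]
print(solve_pose_graph(edges, 6))
```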

    NeRF-VINS: A Real-time Neural Radiance Field Map-based Visual-Inertial Navigation System

    Achieving accurate, efficient, and consistent localization within an a priori environment map remains a fundamental challenge in robotics and computer vision. Conventional map-based keyframe localization often suffers from sub-optimal viewpoints due to limited field of view (FOV), thus degrading its performance. To address this issue, in this paper we design a real-time, tightly-coupled Neural Radiance Fields (NeRF)-aided visual-inertial navigation system (VINS), termed NeRF-VINS. By effectively leveraging NeRF's potential to synthesize novel views, essential for addressing limited viewpoints, the proposed NeRF-VINS optimally fuses IMU measurements, monocular images, and synthetically rendered images within an efficient filter-based framework. This tightly coupled integration enables 3D motion tracking with bounded error. We extensively compare the proposed NeRF-VINS against state-of-the-art methods that use prior map information and show that it achieves superior performance. We also demonstrate that the proposed method performs real-time estimation at 15 Hz on a resource-constrained Jetson AGX Orin embedded platform with impressive accuracy. Comment: 6 pages, 7 figures.
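
    A hedged sketch of the map-aided update structure described above: pick the rendered (novel-view) keyframe closest to the current estimate, then fold the resulting map-relative measurement into an EKF-style filter. The function names, the position-only measurement model, and the state layout are illustrative assumptions; the actual system tightly fuses feature-level measurements from both real and rendered images.

```python
import numpy as np

def nearest_rendered_view(p_est, keyframe_positions):
    """Index of the synthetically rendered keyframe closest to the current position
    estimate, so the matcher always works against a favorable viewpoint."""
    return int(np.argmin(np.linalg.norm(keyframe_positions - p_est, axis=1)))

def map_update(x, P, z_pos, R_map):
    """EKF-style update with a map-derived position measurement (sketch only).
    x : (6,) [position, velocity]; P : (6,6) covariance
    z_pos : (3,) position implied by matching the live image against the rendered view
    R_map : (3,3) measurement noise covariance for that match."""
    H = np.hstack([np.eye(3), np.zeros((3, 3))])
    S = H @ P @ H.T + R_map                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    return x + K @ (z_pos - H @ x), (np.eye(6) - K @ H) @ P
```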

    Closed-Form Solution to the Structure from Motion Problem by Fusing Visual and Inertial Sensing

    The structure-from-motion (SfM) problem consists of determining the three-dimensional structure of the scene by using the measurements provided by one or more sensors over time (e.g. vision sensors, ego-motion sensors, range sensors). Solving this problem amounts to simultaneously performing self-motion perception (sMP) and depth perception (DP). In the case of visual measurements only, SfM has been solved up to a scale factor [Chi02, Dav07, Har97, Lon81, Nis04], and closed-form solutions have also been derived [Har97, Lon81, Nis04], allowing the three-dimensional structure of the scene to be determined without any prior knowledge. The case of combined inertial and visual measurements is of particular interest and has been investigated by several disciplines, both in computer science [Bry08, Jon11, Kelly11, Stre04] and in neuroscience (visual-vestibular integration for sMP [Bert75, Fets10, Mac08, Zup02] and for DP [Dokka11]). Prior work has established which modes are observable, i.e. the states that can be determined by fusing visual and inertial measurements [Bry08, Jon11, Kelly11, INRIA11, TRO12]. The questions of how to compute these states in the absence of a prior, and of how many solutions are possible, have only been answered very recently [INRIA11, TRO12]. Here we provide a very simple and intuitive derivation of the solution introduced in [INRIA11, TRO12]. We show that the SfM problem can have a unique solution, two distinct solutions, or infinitely many solutions, depending on the trajectory, the number of point features, and the number of monocular images in which the same point features are seen. Our results are relevant to all applications that need to solve the SfM problem with low-cost sensors and without any infrastructure. Additionally, our results could play an important role in neuroscience by providing new insight into the process of vestibular and visual integration.
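
    As a hedged illustration of the kind of closed-form structure such solutions exploit (our notation, not the paper's): the doubly integrated, rotated accelerometer readings express each camera position in terms of the unknown initial velocity and gravity, and every point-feature observation then contributes equations that are linear in the remaining unknowns.

```latex
% Camera position at t_j from the IMU, with S_j the doubly integrated, rotated specific force:
P(t_j) = P(t_0) + V(t_0)\,\Delta t_j + \tfrac{1}{2}\,G\,\Delta t_j^{2} + S_j
% A static feature F_i seen with unit bearing \mu_i^{j} and unknown depth \lambda_i^{j}:
F_i = P(t_j) + \lambda_i^{j}\, R(t_j)\, \mu_i^{j}
% Equating the expressions of F_i across frames yields a system A x = b that is linear in
% {\lambda_i^{j}}, V(t_0), G; the rank of A, set by the trajectory, the number of point
% features, and the number of frames, decides whether the solution is unique, twofold,
% or an infinite family.
```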

    Multi-Camera Visual-Inertial Simultaneous Localization and Mapping for Autonomous Valet Parking

    Localization and mapping are key capabilities for self-driving vehicles. In this paper, we build on Kimera and extend it to use multiple cameras as well as external (e.g. wheel) odometry sensors, to obtain accurate and robust odometry estimates in real-world problems. Additionally, we propose an effective scheme for closing loops that circumvents the drawbacks of common alternatives based on the Perspective-n-Point method and also works with a single monocular camera. Finally, we develop a method for dense 3D mapping of the free space that combines a segmentation network for free-space detection with a homography-based dense mapping technique. We test our system on photo-realistic simulations and on several real datasets collected on a car prototype developed by the Ford Motor Company, spanning both indoor and outdoor parking scenarios. Our multi-camera system is shown to outperform state-of-the-art open-source visual-inertial SLAM pipelines (VINS-Fusion, ORB-SLAM3), and exhibits an average trajectory error under 1% of the trajectory length across more than 8 km of distance traveled (combined across all datasets). A video showcasing the system is available at: youtu.be/H8CpzDpXOI8
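
    The free-space mapping step described above can be pictured as back-projecting drivable pixels onto the ground plane through a plane-induced homography. The sketch below is a minimal NumPy illustration under a flat-ground assumption; the function names, the world-to-camera convention (R, t), the intrinsics K, and the binary mask layout are assumptions, not the paper's code.

```python
import numpy as np

def ground_to_image_homography(K, R, t):
    """Homography from the ground plane z = 0 (world frame) to the image: a ground
    point (X, Y, 0) projects as s*(u, v, 1)^T = K [r1 r2 t] (X, Y, 1)^T, where r1, r2
    are the first two columns of the world-to-camera rotation R and t the translation."""
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

def free_space_to_ground(mask, K, R, t):
    """Back-project free-space pixels of a segmentation mask onto the ground plane,
    giving metric (X, Y) points for the dense free-space map."""
    H_inv = np.linalg.inv(ground_to_image_homography(K, R, t))
    v, u = np.nonzero(mask)                              # free-space pixel coordinates
    pix = np.stack([u, v, np.ones_like(u)]).astype(float)
    pts = H_inv @ pix                                    # homogeneous (X, Y, 1) up to scale
    return (pts[:2] / pts[2]).T                          # (N, 2) metric ground points
```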

    Toward an Autonomous Lunar Landing Based on Low-Speed Optic Flow Sensors

    Over the last few decades, growing interest has returned to the quite challenging task of autonomous lunar landing. Soft landing of payloads on the lunar surface requires the development of new means of ensuring safe descent under demanding final conditions and aerospace constraints on mass, cost, and computational resources. In this paper, a two-part approach is presented. First, a biomimetic method inspired by the neuronal and sensory system of flying insects is presented as a solution for performing safe lunar landing. In order to design an autopilot relying only on optic flow (OF) and inertial measurements, an estimation method based on a two-sensor setup is introduced: these sensors allow the orientation of the velocity vector to be accurately estimated, which is mandatory for controlling the lander's pitch in a quasi-optimal way with respect to fuel consumption. Second, a new low-speed Visual Motion Sensor (VMS) inspired by insects' visual systems is presented; it performs local 1-D angular speed measurements ranging from 1.5°/s to 25°/s and weighs only 2.8 g. It was tested under free-flying outdoor conditions over various fields onboard an 80 kg unmanned helicopter. These preliminary results show that, despite the complex disturbances encountered, the measured optic flow closely matched the ground-truth optic flow.
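
    For context, local 1-D angular speed sensors of this kind are commonly built on a "time of travel" principle: a contrast feature sweeps across two photoreceptors separated by a known inter-receptor angle, and the measured delay gives the angular speed. The sketch below illustrates that generic principle with a cross-correlation delay estimate; it is not the VMS's actual processing chain, and all names and parameters are assumptions.

```python
import numpy as np

def angular_speed_time_of_travel(sig_a, sig_b, fs_hz, delta_phi_deg):
    """Generic 'time of travel' angular speed estimate from two photoreceptor signals.
    A feature seen first by receptor A and later by receptor B, separated by the
    inter-receptor angle delta_phi_deg, moved at delta_phi_deg / delay (deg/s).
    sig_a, sig_b : 1-D signals sampled at fs_hz."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xcorr = np.correlate(b, a, mode="full")       # peak location encodes the A -> B delay
    lag = int(np.argmax(xcorr)) - (len(a) - 1)    # delay in samples (positive if B lags A)
    if lag <= 0:
        return 0.0                                # no consistent A -> B motion detected
    return delta_phi_deg / (lag / fs_hz)          # angular speed in deg/s
```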

    Towards consistent visual-inertial navigation

    Visual-inertial navigation systems (VINS) have become prevalent in various applications, in part because of their complementary sensing capabilities and decreasing cost and size. While many current VINS algorithms suffer from inconsistent estimation, in this paper we introduce a new extended Kalman filter (EKF)-based approach towards consistent estimates. To this end, we impose both state-transition and observability constraints when computing the EKF Jacobians, so that the resulting linearized system best approximates the underlying nonlinear system. Specifically, we enforce the propagation Jacobian to obey the semigroup property, making it an appropriate state-transition matrix. This is achieved by parametrizing the orientation error state in the global, instead of local, frame of reference, and then evaluating the Jacobian at the propagated, instead of the updated, state estimates. Moreover, the EKF linearized system ensures correct observability by projecting the most accurate measurement Jacobian onto the observable subspace so that no spurious information is gained. The proposed algorithm is validated by both Monte Carlo simulations and real-world experimental tests. United States Office of Naval Research (N00014-12-1-0093, N00014-10-1-0936, N00014-11-1-0688, and N00014-13-1-0588); National Science Foundation (U.S.) (Grant IIS-1318392).
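
    The observability-constrained update described above amounts to removing from the measurement Jacobian any component along the known unobservable directions (global position and yaw for VINS) before the EKF update. Below is a minimal sketch of that projection; the function name and the matrix N of nullspace directions are our notation.

```python
import numpy as np

def project_onto_observable(H, N):
    """Project the measurement Jacobian H onto the observable subspace:
    H' = H (I - N (N^T N)^{-1} N^T), where the columns of N span the unobservable
    directions, so the update gains no spurious information along them."""
    P_N = N @ np.linalg.solve(N.T @ N, N.T)       # orthogonal projector onto span(N)
    return H @ (np.eye(N.shape[0]) - P_N)
```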

    Observer-based Controller for VTOL-UAVs Tracking using Direct Vision-Aided Inertial Navigation Measurements

    This paper proposes a novel observer-based controller for Vertical Take-Off and Landing (VTOL) Unmanned Aerial Vehicles (UAVs) designed to directly receive measurements from a Vision-Aided Inertial Navigation System (VA-INS) and produce the required thrust and rotational torque inputs. The VA-INS is composed of a vision unit (monocular or stereo camera) and a typical low-cost 6-axis Inertial Measurement Unit (IMU) equipped with an accelerometer and a gyroscope. A major benefit of this approach is its applicability in environments where the Global Positioning System (GPS) is inaccessible. The proposed VTOL-UAV observer utilizes IMU and feature measurements to accurately estimate attitude (orientation), gyroscope bias, position, and linear velocity. The ability to use VA-INS measurements directly makes the proposed observer design more computationally efficient, as it obviates the need for attitude and position reconstruction. Once the motion components are estimated, the observer-based controller is used to control the VTOL-UAV attitude, angular velocity, position, and linear velocity, guiding the vehicle along the desired trajectory in six degrees of freedom (6 DoF). The closed-loop estimation and control errors of the observer-based controller are proven to be exponentially stable from almost any initial condition. To achieve a global and unique VTOL-UAV representation in 6 DoF, the proposed approach is posed on the Lie group, and the design in unit-quaternion form is also presented. Although the proposed approach is described in continuous form, the discrete version is provided and tested. Keywords: vision-aided inertial navigation system, unmanned aerial vehicle, vertical take-off and landing, stochastic, noise, robotics, control systems, air mobility, observer-based controller algorithm, landmark measurement, exponential stability.
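
    One familiar building block of such vision-aided observers is an attitude and gyro-bias estimator driven directly by body-frame direction measurements (e.g. bearings to known landmarks). The sketch below is an explicit-complementary-filter-style update in unit-quaternion form; it is a generic illustration of that idea, not the paper's observer, and the gains, state layout, and function names are assumptions.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions given as [w, x, y, z]."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0*w1 - x0*x1 - y0*y1 - z0*z1,
        w0*x1 + x0*w1 + y0*z1 - z0*y1,
        w0*y1 - x0*z1 + y0*w1 + z0*x1,
        w0*z1 + x0*y1 - y0*x1 + z0*w1,
    ])

def quat_to_rot(q):
    """Body-to-world rotation matrix from a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def observer_step(q, b, gyro, dirs_body, dirs_world, dt, kP=1.0, kI=0.1):
    """One explicit-complementary-filter-style step: known world-frame directions
    (e.g. toward landmarks) measured in the body frame drive corrections of the
    quaternion attitude q and the gyro bias b."""
    R = quat_to_rot(q)
    err = np.zeros(3)
    for vb, vw in zip(dirs_body, dirs_world):
        err += np.cross(vb / np.linalg.norm(vb), R.T @ (vw / np.linalg.norm(vw)))
    omega = gyro - b + kP * err                      # bias-corrected, feedback-corrected rate
    q = q + 0.5 * quat_mult(q, np.r_[0.0, omega]) * dt   # quaternion kinematics (Euler step)
    b = b - kI * err * dt                            # integral correction of the gyro bias
    return q / np.linalg.norm(q), b
```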