
    GSLAM: Initialization-robust Monocular Visual SLAM via Global Structure-from-Motion

    Many monocular visual SLAM algorithms are derived from incremental structure-from-motion (SfM) methods. This work proposes a novel monocular SLAM method which integrates recent advances made in global SfM. In particular, we present two main contributions to visual SLAM. First, we solve the visual odometry problem by a novel rank-1 matrix factorization technique which is more robust to errors in map initialization. Second, we adopt a recent global SfM method for the pose-graph optimization, which leads to a multi-stage linear formulation and enables L1 optimization for better robustness to false loops. The combination of these two approaches generates more robust reconstruction and is significantly faster (4X) than recent state-of-the-art SLAM systems. We also present a new dataset recorded with ground truth camera motion in a Vicon motion capture room, and compare our method to prior systems on it and on established benchmark datasets. Comment: 3DV 2017. Project page: https://frobelbest.github.io/gsla
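
    The rank-1 step above can be illustrated generically. Below is a minimal Python/NumPy sketch of a robust rank-1 factorization via iteratively reweighted least squares (an L1-style surrogate); the function, its parameters, and the toy data are hypothetical illustrations of the general technique, not GSLAM's actual formulation.

        import numpy as np

        def robust_rank1(M, iters=20, eps=1e-6):
            # Factor M ~ u v^T. IRLS weights 1/|residual| approximate an
            # L1 objective, down-weighting gross outliers in M.
            U, S, Vt = np.linalg.svd(M, full_matrices=False)
            u, v = U[:, 0] * S[0], Vt[0]              # least-squares init
            for _ in range(iters):
                W = 1.0 / (np.abs(M - np.outer(u, v)) + eps)
                u = (W * M * v).sum(axis=1) / (W * v**2).sum(axis=1)
                u_col = u[:, None]
                v = (W * M * u_col).sum(axis=0) / (W * u_col**2).sum(axis=0)
            return u, v

        # Toy usage: a true rank-1 matrix corrupted by one gross outlier.
        M = np.outer(np.random.randn(30), np.random.randn(8))
        M[3, 2] += 5.0
        u, v = robust_rank1(M)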

    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field of view around the car. In this way, we avoid blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project, which seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
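
    As a concrete illustration of the fisheye handling such a pipeline needs, the Python sketch below rectifies a raw frame with OpenCV's fisheye (equidistant) camera model. The intrinsics K, distortion coefficients D, and file name are placeholder assumptions; in a real pipeline they would come from a calibration step such as cv2.fisheye.calibrate.

        import cv2
        import numpy as np

        # Placeholder intrinsics and equidistant distortion (k1..k4);
        # real values come from calibration, not from this sketch.
        K = np.array([[320.0, 0.0, 640.0],
                      [0.0, 320.0, 480.0],
                      [0.0, 0.0, 1.0]])
        D = np.array([0.05, -0.01, 0.002, -0.0005]).reshape(4, 1)

        img = cv2.imread("fisheye_frame.png")         # hypothetical frame
        h, w = img.shape[:2]
        map1, map2 = cv2.fisheye.initUndistortRectifyMap(
            K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
        rectified = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)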

    A Monocular SLAM Method to Estimate Relative Pose During Satellite Proximity Operations

    Automated satellite proximity operations are an increasingly relevant area of mission operations for the US Air Force, with the potential to significantly enhance space situational awareness (SSA). Simultaneous localization and mapping (SLAM) is a computer vision method of constructing and updating a 3D map while keeping track of the location and orientation of the imaging agent inside the map. The main objective of this research effort is to design a monocular SLAM method customized for the space environment. The method developed in this research is implemented in an indoor proximity operations simulation laboratory. A run-time analysis is performed, showing near real-time operation. The method is verified by comparing SLAM results to truth vertical rotation data from a CubeSat air bearing testbed. This work enables control and testing of simulated proximity operations hardware in a laboratory environment. Additionally, this research lays the foundation for autonomous satellite proximity operations with unknown targets and minimal additional size, weight, and power requirements, creating opportunities for numerous mission concepts not previously available.
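
    Verification against truth rotation data of this kind reduces to a geodesic rotation-error metric. The Python/SciPy snippet below is a minimal sketch of such a comparison; the angles are made-up placeholders, not the thesis's evaluation code.

        import numpy as np
        from scipy.spatial.transform import Rotation as R

        def rotation_error_deg(R_est, R_true):
            # Geodesic angle between two rotation matrices, in degrees.
            return np.degrees(R.from_matrix(R_est.T @ R_true).magnitude())

        R_true = R.from_euler("z", 30.0, degrees=True).as_matrix()  # truth yaw
        R_est = R.from_euler("z", 28.5, degrees=True).as_matrix()   # SLAM estimate
        print(rotation_error_deg(R_est, R_true))                    # ~1.5 deg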

    Comparison of Three Machine Vision Pose Estimation Systems Based on Corner, Line, and Ellipse Extraction for Satellite Grasping

    The primary objective of this research was to use three different types of features (corners, lines, and ellipses) for satellite grasping with a machine vision-based pose estimation system. The corner system tracks sharp corners or small features (holes or bolts) on the satellite; the line system tracks sharp edges; and the ellipse system tracks circular features. The corner and line systems provide the 6 degrees of freedom (DOF) pose (rotation matrix and translation vector) of the satellite with respect to the camera frame, while the ellipse system provides a 5 DOF pose (normal vector and center position) of the circular feature with respect to the camera frame. Satellite grasping is required for on-orbit satellite servicing and refueling. The three machine vision estimation systems (based on corner, line, and ellipse extraction) were studied and compared in a simulation environment. The corner extraction system was based on the Shi-Tomasi method, the line extraction system on the Hough transform, and the ellipse system on the fast ellipse extractor. Each system tracks its corresponding most prominent feature of the satellite. To evaluate the performance of each pose estimation system, six maneuvers (three in translation (xyz) and three in rotation (roll, pitch, yaw)), three different initial positions, and three different levels of Gaussian noise were considered in the virtual environment. In addition, virtual and real approach sequences using a robotic manipulator were performed to predict how each system would perform in a real application. The systems were compared using the mean and variance of the translational and rotational pose estimation error. The virtual environment features a CAD model of a satellite created in SolidWorks containing three common satellite features: a square plate, a Marman ring, and a thruster. The corner and line pose estimation systems increased in accuracy and precision as the distance decreased, achieving translational accuracy of up to 2 centimeters. However, under heavy noise the corner system lost tracking and could not recover, while the line system did not lose track. The ellipse pose estimation system was more robust, automatically recovering when tracking was lost, with accuracy of up to 4 centimeters. During both approach sequences the ellipse system was the most robust, tracking the satellite consistently; the corner system could not track the satellite throughout either the real or virtual approach, while the line system could track the satellite during the virtual approach sequence.
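
    The three extraction stages correspond to standard building blocks, sketched below in Python with OpenCV. The image file and all thresholds are placeholder assumptions, and cv2.fitEllipse serves only as a stand-in for the fast ellipse extractor named above, which is not an OpenCV routine.

        import cv2
        import numpy as np

        gray = cv2.imread("satellite_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical

        # Shi-Tomasi corners (min-eigenvalue score)
        corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                          qualityLevel=0.01, minDistance=10)

        # Probabilistic Hough transform for straight edges
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=40, maxLineGap=5)

        # Ellipse stand-in: fit an ellipse to the largest edge contour
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        biggest = max(contours, key=cv2.contourArea)
        if len(biggest) >= 5:                  # fitEllipse needs >= 5 points
            (cx, cy), (major, minor), angle = cv2.fitEllipse(biggest)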

    TARGET POSE ESTIMATION VIA DEEP LEARNING FOR MILITARY SYSTEMS

    Target pose estimation and aimpoint selection are crucial in directed energy weapon systems, as they allow the system to point to a specific and strategic area of the target. However, this is a challenging task because a dedicated attitude sensor is otherwise required. Motivated by emerging deep learning capabilities, the present work proposes a deep learning model to estimate a target spacecraft's attitude in terms of Euler angles. Data for the deep learning model were experimentally generated from 3D UAV models, incorporating effects such as atmospheric backgrounds and turbulence. The target's pose was derived from the training, validation, and prediction of 2D keypoints. With a keypoint detection model it is possible to detect interest points in an image, which allows us to estimate the pose, angles, and dimensions of the target in question. Utilizing a weak-perspective direct linear transformation algorithm, the pose of a 3D object with respect to a camera could be determined from 3D-to-2D correspondences. Additionally, from these correspondences an aimpoint, mimicking laser tracking, could be determined on the target. This work evaluates these methods and their accuracy against experimentally generated data with simulated real-world environments. Outstanding Thesis. Ensign, United States Navy. Approved for public release. Distribution is unlimited.
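
    The keypoints-to-pose step can be illustrated with OpenCV's perspective-n-point solver, used here as a full-perspective stand-in for the weak-perspective direct linear transformation described above. Every coordinate and the camera matrix below are fabricated placeholders for illustration only.

        import cv2
        import numpy as np

        # Hypothetical 3D keypoints on the target model (object frame)
        # and their detected 2D image locations.
        object_pts = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0],
                               [0.3, 0.2, 0.0], [0.0, 0.2, 0.0],
                               [0.15, 0.1, 0.1], [0.05, 0.05, 0.08]])
        image_pts = np.array([[410.0, 300.0], [520.0, 305.0], [515.0, 380.0],
                              [405.0, 372.0], [468.0, 330.0], [430.0, 322.0]])
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
        R_cam, _ = cv2.Rodrigues(rvec)          # rotation: target -> camera
        angles_xyz = cv2.RQDecomp3x3(R_cam)[0]  # Euler angles (deg) about x, y, z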

    Robust ego-localization using monocular visual odometry


    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available