
    ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild

    Estimating the pose of a moving camera from monocular video is a challenging problem, especially due to the presence of moving objects in dynamic environments, where the performance of existing camera pose estimation methods is susceptible to pixels that are not geometrically consistent. To tackle this challenge, we present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence initialized from pairwise optical flow. Our key idea is to optimize long-range video correspondence as dense point trajectories and use them to learn robust motion segmentation. A novel neural network architecture is proposed for processing irregular point trajectory data. Camera poses are then estimated and optimized with global bundle adjustment over the portion of long-range point trajectories that are classified as static. Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories than existing state-of-the-art methods. In addition, our method retains reasonable camera-pose accuracy on fully static scenes and consistently outperforms strong state-of-the-art dense-correspondence-based methods with end-to-end deep learning, demonstrating the potential of dense indirect methods based on optical flow and point trajectories. As the point trajectory representation is general, we further present results and comparisons on in-the-wild monocular videos with complex motion of dynamic objects. Code is available at https://github.com/bytedance/particle-sfm. Comment: ECCV 2022. Project page: http://b1ueber2y.me/projects/ParticleSfM
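    The pipeline above hinges on chaining pairwise optical flow into long-range point trajectories before motion segmentation and bundle adjustment. The sketch below is not ParticleSfM's implementation, only a minimal illustration of that chaining step under assumed inputs: a list of dense forward flow fields and, optionally, backward flows for a forward-backward consistency check. The function name, grid spacing, and threshold are ours.

```python
import numpy as np

def chain_flow_to_trajectories(flows, fb_flows=None, grid_step=8, fb_thresh=1.5):
    """Chain pairwise optical flow into long-range point trajectories.

    flows:    list of (H, W, 2) forward flow fields; flows[t] maps frame t -> t+1.
    fb_flows: optional list of backward flow fields for a forward-backward check.
    Returns an array of shape (num_points, num_frames, 2); NaN marks terminated tracks.
    """
    H, W, _ = flows[0].shape
    ys, xs = np.mgrid[0:H:grid_step, 0:W:grid_step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)  # (N, 2) as (x, y)
    alive = np.ones(len(pts), dtype=bool)
    tracks = [pts.copy()]

    for t, flow in enumerate(flows):
        safe = np.nan_to_num(pts)                       # dead tracks index harmlessly at (0, 0)
        xi = np.clip(np.round(safe[:, 0]).astype(int), 0, W - 1)
        yi = np.clip(np.round(safe[:, 1]).astype(int), 0, H - 1)
        disp = flow[yi, xi]                             # nearest-neighbour flow lookup
        new_pts = pts + disp

        if fb_flows is not None:                        # forward-backward consistency check
            nxt = np.nan_to_num(new_pts)
            xj = np.clip(np.round(nxt[:, 0]).astype(int), 0, W - 1)
            yj = np.clip(np.round(nxt[:, 1]).astype(int), 0, H - 1)
            back = fb_flows[t][yj, xj]
            alive &= np.linalg.norm(disp + back, axis=1) < fb_thresh

        inside = (new_pts[:, 0] >= 0) & (new_pts[:, 0] < W) & \
                 (new_pts[:, 1] >= 0) & (new_pts[:, 1] < H)
        alive &= inside
        new_pts[~alive] = np.nan                        # terminate tracks that left the frame or failed the check
        pts = new_pts
        tracks.append(pts.copy())

    return np.stack(tracks, axis=1)                     # (N, T+1, 2)
```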

    A Robust Approach for Monocular Visual Odometry in Underwater Environments

    This work presents a visual odometry system for camera tracking in underwater seafloor scenarios that are strongly perturbed by sunlight caustics and cloudy water. In particular, we focus on the performance and robustness of the system, which structurally couples a deflickering filter with a visual tracker. Two state-of-the-art trackers are employed for our study, one pixel-oriented and the other feature-based. The inner workings of the trackers are broken down and their suitability for underwater environments analyzed comparatively. To this end, real subaquatic footage recorded in perturbed environments was employed.
    Sociedad Argentina de Informática e Investigación Operativa
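    The abstract pairs a deflickering filter with the tracker but does not spell the filter out. The snippet below is a hedged stand-in, a homomorphic-style normalization that divides each frame by a heavily blurred copy of itself to suppress low-frequency caustic flicker before tracking; the function name and parameters are assumptions, not the authors' design.

```python
import cv2
import numpy as np

def deflicker(frame_gray, background_sigma=25.0, eps=1e-3):
    """Suppress sunlight caustics by removing low-frequency illumination.

    Divides the frame by a heavily blurred copy of itself (a homomorphic-style
    normalization), so rapidly flickering bright patterns are attenuated
    before the frame is handed to the visual tracker.
    """
    f = frame_gray.astype(np.float32) / 255.0
    illum = cv2.GaussianBlur(f, (0, 0), background_sigma)   # low-frequency illumination estimate
    normalized = f / (illum + eps)
    normalized /= normalized.max() + eps                     # rescale back to [0, 1]
    return (normalized * 255.0).astype(np.uint8)

# Usage: filtered frames are passed to whichever tracker is in use,
# e.g. a feature-based or direct (pixel-oriented) visual odometry front end.
# ok, bgr = cv2.VideoCapture("seafloor.mp4").read()
# stable = deflicker(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY))
```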

    Edge Based RGB-D SLAM and SLAM Based Navigation


    Efficient Factor Graph Fusion for Multi-robot Mapping

    This work presents a novel method to efficiently factorize the combination of multiple factor graphs that share estimation variables. Fast-paced innovation in algebraic graph theory has enabled new state-estimation tools such as factor graphs. Recent factor graph formulations for Simultaneous Localization and Mapping (SLAM), such as Incremental Smoothing and Mapping using the Bayes tree (ISAM2), have been very successful and have garnered much attention. Variable ordering, a well-known technique in linear algebra, is employed for solving the factor graph. Our primary contribution in this work is to reuse the variable orderings of the graphs being combined to find the ordering of the fused graph. In mapping, multiple robots offer a great advantage over a single robot by providing faster map coverage and better estimation quality. This, coupled with the inevitable increase in the number of robots around us, produces a demand for faster algorithms. For example, a city full of self-driving cars could pool their observation measurements rapidly to plan traffic-free navigation. By reusing the variable orderings of the parent graphs we were able to produce an order-of-magnitude reduction in the time required to solve the fused graph. We also provide a formal verification to show that the proposed strategy does not violate any of the relevant standards. A common problem in multi-robot SLAM is initialization of the relative pose graph to produce a globally consistent map. Our other contribution addresses this by minimizing a specially formulated error function as part of solving the factor graph. The performance is illustrated on a publicly available SuiteSparse dataset and the multi-robot AP Hill dataset.
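    The central idea, reusing the parent graphs' variable orderings when solving the fused graph, can be illustrated with a toy heuristic. The sketch below simply concatenates the two orderings and keeps shared variables (e.g. common landmarks or inter-robot constraints) once; it is not the paper's algorithm, and all variable names are hypothetical.

```python
def fuse_orderings(order_a, order_b, shared_last=True):
    """Build an elimination ordering for a fused factor graph by reusing
    the orderings already computed for the two parent graphs.

    order_a, order_b: lists of variable keys in the parents' elimination order.
    Shared variables are kept once; eliminating them last tends to keep
    fill-in local to each subgraph.
    """
    shared = set(order_a) & set(order_b)
    fused = [k for k in order_a if k not in shared]
    fused += [k for k in order_b if k not in shared]
    shared_in_a_order = [k for k in order_a if k in shared]
    return fused + shared_in_a_order if shared_last else shared_in_a_order + fused

# Example: two robots' pose chains sharing landmarks l1 and l2.
robot_a = ["x1", "x2", "l1", "x3", "l2"]
robot_b = ["y1", "l1", "y2", "l2", "y3"]
print(fuse_orderings(robot_a, robot_b))
# ['x1', 'x2', 'x3', 'y1', 'y2', 'y3', 'l1', 'l2']
```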

    Robust Visual SLAM in Challenging Environments with Low-texture and Dynamic Illumination

    In recent years, visual Simultaneous Localization and Mapping (SLAM) has played a role of capital importance in rapid technological advances, e.g. mobile robotics and applications such as virtual, augmented, or mixed reality (VR/AR/MR), as a vital part of their processing pipelines. As its name indicates, it comprises the estimation of the state of a robot (typically the pose) while, simultaneously, incrementally building and refining a consistent representation of the environment, i.e. the so-called map, based on the equipped sensors. Despite the maturity reached by state-of-the-art visual SLAM techniques in controlled environments, there are still many open challenges to address before reaching a SLAM system robust to long-term operation in uncontrolled scenarios, where classical assumptions, such as static environments, no longer hold. This thesis contributes to improving the robustness of visual SLAM in harsh or difficult environments, in particular:
    - Low-textured environments, where traditional approaches suffer from an impoverishment of accuracy and, occasionally, absolute failure of the system. Fortunately, many such low-textured environments contain planar elements that are rich in linear shapes, so an alternative feature choice such as line segments can exploit information from structured parts of the scene. This set of contributions exploits both types of features, i.e. points and line segments, to produce visual odometry and SLAM algorithms that are robust in a broader variety of environments, leveraging them at all stages of the related processes: monocular depth estimation, visual odometry, keyframe selection, bundle adjustment, loop closing, etc.
    - Robustness to dynamic illumination conditions, one of the main open challenges in visual odometry and SLAM, e.g. high dynamic range (HDR) environments. The main difficulties in these situations come both from the limitations of the sensors, for instance the automatic settings of a camera might not react fast enough to properly record dynamic illumination changes, and from limitations in the algorithms, e.g. the tracking of interest points is typically based on brightness constancy. This thesis contributes to mitigating these phenomena from two different perspectives. The first addresses the problem from a deep learning perspective, enhancing images into invariant and richer representations for VO and SLAM and benefiting from the generalization properties of deep neural networks; it is also demonstrated how the insertion of long short-term memory (LSTM) units yields temporally consistent sequences, since the estimation depends on previous states. The second takes a more traditional perspective and contributes a purely geometric tracking of line segments in challenging stereo streams with complex or varying illumination, since line segments are intrinsically more informative.
    Additionally, an open-source C++ implementation of the proposed algorithms has been released along with the published articles and some extra multimedia material for the benefit of the community. Thesis defense date: 26 February 2020.
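    For the deep-learning contribution described above (image enhancement with an LSTM for temporal consistency), the following PyTorch sketch is only a toy stand-in: it predicts a single global gain and bias per frame and carries them through an LSTM so consecutive frames do not flicker, whereas the thesis learns far richer invariant representations. The module name and tensor shapes are our assumptions.

```python
import torch
import torch.nn as nn

class TemporalEnhancer(nn.Module):
    """Per-frame enhancement with a recurrent state for temporal consistency.

    A small conv encoder produces a per-frame code; an LSTM carries it across
    frames so consecutive enhanced images do not flicker under abrupt exposure changes.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)                    # predicts a global gain and bias

    def forward(self, frames):
        # frames: (B, T, 1, H, W) grayscale sequence in [0, 1]
        B, T, C, H, W = frames.shape
        codes = self.encoder(frames.reshape(B * T, C, H, W)).reshape(B, T, 32)
        states, _ = self.lstm(codes)                        # temporal smoothing of the codes
        gain_bias = self.head(states)                       # (B, T, 2)
        gain = torch.sigmoid(gain_bias[..., 0]) * 2.0       # gain in (0, 2)
        bias = torch.tanh(gain_bias[..., 1]) * 0.5          # bias in (-0.5, 0.5)
        out = frames * gain.view(B, T, 1, 1, 1) + bias.view(B, T, 1, 1, 1)
        return out.clamp(0.0, 1.0)

# seq = torch.rand(1, 8, 1, 120, 160)   # 8-frame toy sequence
# enhanced = TemporalEnhancer()(seq)
```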

    Enhancing 3D Autonomous Navigation Through Obstacle Fields: Homogeneous Localisation and Mapping, with Obstacle-Aware Trajectory Optimisation

    Small flying robots have numerous potential applications, from quadrotors for search and rescue, infrastructure inspection and package delivery to free-flying satellites for assistance activities inside a space station. To enable these applications, a key challenge is autonomous navigation in 3D, near obstacles, on a power-, mass- and computation-constrained platform. This challenge requires a robot to perform localisation, mapping, dynamics-aware trajectory planning and control. The current state of the art uses separate algorithms for each component. Here, the aim is a more homogeneous approach in the search for improved efficiencies and capabilities. First, an algorithm is described to perform Simultaneous Localisation And Mapping (SLAM) with a physical 3D map representation that can also be used to represent obstacles for trajectory planning: Non-Uniform Rational B-Spline (NURBS) surfaces. Termed NURBSLAM, this algorithm is shown to combine the typically separate tasks of localisation and obstacle mapping. Second, a trajectory optimisation algorithm is presented that produces dynamically optimal trajectories with direct consideration of obstacles, providing a middle ground between path planners and trajectory smoothers. Called the Admissible Subspace TRajectory Optimiser (ASTRO), the algorithm can produce trajectories that are easier to track than the state of the art for flight near obstacles, as shown in flight tests with quadrotors. For quadrotors to track trajectories, a critical component is the differential flatness transformation that links the position and attitude controllers. Existing singularities in this transformation are analysed, solutions are proposed, and these are then demonstrated in flight tests. Finally, the combined system of NURBSLAM and ASTRO is tested against the state of the art in a novel simulation environment to prove the concept that a single 3D representation can be used for localisation, mapping, and planning.
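    The differential flatness transformation mentioned above is the standard map from a quadrotor's flat outputs (position trajectory and yaw) to collective thrust and attitude. The sketch below implements that textbook transform with a simple numerical guard near the singular configurations; it does not reproduce the thesis' singularity analysis or its proposed solutions, and the function name and constants are ours.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def flat_outputs_to_attitude(accel_des, yaw_des, mass=1.0, eps=1e-6):
    """Standard quadrotor differential-flatness map from flat outputs to attitude.

    accel_des: desired translational acceleration (3,) from the trajectory.
    yaw_des:   desired yaw angle in radians.
    Returns the collective thrust (N) and a body-to-world rotation matrix.
    The map degenerates when the required thrust vector vanishes or when the
    desired body z-axis aligns with the yaw reference direction; here we only
    guard numerically rather than resolve the singularity.
    """
    thrust_vec = mass * (accel_des - GRAVITY)       # force the rotors must supply
    thrust = np.linalg.norm(thrust_vec)
    z_b = thrust_vec / max(thrust, eps)             # desired body z-axis

    x_c = np.array([np.cos(yaw_des), np.sin(yaw_des), 0.0])   # yaw reference direction
    y_b = np.cross(z_b, x_c)
    y_b /= max(np.linalg.norm(y_b), eps)            # singular if z_b is parallel to x_c
    x_b = np.cross(y_b, z_b)

    R = np.column_stack([x_b, y_b, z_b])            # body-to-world rotation
    return thrust, R

# Hover check: zero desired acceleration gives thrust = m*g and an identity attitude.
# print(flat_outputs_to_attitude(np.zeros(3), 0.0))
```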