
    Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots

    Reliable, real-time 3D reconstruction and localization is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots, an emerging, minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robots, which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated on four different ex-vivo porcine stomach models. Across trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction error ranged from 1.58 to 2.17 cm. Comment: submitted to IROS 201
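    To illustrate how the reported error metric can be computed, here is a minimal sketch of a root mean square surface reconstruction error between a reconstructed map and a ground-truth point cloud; the nearest-neighbour pairing and the function name are assumptions for illustration, not the paper's code.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_error(reconstructed_pts, ground_truth_pts):
    """RMS of nearest-point distances (same unit as the inputs, e.g. cm).

    Hypothetical helper: pairs each reconstructed point with its nearest
    ground-truth point via a k-d tree, then takes the root mean square.
    """
    tree = cKDTree(ground_truth_pts)          # ground-truth surface samples
    dists, _ = tree.query(reconstructed_pts)  # nearest-neighbour distances
    return np.sqrt(np.mean(dists ** 2))

# Toy usage: a noisy resampling of a synthetic surface patch.
gt = np.random.rand(5000, 3) * 10.0
rec = gt + np.random.normal(scale=0.02, size=gt.shape)
print(f"RMS surface error: {rms_surface_error(rec, gt):.3f}")
```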

    Lazy localization using the Frozen-Time Smoother

    We present a new algorithm for solving the global localization problem, called the Frozen-Time Smoother (FTS). Time is 'frozen' in the sense that the belief always refers to the same time instant, instead of tracking a moving target as Monte Carlo Localization does. The algorithm applies when global localization is formulated as a smoothing problem and a precise estimate of the robot's incremental motion is available. These assumptions correspond to the case in which global localization is used to solve the loop-closing problem in SLAM. We compare the FTS to two Monte Carlo methods designed under the same assumptions. The experiments suggest that a naive implementation of the FTS is more efficient than an extremely optimized equivalent Monte Carlo solution. Moreover, the FTS is intrinsically lazy: it does not need frequent updates (scans can be integrated as rarely as once every several meters) and it can process data in arbitrary order. The source code and datasets are available for download.
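    As a rough illustration of the frozen-time idea, the sketch below weights hypotheses of the pose at the frozen instant t0 using a scan taken later, by projecting each hypothesis forward through the accumulated (assumed accurate) incremental motion. The particle representation, function names, and likelihood interface are illustrative assumptions, not the FTS implementation.

```python
import numpy as np

def compose(pose_a, pose_b):
    """Compose two SE(2) poses given as (x, y, theta)."""
    xa, ya, ta = pose_a
    xb, yb, tb = pose_b
    return np.array([xa + xb * np.cos(ta) - yb * np.sin(ta),
                     ya + xb * np.sin(ta) + yb * np.cos(ta),
                     ta + tb])

def frozen_time_update(particles, weights, rel_odom, scan, scan_likelihood):
    """Update the belief over the pose at the frozen time t0 with a later scan.

    rel_odom is the accumulated incremental motion from t0 to the scan's time,
    assumed accurate (the FTS assumption). scan_likelihood(pose, scan) scores a
    scan against the map; it is a hypothetical callback, not the paper's API.
    Because rel_odom carries the timing information, scans can be applied in
    any order and at any spacing.
    """
    for i, p in enumerate(particles):
        scan_pose = compose(p, rel_odom)              # project hypothesis forward
        weights[i] *= scan_likelihood(scan_pose, scan)
    return weights / weights.sum()
```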

    Long-term experiments with an adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot's map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrates the persistence performance of the proposed system in real changing environments, including an analysis of its long-term stability.
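    A minimal sketch of a multi-store-style update for one node's reference view is given below: features that keep being re-observed are promoted from a short-term to a long-term store, while features that repeatedly fail to reappear are forgotten. The thresholds, data structures, and names are illustrative assumptions, not the parameters or code of the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ViewMemory:
    """Toy multi-store update for the features of one map node's view.

    Promotion and forgetting thresholds are illustrative assumptions.
    """
    short_term: dict = field(default_factory=dict)   # feature_id -> observation count
    long_term: set = field(default_factory=set)      # stable, persistent features
    misses: dict = field(default_factory=dict)       # feature_id -> consecutive misses
    promote_after: int = 3
    forget_after: int = 5

    def update(self, observed_ids):
        """Reinforce re-observed features and decay those that disappeared."""
        observed = set(observed_ids)
        for fid in observed:
            self.misses.pop(fid, None)
            if fid in self.long_term:
                continue
            self.short_term[fid] = self.short_term.get(fid, 0) + 1
            if self.short_term[fid] >= self.promote_after:    # seen often enough:
                self.long_term.add(fid)                        # promote to long-term
                del self.short_term[fid]
        for fid in list(self.short_term):
            if fid not in observed:
                self.misses[fid] = self.misses.get(fid, 0) + 1
                if self.misses[fid] >= self.forget_after:      # environment changed:
                    del self.short_term[fid]                   # forget the feature
                    del self.misses[fid]
```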

    Development of Collaborative SLAM Algorithm for Team of Robots

    Simultaneous Localization and Mapping (SLAM) is a fundamental problem in building truly autonomous robots. A variety of methods and algorithms have been developed and applied to mobile robots over the last thirty years; however, each algorithm has its strengths and weaknesses. This thesis studies the most recently published techniques in the field of mobile-robot SLAM. Specifically, it focuses on investigating the robot-path and landmark-position estimation errors made by different methods. The Hybrid method, which uses FastSLAM as its front-end and EKF-SLAM as its back-end, combines the advantages of both methods and produces smaller errors when estimating the robot pose. The Hybrid method solves the single-robot SLAM problem by summing the weighted mean values of each particle in FastSLAM. The contribution of this thesis is an alternate mapping algorithm that extends this single-robot Hybrid SLAM algorithm to a multi-robot SLAM algorithm. In this algorithm, each robot builds a map of the environment separately and transfers its mapping information to a central computer. The central computer merges the landmark positions received from the different robots, and a revised landmark position as well as its covariance is calculated. Landmark positions observed by two robots are fused using Kalman filters based on their feature information.
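    As a rough sketch of the landmark-fusion step described above, the following shows a Kalman-style fusion of two landmark estimates (mean and covariance) reported by two robots. A common reference frame and known data association are assumed, and the function name is illustrative rather than taken from the thesis.

```python
import numpy as np

def fuse_landmark(x1, P1, x2, P2):
    """Fuse two landmark position estimates from two robots.

    Treats robot 2's estimate (x2, P2) as a direct observation of the landmark
    held by robot 1 (x1, P1); a shared frame and correct data association are
    assumed. Returns the revised position and its covariance.
    """
    K = P1 @ np.linalg.inv(P1 + P2)        # Kalman gain from the two covariances
    x_fused = x1 + K @ (x2 - x1)           # revised landmark position
    P_fused = (np.eye(len(x1)) - K) @ P1   # revised (smaller) covariance
    return x_fused, P_fused

# Toy usage: the same 2D landmark seen by two robots with different uncertainty.
x, P = fuse_landmark(np.array([2.0, 3.1]), np.diag([0.09, 0.04]),
                     np.array([2.2, 2.9]), np.diag([0.01, 0.16]))
print("fused position:", x)
print("fused covariance:\n", P)
```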