815 research outputs found

    Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots

    Reliable, real-time 3D reconstruction and localization is a crucial prerequisite for navigating actively controlled capsule endoscopic robots, an emerging minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications, which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex-vivo porcine stomach models. Across trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors range from 1.58 to 2.17 cm. Comment: submitted to IROS 201
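    For reference, the reported metric can be computed along these lines: the RMS of per-point distances between the reconstructed surface and its ground-truth counterpart. This is a minimal sketch assuming known point correspondences; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def rms_surface_error(reconstructed, ground_truth):
    """RMS of per-point distances between corresponding points on the
    reconstructed and ground-truth surfaces (both of shape (N, 3)).
    Assumes correspondences are known; illustrative, not the paper's code."""
    dists = np.linalg.norm(reconstructed - ground_truth, axis=1)
    return np.sqrt(np.mean(dists ** 2))
```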

    A Non-Rigid Map Fusion-Based RGB-Depth SLAM Method for Endoscopic Capsule Robots

    In the gastrointestinal (GI) tract endoscopy field, ingestible wireless capsule endoscopy is considered a novel, minimally invasive diagnostic technology for inspecting the entire GI tract and diagnosing various diseases and pathologies. Since the development of this technology, medical device companies and many research groups have made significant progress in turning such passive capsule endoscopes into robotic, actively controlled capsule endoscopes that achieve almost all functions of current flexible endoscopes. However, robotic capsule endoscopy still faces challenges. One such challenge is the precise localization of these active devices in the 3D world, which is essential for precise three-dimensional (3D) mapping of the inner organ. A reliable 3D map of the explored organ could assist doctors in making more intuitive and accurate diagnoses. In this paper, we propose, to our knowledge for the first time in the literature, a visual simultaneous localization and mapping (SLAM) method developed specifically for endoscopic capsule robots. The proposed RGB-Depth SLAM method captures comprehensive, dense, globally consistent surfel-based maps of the inner organs explored by an endoscopic capsule robot in real time. This is achieved by using dense frame-to-model camera tracking and windowed surfel-based fusion, coupled with frequent model refinement through non-rigid surface deformations.
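    Surfel-based fusion of the kind described here typically merges each associated depth measurement into an existing surfel via a confidence-weighted running average, as in ElasticFusion-style systems that this line of work builds on. The sketch below shows that generic update; the class layout, field names, and confidence model are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

class Surfel:
    """Minimal surfel: position, normal, and accumulated confidence."""
    def __init__(self, position, normal, confidence=1.0):
        self.position = np.asarray(position, dtype=float)
        self.normal = np.asarray(normal, dtype=float)
        self.confidence = float(confidence)

def fuse(surfel, meas_pos, meas_normal, meas_conf=1.0):
    """Confidence-weighted running average of an associated measurement
    (generic surfel fusion; association and culling are omitted)."""
    total = surfel.confidence + meas_conf
    surfel.position = (surfel.confidence * surfel.position
                       + meas_conf * np.asarray(meas_pos, dtype=float)) / total
    n = (surfel.confidence * surfel.normal
         + meas_conf * np.asarray(meas_normal, dtype=float))
    surfel.normal = n / np.linalg.norm(n)  # renormalize the averaged normal
    surfel.confidence = total
```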

    Learned Semantic Multi-Sensor Depth Map Fusion

    Volumetric depth map fusion based on truncated signed distance functions (TSDFs) has become a standard method and is used in many 3D reconstruction pipelines. In this paper, we generalize this classic method in multiple ways: 1) Semantics: semantic information enriches the scene representation and is incorporated into the fusion process. 2) Multi-sensor: depth information can originate from different sensors or algorithms with very different noise and outlier statistics, which are accounted for during data fusion. 3) Scene denoising and completion: sensors can fail to recover depth for certain materials and lighting conditions, or data can be missing due to occlusions; our method denoises the geometry, closes holes, and computes a watertight surface for every semantic class. 4) Learning: we propose a neural network reconstruction method that unifies all these properties within a single powerful framework. Our method learns sensor and algorithm properties jointly with semantic depth fusion and scene completion, and can also be used as an expert system, e.g. to unify the strengths of various photometric stereo algorithms. Our approach is the first to unify all these properties. Experimental evaluations on both synthetic and real data sets demonstrate clear improvements. Comment: 11 pages, 7 figures, 2 tables, accepted for the 2nd Workshop on 3D Reconstruction in the Wild (3DRW2019) in conjunction with ICCV201
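    The classic TSDF fusion that this paper generalizes is a per-voxel weighted running average in the style of Curless and Levoy. Below is a minimal sketch of that baseline update, not of the learned multi-sensor variant proposed here; the array names and the weight cap are illustrative.

```python
import numpy as np

def integrate(tsdf, weight, new_sdf, new_weight, max_weight=100.0):
    """Weighted running-average TSDF update over a voxel grid.
    tsdf, weight: accumulated grids; new_sdf, new_weight: truncated SDF
    and per-voxel observation weight (0 where the voxel is unobserved)."""
    total = weight + new_weight
    observed = total > 0
    tsdf[observed] = (weight[observed] * tsdf[observed]
                      + new_weight[observed] * new_sdf[observed]) / total[observed]
    weight[:] = np.minimum(total, max_weight)  # cap weights so the map stays adaptive
    return tsdf, weight
```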

    IR-UWB Detection and Fusion Strategies using Multiple Detector Types

    Optimal detection of ultra-wideband (UWB) pulses in a UWB transceiver employing multiple detector types is proposed and analyzed in this paper. We propose several techniques for fusing the decisions made by individual IR-UWB detectors, and assess their performance for commonly used detector types such as the matched filter, energy detector, and amplitude detector. To do this, we derive the detection performance equation for each detector in terms of the false alarm rate, the pulse shape, and the number of UWB pulses used in detection, and apply these in the fusion algorithms. We show that fusing the decisions from individual detectors improves performance by approximately 4 dB in signal-to-noise ratio (SNR) for perfect detectability of a UWB signal in a practical scenario. Comment: Accepted for publication in IEEE WCNC 201
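    A standard way to fuse hard decisions from detectors with known but heterogeneous operating points is the Chair-Varshney log-likelihood-ratio rule. The sketch below is that textbook rule under assumed per-detector detection and false-alarm probabilities; it is not necessarily identical to the fusion techniques proposed in the paper.

```python
import numpy as np

def fuse_decisions(u, pd, pfa, log_prior_ratio=0.0):
    """Chair-Varshney fusion of binary local decisions u (0/1 array),
    given each detector's detection (pd) and false-alarm (pfa) probability.
    Returns True if 'signal present' is declared."""
    u, pd, pfa = map(np.asarray, (u, pd, pfa))
    llr = np.where(u == 1,
                   np.log(pd / pfa),              # detector said "present"
                   np.log((1 - pd) / (1 - pfa)))  # detector said "absent"
    return llr.sum() + log_prior_ratio > 0

# Example: matched filter, energy detector, amplitude detector (values assumed)
print(fuse_decisions([1, 0, 1], pd=[0.9, 0.7, 0.8], pfa=[0.05, 0.1, 0.05]))
```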

    Cooperative mmWave PHD-SLAM with Moving Scatterers

    Using the multiple-model (MM) probability hypothesis density (PHD) filter, millimeter-wave (mmWave) radio simultaneous localization and mapping (SLAM) in vehicular scenarios is susceptible to the movements of objects, in particular vehicles driving parallel to the ego vehicle. We propose and evaluate two countermeasures for tracking vehicle scatterers (VSs) in mmWave radio MM-PHD-SLAM. First, locally at each vehicle, we generate and treat the VS map PHD within the Bayesian recursion and modify the vehicle state correction with the VS map PHD. Second, in the global map fusion process at the base station, we average the VS map PHD and upload it with the self-vehicle posterior density, compute fusion weights, and prune targets with low Gaussian weights within arithmetic average-based map fusion. Simulation results show that the proposed cooperative mmWave radio MM-PHD-SLAM filter outperforms the previous filter in VS scenarios.
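    In Gaussian-mixture form, arithmetic average (AA) PHD fusion reduces to a fusion-weighted union of the local mixtures' components followed by pruning of low-weight components. A minimal sketch under that representation; the data layout and threshold are illustrative assumptions, not the paper's exact procedure.

```python
def aa_fuse(local_mixtures, fusion_weights, prune_threshold=1e-3):
    """Arithmetic-average fusion of Gaussian-mixture PHDs.
    local_mixtures: one list of (weight, mean, cov) components per vehicle;
    fusion_weights: per-vehicle coefficients summing to 1."""
    fused = []
    for omega, mixture in zip(fusion_weights, local_mixtures):
        for w, mean, cov in mixture:
            w_fused = omega * w
            if w_fused >= prune_threshold:  # prune components with low Gaussian weight
                fused.append((w_fused, mean, cov))
    return fused
```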