3,387 research outputs found

    Visual-Inertial Mapping with Non-Linear Factor Recovery

    Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping. Their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward. Estimating motion and geometry from a set of images requires large baselines, so most systems operate on keyframes separated by large time intervals. Inertial data, on the other hand, degrades quickly with the length of these intervals: after several seconds of integration it typically contains little useful information. In this paper, we propose to extract the information relevant for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that optimally approximate the information on the trajectory accumulated by VIO. To obtain a globally consistent map, we combine these factors with loop-closing constraints in bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable and improve the robustness and accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over state-of-the-art approaches.
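    To make the factor-recovery idea concrete, the following is a minimal sketch, not the authors' implementation: recovered odometry factors (whitened by an assumed information weight standing in for the information recovered from VIO) are combined with a loop-closure constraint in a single least-squares problem. All names, weights, and the 1D trajectory are illustrative assumptions.

```python
# Toy fusion of recovered VIO odometry factors with a loop closure.
import numpy as np
from scipy.optimize import least_squares

# Keyframe positions estimated by a (simulated) VIO front end, with drift.
vio_positions = np.array([0.0, 1.0, 2.1, 3.3, 4.6])

# Recovered relative factors: (i, j, measured offset, information weight).
vio_factors = [(i, i + 1, vio_positions[i + 1] - vio_positions[i], 100.0)
               for i in range(len(vio_positions) - 1)]

# A loop closure says keyframe 4 re-observes the place 4.0 m from keyframe 0.
loop_factors = [(0, 4, 4.0, 400.0)]

def residuals(x):
    res = []
    for i, j, meas, info in vio_factors + loop_factors:
        # Whitened residual: sqrt(information) * (predicted - measured).
        res.append(np.sqrt(info) * ((x[j] - x[i]) - meas))
    # Gauge fixing: anchor the first keyframe at the origin.
    res.append(1e3 * x[0])
    return np.array(res)

solution = least_squares(residuals, vio_positions)
print("optimised keyframe positions:", solution.x.round(3))
```

    The same whitened-residual structure carries over to the full 6-DoF case, where the recovered factors additionally constrain roll and pitch.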

    Real-Time Panoramic Tracking for Event Cameras

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, these cameras are able to capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking with event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to the state of the art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify the robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset and on self-recorded sequences. Comment: Accepted to International Conference on Computational Photography 201
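    As an illustration of the geometry involved, here is a minimal sketch (not the paper's implementation) of how a single event at pixel (u, v) can be mapped into an equirectangular panorama given the current rotation-only camera pose. The intrinsics, rotation, and panorama size are assumed toy values.

```python
# Map one event pixel into a panorama under a rotation-only camera model.
import numpy as np

K = np.array([[200.0, 0.0, 120.0],     # assumed focal lengths / principal point
              [0.0, 200.0, 90.0],
              [0.0, 0.0, 1.0]])

def yaw_rotation(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def event_to_panorama(u, v, R, pano_width=1024, pano_height=512):
    """Back-project an event pixel to a ray, rotate it into the map frame,
    and convert the ray direction to equirectangular panorama coordinates."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray = R @ (ray / np.linalg.norm(ray))
    lon = np.arctan2(ray[0], ray[2])          # azimuth in [-pi, pi]
    lat = np.arcsin(np.clip(ray[1], -1, 1))   # elevation in [-pi/2, pi/2]
    px = (lon / (2 * np.pi) + 0.5) * pano_width
    py = (lat / np.pi + 0.5) * pano_height
    return px, py

# Tracking would adjust R so incoming events land on previously mapped event
# locations; here we only show the forward mapping for one event.
print(event_to_panorama(130.0, 95.0, yaw_rotation(np.deg2rad(10.0))))
```

    In a direct formulation such as the one described above, the rotation would be refined so that the reprojected event positions align with the map, using only where events occur rather than scene appearance.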

    Refraction-corrected ray-based inversion for three-dimensional ultrasound tomography of the breast

    Ultrasound tomography (UST) has seen a revival of interest in the past decade, especially for breast imaging, due to improvements in both ultrasound and computing hardware. In particular, three-dimensional ultrasound tomography, a fully tomographic method in which the medium to be imaged is surrounded by ultrasound transducers, has become feasible. In this paper, a comprehensive derivation and study of a robust framework for large-scale bent-ray ultrasound tomography in 3D for a hemispherical detector array is presented. Two ray-tracing approaches are derived and compared. More significantly, the problem of linking the rays between emitters and receivers, which is challenging in 3D due to the high number of degrees of freedom for the trajectory of rays, is analysed both as a minimisation and as a root-finding problem. The ray-linking problem is parameterised for a convex detection surface, and three robust, accurate, and efficient ray-linking algorithms are formulated and demonstrated. To stabilise these methods, novel adaptive-smoothing approaches are proposed that control the conditioning of the update matrices to ensure accurate linking. The nonlinear UST problem of estimating the sound speed is recast as a series of linearised subproblems, each solved with the ray-tracing and ray-linking algorithms above within a steepest descent scheme. The whole imaging algorithm is demonstrated to be robust and accurate on realistic data simulated using a full-wave acoustic model and an anatomical breast phantom, incorporating the errors due to time-of-flight picking that would be present with measured data. This method can be used to provide low-artefact, quantitatively accurate 3D sound speed maps. In addition to being useful in their own right, such 3D sound speed maps can be used to initialise full-wave inversion methods, or as an input to photoacoustic tomography reconstructions.
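    To illustrate the ray-linking idea in its simplest form, the following is a 2D toy sketch (not the paper's 3D formulation): the launch angle of a ray traced from an emitter is adjusted by a root finder until the ray arrives at a chosen receiver. The sound-speed model, geometry, and crude bending rule are all assumed stand-ins.

```python
# Ray linking posed as 1D root finding on the launch angle (2D toy example).
import numpy as np
from scipy.optimize import brentq

def sound_speed(x, y):
    # Assumed smooth sound-speed map (m/s) with a soft inclusion at the origin.
    return 1500.0 + 40.0 * np.exp(-(x**2 + y**2) / 0.01)

def trace_ray(angle, step=1e-3, max_steps=2000):
    """Step a ray from the emitter with a crude gradient-bending rule
    (a stand-in for a proper eikonal-based ray tracer) until it reaches
    the receiver plane at x = +0.1 m or runs out of steps."""
    pos = np.array([-0.1, 0.0])                         # emitter position (m)
    direction = np.array([np.cos(angle), np.sin(angle)])
    eps = 1e-4
    for _ in range(max_steps):
        grad = np.array([
            sound_speed(pos[0] + eps, pos[1]) - sound_speed(pos[0] - eps, pos[1]),
            sound_speed(pos[0], pos[1] + eps) - sound_speed(pos[0], pos[1] - eps),
        ]) / (2 * eps)
        c = sound_speed(pos[0], pos[1])
        # Bend the ray towards lower sound speed (perpendicular gradient only).
        direction -= step * (grad - np.dot(grad, direction) * direction) / c
        direction /= np.linalg.norm(direction)
        pos = pos + step * direction
        if pos[0] >= 0.1:                               # reached the receiver plane
            break
    return pos

def linking_error(angle, receiver_y=0.05):
    """Vertical miss distance of the traced ray relative to a receiver at (0.1, 0.05)."""
    return trace_ray(angle)[1] - receiver_y

# Root-find the launch angle that links the emitter to the receiver.
linked_angle = brentq(linking_error, -0.5, 0.8)
print("linked launch angle (rad):", round(linked_angle, 4))
```

    In 3D the launch direction has two degrees of freedom, which is what makes robust linking harder and motivates the minimisation and root-finding formulations, and the adaptive smoothing, discussed in the abstract.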