
    Real-Time Multi-Fisheye Camera Self-Localization and Egomotion Estimation in Complex Indoor Environments

    In this work, a real-time-capable multi-fisheye camera self-localization and egomotion estimation framework is developed. The thesis covers all aspects, ranging from omnidirectional camera calibration to the development of a complete multi-fisheye camera SLAM system based on a generic multi-camera bundle adjustment method.
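
    At the core of such a framework, a generic multi-camera bundle adjustment minimizes reprojection error jointly over rig poses, landmarks, and the fixed camera-to-rig extrinsics. Below is a minimal sketch of one residual, with hypothetical names and a simple pinhole projection standing in for the thesis's omnidirectional model:

```python
import numpy as np

def reprojection_residual(T_world_rig, T_rig_cam, X_world, uv_observed, K):
    """One observation residual in a generic multi-camera bundle adjustment.

    T_world_rig : 4x4 rig pose in the world frame (one per keyframe).
    T_rig_cam   : 4x4 fixed extrinsic of this camera within the rig.
    X_world     : 3-vector landmark position in the world frame.
    uv_observed : 2-vector measured image point.
    K           : 3x3 intrinsics (a stand-in for a fisheye projection model).
    """
    # Map the landmark into this camera's frame via the rig pose chain.
    T_cam_world = np.linalg.inv(T_world_rig @ T_rig_cam)
    X_cam = (T_cam_world @ np.append(X_world, 1.0))[:3]
    # Pinhole projection; a multi-fisheye system would substitute an
    # omnidirectional projection function here.
    uv_predicted = (K @ (X_cam / X_cam[2]))[:2]
    return uv_observed - uv_predicted
```

    A Gauss-Newton loop (or a solver such as Ceres) would stack these residuals over all cameras and keyframes; sharing one rig pose per timestamp across all cameras is what makes the formulation generic.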

    MAVIS: Multi-Camera Augmented Visual-Inertial SLAM using SE2(3) Based Exact IMU Pre-integration

    We present MAVIS, a novel optimization-based Visual-Inertial SLAM system designed for camera systems with multiple partially overlapping views. Our framework fully exploits the wide field of view offered by multi-camera systems and the metric-scale measurements provided by an inertial measurement unit (IMU). We introduce an improved IMU pre-integration formulation based on the exponential function of an automorphism of SE_2(3), which effectively enhances tracking performance under fast rotational motion and extended integration times. Furthermore, we extend the conventional front-end tracking and back-end optimization modules designed for monocular or stereo setups to multi-camera systems, and introduce implementation details that contribute to the performance of our system in challenging scenarios. The practical validity of our approach is supported by experiments on public datasets. MAVIS won first place in all vision-IMU tracks (single- and multi-session SLAM) of the Hilti SLAM Challenge 2023, with 1.7 times the score of the second-place entry. Comment: video link: https://youtu.be/Q_jZSjhNFf
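
    For context, SE_2(3) is the group of "extended poses" that bundles rotation, velocity, and position into a single matrix Lie group element, so one exponential map updates all three states consistently during integration. A sketch of the group element in standard textbook notation (not the paper's exact derivation):

```latex
% Extended pose used for IMU pre-integration on SE_2(3):
\mathcal{X} =
\begin{bmatrix}
  R & v & p \\
  0_{1\times 3} & 1 & 0 \\
  0_{1\times 3} & 0 & 1
\end{bmatrix} \in SE_2(3),
\qquad R \in SO(3), \quad v,\, p \in \mathbb{R}^3 .
```

    Pre-integrating gyroscope and accelerometer measurements as increments on this group, rather than treating R, v, and p separately, is what preserves consistency under fast rotation and long integration windows.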

    Omnidirectional DSO: Direct Sparse Odometry with Fisheye Cameras

    We propose a novel real-time direct monocular visual odometry method for omnidirectional cameras. Our method extends direct sparse odometry (DSO) by using the unified omnidirectional model as the projection function, which can be applied to fisheye cameras with a field of view (FoV) well above 180 degrees. This formulation allows the full area of the input image to be used even under strong distortion, whereas most existing visual odometry methods can only use a rectified and cropped part of it. Model parameters within an active keyframe window are jointly optimized, including the intrinsic/extrinsic camera parameters, the 3D positions of points, and affine brightness parameters. Thanks to the wide FoV, image overlap between frames is larger and points are more spatially distributed. Our results demonstrate that our method provides increased accuracy and robustness over state-of-the-art visual odometry algorithms. Comment: Accepted by IEEE Robotics and Automation Letters (RA-L), 2018 and IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 201
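
    The unified omnidirectional model referenced here first projects a 3D point onto the unit sphere and then applies a pinhole projection from a center displaced by a parameter xi along the optical axis; this is what lets directions more than 90 degrees off-axis still land at valid pixels. A minimal sketch with generic parameter names (not DSO's internal API):

```python
import numpy as np

def unified_project(X, fx, fy, cx, cy, xi):
    """Unified omnidirectional projection of a 3D point X in the camera frame.

    xi is the offset of the projection center from the sphere center; for
    xi > 0, points with Xs[2] > -xi keep a positive denominator, so rays
    beyond 180 degrees of FoV remain projectable.
    """
    Xs = X / np.linalg.norm(X)          # step 1: project onto the unit sphere
    denom = Xs[2] + xi                  # step 2: shifted pinhole projection
    u = fx * Xs[0] / denom + cx
    v = fy * Xs[1] / denom + cy
    return np.array([u, v])
```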

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques because it can support more reliable and robust localization, planning, and control, meeting key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are also reviewed. In addition, this study describes a real-world road test demonstrating a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure through the integration of Lidar and GNSS/INS, and that an online localization solution with 4-5 cm accuracy can be achieved based on this pre-generated map and online Lidar scan matching with a tightly fused inertial system.
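
    The online localization step described in the road test registers each incoming Lidar scan against the pre-generated map, with the inertial system supplying the initial pose guess. A minimal point-to-point ICP sketch of that scan matching, with hypothetical names (production pipelines typically use NDT or point-to-plane variants with tight IMU coupling):

```python
import numpy as np
from scipy.spatial import cKDTree

def scan_match(scan, map_points, T_init, iterations=20):
    """Register a Lidar scan (N,3) to a prebuilt map via point-to-point ICP."""
    tree = cKDTree(map_points)
    T = T_init.copy()                               # 4x4 guess from GNSS/INS
    for _ in range(iterations):
        P = (T[:3, :3] @ scan.T).T + T[:3, 3]       # scan in map frame
        _, idx = tree.query(P)                      # nearest map point each
        Q = map_points[idx]
        # Closed-form rigid alignment (Kabsch/Umeyama) of P onto Q.
        mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
        H = (P - mu_p).T @ (Q - mu_q)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_q - R @ mu_p
        dT = np.eye(4)
        dT[:3, :3], dT[:3, 3] = R, t
        T = dT @ T                                  # compose the correction
    return T
```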

    FSD-BRIEF: a distorted BRIEF descriptor for fisheye image based on spherical perspective model

    Fisheye images, with their far larger field of view (FOV), suffer severe radial distortion, so feature matching cannot achieve its best performance when traditional feature descriptors are used. To address this challenge, this paper reports a novel distorted Binary Robust Independent Elementary Feature (BRIEF) descriptor for fisheye images based on a spherical perspective model. First, the 3D gray centroid of each feature point is computed, and the position and direction of the feature point on the spherical image are described by a constructed feature-point attitude matrix. Then, based on the attitude matrix, the coordinate mapping between the BRIEF descriptor template and the fisheye image is established to compute the distorted BRIEF descriptor. Four experiments are provided to test and verify the invariance and matching performance of the proposed descriptor on fisheye images. The experimental results show that the proposed descriptor achieves distortion invariance and can significantly improve matching performance on fisheye images.
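
    The key operation is mapping the planar BRIEF sampling template into fisheye pixel coordinates through the sphere: the template is laid out in the tangent plane at the feature point's viewing direction and back-projected with the fisheye model. A simplified sketch assuming an equidistant fisheye model; the paper's attitude matrix additionally encodes the 3D-gray-centroid orientation, which the ad-hoc tangent basis below does not:

```python
import numpy as np

def pixel_to_sphere(u, v, cx, cy, f):
    """Back-project a fisheye pixel to a unit ray (equidistant model r = f*theta)."""
    du, dv = u - cx, v - cy
    theta = np.hypot(du, dv) / f                    # angle from the optical axis
    phi = np.arctan2(dv, du)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def sphere_to_pixel(X, cx, cy, f):
    """Forward equidistant projection of a unit ray onto the fisheye image."""
    theta = np.arccos(np.clip(X[2], -1.0, 1.0))
    phi = np.arctan2(X[1], X[0])
    return np.array([cx + f * theta * np.cos(phi),
                     cy + f * theta * np.sin(phi)])

def map_brief_sample(keypoint_uv, offset, cx, cy, f, scale=0.01):
    """Map one planar BRIEF test offset into fisheye pixels via the tangent plane."""
    d = pixel_to_sphere(*keypoint_uv, cx, cy, f)    # feature direction on sphere
    # Orthonormal tangent-plane basis at d (a stand-in for the attitude
    # matrix, which would also fix the in-plane rotation).
    a = np.array([0.0, 0.0, 1.0]) if abs(d[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    e1 = np.cross(d, a)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(d, e1)
    p = d + scale * (offset[0] * e1 + offset[1] * e2)
    return sphere_to_pixel(p / np.linalg.norm(p), cx, cy, f)
```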

    Self-Calibration of Mobile Multi-Sensor Systems Using Geometric 3D Features

    A mobile multi-sensor system enables the efficient spatial capture of objects and the environment. Calibrating the mobile multi-sensor system is a necessary preprocessing step for sensor data fusion and for accurate spatial capture. In conventional approaches, experts calibrate the system before use in laborious procedures by recording a calibration object of known shape. In contrast to such object-based calibrations, self-calibration is more practical, saves time, and determines the sought parameters with greater currency. This work presents a new method for the self-calibration of mobile multi-sensor systems, termed feature-based self-calibration. Feature-based self-calibration is a data-driven, universal approach suitable for any combination of a pose sensor and a depth sensor. Its fundamental assumption is that the sought parameters are best determined when the captured point cloud has the highest possible quality. The cost function used to assess this quality is based on geometric 3D features, which in turn are computed from the local neighborhood of each point. Besides a detailed analysis of different aspects of the self-calibration, such as the influence of the system poses on the result, the suitability of different geometric 3D features for self-calibration, and the convergence radius of the method, the feature-based self-calibration is evaluated on one synthetic and three real datasets, recorded with different sensors and in different environments. The experiments demonstrate the versatility of feature-based self-calibration with respect to sensors and environments. The results are always compared with a suitable object-based calibration from the literature and a further, re-implemented self-calibration. Compared with these methods, feature-based self-calibration achieves better or at least comparable accuracy on all datasets, and its accuracy and precision match the current state of research. For the dataset with the most accurate sensors, for example, the parameters of the relative translation between the rigid body of a motion capture system and a laser scanner are determined with an accuracy of about 1 cm, even though the distance measurement accuracy of this laser scanner is only 3 cm.
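
    The underlying idea maps directly onto a cost function: for a candidate set of calibration parameters, fuse the scans into one point cloud, compute an eigenvalue-based geometric 3D feature over each point's local neighborhood, and score the cloud's quality. A minimal sketch using mean planarity as the quality measure (the thesis analyzes several such features; all names here are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def calibration_cost(extrinsic, poses, scans, k=15):
    """Point cloud quality for a candidate depth-sensor-to-pose-sensor extrinsic.

    extrinsic : 4x4 candidate transform (the parameters being calibrated).
    poses     : list of 4x4 pose-sensor poses in the world frame.
    scans     : list of (N_i, 3) depth-sensor point arrays, one per pose.
    """
    # Fuse all scans into one world-frame cloud using the candidate extrinsic.
    cloud = np.vstack([
        (T @ extrinsic @ np.c_[s, np.ones(len(s))].T).T[:, :3]
        for T, s in zip(poses, scans)
    ])
    tree = cKDTree(cloud)
    _, idx = tree.query(cloud, k=k)                   # k nearest neighbors each
    planarity = []
    for nb in cloud[idx]:
        lam = np.linalg.eigvalsh(np.cov(nb.T))[::-1]  # lambda1 >= lambda2 >= lambda3
        planarity.append((lam[1] - lam[2]) / max(lam[0], 1e-12))
    # A well-calibrated system yields crisp surfaces, i.e. high planarity,
    # so a derivative-free optimizer would minimize the negative mean.
    return -float(np.mean(planarity))
```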

    Modeling Ozark Caves with Structure-from-Motion Photogrammetry: An Assessment of Stand-Alone Photogrammetry for 3-Dimensional Cave Survey

    Nearly all aspects of karst science and management begin with a map. Yet despite this fact, cave survey is largely conducted in the same archaic way that it has been for years: with a compass, tape measure, and a sketchpad. Traditional cave survey can establish accurate survey lines quickly; however, passage walls, ledges, profiles, and cross-sections are time-intensive and ultimately rely on the sketcher's experience at interpretively hand-drawing these features between survey stations. This project experiments with photogrammetry as a method of improving on traditional cave survey while avoiding some of the major pitfalls of terrestrial laser scanning. The proposed method allows the creation of 3D models that capture cave wall geometry and important cave formations, and makes it possible to create cross-sections anywhere desired. The interactive 3D cave models are produced cheaply, with equipment that can be operated in extremely confined, harsh conditions by unpaid volunteers with little to no technical training. While the rapid advancement of photogrammetric software has led to its use in many 3D modeling applications, there is only a sparse body of research examining the use of photogrammetry as a standalone method for surveying caves. The proposed methodology uses a GoPro camera and a 1000-lumen portable floodlight to capture still images down the length of cave passages. The procedure goes against several traditional rules of thumb, both operating in the dark with a moving light source and utilizing a wide-angle fisheye lens to capture scene information that is not perpendicular to the camera's field of view. Images are later processed into 3D models using Agisoft's PhotoScan. Four caves were modeled using the method, with varying levels of success. The best results occurred in dry, confined passages, while passages greater than 9 meters (30 ft) in width, or those with a great deal of standing water on the floor, produced models with large holes. An additional experiment was conducted in the University of Arkansas utility tunnel.