
    A Factor Graph Approach to Multi-Camera Extrinsic Calibration on Legged Robots

    Legged robots are becoming popular not only in research, but also in industry, where they can demonstrate their superiority over wheeled machines in a variety of applications. Whether acting as mobile manipulators or simply as all-terrain ground vehicles, these machines need to precisely track desired base and end-effector trajectories, perform Simultaneous Localization and Mapping (SLAM), and move through challenging environments, all while keeping balance. A crucial requirement for these tasks is that all onboard sensors be properly calibrated and synchronized to provide consistent signals to the software modules they feed. In this paper, we focus on the problem of calibrating the relative pose between a set of cameras and the base link of a quadruped robot. This pose is fundamental to successfully performing sensor fusion, state estimation, mapping, and any other task requiring visual feedback. To solve this problem, we propose an approach based on factor graphs that jointly optimizes the mutual position of the cameras and the robot base using kinematics and fiducial markers. We also quantitatively compare its performance with other state-of-the-art methods on the hydraulic quadruped robot HyQ. The proposed approach is simple, modular, and independent of external devices other than the fiducial marker.
    Comment: To appear in "The Third IEEE International Conference on Robotic Computing (IEEE IRC 2019)"
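    As a rough illustration of the underlying idea (not the authors' factor-graph implementation, and with all values synthetic), a camera-to-base extrinsic can be recovered as a nonlinear least-squares problem over fiducial observations and kinematic base poses. The sketch below uses planar SE(2) poses and noiseless simulated data:

```python
import numpy as np
from scipy.optimize import least_squares

def compose(a, b):
    """Compose two SE(2) poses (x, y, theta)."""
    x, y, t = a
    c, s = np.cos(t), np.sin(t)
    return np.array([x + c * b[0] - s * b[1], y + s * b[0] + c * b[1], t + b[2]])

def inverse(a):
    """Invert an SE(2) pose (x, y, theta)."""
    x, y, t = a
    c, s = np.cos(t), np.sin(t)
    return np.array([-(c * x + s * y), s * x - c * y, -t])

# Ground-truth camera pose in the base frame, and marker pose in the world
T_bc_true = np.array([0.30, 0.05, 0.20])
T_wm = np.array([2.0, 1.0, 0.0])

# Synthetic base poses (from "kinematics") and the camera's marker observations
rng = np.random.default_rng(0)
base_poses = [np.array([rng.uniform(-1, 1), rng.uniform(-1, 1),
                        rng.uniform(-1.5, 1.5)]) for _ in range(20)]
obs = [compose(inverse(compose(T_wb, T_bc_true)), T_wm) for T_wb in base_poses]

def residuals(T_bc):
    """Each observation should map the base pose back onto the fixed marker."""
    r = []
    for T_wb, T_cm in zip(base_poses, obs):
        e = compose(compose(T_wb, T_bc), T_cm) - T_wm
        e[2] = (e[2] + np.pi) % (2 * np.pi) - np.pi  # wrap angle error
        r.extend(e)
    return np.array(r)

sol = least_squares(residuals, x0=np.zeros(3))
```

    In the paper's factor-graph formulation the same constraints appear as factors between base, camera, and marker variables; the batch least-squares above is the simplest stand-in for that joint optimization.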

    Accurate and Interactive Visual-Inertial Sensor Calibration with Next-Best-View and Next-Best-Trajectory Suggestion

    Visual-Inertial (VI) sensors are popular in robotics, self-driving vehicles, and augmented and virtual reality applications. In order to use them for any computer vision or state-estimation task, a good calibration is essential. However, collecting informative calibration data that renders the calibration parameters observable is not trivial for a non-expert. In this work, we introduce a novel VI calibration pipeline that guides a non-expert, using a graphical user interface and information theory, to collect informative calibration data with Next-Best-View and Next-Best-Trajectory suggestions for calibrating the intrinsics, extrinsics, and temporal misalignment of a VI sensor. We show through experiments that our method is faster, more accurate, and more consistent than state-of-the-art alternatives. Specifically, we show how calibrations obtained with our proposed method achieve higher-accuracy estimation results when used by state-of-the-art VI Odometry as well as VI-SLAM approaches. The source code of our software can be found at: https://github.com/chutsu/yac
    Comment: 8 pages, 11 figures, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
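    The abstract does not give the selection criterion in detail, but a common information-theoretic scheme for next-best-view selection is to rank candidate views by the expected gain in the log-determinant of the Fisher information of the calibration parameters. A minimal sketch, with hypothetical Jacobians standing in for real candidate measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n_params = 6  # e.g. the calibration parameters still being estimated

# Information accumulated from the data collected so far (weak prior)
H_prior = np.eye(n_params) * 1e-3

# Hypothetical stacked measurement Jacobians, one per candidate view
candidates = [rng.normal(size=(8, n_params)) for _ in range(5)]

def info_gain(H, J):
    """Gain in log-det of the Fisher information if the candidate
    with stacked Jacobian J were executed (Gauss-Newton approximation)."""
    _, logdet_new = np.linalg.slogdet(H + J.T @ J)
    _, logdet_old = np.linalg.slogdet(H)
    return logdet_new - logdet_old

best = max(range(len(candidates)), key=lambda i: info_gain(H_prior, candidates[i]))
```

    The suggested view is simply the candidate maximizing this gain; a GUI can then render that view for the user to reproduce.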

    UcoSLAM: Simultaneous Localization and Mapping by Fusion of KeyPoints and Squared Planar Markers

    This paper proposes a novel approach for Simultaneous Localization and Mapping that fuses natural and artificial landmarks. Most SLAM approaches use natural landmarks (such as keypoints). However, these are unstable over time, repetitive in many cases, or insufficient for robust tracking (e.g. in indoor buildings). On the other hand, other approaches employ artificial landmarks (such as squared fiducial markers) placed in the environment to aid tracking and relocalization. We propose a method that integrates both approaches in order to achieve long-term robust tracking in many scenarios. Our method has been compared to the state-of-the-art methods ORB-SLAM2 and LDSO on the public datasets KITTI, EuRoC-MAV, TUM, and SPM, obtaining better precision, robustness, and speed. Our tests also show that the combination of markers and keypoints achieves better accuracy than either of them independently.
    Comment: Paper submitted to Pattern Recognition
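    One simple way to see why combining the two landmark types beats either alone (a generic illustration, not UcoSLAM's actual fusion machinery) is covariance-weighted fusion of two independent pose estimates: the combined estimate always has lower uncertainty than the better of the two inputs. All numbers below are made up:

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Information-filter fusion of two independent estimates of the
    same quantity: inverse covariances (information matrices) add."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    x = P @ (I1 @ x1 + I2 @ x2)
    return x, P

# Hypothetical camera-position estimates (metres) from each landmark type
x_kp = np.array([1.02, 0.48, 0.31])          # from keypoints (noisier)
P_kp = np.diag([0.04, 0.04, 0.09])
x_mk = np.array([0.98, 0.52, 0.29])          # from a fiducial marker
P_mk = np.diag([0.01, 0.01, 0.01])

x_fused, P_fused = fuse(x_kp, P_kp, x_mk, P_mk)
```

    With diagonal covariances the fused estimate is a componentwise convex combination of the inputs, weighted toward the more certain one.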

    Self-Describing Fiducials for GPS-Denied Navigation of Unmanned Aerial Vehicles

    Accurate estimation of an Unmanned Aerial Vehicle’s (UAV’s) location is critical for the operation of the UAV when it is controlled completely by its onboard processor. This can be particularly challenging in environments in which GPS is not available (GPS-denied). Many of the options previously explored for estimating a UAV’s location without GPS require more sophisticated processors than can feasibly be mounted on a UAV because of weight, size, and power restrictions. Many options are also aimed at indoor operation and lack the range capabilities to scale to outdoor operations. This research explores an alternative method of GPS-denied navigation that utilizes line-of-sight measurements to self-describing fiducials to aid in position determination. Each self-describing fiducial is an easily identifiable object fixed at a specific location. Each fiducial relays data containing its specific location to the observing UAV. The UAV can measure its relative position to the fiducial using camera images. This measurement can be combined with measurements from an Inertial Measurement Unit (IMU) to obtain a more accurate estimate of the UAV’s location. In this research, a simulation is used to validate and assess the performance of algorithms used to estimate the UAV’s position from these measurements. This research analyzes the effectiveness of the estimation algorithm when used with various IMUs and fiducial spacings. The effect of how quickly camera images of fiducials can be captured and processed is also analyzed. Preparations for demonstrating this system with hardware are then presented and discussed, including options for fiducial type and a way to measure the true position of the UAV. The results from the simulated scenarios and the hardware demonstration preparation are analyzed, and future work is discussed.
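    The camera-plus-IMU combination described above is typically realized with a Kalman filter: the IMU drives the prediction step, and an occasional fiducial sighting corrects the drift. A minimal 1-D sketch (a constant-velocity model stands in for IMU propagation; all noise values are assumed, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.1, 200
accel_noise, fix_noise = 0.5, 0.2   # assumed process / fiducial-fix std-devs

F = np.array([[1.0, dt], [0.0, 1.0]])                 # motion model
Q = accel_noise**2 * np.array([[dt**4 / 4, dt**3 / 2],
                               [dt**3 / 2, dt**2]])   # process noise
H = np.array([[1.0, 0.0]])                            # fix measures position
R = np.array([[fix_noise**2]])

truth = np.array([0.0, 1.0])        # true [position, velocity]
x, P = np.zeros(2), np.eye(2) * 10.0

for k in range(n):
    truth = F @ truth
    # Predict: propagate state and covariance with the motion model
    x, P = F @ x, F @ P @ F.T + Q
    if k % 10 == 0:                 # a fiducial is only occasionally in view
        z = truth[0] + rng.normal(0.0, fix_noise)
        y = z - H @ x               # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P

err = abs(x[0] - truth[0])
```

    Between fixes the covariance `P` grows, mirroring the dead-reckoning drift the thesis analyzes as a function of fiducial spacing; each sighting shrinks it again.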