    Self-Supervised Depth Correction of Lidar Measurements from Map Consistency Loss

    Depth perception is an invaluable source of information for 3D mapping and various robotics applications. However, point cloud maps acquired with consumer-level light detection and ranging sensors (lidars) still suffer from bias related to local surface properties and measurement conditions such as beam-to-surface incidence angle, distance, texture, reflectance, or illumination. This has recently motivated researchers to exploit traditional filters, as well as the deep learning paradigm, to suppress these depth sensor errors while preserving geometric detail and map consistency. Despite these efforts, depth correction of lidar measurements remains an open challenge, mainly due to the lack of clean 3D data that could serve as ground truth. In this paper, we introduce two novel point cloud map consistency losses that facilitate self-supervised learning of lidar depth correction models on real data. Specifically, the models exploit multiple point cloud measurements of the same scene from different viewpoints in order to learn to reduce the bias based on the constructed map consistency signal. Complementary to removing the bias from the measurements, we demonstrate that the depth correction models help to reduce localization drift. Additionally, we release a dataset containing point cloud data captured in an indoor corridor environment with precise localization and ground-truth mapping information.
    Comment: Accepted to RA-L 2023: https://www.ieee-ras.org/publications/ra-
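
    To make the map-consistency idea concrete, here is a minimal sketch (assuming PyTorch and scans already registered into a common map frame) of one plausible consistency signal: the thickness of local surface patches along their normals, which a bias-free map should minimize. The point-to-plane formulation and all names, including the hypothetical depth_model, are illustrative rather than the paper's exact losses.

        import torch

        def point_to_plane_consistency(points: torch.Tensor, k: int = 8) -> torch.Tensor:
            # points: (N, 3) map aggregated from corrected scans of the same scene.
            d = torch.cdist(points, points)                    # (N, N) pairwise distances
            knn = d.topk(k + 1, largest=False).indices[:, 1:]  # k nearest neighbors, skipping self
            nbrs = points[knn]                                 # (N, k, 3) local surface patches
            centered = nbrs - nbrs.mean(dim=1, keepdim=True)
            # The smallest singular value measures patch thickness along the local
            # normal; averaging it gives a differentiable map consistency loss.
            sigma = torch.linalg.svdvals(centered)             # (N, 3), descending order
            return sigma[:, -1].mean()

        # corrected_points = depth_model(raw_scans)   # hypothetical correction network
        # loss = point_to_plane_consistency(corrected_points)
        # loss.backward()                             # self-supervised, no ground truth needed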

    MonoForce: Self-supervised learning of physics-aware grey-box model for predicting the robot-terrain interaction

    We introduce an explainable, physics-aware, and end-to-end differentiable model that predicts the outcome of robot-terrain interaction from camera images. The proposed MonoForce model consists of a black-box module, which predicts robot-terrain interaction forces from the onboard camera, followed by a white-box module, which transforms these forces through the laws of classical mechanics into predicted trajectories. As the white-box model is implemented as a differentiable ODE solver, it enables measuring the physical consistency between the predicted forces and the ground-truth trajectories of the robot. Consequently, it creates a self-supervised loss similar to MonoDepth. To facilitate the reproducibility of the paper, we provide the source code. See the project GitHub for the code and supplementary materials such as videos and data sequences.
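
    The white-box step can be illustrated with a minimal sketch: integrate the predicted forces through a point-mass model with a differentiable explicit-Euler solver and compare the rollout to the ground-truth trajectory. The point-mass dynamics, the simple Euler integrator, and the hypothetical force_net are simplifying assumptions, not MonoForce's actual implementation.

        import torch

        def rollout(forces: torch.Tensor, x0: torch.Tensor, v0: torch.Tensor,
                    mass: float = 1.0, dt: float = 0.1) -> torch.Tensor:
            # forces: (T, 3) interaction forces predicted from camera images.
            xs, x, v = [], x0, v0
            for f in forces:
                v = v + (f / mass) * dt   # Newton's second law, explicit Euler step
                x = x + v * dt
                xs.append(x)
            return torch.stack(xs)        # (T, 3) predicted trajectory

        # Physical-consistency loss: gradients flow through the integrator
        # back into the force-prediction network.
        # pred_traj = rollout(force_net(image), x0, v0)
        # loss = torch.nn.functional.mse_loss(pred_traj, gt_traj)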

    SwarmCloak: Landing of a Swarm of Nano-Quadrotors on Human Arms

    We propose SwarmCloak, a novel system for landing a fleet of four flying robots on human arms using light-sensitive landing pads with vibrotactile feedback. We developed two types of wearable tactile displays with vibromotors that are activated by the light emitted from the LED array at the bottom of the quadcopters. In a user study, participants were asked to adjust the position of their arms to land up to two drones, having only visual feedback, only tactile feedback, or combined visual-tactile feedback. The experiment revealed that as the number of drones increases, tactile feedback plays a more important role in accurate landing and operator convenience. An important finding is that the best landing performance is achieved with the combination of tactile and visual feedback. The proposed technology could have a strong impact on human-swarm interaction, providing a new level of intuitiveness and engagement in swarm deployment right from the skin surface.
    Comment: ACM SIGGRAPH Asia 2019 conference (Emerging Technologies section). Best Demo Award by committee members.
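
    The pad mechanism lends itself to a small sketch: per-sensor light readings drive per-motor vibration intensity, so a descending quadcopter's LED array is felt as growing vibration under the part of the arm it approaches. The threshold and the linear mapping are illustrative assumptions, not the actual firmware.

        def update_vibromotors(light_levels, threshold=0.2):
            # light_levels: photodiode readings normalized to [0, 1], one per pad sensor.
            # Returns per-motor duty cycles: brighter light (drone closer) -> stronger vibration.
            return [(lv - threshold) / (1.0 - threshold) if lv > threshold else 0.0
                    for lv in light_levels]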

    Tactile Interaction of Human with Swarm of Nano-Quadrotors augmented with Adaptive Obstacle Avoidance

    This paper presents a human-robot interaction strategy for solving the multi-agent path planning problem that arises when a human operator guides a formation of quadrotors with impedance control and receives vibrotactile feedback. The proposed approach provides a solution based on a leader-followers architecture with a prescribed formation geometry that adapts dynamically to the environment and the operator. The presented approach takes the human hand velocity into account and changes the formation shape and dynamics accordingly, using impedance interlinks simulated between quadrotors. The path generated by the human operator and the impedance models is corrected with the potential fields method, which ensures that the robots' trajectories are collision-free, reshaping the geometry of the formation when required by environmental conditions (e.g., narrow passages). Tactile patterns representing the changing dynamics of the swarm are proposed. The user feels the state of the swarm at their fingertips and receives valuable information that improves the controllability of the complex formation. The proposed technology can potentially have a strong impact on human-swarm interaction, providing a new level of intuitiveness and immersion in swarm navigation.
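
    The two control ingredients can be sketched as follows, with illustrative gains and a single obstacle (not the paper's implementation): a mass-spring-damper impedance link pulls each follower toward its slot in the formation, while a repulsive potential field deflects the commanded position away from obstacles.

        import numpy as np

        def impedance_step(x, v, target, m=1.0, d=4.0, k=8.0, dt=0.02):
            # One step of the interlink dynamics m*x'' + d*x' + k*(x - target) = 0.
            a = (-d * v - k * (x - target)) / m
            v = v + a * dt
            return x + v * dt, v

        def repulsive_field(x, obstacle, influence=1.5, gain=2.0):
            # Classic repulsive potential gradient; zero outside the influence radius.
            diff = x - obstacle
            dist = np.linalg.norm(diff)
            if dist >= influence or dist == 0.0:
                return np.zeros_like(x)
            return gain * (1.0 / dist - 1.0 / influence) * diff / dist**3

        # target = leader_pos + formation_offset            # leader-followers geometry
        # x, v = impedance_step(x, v, target + repulsive_field(x, obstacle))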