57 research outputs found

    Self-supervised Monocular Road Detection in Desert Terrain


    On Collaborative Aerial and Surface Robots for Environmental Monitoring of Water Bodies

    Remote monitoring is an essential task in helping to maintain Earth's ecosystems; a notable example is the monitoring of riverine environments. The solution proposed in this paper is to use an electric boat (ASV - Autonomous Surface Vehicle) operating in symbiosis with a quadrotor (UAV - Unmanned Aerial Vehicle). We present the architecture and solutions adopted and compare them with other collaborative robotic systems, in the expectation that this comparison can serve as a survey for others building such systems. The proposed architecture exploits the symbiotic partnership between the two robots, covering the perception, navigation, coordination, and integration aspects.
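    The abstract names coordination as one of the covered aspects but gives no detail; purely as an illustration, a minimal sketch of an ASV-side coordination loop for a partnered UAV follows. All class names, message fields, and thresholds are assumptions for illustration, not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class UavStatus:
            """Telemetry the UAV reports to the boat (illustrative fields)."""
            lat: float
            lon: float
            altitude_m: float
            battery_pct: float

        @dataclass
        class SurveyWaypoint:
            """A monitoring waypoint the boat asks the UAV to visit."""
            lat: float
            lon: float
            loiter_s: float

        class AsvCoordinator:
            """Minimal ASV-side loop: dispatch survey waypoints and recall
            the UAV to the boat when its battery runs low."""
            def __init__(self, waypoints, low_battery_pct=30.0):
                self.waypoints = list(waypoints)
                self.low_battery_pct = low_battery_pct

            def next_command(self, status: UavStatus):
                if status.battery_pct < self.low_battery_pct:
                    return "RETURN_TO_BOAT"
                return self.waypoints.pop(0) if self.waypoints else "HOLD"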

    Visual road following using intrinsic images

    We present a real-time vision-based road-following method for mobile robots in outdoor environments. The approach combines an image processing method that retrieves illumination-invariant images with an efficient path-following algorithm. The method allows a mobile robot to autonomously navigate along pathways of different types, in adverse lighting conditions, using monocular vision.
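    The abstract does not spell out how the illumination-invariant image is formed; one widely used technique for this in outdoor road following is the log-chromaticity projection, sketched below under the assumption of an 8-bit BGR input. The alpha constant is camera-dependent; 0.48 is a placeholder, not a value from the paper.

        import numpy as np

        def illumination_invariant(bgr, alpha=0.48):
            """Log-chromaticity projection: collapses a colour image onto a
            one-channel image that is largely insensitive to the colour
            temperature of the illuminant (and hence to shadows)."""
            img = bgr.astype(np.float64) / 255.0 + 1e-6  # avoid log(0)
            b, g, r = img[..., 0], img[..., 1], img[..., 2]
            ii = 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)
            return np.clip(ii, 0.0, 1.0)

    A path-following controller can then operate on this single channel, since sunlit road and shadowed road map to similar intensities.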

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS, and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information extends beyond the range of the robot's sensors and predicts where the mobile robot can find buildings and potentially drivable ground.
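    The paper segments the aerial image using the ground-level semantics; as a simpler stand-in for the idea of combining the two sources, the sketch below fuses per-cell building probabilities from a co-registered ground map and an aerial classifier with a log-odds update. This is an illustrative assumption, not the segmentation method the paper describes.

        import numpy as np

        def fuse_building_probabilities(p_ground, p_aerial, eps=1e-6):
            """Per-cell Bayesian fusion (log-odds sum) of two independent
            probability grids that a cell is occupied by a building."""
            p_g = np.clip(p_ground, eps, 1.0 - eps)
            p_a = np.clip(p_aerial, eps, 1.0 - eps)
            logit = np.log(p_g / (1.0 - p_g)) + np.log(p_a / (1.0 - p_a))
            return 1.0 / (1.0 + np.exp(-logit))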

    Learning-on-the-Drive: Self-supervised Adaptation of Visual Offroad Traversability Models

    Autonomous off-road driving requires understanding traversability, i.e., the suitability of a given terrain to drive over. When off-road vehicles travel at high speed (>10 m/s), they need to reason at long range (50-100 m) for safe and deliberate navigation. Moreover, vehicles often operate in new environments and under different weather conditions. LiDAR provides accurate estimates that are robust to visual appearance; however, it is often too noisy beyond 30 m for fine-grained estimates due to sparse measurements. Conversely, vision-based models give dense predictions at greater distances but perform poorly at all ranges when out of the training distribution. To address these challenges, we present ALTER, an off-road perception module that adapts on-the-drive to combine the best of both sensors. Our visual model continuously learns from new near-range LiDAR measurements. This self-supervised approach enables accurate long-range traversability prediction in novel environments without hand-labeling. Results on two distinct real-world off-road environments show up to a 52.5% improvement in traversability estimation over LiDAR-only estimates and a 38.1% improvement over a non-adaptive visual baseline.
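    The key self-supervision step such a pipeline implies is turning near-range LiDAR traversability estimates into pixel labels for the visual model; a minimal sketch follows. The function name, frame conventions, and calibration inputs are assumptions, not ALTER's actual interface.

        import numpy as np

        def lidar_self_labels(points_xyz, traversable, K, T_cam_lidar, img_shape):
            """Project near-range LiDAR points into the camera image and use
            their traversability estimates as sparse self-supervised labels
            (-1 marks unlabeled pixels) for fine-tuning the visual model."""
            pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
            pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]    # LiDAR -> camera frame
            in_front = pts_cam[:, 2] > 0.1                # keep points ahead of the camera
            uv = (K @ pts_cam[in_front].T).T
            uv = (uv[:, :2] / uv[:, 2:3]).astype(int)     # perspective projection
            h, w = img_shape[:2]
            labels = np.full((h, w), -1, dtype=np.int8)
            valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
            labels[uv[valid, 1], uv[valid, 0]] = traversable[in_front][valid]
            return labels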