
    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially driveable ground.
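    The core idea of propagating ground-level semantic labels into an aerial image can be illustrated with a toy sketch. This is not the paper's segmentation method; the region-growing rule, intensity tolerance, and label names are assumptions made for illustration only.

```python
from collections import deque

import numpy as np

def grow_labels(image, seeds, tol=10):
    """Propagate seed labels to similar neighbouring pixels (4-connected).

    seeds: {(row, col): label} cells the robot has classified from the ground.
    """
    labels = dict(seeds)
    queue = deque(labels)
    h, w = image.shape
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in labels:
                # Extend a label only to pixels of similar intensity.
                if abs(int(image[nr, nc]) - int(image[r, c])) <= tol:
                    labels[(nr, nc)] = labels[(r, c)]
                    queue.append((nr, nc))
    return labels

# Toy aerial patch: a bright roof (200) next to dark ground (50).
img = np.array([[200, 200, 50],
                [200, 200, 50],
                [50,  50,  50]], dtype=np.uint8)
labels = grow_labels(img, {(0, 0): "building", (2, 2): "ground"})
```

In this toy patch, two seed cells observed by the robot are enough to label the whole image, mimicking how the paper extends semantics beyond the robot's sensor range.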

    Virtual sensors for human concepts—Building detection by an outdoor mobile robot

    In human–robot communication it is often important to relate robot sensor readings to concepts used by humans. We suggest the use of a virtual sensor (one or several physical sensors with a dedicated signal processing unit for the recognition of real-world concepts) and a method with which the virtual sensor can learn from a set of generic features. The virtual sensor robustly establishes the link between sensor data and a particular human concept. In this work, we present a virtual sensor for building detection that uses vision and machine learning to classify the image content in a particular direction as representing buildings or non-buildings. The virtual sensor is trained on a diverse set of image data, using features extracted from grey level images. The features are based on edge orientation, the configurations of these edges, and on grey level clustering. To combine these features, the AdaBoost algorithm is applied. Our experiments with an outdoor mobile robot show that the method is able to separate buildings from nature with a high classification rate, and to extrapolate well to images collected under different conditions. Finally, the virtual sensor is applied on the mobile robot, combining its classifications of sub-images from a panoramic view with spatial information (in the form of location and orientation of the robot) in order to communicate the likely locations of buildings to a remote human operator.
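    The feature-combination step can be sketched with a minimal AdaBoost over decision stumps. This is a generic textbook AdaBoost, not the authors' trained detector; the single "edge orientation" feature and the toy labels are assumptions for illustration.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=5):
    """AdaBoost with one-feature threshold stumps; labels y are in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        # Exhaustively pick the stump (feature, threshold, sign) with least
        # weighted error on the current sample weights.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # stump vote weight
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)            # upweight mistakes
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    """Sign of the weighted vote of all stumps."""
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in stumps)
    return np.sign(score)

# Toy data: a single edge-orientation score; buildings score high.
X = np.array([[0.9], [0.8], [0.2], [0.1]])
y = np.array([1, 1, -1, -1])
stumps = train_adaboost(X, y)
```

The real virtual sensor combines many such weak feature-based classifiers; the structure of the boosting loop is the same.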

    Lidar-level localization with radar? The CFEAR approach to accurate, fast and robust large-scale radar odometry in diverse environments

    This paper presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments (outdoors, from urban to woodland, and indoors in warehouses and mines) without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach CFEAR, we present an in-depth investigation on a wider range of data sets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves on the state of the art by 38%, thus, surprisingly, outperforming radar SLAM and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
    Comment: Accepted for publication in Transactions on Robotics. Edited 2022-11-07: updated affiliation and citation.
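    The role of a robust loss in scan registration can be illustrated with a stripped-down example: estimating a 2-D translation between two point sets by iteratively reweighted least squares, where a Cauchy-style weight suppresses outlier correspondences. This is only a sketch of the general technique, not the CFEAR solver (which registers oriented surface points with motion compensation); the scale parameter and the translation-only model are assumptions.

```python
import numpy as np

def cauchy_weight(r, c=0.5):
    """Cauchy robust-loss weight: large residuals get weights near zero."""
    return 1.0 / (1.0 + (r / c) ** 2)

def register_translation(src, dst, iters=20):
    """Estimate a 2-D translation aligning src to dst via IRLS + Cauchy loss."""
    t = np.zeros(2)
    for _ in range(iters):
        resid = dst - (src + t)                       # per-point residual vectors
        w = cauchy_weight(np.linalg.norm(resid, axis=1))
        t += (w[:, None] * resid).sum(0) / w.sum()    # weighted-mean update
    return t

src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [5., 5.]])
dst = src + np.array([1.0, -2.0])
dst[4] = [50.0, 50.0]          # gross outlier, e.g. a dynamic object
t = register_translation(src, dst)
```

With a plain least-squares average the outlier would drag the estimate far off; the robust weighting recovers the true translation to within a few millimetres on this toy data.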

    A Data-Efficient Approach for Long-Term Human Motion Prediction Using Maps of Dynamics

    Human motion prediction is essential for the safe and smooth operation of mobile service robots and intelligent vehicles around people. Commonly used neural network-based approaches often require large amounts of complete trajectories to represent motion dynamics in complex semantically-rich spaces. This requirement may complicate the deployment of physical systems in new environments, especially when the data is collected online from onboard sensors. In this paper, we explore a data-efficient alternative using maps of dynamics (MoDs) to represent place-dependent multi-modal spatial motion patterns, learned from prior observations. Our approach can perform efficient human motion prediction over long-term horizons of up to 60 seconds. We quantitatively evaluate its accuracy with a limited amount of training data in comparison to an LSTM-based baseline, and qualitatively show that the predicted trajectories reflect the natural semantic properties of the environment, e.g. the locations of short- and long-term goals, navigation in narrow passages, around obstacles, etc.
    Comment: In the 5th LHMP Workshop held in conjunction with the 40th IEEE International Conference on Robotics and Automation (ICRA), 29/05 - 02/06 2023, London.

    CLiFF-LHMP: Using Spatial Dynamics Patterns for Long-Term Human Motion Prediction

    Human motion prediction is important for mobile service robots and intelligent vehicles to operate safely and smoothly around people. The more accurate the predictions are, particularly over extended periods of time, the better a system can, e.g., assess collision risks and plan ahead. In this paper, we propose to exploit maps of dynamics (MoDs, a class of general representations of place-dependent spatial motion patterns, learned from prior observations) for long-term human motion prediction (LHMP). We present a new MoD-informed human motion prediction approach, named CLiFF-LHMP, which is data efficient, explainable, and insensitive to errors from an upstream tracking system. Our approach uses a CLiFF-map, a specific MoD trained with human motion data recorded in the same environment. We bias a constant velocity prediction with samples from the CLiFF-map to generate multi-modal trajectory predictions. On two public datasets we show that this algorithm outperforms the state of the art for predictions over very extended periods of time, achieving 45% more accurate prediction performance at 50 s compared to the baseline.
    Comment: Accepted to the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
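    The "bias a constant-velocity prediction with samples from the map" idea can be sketched as follows. This is an illustrative simplification, not the published CLiFF-LHMP algorithm: the grid map of (mean heading, concentration) pairs, the von Mises sampling, and the blending factor `beta` are all assumptions standing in for the real CLiFF-map machinery.

```python
import math
import random

def predict_path(pos, vel, flow_map, steps=10, dt=1.0, beta=0.3):
    """Roll out a constant-velocity prediction, nudging the heading each step
    toward a direction sampled from a coarse map of dynamics (MoD).

    flow_map: {(cell_x, cell_y): (mean_heading, concentration)}.
    """
    x, y = pos
    speed = math.hypot(*vel)
    heading = math.atan2(vel[1], vel[0])
    path = []
    for _ in range(steps):
        cell = (int(x), int(y))
        if cell in flow_map:
            mu, kappa = flow_map[cell]
            sample = random.vonmisesvariate(mu, kappa)   # sampled map heading
            # Blend current heading toward the sample (shortest angular path).
            heading += beta * math.atan2(math.sin(sample - heading),
                                         math.cos(sample - heading))
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        path.append((x, y))
    return path

# A map whose motion patterns all point "up" (heading pi/2), tightly peaked.
random.seed(1)
flow_map = {(i, j): (math.pi / 2, 50.0)
            for i in range(-5, 25) for j in range(-5, 25)}
path = predict_path((0.0, 0.0), (1.0, 0.0), flow_map, steps=20)
```

A pedestrian initially walking along +x is progressively bent toward the flow direction stored in the map, which is how place-dependent patterns make long-horizon predictions diverge from plain constant-velocity extrapolation.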

    Observations of Diffuse Ultraviolet Emission from Draco

    We have studied small-scale (2 arcmin) spatial variation of the diffuse UV radiation using a set of 11 GALEX deep observations in the constellation of Draco. We find a good correlation between the observed UV background and the IR 100 micron flux, indicating that the dominant contributor to the diffuse background in the field is starlight scattered by interstellar dust grains. We also find strong evidence of additional emission in the FUV band which is absent in the NUV band. This is most likely due to Lyman band emission from molecular hydrogen in a ridge of dust running through the field and to line emission from species such as C IV (1550 A) and Si II (1533 A) in the rest of the field. A strong correlation exists between the FUV/NUV ratio and the FUV intensity in the excess emission regions of the FUV band, irrespective of the optical depth of the region. The optical depth increases more rapidly in the UV than in the IR, and we find that the UV/IR ratio drops off exponentially with increasing IR due to saturation effects in the UV. Using the positional details of Spitzer extragalactic objects, we find that the contribution of extragalactic light to the diffuse background is 49 +/- 13 photon units in the NUV band and 30 +/- 10 photon units in the FUV band.
    Comment: 30 pages, 13 figures. Accepted for publication in The Astrophysical Journal (ApJ), November 2010, v723 issue.

    Survey of maps of dynamics for mobile robots

    Robotic mapping provides spatial information for autonomous agents. Depending on the tasks they seek to enable, the maps created range from simple 2D representations of the environment geometry to complex, multilayered semantic maps. This survey article is about maps of dynamics (MoDs), which store semantic information about typical motion patterns in a given environment. Some MoDs use trajectories as input, and some can be built from short, disconnected observations of motion. Robots can use MoDs, for example, for global motion planning, improved localization, or human motion prediction. Accounting for the increasing importance of maps of dynamics, we present a comprehensive survey that organizes the knowledge accumulated in the field and identifies promising directions for future work. Specifically, we introduce field-specific vocabulary, summarize existing work according to a novel taxonomy, and describe possible applications and open research problems. We conclude that the field is mature enough that maps of dynamics can be expected to see increasing use for improving robot performance in real-world use cases. At the same time, the field is still in a phase of rapid development where novel contributions could significantly impact this research area.

    Natural criteria for comparison of pedestrian flow forecasting models

    Models of human behaviour, such as pedestrian flows, are beneficial for the safe and efficient operation of mobile robots. We present a new methodology for benchmarking pedestrian flow models based on the afforded safety of robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times the robot gets too close to humans, using a predefined social-distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the percentage of operating time of the robot, as might reasonably be expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the percentage of operating time, because they plan to avoid congested areas and times.
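    The evaluation criterion itself is simple to state in code: count the timesteps at which the robot violates a social-distance threshold to any human. This is a minimal sketch of that counting step, not the paper's benchmarking pipeline; the threshold value and the synchronized-trajectory data layout are assumptions.

```python
import math

def count_encounters(robot_path, human_paths, social_distance=1.5):
    """Count timesteps at which the robot is closer than the social-distance
    threshold to any human.

    robot_path: list of (x, y) robot positions, one per timestep.
    human_paths: list of human trajectories, each time-aligned with the robot.
    """
    encounters = 0
    for t, (rx, ry) in enumerate(robot_path):
        if any(math.hypot(rx - hx, ry - hy) < social_distance
               for hx, hy in (p[t] for p in human_paths)):
            encounters += 1
    return encounters

# Toy example: the robot passes close to one human at the first and last step.
robot = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
humans = [[(0.0, 1.0), (5.0, 5.0), (2.0, 0.5)]]
n = count_encounters(robot, humans)
```

In the benchmark this count, accumulated over weeks of data, is what separates the flow models far more sharply than the usual prediction-fidelity metrics.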

    The ILIAD Safety Stack: Human-Aware Infrastructure-Free Navigation of Industrial Mobile Robots

    Safe yet efficient operation of professional service robots in human-robot shared environments in logistics or production requires a flexible human-aware navigation stack. In this manuscript, we propose the ILIAD safety stack, comprising software and hardware designed to achieve safe and efficient motion specifically for industrial vehicles with nontrivial kinematics. The stack integrates five interconnected layers for autonomous motion planning and control to enable short- and long-term reasoning. The use-case scenario tested requires an autonomous industrial forklift to safely navigate among pick-and-place locations during normal daily activities involving human workers. Our real-world test bed is a three-day experiment in a food distribution warehouse. The evaluation is extended in simulation with an ablation study of the impact of the different layers, showing both the practical and the performance-related effects. The experimental results show a safer and more legible robot when humans are nearby, at a trade-off in task efficiency, and that not all layers have the same degree of impact on the system.