
    Mapless Online Detection of Dynamic Objects in 3D Lidar

    This paper presents a model-free, setting-independent method for online detection of dynamic objects in 3D lidar data. We explicitly compensate for the moving-while-scanning operation (motion distortion) of present-day 3D spinning lidar sensors. Our detection method uses a motion-compensated freespace querying algorithm and classifies points as dynamic (currently moving) or static (currently stationary). For a quantitative analysis, we establish a benchmark with motion-distorted lidar data using CARLA, an open-source simulator for autonomous driving research. We also provide a qualitative analysis with real data using a Velodyne HDL-64E in driving scenarios. Compared to existing model-free 3D lidar methods, our method is unique in its setting independence and its compensation for pointcloud motion distortion. Comment: 7 pages, 8 figures.
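
    The abstract describes the pipeline only at a high level (motion compensation followed by a freespace query); the sketch below is a loose illustration under assumptions, not the paper's algorithm. The helper names, the first-order constant-velocity deskew, and the per-azimuth-bin freespace test are all invented for the example, and both scans are assumed to be expressed in a common frame.

```python
# Illustrative sketch only: label a point "dynamic" if it lies inside space that an
# earlier, motion-compensated scan observed as free along the same bearing.
import numpy as np

def deskew(points, timestamps, velocity):
    """First-order motion compensation: shift each point back by the distance the
    sensor travelled between the scan start and that point's firing time.
    (Real pipelines interpolate a full SE(3) trajectory; this is a simplification.)"""
    dt = (timestamps - timestamps[0])[:, None]
    return points - velocity[None, :] * dt

def label_dynamic(curr_points, prev_ranges, prev_azimuth_bin_edges, margin=0.5):
    """Freespace query: if a deskewed current point falls well inside the range the
    previous scan measured in the same azimuth bin, that space was observed free,
    so the point is labelled as currently moving (dynamic)."""
    az = np.arctan2(curr_points[:, 1], curr_points[:, 0])
    rng = np.linalg.norm(curr_points[:, :2], axis=1)
    bins = np.clip(np.digitize(az, prev_azimuth_bin_edges) - 1, 0, len(prev_ranges) - 1)
    free_up_to = prev_ranges[bins]
    return rng < free_up_to - margin   # True -> dynamic, False -> static
```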

    Learning a Bias Correction for Lidar-only Motion Estimation

    This paper presents a novel technique for correcting bias in a classical estimator using a learning approach. We apply a learned bias correction to a lidar-only motion estimation pipeline. Our technique trains a Gaussian process (GP) regression model using data with ground truth. The inputs to the model are high-level features derived from the geometry of the point-clouds, and the outputs are the predicted biases between the poses computed by the estimator and the ground truth. The predicted biases are applied as a correction to the poses computed by the estimator. Our technique is evaluated on over 50 km of lidar data, which includes the KITTI odometry benchmark and lidar datasets collected around the University of Toronto campus. After applying the learned bias correction, we obtained significant improvements to lidar odometry in all datasets tested. We achieved around a 10% reduction in errors on all datasets from an already accurate lidar odometry algorithm, at the expense of less than a 1% increase in computational cost at run-time. Comment: 15th Conference on Computer and Robot Vision (CRV 2018).
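
    The bias-correction idea the abstract describes (geometric features in, predicted pose bias out, bias subtracted from the estimate) can be sketched with an off-the-shelf GP regressor. The feature choice, kernel, and placeholder training data below are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch: fit a GP mapping point-cloud geometry features to the pose error of
# an odometry estimator, then subtract the predicted bias from new pose estimates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# X_train: high-level geometric features per frame pair (hypothetical placeholders)
# y_train: bias = estimated pose component - ground-truth pose component (e.g. yaw)
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = 0.01 * X_train[:, 0] + 0.001 * rng.standard_normal(200)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

def correct_pose(estimated_pose_component, features):
    """Apply the learned correction: subtract the bias predicted for these features."""
    predicted_bias = gp.predict(features.reshape(1, -1))[0]
    return estimated_pose_component - predicted_bias
```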

    Economic analysis of business models with multiple potential value streams: Application to the biochar system


    Using quantitative analysis to assess the appropriateness of infill buildings in historic settings

    Over the past 40 years or so, and more recently in developing countries, increasing attention has been paid to the preservation of historic settings; however, with continued development and urbanization, a solution is needed for the problem of how to adapt historic settings for contemporary life. How to conserve historic settings while introducing new development has been the subject of theoretical study for many years, and despite many mistakes, excellent architectural projects have been completed. However, most research has assessed such projects only at a qualitative and cognitive level; a deeper exploration is therefore needed. Thus, the main goal of this paper is to apply a scientific, quantitative approach to investigating the contextual fit of infill buildings in historic settings. This research is approached mathematically within the framework of architectural theory and visual science. To assess the potential of this methodology, a case-study building facade is analyzed using three attributes: size, proportion, and color. The findings of this research can help in evaluating the contextual fit of architectural designs and thereby lead to improved design guidance for historic settings.
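
    As a purely hypothetical illustration of the kind of quantitative comparison the abstract describes (size, proportion, and color attributes of an infill facade against its historic neighbours), a simple weighted-distance fit score might look like the following; the attribute definitions, normalisations, and weights are invented for the example and are not the paper's method.

```python
# Hypothetical facade-fit score on three attributes: size, proportion, colour.
import numpy as np

def facade_features(width_m, height_m, dominant_rgb):
    """Pack a facade into a feature vector: area (m^2), height/width ratio, RGB colour."""
    size = width_m * height_m
    proportion = height_m / width_m
    return np.array([size, proportion, *dominant_rgb], dtype=float)

def contextual_fit(infill, neighbours, weights=(0.3, 0.3, 0.4)):
    """Return a 0..1 score: 1 means the infill matches the neighbourhood average."""
    ref = np.mean(np.asarray(neighbours), axis=0)
    size_err = abs(infill[0] - ref[0]) / max(ref[0], 1e-9)
    prop_err = abs(infill[1] - ref[1]) / max(ref[1], 1e-9)
    colour_err = np.linalg.norm(infill[2:] - ref[2:]) / (255 * np.sqrt(3))
    err = weights[0] * size_err + weights[1] * prop_err + weights[2] * colour_err
    return float(max(0.0, 1.0 - err))
```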

    Point-based metric and topological localisation between lidar and overhead imagery

    In this paper, we present a method for localising a ground lidar using only overhead imagery. Public overhead imagery, such as Google satellite images, is a readily available resource. It can be used as a map proxy for robot localisation, relaxing the requirement for a prior mapping traversal as in traditional approaches. While prior approaches have focused on metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and it also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the point-cloud that would be scanned by a lidar sensor situated near the centre of the overhead image. Once both modalities are expressed as point sets, point-based machine learning methods for localisation are applied.
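
    Once the overhead image has been converted into a 2D point set (by a learned generator, not shown here), metric localisation becomes a point-set alignment problem. The sketch below substitutes a plain 2D ICP for the paper's learned point-based localiser, purely to illustrate that final registration step under stated assumptions.

```python
# Sketch: align the lidar point set onto the points generated from the overhead image.
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(lidar_pts, map_pts, iters=30):
    """Estimate a rotation R and translation t mapping lidar_pts onto map_pts."""
    R, t = np.eye(2), np.zeros(2)
    src = lidar_pts.copy()
    tree = cKDTree(map_pts)
    for _ in range(iters):
        _, idx = tree.query(src)                 # nearest map point for each lidar point
        tgt = map_pts[idx]
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (tgt - mu_t))
        dR = (U @ Vt).T
        if np.linalg.det(dR) < 0:                # guard against reflections
            Vt[-1] *= -1
            dR = (U @ Vt).T
        dt = mu_t - dR @ mu_s
        src = src @ dR.T + dt                    # apply the incremental update
        R, t = dR @ R, dR @ t + dt               # accumulate the overall transform
    return R, t
```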

    RSL-Net: Localising in Satellite Images From a Radar on the Ground

    This paper is about localising a vehicle in an overhead image using an FMCW radar mounted on a ground vehicle. FMCW radar offers extraordinary promise and efficacy for vehicle localisation: it is impervious to all weather types and lighting conditions. However, the complexity of the interactions between millimetre radar waves and the physical environment makes it a challenging domain. Infrastructure-free, large-scale radar-based localisation is in its infancy. Typically, a map is built and suitable techniques, compatible with the nature of the sensor, are brought to bear. In this work we eschew the need for a radar-based map; instead we simply use an overhead image -- a resource readily available everywhere. This paper introduces a method that not only naturally deals with the complexity of the signal type but does so in the context of cross-modal processing. Comment: Accepted to IEEE Robotics and Automation Letters (RA-L).