
    CNN for Very Fast Ground Segmentation in Velodyne LiDAR Data

    This paper presents a novel method for ground segmentation in Velodyne point clouds. We propose an encoding of sparse 3D data from the Velodyne sensor suitable for training a convolutional neural network (CNN). This general-purpose approach is used to segment the sparse point cloud into ground and non-ground points. The LiDAR data are represented as a multi-channel 2D signal where the horizontal axis corresponds to the rotation angle and the vertical axis indexes the channels (i.e. laser beams). Multiple topologies of relatively shallow CNNs (i.e. 3-5 convolutional layers) are trained and evaluated on a manually annotated dataset we prepared. The results show a significant improvement in speed over the state-of-the-art method by Zhang et al., along with minor improvements in accuracy. Comment: ICRA 2018 submission
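    The encoding described in the abstract can be sketched in a few lines. The following is a minimal illustration, assuming per-point ring (laser-beam) indices are available from the sensor driver and choosing range, height, and occupancy as example channels; the paper's exact channel set is not specified in the abstract.

```python
import numpy as np

def encode_velodyne(points, rings, n_beams=64, n_bins=360):
    """Project a sparse Velodyne scan into a dense multi-channel 2D grid.

    points: (N, 3) array of x, y, z coordinates
    rings:  (N,) integer laser-beam index per point (0 .. n_beams-1)

    Returns an (n_beams, n_bins, 3) array whose channels are range,
    height (z), and a binary occupancy mask -- an illustrative choice,
    not necessarily the paper's.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Horizontal axis: quantized rotation (azimuth) angle.
    azimuth = np.arctan2(y, x)  # in [-pi, pi]
    cols = ((azimuth + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    # Vertical axis: the laser-beam (ring) index.
    rows = rings
    grid = np.zeros((n_beams, n_bins, 3), dtype=np.float32)
    grid[rows, cols, 0] = np.sqrt(x**2 + y**2 + z**2)  # range channel
    grid[rows, cols, 1] = z                            # height channel
    grid[rows, cols, 2] = 1.0                          # occupancy mask
    return grid
```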

    Geometry-Based Next Frame Prediction from Monocular Video

    We consider the problem of next frame prediction from video input. A recurrent convolutional neural network is trained to predict depth from monocular video input, which, together with the current video image and the camera trajectory, can then be used to compute the next frame. Unlike prior next-frame prediction approaches, we take advantage of the scene geometry and use the predicted depth to generate the next frame. Our approach produces rich next-frame predictions that include depth information attached to each pixel. Another novel aspect of our approach is that it predicts depth from a sequence of images (e.g. in a video) rather than from a single still image. We evaluate the proposed approach on the KITTI dataset, a standard benchmark for tasks relevant to autonomous driving. The proposed method produces results that are visually and numerically superior to existing methods that directly predict the next frame. We show that the accuracy of depth prediction improves as more prior frames are considered. Comment: To appear in 2017 IEEE Intelligent Vehicles Symposium
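    The geometric step the abstract relies on -- back-projecting pixels with the predicted depth, moving them along the known camera trajectory, and re-projecting them into the next view -- can be illustrated as follows. This is a rough sketch with assumed intrinsics K and relative pose (R, t), not the paper's implementation, which would also handle occlusions and fill holes.

```python
import numpy as np

def predict_next_frame(image, depth, K, R, t):
    """Warp the current frame to the next camera pose using predicted depth.

    image: (H, W, 3) current frame
    depth: (H, W) per-pixel depth predicted by the network
    K:     (3, 3) camera intrinsics
    R, t:  rotation (3, 3) and translation (3,) from the current camera
           to the next camera, taken from the known trajectory
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project each pixel into 3D using its predicted depth.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Move the points into the next camera's coordinate frame.
    pts_next = R @ pts + t.reshape(3, 1)
    # Project back onto the image plane.
    proj = K @ pts_next
    u2 = (proj[0] / proj[2]).round().astype(int)
    v2 = (proj[1] / proj[2]).round().astype(int)
    # Forward-splat colours; a real implementation would add
    # z-buffering for occlusions and inpainting for holes.
    out = np.zeros_like(image)
    valid = (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H) & (proj[2] > 0)
    out[v2[valid], u2[valid]] = image.reshape(-1, 3)[valid]
    return out
```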

    Instance Segmentation and Object Detection in Road Scenes using Inverse Perspective Mapping of 3D Point Clouds and 2D Images

    Instance segmentation and object detection are important tasks in smart-car applications, and a variety of neural-network-based approaches have been proposed for them. One challenge is that objects in a scene appear at various scales, which requires the neural network to have a large receptive field to deal with the scale variations; in other words, the network needs a deep architecture, which slows down computation. In smart-car applications, accurate detection and segmentation of vehicles and pedestrians is critical. Furthermore, 2D images provide rich visual appearance but no distance information, whereas 3D point clouds provide strong evidence of the presence of objects. Fusing 2D images and 3D point clouds can therefore provide more information for finding objects in a scene. This paper proposes a series of fronto-parallel virtual planes and inverse perspective mapping of the input image onto those planes to deal with scale variations. I use 3D point clouds obtained from a LiDAR sensor and 2D images obtained from stereo cameras on top of a vehicle to estimate the ground area of the scene and to define the virtual planes. A region extending a certain height above the ground area in the 2D images is cropped to focus on objects on flat roads. The point cloud is then used to filter out false alarms among the over-detections generated by an off-the-shelf deep neural network, Mask R-CNN. Experimental results show that the proposed approach outperforms Mask R-CNN without the pre-processing on the KITTI benchmark dataset [9].
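    The final filtering step -- keeping only the Mask R-CNN detections that are supported by LiDAR evidence -- might look like the sketch below. The function name, the box-based support count, and the min_points threshold are illustrative assumptions, since the abstract does not give the exact rejection criterion.

```python
import numpy as np

def filter_detections(boxes, lidar_points, K, T_cam_from_lidar, min_points=20):
    """Reject detections with too little LiDAR support.

    boxes:             list of (x1, y1, x2, y2) detection boxes in the image
    lidar_points:      (N, 3) points in the LiDAR frame
    K:                 (3, 3) camera intrinsics
    T_cam_from_lidar:  (4, 4) extrinsic transform from LiDAR to camera
    min_points:        support threshold below which a detection is
                       treated as a false alarm (illustrative value)
    """
    # Transform the point cloud into the camera frame.
    pts_h = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T)[:3]
    in_front = pts_cam[2] > 0
    # Project points in front of the camera onto the image plane.
    proj = K @ pts_cam[:, in_front]
    u, v = proj[0] / proj[2], proj[1] / proj[2]
    kept = []
    for (x1, y1, x2, y2) in boxes:
        # Count projected LiDAR points falling inside the detection box.
        support = np.sum((u >= x1) & (u <= x2) & (v >= y1) & (v <= y2))
        if support >= min_points:
            kept.append((x1, y1, x2, y2))
    return kept
```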