Restricted Deformable Convolution based Road Scene Semantic Segmentation Using Surround View Cameras
Understanding the surrounding environment of the vehicle is still one of the
challenges for autonomous driving. This paper addresses 360-degree road scene
semantic segmentation using surround view cameras, which are widely equipped in
existing production cars. First, to address the large distortion problem
in fisheye images, Restricted Deformable Convolution (RDC) is proposed for
semantic segmentation, which can effectively model geometric transformations by
learning the shapes of convolutional filters conditioned on the input feature
map. Second, in order to obtain a large-scale training set of surround view
images, a novel method called zoom augmentation is proposed to transform
conventional images to fisheye images. Finally, an RDC-based semantic
segmentation model is built; the model is trained for real-world surround view
images through a multi-task learning architecture that combines real-world
images with transformed images. Experiments demonstrate the effectiveness of
RDC in handling images with large distortions, and show that the proposed
approach performs well with surround view cameras with the help of the
transformed images.
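The abstract does not spell out how conventional images are warped into fisheye images. A minimal sketch of one plausible realization is an inverse remap under the equidistant fisheye model (radius = focal length × viewing angle); the function name, focal-length parameters, and the specific projection model are assumptions here, not the paper's exact zoom augmentation:

```python
import numpy as np

def pinhole_to_fisheye(img, f_pinhole=300.0, f_fisheye=300.0):
    """Warp a conventional (pinhole) image into an equidistant fisheye image.

    Hypothetical sketch: for each target fisheye pixel, recover the viewing
    angle theta = r_f / f_fisheye, project back to the pinhole image with
    r_p = f_pinhole * tan(theta), and sample by nearest neighbour. The
    paper's zoom augmentation additionally varies focal length to simulate
    zoom; that detail is not reproduced here.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    r_f = np.hypot(dx, dy)                  # radius in the fisheye image
    theta = r_f / f_fisheye                 # equidistant model: r_f = f * theta
    theta = np.clip(theta, 0.0, np.pi / 2 - 1e-3)
    r_p = f_pinhole * np.tan(theta)         # corresponding pinhole radius
    scale = np.divide(r_p, r_f, out=np.ones_like(r_f), where=r_f > 0)
    src_x = np.clip(np.round(cx + dx * scale), 0, w - 1).astype(int)
    src_y = np.clip(np.round(cy + dy * scale), 0, h - 1).astype(int)
    out = img[src_y, src_x]
    # Black out rays outside the pinhole field of view.
    out[theta >= np.pi / 2 - 1e-3] = 0
    return out
```

Because the warp compresses content toward the image centre, labels from conventional datasets can be warped with the same mapping, which is what makes such a transform usable for generating fisheye training pairs.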