Pedestrian detection is one of the most explored topics in computer vision
and robotics. The use of deep learning methods has allowed the development of new
and highly competitive algorithms. Deep Reinforcement Learning has proved to be
within the state-of-the-art for both detection in perspective cameras
and robotics applications. However, for detection in omnidirectional cameras,
the literature is still scarce, mostly because of their high levels of
distortion. This paper presents a novel and efficient technique for robust
pedestrian detection in omnidirectional images. The proposed method uses Deep
Reinforcement Learning and takes advantage of the distortion in the image. By
considering the 3D bounding boxes and their distorted projections into the
image, our method provides the pedestrian's position in world coordinates, in
contrast to the image positions provided by most state-of-the-art methods for
perspective cameras. Our method avoids the need for computationally expensive
pre-processing steps to remove the distortion. Beyond the novelty of the
solution, our method compares favorably with state-of-the-art methodologies
that do not consider the underlying distortion in the detection task.

Comment: Accepted at the 2019 IEEE Int'l Conf. on Robotics and Automation (ICRA).
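To make the idea of a distorted projection concrete, the following is a minimal sketch, not the authors' implementation: it projects the corners of a hypothetical pedestrian-sized 3D bounding box into a fisheye image under the equidistant model r = f * theta. The focal length, principal point, box size, and box position below are illustrative assumptions, not values from the paper.

import numpy as np

# Assumed intrinsics for an equidistant fisheye camera (illustrative only).
FOCAL = 300.0                        # focal length in pixels
CENTER = np.array([640.0, 480.0])    # principal point (cx, cy) in pixels

def project_equidistant(points_cam):
    """Project 3D points (N, 3) in the camera frame into an equidistant
    fisheye image. Returns (N, 2) pixel coordinates."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    # Angle between the optical axis (z) and the ray to each point.
    theta = np.arctan2(np.sqrt(x**2 + y**2), z)
    # Direction of the ray around the optical axis.
    phi = np.arctan2(y, x)
    # Equidistant mapping: radial image distance proportional to theta.
    r = FOCAL * theta
    return CENTER + np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)

def box_corners(center, size):
    """Return the 8 corners of an axis-aligned 3D box in the camera frame."""
    dx, dy, dz = size
    offsets = np.array([[sx, sy, sz]
                        for sx in (-dx / 2, dx / 2)
                        for sy in (-dy / 2, dy / 2)
                        for sz in (-dz / 2, dz / 2)])
    return np.array(center) + offsets

# Hypothetical pedestrian-sized box roughly 3 m in front of the camera.
corners = box_corners(center=(0.5, 0.0, 3.0), size=(0.6, 1.8, 0.4))
pixels = project_equidistant(corners)
print(pixels.round(1))  # the projected corners no longer form a rectangle

The non-rectangular footprint of the projected corners is the distortion that rectification would normally remove; reasoning about the 3D box and its distorted projection directly sidesteps that step.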