Vision and distance integrated sensor (Kinect) for an autonomous robot

Abstract

This work presents an application of the Microsoft Kinect camera on an autonomous mobile robot. One main issue in driving autonomously is the ability to recognize signalling panels positioned overhead. The Kinect camera suits this task thanks to its two integrated sensors, vision and distance. The vision sensor perceives the signalling panel, while the distance sensor acts as a segmentation filter, eliminating background pixels by their depth. The approach adopted to perceive the symbol on the signalling panel consists of: a) applying the depth-image filter from the Kinect camera; b) applying morphological operators to segment the image; c) classifying the result with an Artificial Neural Network, a simple Multilayer Perceptron that can correctly classify the image. Because this work exploits the Kinect camera's depth sensor, this filter avoids computationally heavy algorithms for locating the signalling panels and simplifies the subsequent tasks of image segmentation and classification. A mobile autonomous robot equipped with this camera was used to recognize the signalling panels on a competition track of the Portuguese Robotics Open.
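The first two stages of the pipeline described above (depth filtering, then morphological segmentation) can be illustrated with a minimal sketch. The helper names, the panel depth range, and the synthetic images below are assumptions for illustration, not the paper's actual implementation; a real system would read the Kinect depth frame via a driver such as libfreenect.

```python
import numpy as np

def depth_mask(depth, near, far):
    """Keep pixels whose depth (mm) lies in [near, far]; background is removed."""
    return (depth >= near) & (depth <= far)

def erode(mask):
    """3x3 binary erosion: a pixel stays set only if its whole 3x3 neighbourhood is set."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any pixel in its 3x3 neighbourhood is set."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

# Synthetic 20x20 depth frame (mm): far background, a panel-like block,
# and one isolated noise pixel at the panel's depth.
depth = np.full((20, 20), 4000)
depth[5:15, 5:15] = 1500   # hypothetical overhead panel
depth[0, 0] = 1500         # depth noise

# a) depth filter keeps only the expected panel range;
# b) morphological opening (erode then dilate) removes the isolated noise.
mask = depth_mask(depth, 1000, 2000)
opened = dilate(erode(mask))
```

After opening, only the solid panel region survives; the segmented blob would then be passed to the Multilayer Perceptron classifier (stage c).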