Tracking of persons with camera-fusion technology

Abstract

The idea of a robot tracking and following a person is not new. Various combinations of laser range finders and cameras have been used for research on this subject. In recent years, stereoscopic systems have been developed to compensate for the shortcomings of laser range finders and ultrasonic sensor arrays with respect to 3D recognition. When Microsoft began distributing the Microsoft Kinect in 2010, it released a comparatively cheap system that combines depth measurement and a color view in one device. Although the system was intended as a new motion-based control system for games, competing with motion-sensing wireless controllers such as the Nintendo Wiimote and the Sony PlayStation 3 Move, some developers saw more in this technology. Consequently, it did not take long after the initial release until the first hacks for the Microsoft Kinect were published. More and more people started to create their own software, ranging from shadow puppets [TW10] to remote controls for home cinema systems [Nar11]. Microsoft and PrimeSense soon recognized the potential and released free drivers and SDKs for using the camera device with PCs. With PrimeSense publishing its drivers as open source, many possible uses emerged for the Microsoft Kinect, and some companies used this opportunity to enter the market of camera-fusion technology. The system most comparable to the Microsoft Kinect is the Xtion Pro Live by Asus. These devices, which merge depth measurement and color view with computation on a device-internal system, reveal new possibilities for tracking persons and even enable persons to give a robot commands using gestures. This paper inquires to what extent the Microsoft Kinect or the Asus Xtion Pro Live can be used as a substitute for stereoscopic camera or laser range finder systems in the context of a tracking and control device for human-robot interaction scenarios with person-following applications for service robots.

Similar works