Indoor assistance for visually impaired people using a RGB-D camera
In this paper, a navigational aid for visually impaired people is presented. The system uses an RGB-D camera to perceive the environment and implements self-localization, obstacle detection, and obstacle classification. The novelty of this work is threefold. First, self-localization is performed by means of a novel camera tracking approach that uses both depth and color information. Second, to provide the user with semantic information, obstacles are classified as walls, doors, steps, or a residual class that covers isolated objects and bumpy parts of the floor. Third, to guarantee real-time performance, the system is accelerated by offloading parallel operations to the GPU. Experiments demonstrate that the complete system runs at 9 Hz.
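To make the obstacle-detection step concrete, the sketch below shows one common way to separate floor from obstacle points in a depth image: back-project the depth map to 3D and fit a ground plane with RANSAC. This is only an illustrative Python sketch under assumed pinhole intrinsics (fx, fy, cx, cy); it is not the paper's GPU-accelerated implementation.

# Minimal sketch (not the paper's method): separate floor from obstacle points
# in a depth image by fitting a ground plane with RANSAC.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to 3D points, dropping invalid pixels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    return points[z > 0]

def segment_floor(points, n_iters=200, threshold=0.03):
    """RANSAC plane fit: returns a boolean mask marking floor inliers."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate sample, skip
        normal /= norm
        d = -normal.dot(sample[0])
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Points not on the floor plane are obstacle candidates; a full system would then
# classify those clusters (wall, door, step, other) as the paper describes.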
Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping
This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS, and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors and predicts where the mobile robot can find buildings and potentially drivable ground.
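As an illustration of how ground-level labels could be extended across an aerial image, the sketch below trains a simple colour classifier on pixels the robot has already labelled and predicts labels for the remaining pixels. The registration of map cells to aerial-image coordinates is assumed to be given; this is a hedged sketch of the general idea, not the segmentation method used in the paper.

# Minimal sketch (not the authors' method): extend ground-level semantic labels
# over an aerial image with a colour-based nearest-neighbour classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extend_labels(aerial_rgb, seed_pixels, seed_labels, k=5):
    """aerial_rgb: (H, W, 3) image; seed_pixels: (N, 2) row/col indices registered
    from the robot's map; seed_labels: (N,) 0 = drivable ground, 1 = building."""
    h, w, _ = aerial_rgb.shape
    feats = aerial_rgb.reshape(-1, 3).astype(float) / 255.0
    seed_feats = feats[seed_pixels[:, 0] * w + seed_pixels[:, 1]]
    clf = KNeighborsClassifier(n_neighbors=k).fit(seed_feats, seed_labels)
    # Predict a label for every pixel, including those outside the sensor range.
    return clf.predict(feats).reshape(h, w)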
Fast and Robust Detection of Fallen People from a Mobile Robot
This paper deals with the problem of detecting fallen people lying on the floor by means of a mobile robot equipped with a 3D depth sensor. In the proposed algorithm, inspired by semantic segmentation techniques, the 3D scene is over-segmented into small patches. Fallen people are then detected by means of two SVM classifiers: the first labels each patch, while the second captures the spatial relations between them. This novel approach proved to be robust and fast. Indeed, thanks to the use of small patches, fallen people in real cluttered scenes with objects side by side are correctly detected. Moreover, the algorithm can be executed on a mobile robot fitted with a standard laptop, making it possible to exploit the 2D environmental map built by the robot and the multiple points of view obtained during navigation. Additionally, the algorithm is robust to illumination changes since it relies on depth data rather than RGB data. All the methods have been thoroughly validated on the IASLAB-RGBD Fallen Person Dataset, which is published online as a further contribution; it consists of several static and dynamic sequences with 15 different people and 2 different environments.
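The two-classifier structure described above can be sketched with scikit-learn: one SVM scores each patch, and a second SVM combines those scores with spatial-relation features between neighbouring patches. The descriptors and labels below are synthetic placeholders, so this is only an illustration of the two-stage idea, not the paper's actual features or training data.

# Minimal sketch (not the paper's pipeline): a two-stage SVM classifier over
# over-segmented patches, trained here on synthetic placeholder data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stage 1: label each patch from its local descriptor (e.g. height/normal stats).
patch_descriptors = rng.normal(size=(200, 16))   # hypothetical per-patch features
patch_labels = rng.integers(0, 2, size=200)      # 1 = "person part", 0 = other
stage1 = SVC(kernel="rbf", probability=True).fit(patch_descriptors, patch_labels)
patch_scores = stage1.predict_proba(patch_descriptors)[:, 1]

# Stage 2: decide whether a group of neighbouring patches forms a fallen person,
# combining stage-1 scores with spatial-relation features between patches.
group_features = np.column_stack([
    patch_scores,                # how "person-like" each patch is
    rng.normal(size=200),        # e.g. distance between neighbouring patch centroids
    rng.normal(size=200),        # e.g. relative height difference between patches
])
group_labels = rng.integers(0, 2, size=200)
stage2 = SVC(kernel="rbf").fit(group_features, group_labels)
is_fallen = stage2.predict(group_features)       # 1 = fallen person detected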
- …