5,505 research outputs found
The Whole World in Your Hand: Active and Interactive Segmentation
Object segmentation is a fundamental problem in computer vision and a powerful resource for development. This paper presents three embodied approaches to the visual segmentation of objects. Each approach to segmentation is aided by the presence of a hand or arm in the proximity of the object to be segmented. The first approach is suitable for a robotic system, where the robot can use its arm to evoke object motion. The second method operates on a wearable system, viewing the world from a human's perspective, with instrumentation to help detect and segment objects that are held in the wearer's hand. The third method operates when observing a human teacher, locating periodic motion (finger/arm/object waving or tapping) and using it as a seed for segmentation. We show that object segmentation can serve as a key resource for development by demonstrating methods that exploit high-quality object segmentations to develop both low-level vision capabilities (specialized feature detectors) and high-level vision capabilities (object recognition and localization).
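As a rough illustration of the third, periodicity-based approach, the sketch below (not the authors' code) flags pixels whose temporal intensity spectrum is dominated by a waving/tapping frequency band; the frame rate, band limits, and threshold are assumptions made for the example, and the resulting mask would only serve as a seed for an ordinary region-growing or graph-cut segmenter.

```python
# Minimal sketch, assuming a short grayscale video clip in which a teacher
# waves a finger, arm, or object at a few hertz. Pixels whose temporal
# energy concentrates in that band become the segmentation seed.
import numpy as np

def periodic_motion_seed(frames, fps=30.0, band=(2.0, 5.0), min_band_fraction=0.5):
    """frames: (T, H, W) grayscale stack; returns a boolean (H, W) seed mask."""
    stack = np.asarray(frames, dtype=np.float32)
    stack -= stack.mean(axis=0, keepdims=True)           # remove the static scene
    spectrum = np.abs(np.fft.rfft(stack, axis=0)) ** 2    # per-pixel temporal power
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)

    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_power = spectrum[in_band].sum(axis=0)
    total_power = spectrum[1:].sum(axis=0) + 1e-8         # skip the DC bin
    # Pixels whose temporal energy is dominated by the waving/tapping band
    # are treated as seeds for a subsequent region-growing or graph-cut step.
    return band_power / total_power > min_band_fraction

# seed = periodic_motion_seed(gray_frames)   # boolean (H, W) mask
```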
Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks
A major challenge for the realization of intelligent robots is to supply them with cognitive abilities that allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal cues such as gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS-robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module to allow multi-modal, task-oriented man-machine communication with respect to dextrous robot manipulation of objects.
Comment: 7 pages, 8 figures
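As an illustration of how a finite-state fusion module of this general kind might combine modalities, the following sketch (not the GRAVIS system itself) waits for a parsed spoken grasp instruction, then for a pointing-gesture target, and only then emits a task command; the state names, event fields, and confidence threshold are assumptions made for the example.

```python
# Minimal sketch of a modality-fusion finite state machine, assuming speech
# has already been parsed into a verb/object pair and the gesture system
# provides a 3D pointing target with a confidence score.
from dataclasses import dataclass

@dataclass
class Event:
    modality: str          # "speech" or "gesture"
    payload: dict

class FusionFSM:
    def __init__(self, min_gesture_conf=0.6):
        self.state = "IDLE"
        self.object_word = None
        self.min_gesture_conf = min_gesture_conf

    def step(self, event):
        """Feed one perceptual event; return a grasp command once both
        modalities have contributed, otherwise None."""
        if self.state == "IDLE" and event.modality == "speech":
            if event.payload.get("verb") == "grasp":
                self.object_word = event.payload["object"]
                self.state = "AWAIT_GESTURE"
        elif self.state == "AWAIT_GESTURE" and event.modality == "gesture":
            if event.payload.get("confidence", 0.0) >= self.min_gesture_conf:
                command = {"action": "grasp",
                           "object": self.object_word,
                           "target_xyz": event.payload["target_xyz"]}
                self.state = "IDLE"
                return command
        return None

# fsm = FusionFSM()
# fsm.step(Event("speech", {"verb": "grasp", "object": "red cube"}))
# cmd = fsm.step(Event("gesture", {"target_xyz": (0.4, 0.1, 0.02), "confidence": 0.8}))
```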
Multisensor-based human detection and tracking for mobile service robots
One of the fundamental issues for service robots is human-robot interaction. In order to perform such tasks and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system uses a new algorithm for laser-based leg detection with the on-board laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which prove highly discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot is moving. Furthermore, faces are detected with the robot's camera and this information is fused with the leg positions using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms.
Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
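As a rough sketch of laser-based leg detection of this general kind (not the paper's detector), the code below splits a 2D scan into segments at range discontinuities and keeps segments whose width matches a typical human leg; the jump and width thresholds are illustrative assumptions, and the paper additionally fuses such detections with camera-based face detections via a sequential Unscented Kalman Filter.

```python
# Minimal sketch, assuming a planar LRF scan given as an array of ranges
# with a known start angle and angular increment. Thresholds are illustrative.
import numpy as np

def detect_leg_candidates(ranges, angle_min, angle_inc,
                          jump=0.10, leg_width=(0.05, 0.25)):
    """ranges: 1-D array of range readings (metres); returns (x, y) centroids."""
    angles = angle_min + angle_inc * np.arange(len(ranges))
    xy = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)

    # Start a new segment wherever consecutive ranges jump by more than `jump`.
    breaks = np.where(np.abs(np.diff(ranges)) > jump)[0] + 1
    segments = np.split(np.arange(len(ranges)), breaks)

    candidates = []
    for seg in segments:
        if len(seg) < 3:
            continue
        width = np.linalg.norm(xy[seg[-1]] - xy[seg[0]])
        if leg_width[0] <= width <= leg_width[1]:
            candidates.append(xy[seg].mean(axis=0))   # segment centroid (x, y)
    return candidates
```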