Enhancing Robot Perception Using Human Teammates* (Extended Abstract)

ABSTRACT
In robotics research, perception is one of the most challenging tasks. In contrast to existing approaches that rely only on computer vision, we propose an alternative method for improving perception by learning from human teammates. To evaluate the approach, we apply this idea to a door-detection problem. A set of preliminary experiments has been completed using software agents with real vision data. Our results demonstrate that information inferred from teammate observations significantly improves perception precision.

Categories and Subject Descriptors
I.2.11 [Distributed Artificial Intelligence]: Intelligent agents

General Terms
Human Factors

Keywords
Robot perception, robot-human hybrid teams

* This work was conducted (in part) through collaborative participation in the Robotics Consortium sponsored by the U.

BACKGROUND
Robot perception is generally formulated as a problem of analyzing and interpreting various sensory inputs, e.g., camera feeds. In this paper, we approach robot perception from a completely different direction. Our approach utilizes a team setting in which a robot collaborates with human teammates. Motivated by the fact that humans possess superior perception skills relative to their robotic counterparts, we investigate how a robot can take advantage of its teammate's perfect vision.

In general, an agent acquires new information through perception, and in turn the agent chooses actions based on the information acquired. Suppose that a robot has a mental model of its human teammate in which a causal relationship is specified between information and actions. Then, by reasoning over this mental model of the human's decision making (or planning), the robot can infer what the human teammate has seen from the human's behavior. In other words, an observation of a human teammate can be used as evidence about the information the human has perceived, which in turn reduces uncertainty in the robot's own perception (a minimal sketch of this update appears at the end of this abstract).

In this paper, we focus on the motivating problem of door detection in the following scenario. Consider a team consisting of a robot and a human performing a military operation in a hostile environment. According to intelligence, armed insurgents are hiding along an urban street. The team is deployed to cover the buildings in the surrounding area, focusing on doors from which the insurgents may try to egress. This is a stealth operation.

We make two specific assumptions that are reasonable in a team context. First, observing a teammate is generally more manageable than perceiving an unfamiliar environment. Second, team members share common objectives in reaching the team's goals.

PERCEPTION USING VISION
This section describes a purely camera-based approach. First, we find a likely semantic image segmentation using a computer vision technique called stacked hierarchical labeling. It is not constrained by shape grammars and can model a more general class of objects, but its method of constructing a hierarchical segmentation does not convey semantic meaning at the finer level of detail that would be necessary to detect doors on a building. It is, however, reliable in detecting buildings as a whole, which significantly reduces the search space for detecting doors in the next step. Once buildings are identified, we apply a broad feature detector to find likely openings on each building's façade.
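The two-stage pipeline just described can be summarized in a short sketch. This is an illustrative approximation rather than the implementation used in our experiments: the building mask is assumed to come from an external semantic-labeling step (standing in for stacked hierarchical labeling), and the broad feature detector is approximated here by simple edge and contour analysis with an aspect-ratio filter; the function name and threshold values are placeholders.

    # Illustrative sketch only: building segmentation is assumed to be
    # provided by an external semantic-labeling step, and the "broad
    # feature detector" is approximated with edge/contour analysis.
    # Requires OpenCV 4.x (two-value return from findContours).
    import cv2
    import numpy as np

    def door_candidates(image_bgr, building_mask,
                        min_aspect=1.5, max_aspect=4.0, min_area=500):
        """Return bounding boxes (x, y, w, h) of door-like openings.

        image_bgr     -- H x W x 3 uint8 image
        building_mask -- H x W uint8 mask, nonzero where a building was labeled
        """
        # Restrict the search to regions labeled as building.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.bitwise_and(gray, gray, mask=building_mask)

        # Broad feature detection: edges, closed into contours.
        edges = cv2.Canny(gray, 50, 150)
        edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                                 np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)

        candidates = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h < min_area:
                continue
            aspect = h / float(w)  # doors are taller than they are wide
            if min_aspect <= aspect <= max_aspect:
                candidates.append((x, y, w, h))
        return candidates

Restricting the feature detector to the building mask is what makes the otherwise noisy opening detector usable: candidate boxes outside labeled buildings are never generated in the first place.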

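Returning to the teammate-inference idea from the Background section, the sketch below shows one way an observed human action could be fused with the vision-based door estimate via a simple Bayesian update. The action set, likelihood values, and function name are illustrative assumptions, not quantities taken from our experiments.

    # Minimal sketch: treat an observed teammate action as evidence about
    # what the human perceived, and fuse it with the robot's vision-based
    # door probability. Likelihood values below are hypothetical.

    # Mental model of the teammate: P(action | door), P(action | no door).
    ACTION_LIKELIHOODS = {
        "covers_opening": (0.8, 0.1),
        "ignores_opening": (0.2, 0.9),
    }

    def fuse_with_teammate_action(vision_prior, action):
        """Bayesian update of P(door) given the teammate's observed action.

        vision_prior -- P(door) from the camera-based detector alone
        action       -- key into ACTION_LIKELIHOODS
        """
        p_a_door, p_a_nodoor = ACTION_LIKELIHOODS[action]
        numerator = p_a_door * vision_prior
        evidence = numerator + p_a_nodoor * (1.0 - vision_prior)
        return numerator / evidence

    if __name__ == "__main__":
        # A weak vision detection (0.4) becomes far more credible once the
        # teammate is seen covering that opening (~0.84 after the update).
        print(fuse_with_teammate_action(0.4, "covers_opening"))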