Knowledge Representation for Robots through Human-Robot Interaction
The representation of the knowledge needed by a robot to perform complex
tasks is restricted by the limitations of perception. One possible way of
overcoming this limitation and designing "knowledgeable" robots is to rely on
interaction with the user. We propose a multi-modal interaction framework
that allows knowledge about the environment in which the robot operates to be
acquired effectively. In particular, in this paper we present a rich
representation framework that can be automatically built from a metric map
annotated with the indications provided by the user. Such a representation
then allows the robot to ground complex referential expressions for motion
commands and to devise topological navigation plans to reach the target
locations.
Comment: Knowledge Representation and Reasoning in Robotics Workshop at ICLP 201
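The abstract gives no code, but the navigation-planning side of the idea can be sketched: if the user's annotations yield a set of named locations and their connections, a topological navigation plan is simply a path in that graph. The location names, graph, and BFS planner below are hypothetical illustrations, not the authors' representation framework.

```python
# Minimal sketch, assuming user annotations reduce to a graph of named
# locations; all names and connections here are invented examples.
from collections import deque

# Hypothetical annotations: location -> directly reachable locations.
topology = {
    "corridor": ["kitchen", "office"],
    "kitchen": ["corridor"],
    "office": ["corridor", "printer-room"],
    "printer-room": ["office"],
}

def plan(start: str, goal: str):
    """Breadth-first search for a topological navigation plan."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in topology.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

print(plan("kitchen", "printer-room"))
# ['kitchen', 'corridor', 'office', 'printer-room']
```

A grounded referential expression ("go to the printer room") would resolve to one of these annotated nodes before planning.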
Bluetooth low energy for autonomous human-robot interaction
This demonstration shows how inexpensive, off-the-shelf, and unobtrusive Bluetooth Low Energy (BLE) devices can be used to enable robots to recognize touch gestures, perceive proximity information, and distinguish between interacting individuals autonomously. The received signal strength (RSS) between the BLE device attached to the robot and the BLE devices attached to the interacting individuals is used to achieve this. Almost no software configuration is needed, and the setup can be applied to most everyday environments and robot platforms.
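The demo's software is not reproduced in the abstract; the following is a minimal sketch of how noisy RSS readings might be smoothed and thresholded into coarse proximity classes. The smoothing factor, dBm thresholds, and sample values are assumptions for illustration, not values from the demonstration.

```python
# Illustrative sketch only (not the demo's code): classifying proximity from
# received signal strength (RSS) between two BLE devices.

def smooth(readings, alpha=0.3):
    """Exponentially smooth noisy RSSI readings (dBm)."""
    est = readings[0]
    for r in readings[1:]:
        est = alpha * r + (1 - alpha) * est
    return est

def proximity_zone(rssi_dbm):
    """Map a smoothed RSSI value to a coarse proximity class
    (thresholds are hypothetical and would need calibration)."""
    if rssi_dbm > -45:
        return "touch"   # tag pressed against the robot's tag
    if rssi_dbm > -65:
        return "near"    # interacting individual close by
    return "far"

# Hypothetical RSSI samples from one interacting individual's tag:
samples = [-70, -62, -55, -48, -44, -43]
print(proximity_zone(smooth(samples)))  # -> "near"
```

Distinguishing between individuals would follow the same pattern, keeping one smoothed estimate per tagged person and comparing their signal strengths.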
Human-Robot Interaction
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems.

For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may receive information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missed critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when teleoperating a robot is to add multiple cameras and include the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot.

Two research studies investigated possible mitigation approaches to the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of including or excluding the robot chassis in the camera view, along with superimposing a simple arrow overlay onto the video feed, on operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only view and from a combined (egocentric plus exocentric camera) view. Camera view parameters found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
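As a loose illustration of the arrow-overlay condition in the first study, the sketch below superimposes a directional arrow onto a camera feed with OpenCV. The camera index, arrow placement, and colors are guesses for illustration; the studies' actual overlay design is not specified in the abstract.

```python
# Hedged sketch: drawing a simple directional arrow onto a robot's video
# feed, in the spirit of the overlay condition described above.
import cv2

cap = cv2.VideoCapture(0)  # robot's forward-facing camera (index is a guess)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Superimpose an arrow indicating the commanded heading.
    cv2.arrowedLine(frame, (w // 2, h - 30), (w // 2, h - 130),
                    color=(0, 255, 0), thickness=4, tipLength=0.3)
    cv2.imshow("teleop view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```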
Towards socially adaptive robots: A novel method for real-time recognition of human-robot interaction styles
DOI: 10.1109/ICHR.2008.4756004

Automatically detecting different styles of play in human-robot interaction is a key challenge towards adaptive robots, i.e., robots that are able to regulate their interactions and adapt to the different interaction styles of their users. In this paper we present a novel algorithm for pattern recognition in human-robot interaction, the Cascaded Information Bottleneck Method, and apply it to real-time autonomous recognition of human-robot interaction styles. The method takes an information-theoretic approach and progressively extracts relevant information from time series. It relies on a cascade of bottlenecks, trained one after the other according to the existing Agglomerative Information Bottleneck algorithm. We show that a structure for the bottleneck states emerges along the cascade, and we introduce a measure to extrapolate to unseen data. We apply the method to real-time recognition of human-robot interaction styles by a robot in a detailed case study, with the algorithm implemented for real interactions between humans and a real robot. We demonstrate that the algorithm, which is designed to operate in real time, is capable of classifying interaction styles with good accuracy and a very acceptable delay. Our future work will evaluate this method in robot-assisted therapy scenarios for children with autism.

Peer reviewed
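The building block the cascade trains with is the Agglomerative Information Bottleneck. As a rough illustration, here is a minimal sketch of that greedy merge step (after Slonim and Tishby's formulation), not the paper's cascaded implementation; the toy joint distribution and cluster count are invented for the example.

```python
# Minimal sketch of the Agglomerative Information Bottleneck merge step:
# greedily merge the pair of clusters whose merger loses the least
# information about the relevance variable Y.
import numpy as np

def kl(p, q):
    """KL divergence in bits between discrete distributions p and q."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def merge_cost(w_i, w_j, py_i, py_j):
    """Information lost by merging clusters i and j (weighted JS divergence)."""
    w = w_i + w_j
    pi_i, pi_j = w_i / w, w_j / w
    py_bar = pi_i * py_i + pi_j * py_j
    return w * (pi_i * kl(py_i, py_bar) + pi_j * kl(py_j, py_bar))

def aib(p_xy, n_clusters):
    """Greedily merge the rows of a joint p(x, y) down to n_clusters."""
    weights = list(p_xy.sum(axis=1))              # p(x)
    cond = [row / row.sum() for row in p_xy]      # p(y | x)
    members = [[i] for i in range(len(cond))]
    while len(members) > n_clusters:
        cost, i, j = min(
            (merge_cost(weights[a], weights[b], cond[a], cond[b]), a, b)
            for a in range(len(members)) for b in range(a + 1, len(members)))
        w = weights[i] + weights[j]
        cond[i] = (weights[i] * cond[i] + weights[j] * cond[j]) / w
        weights[i] = w
        members[i] += members[j]
        del weights[j], cond[j], members[j]
    return members

# Hypothetical joint distribution over 4 inputs and 2 relevance values:
p_xy = np.array([[0.20, 0.05], [0.18, 0.07], [0.03, 0.22], [0.05, 0.20]])
print(aib(p_xy, 2))  # -> [[0, 1], [2, 3]]
```

In the cascaded method described above, each bottleneck in the cascade would be trained this way in turn on features of the interaction time series; that cascading step is not reproduced here.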
