Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces
This paper contributes a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, this research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.
Sketched Reality: Sketching Bi-Directional Interactions Between Virtual and Physical Worlds with AR and Actuated Tangible UI
This paper introduces Sketched Reality, an approach that combines AR
sketching and actuated tangible user interfaces (TUIs) for bi-directional
sketching interaction. Bi-directional sketching enables virtual sketches and
physical objects to "affect" each other through physical actuation and digital
computation. In existing AR sketching, the relationship between the virtual and
physical worlds is only one-directional: while physical interaction can
affect virtual sketches, virtual sketches have no return effect on physical
objects or the environment. In contrast, bi-directional sketching
allows seamless coupling between sketches and actuated TUIs. In this paper,
we employ small tabletop robots (Sony Toio) and an iPad-based AR sketching
tool to demonstrate the concept. In our system, virtual sketches drawn and
simulated on an iPad (e.g., lines, walls, pendulums, and springs) can move,
actuate, collide with, and constrain physical Toio robots, as if the virtual
sketches and physical objects existed in the same space, through seamless
coupling between AR and robot motion. This paper contributes a set of novel
interactions and a design space for bi-directional AR sketching. We demonstrate
a series of potential applications, such as tangible physics education,
explorable mechanisms, tangible gaming for children, and in-situ robot
programming via sketching.
Comment: UIST 202
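As a purely illustrative sketch (not the paper's implementation), the bi-directional coupling can be thought of as a loop that checks each robot's pose against the virtual sketch geometry and constrains its motion when a collision is detected; all names below are hypothetical.

```python
# Hypothetical sketch of one direction of the coupling: a virtual
# sketched wall constraining a physical robot (e.g., a Sony Toio).
# Function and variable names are illustrative, not the paper's API.

def clamp_to_wall(x, wall_x):
    """Constrain a robot's x position so it cannot cross a virtual wall."""
    return min(x, wall_x)

def step(robot_x, velocity, wall_x, dt=0.1):
    """Advance the robot one tick; if the virtual wall blocks it, stop there."""
    proposed = robot_x + velocity * dt
    constrained = clamp_to_wall(proposed, wall_x)
    collided = constrained != proposed
    return constrained, collided

x, hit = 0.0, False
for _ in range(100):
    x, hit = step(x, velocity=5.0, wall_x=30.0)
    if hit:
        break  # in a real system: send a stop/pushback command to the robot
```

In the actual system the constrained pose would be sent back to the robot's motors, which is what makes the sketch "affect" the physical object.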
Human-Machine Interfaces for Service Robotics
The abstract is in the attachment.
Whisking with robots from rat vibrissae to biomimetic technology for active touch
This article summarizes some of the key features of the rat vibrissal system, including the actively controlled sweeping movements of the vibrissae known as whisking, and reviews past and ongoing research aimed at replicating some of this functionality in biomimetic robots.
ShapeBots: Shape-changing Swarm Robots
We introduce shape-changing swarm robots. A swarm of self-transformable
robots can both individually and collectively change their configuration to
display information, actuate objects, act as tangible controllers, visualize
data, and provide physical affordances. ShapeBots is a concept prototype of
shape-changing swarm robots. Each robot can change its shape by leveraging
small linear actuators that are thin (2.5 cm) and highly extendable (up to
20 cm) in both horizontal and vertical directions. The modular design of each
actuator enables various shapes and geometries of self-transformation. We
illustrate potential application scenarios and discuss how this type of
interface opens up possibilities for the future of ubiquitous and distributed
shape-changing interfaces.
Comment: UIST 201
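A hedged illustration of one use the abstract names (data visualization): mapping data values to actuator extensions clamped to the stated 2.5–20 cm range, so a swarm forms a physical bar chart. All names are hypothetical, not the ShapeBots API.

```python
# Illustrative only: map data values to per-robot actuator extensions.
MIN_CM, MAX_CM = 2.5, 20.0  # actuator limits stated in the abstract

def extension_for(value, vmax):
    """Map a value in [0, vmax] to an extension length in centimeters."""
    frac = max(0.0, min(1.0, value / vmax))
    return MIN_CM + frac * (MAX_CM - MIN_CM)

# One robot per data point -> a physical bar chart
data = [10, 40, 25, 40]
heights = [extension_for(v, max(data)) for v in data]
```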
Doctor of Philosophy dissertation
We propose to examine a representation that features combined action and perception signals; i.e., instead of having a purely geometric representation of the perceptual data, we include the motor actions (e.g., aiming a camera at an object), which are al
BWIBots: A platform for bridging the gap between AI and human–robot interaction research
Recent progress in both AI and robotics has enabled the development of general-purpose robot platforms that are capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially human–robot interaction for service robots. Called BWIBots, the robots were designed as part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then presents an overview of various research contributions that have enabled the BWIBots to better (a) execute action sequences to complete user requests, (b) efficiently ask questions to resolve user requests, (c) understand human commands given in natural language, and (d) understand human intention from afar. The article concludes with a look toward future research opportunities and applications enabled by the BWIBot platform.