Virtual Reality based Telerobotics Framework with Depth Cameras
This work describes a virtual reality (VR) based robot teleoperation framework that relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on the slave robot's end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improve task performance. We experimentally compared the operator's ability to understand the remote environment in different visualization modes: a single external static camera, an in-hand camera, an in-hand plus external static camera, and an in-hand camera with OctoMap occupancy mapping. The last option provided the operator with the best understanding of the remote environment while requiring relatively low communication bandwidth. Consequently, we propose grasping methods compatible with VR based teleoperation using the in-hand camera. Video demonstration: https://youtu.be/3vZaEykMS_E
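For illustration only, the following is a minimal Python sketch (not the paper's implementation, which uses OctoMap) of the underlying idea of trading a raw depth frame for a coarse occupancy representation: points from an in-hand depth camera are back-projected with a pinhole model and quantised into occupied voxel indices, which are far cheaper to transmit to the operator's VR scene than the full depth image. The camera intrinsics and the 5 cm voxel size are assumed example values.

import numpy as np

def depth_to_occupied_voxels(depth, fx, fy, cx, cy, voxel_size=0.05):
    """Back-project a depth image (in metres) with a pinhole model and
    quantise the resulting 3-D points into a set of occupied voxel indices.
    Sending these indices instead of the full depth frame is one way to
    keep the remote-visualisation bandwidth low."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                        # ignore pixels with no depth return
    z = depth[valid]
    x = (u[valid] - cx) * z / fx             # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)     # N x 3 points in the camera frame
    voxels = np.unique(np.floor(points / voxel_size).astype(np.int32), axis=0)
    return voxels                            # each row is one occupied voxel index

# Toy usage with a synthetic 480x640 depth frame and assumed intrinsics.
depth = np.full((480, 640), 1.2, dtype=np.float32)
occupied = depth_to_occupied_voxels(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(occupied.shape)  # typically far fewer voxels than pixels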
A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction
Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands, or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures such as engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall preferred the augmented reality condition over the monitor and mixed reality conditions
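As a toy illustration of the disambiguation problem the paper targets (not the authors' system), the Python sketch below filters workspace objects by the attributes mentioned in a verbal request and flags when more than one object matches, i.e. when a clarifying visualisation or question is required; the object list and attribute vocabulary are invented for the example.

from dataclasses import dataclass

@dataclass
class WorkspaceObject:
    name: str
    colour: str
    shape: str

def match_request(objects, colour=None, shape=None):
    """Return all objects consistent with a (possibly partial) verbal description."""
    return [o for o in objects
            if (colour is None or o.colour == colour)
            and (shape is None or o.shape == shape)]

workspace = [
    WorkspaceObject("cup_1", "red", "cup"),
    WorkspaceObject("cup_2", "red", "cup"),
    WorkspaceObject("box_1", "blue", "box"),
]

candidates = match_request(workspace, colour="red")   # e.g. "pick up the red one"
if len(candidates) == 1:
    print(f"Executing pick on {candidates[0].name}")
elif len(candidates) > 1:
    # Ambiguous: highlight the candidates (e.g. in AR/MR or on a monitor)
    # and ask the user which one was meant.
    print("Clarification needed:", [o.name for o in candidates])
else:
    print("No matching object found")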
The virtual playground: an educational virtual reality environment for evaluating interactivity and conceptual learning
The research presented in this paper aims at investigating user interaction in immersive virtual learning environments (VLEs), focusing on the role and the effect of interactivity on conceptual learning. The goal has been to examine whether the learning of young users improves through interacting in (i.e. exploring, reacting to, and acting upon) an immersive virtual environment (VE) compared to non-interactive or non-immersive environments. Empirical work was carried out with more than 55 primary school students between the ages of 8 and 12, in different between-group experiments: an exploratory study, a pilot study, and a large-scale experiment. The latter was conducted in a virtual environment designed to simulate a playground. In this "Virtual Playground", each participant was asked to complete a set of tasks designed to address arithmetical "fractions" problems. Three different conditions, two experimental virtual reality (VR) conditions and a non-VR condition, which varied the levels of activity and interactivity, were designed to evaluate how children accomplish the various tasks. Pre-tests, post-tests, interviews, video, audio, and log files were collected for each participant, and analyzed both quantitatively and qualitatively. This paper presents a selection of case studies extracted from the qualitative analysis, which illustrate the variety of approaches taken by children in the VEs in response to visual cues and system feedback. Results suggest that the fully interactive VE aided children in problem solving but did not provide as strong evidence of conceptual change as expected; rather, it was the passive VR environment, where activity was guided by a virtual robot, that seemed to support student reflection and recall, leading to indications of conceptual change
The Virtual University and Avatar Technology: E-learning Through Future Technology
E-learning is gaining increasing importance in academic education. Beyond present distance-learning technologies, a new opportunity emerges through the use of advanced avatar technology. Virtual robots acting in the environment of a virtual campus offer opportunities for advanced learning experiences. Human Machine Interaction (HMI) and Artificial Intelligence (AI) can bridge time zones and ease the professional constraints of mature students. Undergraduate students may use such technology to build on topics of their studies beyond taught lectures.
The objectives of the paper are to research the options, extent and limitations of avatar technology for academic studies in under- and postgraduate courses, and to discuss students' potential acceptance or rejection of interaction with AI.
The research method is a case study based on Sir Tony Dyson's avatar technology iBot2000. Sir Tony is a worldwide acknowledged robot specialist, creator of Star Wars' R2D2, who in recent years developed the iBot2000 technology: intelligent avatars adaptable to different environments, able to speak up to eight different languages and capable of providing logical answers to questions asked. This technology underwent many prototypes, with the latest having the specific goal of offering blended E-learning and entering the field of the virtual 3-D university, extending Web 2.0 to Web 3.0 (Dyson, 2009). Sir Tony incorporated the vast experience gained in his personal (teaching) work with children, for which he received his knighthood. The data was mainly collected through interviews with Sir Tony Dyson, which help to uncover the inventor's view on why such technology is advantageous for academic studies.
Based on interviews with Sir Tony, this research critically analyses the options, richness and restrictions that avatar (iBot2000) technology may add to academic studies. The conclusion discusses the opportunities that avatar technology may bring to learning and teaching activities, and the foreseeable limitations: the amount of resources required and the complexity of building a fully integrated virtual 3-D campus.
Key Words: virtual learning, avatar technology, iBot2000, virtual university
The perception of emotion in artificial agents
Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents