3,406 research outputs found

    A Framework for Interactive Teaching of Virtual Borders to Mobile Robots

    The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support residents in their everyday life. People appreciate the presence of robots in their environment as long as they keep control over them. One important aspect is the control of a robot's workspace. Therefore, we introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framework that allows a person to interactively restrict a mobile robot's workspace. To show the validity of this framework, a concrete implementation based on visual markers is presented. Afterwards, the mobile robot is capable of performing its tasks while respecting the new virtual borders. The approach is accurate, flexible and less time-consuming than explicit robot programming. Hence, even non-experts are able to teach virtual borders to their robots, which is especially interesting in domains like vacuuming or service robots in home environments. Comment: 7 pages, 6 figures

    This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer

    We address the problem of controlling the workspace of a 3-DoF mobile robot. In a human-robot shared space, robots should navigate in a human-acceptable way according to the users' demands. For this purpose, we employ virtual borders, i.e. non-physical borders, that allow a user to restrict the robot's workspace. To this end, we propose an interaction method based on a laser pointer to intuitively define virtual borders. This interaction method uses a previously developed framework based on robot guidance to change the robot's navigational behavior. Furthermore, we extend this framework to increase flexibility by considering different types of virtual borders, i.e. polygons and curves separating an area. We evaluated our method with 15 non-expert users concerning correctness, accuracy and teaching time. The experimental results revealed a high accuracy and a teaching time linear in the border length, while correctly incorporating the borders into the robot's navigational map. Finally, our user study showed that non-expert users can employ our interaction method. Comment: Accepted at 2019 Third IEEE International Conference on Robotic Computing (IRC), supplementary video: https://youtu.be/lKsGp8xtyI
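    The abstract does not give the papers' internal representation, but as an illustrative sketch (the grid convention, the Bresenham-based rasterization and all names are assumptions, not taken from the work), a polygonal virtual border could be written into an occupancy-grid navigational map so that a standard planner treats the border cells as occupied:

```python
import numpy as np

def rasterize_border(grid, vertices, resolution=0.05):
    """Mark the edges of a polygonal virtual border as occupied cells.

    grid       : 2D numpy array, 0 = free, 1 = occupied (illustrative convention)
    vertices   : list of (x, y) world coordinates of the border polygon
    resolution : meters per grid cell (assumed map resolution)
    """
    def to_cell(p):
        return int(round(p[0] / resolution)), int(round(p[1] / resolution))

    cells = [to_cell(v) for v in vertices]
    # Close the polygon by pairing each vertex with its successor.
    for (x0, y0), (x1, y1) in zip(cells, cells[1:] + cells[:1]):
        # Bresenham's line algorithm: occupy every cell along the edge.
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
        err = dx + dy
        while True:
            grid[y0, x0] = 1
            if (x0, y0) == (x1, y1):
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy
    return grid

# A 2 m x 2 m map at 5 cm resolution with a square border around (0.2..1.5 m).
grid = rasterize_border(np.zeros((40, 40)),
                        [(0.2, 0.2), (1.5, 0.2), (1.5, 1.5), (0.2, 1.5)])
```

    Only the border cells themselves become occupied here; whether the enclosed area is also forbidden (a polygon) or only crossing the line is (a separating curve) would be a separate policy on top of this map.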

    Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality

    We address the problem of interactively controlling the workspace of a mobile robot to ensure human-aware navigation. This is especially relevant for non-expert users living in human-robot shared spaces, e.g. home environments, since they want to keep control of their mobile robots, such as vacuum-cleaning or companion robots. Therefore, we introduce virtual borders that are respected by a robot while performing its tasks. For this purpose, we employ an RGB-D Google Tango tablet as human-robot interface in combination with an augmented reality application to flexibly define virtual borders. We evaluated our system with 15 non-expert users concerning accuracy, teaching time and correctness, and compared the results with other baseline methods based on visual markers and a laser pointer. The experimental results show that our method features an equally high accuracy while significantly reducing the teaching time compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders. Comment: Accepted at 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), supplementary video: https://youtu.be/oQO8sQ0JBR

    The Virtual University and Avatar Technology: E-learning Through Future Technology

    E-learning is gaining importance in academic education. Beyond present distance-learning technologies, a new opportunity emerges through the use of advanced avatar technology. Virtual robots acting in the environment of a virtual campus offer advanced learning experiences. Human-Machine Interaction (HMI) and Artificial Intelligence (AI) can bridge time zones and ease the professional constraints of mature students. Undergraduate students may use such technology to build on topics of their studies beyond taught lectures. The objectives of the paper are to research the options, extent and limitations of avatar technology for academic studies in under- and postgraduate courses, and to discuss students' potential acceptance or rejection of interaction with AI. The research method is a case study based on Sir Tony Dyson's avatar technology iBot2000. Sir Tony is a worldwide acknowledged robot specialist and creator of Star Wars' R2D2, who in recent years developed the iBot2000 technology: intelligent avatars adaptable to different environments, able to speak up to eight different languages and capable of providing logical answers to questions asked. This technology underwent many prototypes, the latest with the specific goal of offering blended e-learning, entering the field of the virtual 3-D university and extending Web2.0 to Web3.0 (Dyson, 2009). Sir Tony drew on the vast experience gained in his personal (teaching) work with children, for which he received his knighthood. The data was mainly collected through interviews with Sir Tony Dyson, which help uncover the inventor's view on why such technology is advantageous for academic studies. Based on these interviews, this research critically analyses the options, richness and restrictions that avatar (iBot2000) technology may add to academic studies.
The conclusion discusses the opportunities that avatar technology may bring to learning and teaching activities, as well as the foreseeable limitations: the amount of resources required and the complexity of building a fully integrated virtual 3-D campus. Key words: virtual learning, avatar technology, iBot2000, virtual university

    An internet of laboratory things

    By creating "an Internet of Laboratory Things" we have built a blend of real and virtual laboratory spaces that enables students to gain the practical skills necessary for their professional science and engineering careers. All our students are distance learners. This provides them by default with the proving ground needed to develop their skills in remotely operating equipment and in collaborating with peers despite not being co-located. Our laboratories accommodate state-of-the-art research-grade equipment, as well as large class sets of off-the-shelf workstations and bespoke teaching apparatus. Distance to the student is no object and the facilities are open all hours. This approach is essential for STEM qualifications requiring the development of practical skills, and offers higher efficiency and greater accessibility than achievable in a solely residential programme.

    Spatial Programming for Industrial Robots through Task Demonstration

    We present an intuitive system for programming industrial robots using markerless gesture recognition and mobile augmented reality, in the spirit of programming by demonstration. The approach covers gesture-based task definition and adaptation by human demonstration, as well as task evaluation through augmented reality. A 3D motion-tracking system and a handheld device form the basis of the presented spatial programming system. In this publication, we present a prototype for programming an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects using the handheld's 2D camera. The programmer is thus able to define the program through natural bare-hand manipulation of these objects, with direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller using a unified interface. Finally, we discuss an application of the presented spatial programming approach to robot-based welding tasks.
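    The abstract does not specify how the demonstrated sequence is encoded; as a minimal sketch (the data structure, command strings and all names are illustrative assumptions, not the paper's unified interface), an assembly sequence could be an ordered list of pick-and-place steps, each pairing a recognized object with its observed pick and place poses, flattened into controller commands:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Position only; a real system would also carry orientation (illustrative).
Pose = Tuple[float, float, float]

@dataclass
class PickPlaceTask:
    """One demonstrated pick-and-place step (hypothetical structure)."""
    object_id: str    # object identified by the scene reconstruction
    pick_pose: Pose   # estimated pose where the demonstrator grasped it
    place_pose: Pose  # pose where the demonstrator released it

def to_robot_program(tasks: List[PickPlaceTask]) -> List[str]:
    """Flatten a demonstrated sequence into generic controller commands,
    standing in for a unified robot-controller interface."""
    program = []
    for t in tasks:
        program.append(f"MOVE {t.pick_pose}; GRASP {t.object_id}")
        program.append(f"MOVE {t.place_pose}; RELEASE {t.object_id}")
    return program

demo = [PickPlaceTask("gear", (0.1, 0.2, 0.0), (0.4, 0.2, 0.05))]
commands = to_robot_program(demo)
```

    Keeping the demonstration as object-level tasks rather than raw trajectories is what makes gesture-based adaptation straightforward: editing one step only rewrites its two commands.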

    Virtual reality interfaces for seamless interaction with the physical reality

    In recent years head-mounted displays (HMDs) for virtual reality (VR) have made the transition from research to consumer product, and are increasingly used for productive purposes such as 3D modeling in the automotive industry and teleconferencing. VR allows users to create and experience real-world-like models of products, and enables immersive social interaction with distant colleagues. These solutions are a promising alternative to physical prototypes and meetings, as they require less investment in time and material. VR exploits our visual dominance to deliver these experiences, making users believe that they are in another reality. However, while their mind is present in VR, their body remains in the physical reality. From the user's perspective, this brings considerable uncertainty to the interaction. Currently, users are forced to take off their HMD in order to, for example, see who is observing them and to understand whether their physical integrity is at risk. This disrupts their interaction in VR, leading to a loss of presence, a main quality measure for the success of VR experiences. In this thesis, I address this uncertainty by developing interfaces that enable users to stay in VR while supporting their awareness of the physical reality. They maintain this awareness without having to take off the headset, which I refer to as seamless interaction with the physical reality. The overarching research vision that guides this thesis is, therefore, to reduce the disconnect between the virtual and physical reality. My research is motivated by a preliminary exploration of user uncertainty towards using VR in co-located, public places. This exploration revealed three main foci: (a) security and privacy, (b) communication with physical collaborators, and (c) managing presence in both the physical and virtual reality.
Each theme represents a section of my dissertation, in which I identify central challenges and give directions towards overcoming them, as they emerged from the work presented here. First, I investigate security and privacy in co-located situations by revealing to what extent bystanders are able to observe general tasks. In this context, I explicitly investigate the security considerations of authentication mechanisms. I review how existing authentication mechanisms can be transferred to VR and present novel approaches that are more usable and secure than existing solutions from prior work. Second, to support communication between VR users and physical collaborators, I contribute design implications for VR interactions that enable observers to choose opportune moments to interrupt HMD users. Moreover, I contribute methods for displaying interruptions in VR and discuss their effect on presence and performance. I also found that different virtual presentations of co-located collaborators have an effect on social presence, performance and trust. Third, I close my thesis by investigating methods to manage presence in both the physical and virtual realities. I propose systems and interfaces for transitioning between them that empower users to decide how much they want to be aware of the other reality. Finally, I discuss the opportunity to systematically allocate senses to these two realities: the visual one for VR, and the auditory and haptic ones for the physical reality. Moreover, I provide specific design guidelines on how to use these findings to alert VR users about physical borders and obstacles.