
    Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality

    Full text link
    We address the problem of interactively controlling the workspace of a mobile robot to ensure human-aware navigation. This is especially relevant for non-expert users living in human-robot shared spaces, e.g. home environments, who want to retain control of their mobile robots, such as vacuum-cleaning or companion robots. Therefore, we introduce virtual borders that are respected by a robot while performing its tasks. For this purpose, we employ an RGB-D Google Tango tablet as human-robot interface in combination with an augmented reality application to flexibly define virtual borders. We evaluated our system with 15 non-expert users concerning accuracy, teaching time and correctness and compared the results with other baseline methods based on visual markers and a laser pointer. The experimental results show that our method features an equally high accuracy while significantly reducing the teaching time compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders.
    Comment: Accepted at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); supplementary video: https://youtu.be/oQO8sQ0JBR
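
    The abstract describes the effect of a virtual border but not its realization. As a rough sketch of the general mechanism, assuming a grid-based navigation stack, the snippet below (all names hypothetical) stamps a user-drawn border polygon into an occupancy costmap as lethal cells, so that a standard planner routes around the restricted area.

```python
import numpy as np
from matplotlib.path import Path  # point-in-polygon test

def apply_virtual_border(costmap, polygon, resolution, origin):
    """Mark all cells inside a user-defined border polygon as forbidden.

    costmap    -- 2D numpy array (0 = free, 100 = lethal)
    polygon    -- list of (x, y) border vertices in world coordinates
    resolution -- metres per cell
    origin     -- (x, y) world position of cell (0, 0)
    """
    h, w = costmap.shape
    # World coordinates of every cell centre.
    ys, xs = np.mgrid[0:h, 0:w]
    cx = origin[0] + (xs + 0.5) * resolution
    cy = origin[1] + (ys + 0.5) * resolution
    points = np.column_stack([cx.ravel(), cy.ravel()])
    inside = Path(polygon).contains_points(points).reshape(h, w)
    costmap[inside] = 100  # lethal cost: the planner will route around it
    return costmap

# A 10 m x 10 m map at 5 cm resolution; forbid a 2 m x 2 m corner region.
grid = np.zeros((200, 200), dtype=np.int8)
border = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
grid = apply_virtual_border(grid, border, resolution=0.05, origin=(0.0, 0.0))
```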

    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction

    Full text link
    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods to disambiguate natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures like engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall preferred the augmented reality condition over the monitor and mixed reality conditions.
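
    The paper compares display modalities rather than publishing code; the following minimal Python sketch (all names hypothetical) illustrates the disambiguation logic shared by the three conditions: when a verbal request matches several objects, the candidates are highlighted visually and the user selects one.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    colour: str

def candidates(objects, description):
    """All objects whose attributes match the (possibly ambiguous) request."""
    words = set(description.lower().split())
    return [o for o in objects if {o.name, o.colour} & words]

def disambiguate(objects, description, highlight):
    """Resolve an ambiguous request by highlighting candidates visually.

    highlight -- callback that renders markers over the candidate objects,
                 e.g. on a monitor, a projected AR overlay, or an MR headset.
    """
    matches = candidates(objects, description)
    if not matches:
        return None
    if len(matches) == 1:
        return matches[0]
    highlight(matches)  # show the ambiguity instead of asking follow-up questions
    choice = int(input(f"Which one (1-{len(matches)})? ")) - 1
    return matches[choice]

scene = [SceneObject("cup", "red"), SceneObject("cup", "blue"),
         SceneObject("pen", "red")]
target = disambiguate(scene, "pick up the cup",
                      highlight=lambda ms: print("highlighting:", ms))
```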

    Mixed marker-based/marker-less visual odometry system for mobile robots

    Get PDF
    When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drift into the computed estimates, making autonomous operation hard to accomplish. This paper proposes a solution that alleviates the impact of these issues by combining two vision-based pose estimation techniques working on relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm, which estimates the relative frame-to-frame movements. Errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. Framing the markers from time to time allows the robot to keep the drift bounded and additionally provides it with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated on an off-the-shelf quadrotor through extensive experimental tests.
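
    The abstract states the scheme but not the estimator. Below is a minimal 2D sketch of the underlying idea, with hypothetical names and no filtering or uncertainty handling: relative odometry increments are accumulated by dead reckoning, and the absolute pose is reset whenever a marker with a known world pose is observed, which bounds the drift.

```python
import numpy as np

def compose(p, q):
    """Compose 2D poses (x, y, theta): apply motion q in the frame of p."""
    x, y, t = p
    dx, dy, dt = q
    return np.array([x + dx * np.cos(t) - dy * np.sin(t),
                     y + dx * np.sin(t) + dy * np.cos(t),
                     t + dt])

def invert(p):
    """Inverse of a 2D pose, so that compose(p, invert(p)) is the identity."""
    x, y, t = p
    return np.array([-x * np.cos(t) - y * np.sin(t),
                      x * np.sin(t) - y * np.cos(t),
                     -t])

class FusedOdometry:
    def __init__(self, marker_map):
        self.marker_map = marker_map  # marker id -> absolute (x, y, theta)
        self.pose = np.zeros(3)

    def on_frame(self, delta):
        # Relative frame-to-frame motion from the downward-facing camera;
        # drift accumulates here between marker sightings.
        self.pose = compose(self.pose, delta)

    def on_marker(self, marker_id, marker_in_camera):
        # Absolute fix: the marker's known world pose composed with the
        # inverse of its camera-relative observation bounds the drift.
        self.pose = compose(self.marker_map[marker_id],
                            invert(marker_in_camera))
```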

    A software framework for the development of projection-based augmented reality systems

    Get PDF
    Despite the large number of methods and applications of augmented reality, there is little homogenization of the software platforms that support them. An exception may be the low-level control software provided by some high-profile vendors such as Qualcomm and Metaio; however, these provide only fine-grained modules, e.g. for element tracking. We are more concerned with the application framework, which includes the control of the devices working together to deliver the AR experience. In this paper we present a software framework that can be used for the development of AR applications based on camera-projector pairs and that is suitable for both fixed and nomadic setups.
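
    The framework itself is not reproduced in the abstract; as an illustration of the core geometric step any camera-projector AR system needs, the sketch below uses OpenCV (all values hypothetical) to map camera pixels to projector pixels through a homography, which is valid for a planar projection surface.

```python
import numpy as np
import cv2

# Camera-projector calibration: centres of a projected pattern as seen by
# the camera, paired with where they were drawn in projector pixels
# (placeholder values).
camera_pts = np.array([[102, 85], [517, 96], [508, 388], [95, 371]], np.float32)
projector_pts = np.array([[0, 0], [1280, 0], [1280, 800], [0, 800]], np.float32)

# Homography mapping camera coordinates to projector coordinates.
H, _ = cv2.findHomography(camera_pts, projector_pts)

def project_on(camera_point):
    """Projector pixel that lands on the surface point seen at camera_point."""
    p = cv2.perspectiveTransform(np.array([[camera_point]], np.float32), H)
    return tuple(p[0, 0])

# Draw an augmentation at the surface point the camera saw at (300, 200).
print(project_on((300.0, 200.0)))
```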

    An Application of Augmented Reality (AR) in the Teaching of an Arc Welding Robot

    Get PDF
    Augmented Reality (AR) is an emerging technology that uses computer vision methods to overlay virtual objects onto the real-world scene so that they appear to co-exist with the real objects. Its main objective is to enhance the user's interaction with the real world by providing the right information needed to perform a certain task. Applications of this technology in manufacturing include maintenance, assembly and telerobotics. In this paper, we explore the potential of teaching a robot to perform an arc welding task in an AR environment. We present the motivation, the features of a system using the popular ARToolkit package, and a discussion of the issues and implications of our research.
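
    The paper's ARToolkit-based system is not shown in the abstract. The sketch below is a loose analogue using OpenCV's ArUco module as a stand-in (all parameters hypothetical): a hand-held fiducial is tracked frame by frame, and its poses are collected as waypoints that could later be transformed into the robot's base frame and interpolated into a weld path.

```python
import numpy as np
import cv2  # OpenCV's ArUco module stands in for ARToolkit here

# Assumed pre-calibrated camera intrinsics (placeholder values).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

MARKER_LEN = 0.05  # marker side length in metres
# 3D corner coordinates of the marker in its own frame.
OBJ_PTS = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   np.float32) * (MARKER_LEN / 2)
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def marker_waypoint(frame):
    """Marker position in the camera frame, or None if not visible.
    Note: the detection call differs across OpenCV versions."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame, DICTIONARY)
    if ids is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, corners[0].reshape(4, 2), K, dist)
    return tvec.ravel() if ok else None

# Teaching loop: each frame where the hand-held marker is visible adds
# one waypoint of the weld path.
cap = cv2.VideoCapture(0)
waypoints = []
while len(waypoints) < 50:
    ok, frame = cap.read()
    if not ok:
        break
    wp = marker_waypoint(frame)
    if wp is not None:
        waypoints.append(wp)
cap.release()
```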

    Tangible user interfaces : past, present and future directions

    Get PDF
    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy, and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from cognitive science, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.