1,335 research outputs found

    Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality

    We address the problem of interactively controlling the workspace of a mobile robot to ensure human-aware navigation. This is especially relevant for non-expert users living in human-robot shared spaces, e.g. home environments, since they want to retain control over their mobile robots, such as vacuum-cleaning or companion robots. Therefore, we introduce virtual borders that a robot respects while performing its tasks. For this purpose, we employ an RGB-D Google Tango tablet as human-robot interface, in combination with an augmented reality application, to flexibly define virtual borders. We evaluated our system with 15 non-expert users with respect to accuracy, teaching time and correctness, and compared the results with baseline methods based on visual markers and a laser pointer. The experimental results show that our method achieves an equally high accuracy while significantly reducing the teaching time compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders. Comment: Accepted at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); supplementary video: https://youtu.be/oQO8sQ0JBR
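
    The paper does not include code; as a rough, hedged sketch of the underlying idea, a user-defined polygonal virtual border can be stamped into a 2D occupancy grid so that a standard grid-based planner treats the enclosed region as off-limits. The function name, grid convention (ROS-style 0..100 values) and parameters below are my own assumptions, not the authors' implementation.

    # Minimal sketch: mark all cells inside a user-taught border polygon as occupied.
    import numpy as np
    from matplotlib.path import Path

    def apply_virtual_border(grid, border_m, resolution=0.05, origin=(0.0, 0.0)):
        """Set cells inside the border polygon to 100 (occupied).

        grid       : 2D numpy array of occupancy values 0..100 (assumed convention)
        border_m   : list of (x, y) vertices in metres, as taught by the user
        resolution : metres per cell; origin: world coordinates of cell (0, 0)
        """
        h, w = grid.shape
        # World coordinates of every cell centre.
        xs = origin[0] + (np.arange(w) + 0.5) * resolution
        ys = origin[1] + (np.arange(h) + 0.5) * resolution
        xx, yy = np.meshgrid(xs, ys)
        centres = np.column_stack([xx.ravel(), yy.ravel()])
        inside = Path(border_m).contains_points(centres).reshape(h, w)
        grid = grid.copy()
        grid[inside] = 100  # fully occupied: the planner will not enter
        return grid

    # Example: keep the robot out of a 1 m x 1 m corner of a 5 m x 5 m map.
    occupancy = np.zeros((100, 100), dtype=np.int8)
    occupancy = apply_virtual_border(occupancy, [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])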

    This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer

    We address the problem of controlling the workspace of a 3-DoF mobile robot. In a human-robot shared space, robots should navigate in a human-acceptable way according to the users' demands. For this purpose, we employ virtual borders, i.e. non-physical borders, that allow a user to restrict the robot's workspace. To this end, we propose an interaction method based on a laser pointer to intuitively define virtual borders. This interaction method uses a previously developed framework based on robot guidance to change the robot's navigational behavior. Furthermore, we extend this framework to increase flexibility by considering different types of virtual borders, i.e. polygons and curves separating an area. We evaluated our method with 15 non-expert users with respect to correctness, accuracy and teaching time. The experimental results revealed high accuracy and teaching time linear in the border length, while correctly incorporating the borders into the robot's navigational map. Finally, our user study showed that non-expert users can employ our interaction method. Comment: Accepted at the 2019 Third IEEE International Conference on Robotic Computing (IRC); supplementary video: https://youtu.be/lKsGp8xtyI
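
    As a complement to the polygon case above, the second border type mentioned here (a curve separating an area) can be illustrated by rasterising a user-taught polyline into the occupancy grid as a thin "wall" of occupied cells. This is an assumed sketch, not the authors' framework; names and parameters are hypothetical.

    # Minimal sketch: draw a polyline border into an occupancy grid.
    import numpy as np

    def rasterise_border_curve(grid, curve_m, resolution=0.05, origin=(0.0, 0.0)):
        """Mark cells along a polyline (list of (x, y) points in metres) as occupied."""
        grid = grid.copy()
        h, w = grid.shape
        for (x0, y0), (x1, y1) in zip(curve_m[:-1], curve_m[1:]):
            length = np.hypot(x1 - x0, y1 - y0)
            steps = max(2, int(np.ceil(length / resolution)) + 1)
            for t in np.linspace(0.0, 1.0, steps):
                col = int((x0 + t * (x1 - x0) - origin[0]) / resolution)
                row = int((y0 + t * (y1 - y0) - origin[1]) / resolution)
                if 0 <= row < h and 0 <= col < w:
                    grid[row, col] = 100  # occupied: acts like a wall for the planner
        return grid

    # Example: a straight virtual wall across the middle of the map.
    occupancy = np.zeros((100, 100), dtype=np.int8)
    occupancy = rasterise_border_curve(occupancy, [(0.0, 2.5), (5.0, 2.5)])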

    Direct interaction with large displays through monocular computer vision

    Large displays are everywhere, and have been shown to provide higher productivity gains and user satisfaction than traditional desktop monitors. The computer mouse remains the most common input tool for users to interact with these larger displays. Much effort has been made to make this interaction more natural and more intuitive for the user. The use of computer vision for this purpose has been well researched, as it provides freedom and mobility to users and allows them to interact at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly when used for depth information recovery. This thesis aims to investigate the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the interaction area available, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with a computer display as easy as pointing at real-world objects are explored. Studies were conducted to investigate the way humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems. Models that underpin the pointing strategies used in many previous interactive systems were formalized. A proof-of-concept prototype was built and evaluated in several user studies. The results suggest that it is possible to allow natural user interaction with large displays using low-cost monocular computer vision. Furthermore, the models developed and lessons learnt in this research can assist designers in developing more accurate and natural interactive systems that make use of humans' natural pointing behaviours.
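
    A common geometric formulation of distal pointing, and a plausible building block for the virtual touchscreen described above, is to cast a ray from the user's eye through the fingertip and intersect it with the display plane. The sketch below is my own illustration of that model, not code from the thesis; all names and coordinates are assumptions.

    # Minimal sketch: eye-to-fingertip ray intersected with the display plane.
    import numpy as np

    def point_on_display(eye, fingertip, display_origin, display_normal):
        """Return the 3D point where the eye->fingertip ray hits the display plane,
        or None if the user is pointing parallel to or away from the screen.
        All arguments are 3D points/vectors in the same world frame (e.g. metres)."""
        eye, fingertip = np.asarray(eye, float), np.asarray(fingertip, float)
        n = np.asarray(display_normal, float)
        d = fingertip - eye                          # pointing direction
        denom = np.dot(n, d)
        if abs(denom) < 1e-9:
            return None                              # parallel to the screen
        t = np.dot(n, np.asarray(display_origin, float) - eye) / denom
        return eye + t * d if t >= 0 else None       # None: pointing away

    # Example: display is the plane z = 0, user stands 2 m in front of it.
    cursor = point_on_display(eye=(0.0, 1.6, 2.0), fingertip=(0.1, 1.5, 1.6),
                              display_origin=(0.0, 0.0, 0.0), display_normal=(0.0, 0.0, 1.0))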

    A Framework for Interactive Teaching of Virtual Borders to Mobile Robots

    The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support residents in their everyday life. People appreciate the presence of robots in their environment as long as they retain control over them. One important aspect is the control of a robot's workspace. Therefore, we introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framework that allows a person to interactively restrict a mobile robot's workspace. To show the validity of this framework, a concrete implementation based on visual markers is presented. Afterwards, the mobile robot is capable of performing its tasks while respecting the new virtual borders. The approach is accurate, flexible and less time-consuming than explicit robot programming. Hence, even non-experts are able to teach virtual borders to their robots, which is especially interesting for domains such as vacuum-cleaning or service robots in home environments. Comment: 7 pages, 6 figures
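
    One way to picture such a framework, purely as an illustrative sketch and not the paper's actual design, is a small teaching loop that accumulates border points from whichever interaction method is plugged in (visual markers here, a laser pointer or AR tablet elsewhere) until the user closes the border. Class and method names are hypothetical.

    # Minimal sketch: interaction-method-agnostic border teaching loop.
    from dataclasses import dataclass, field

    @dataclass
    class BorderTeacher:
        points: list = field(default_factory=list)   # (x, y) in the map frame, metres
        closed: bool = False

        def add_point(self, x, y):
            """Called by the interaction method whenever a new border point is taught."""
            if not self.closed:
                self.points.append((x, y))

        def close(self):
            """User signals the border is complete; it can then be applied to the map."""
            if len(self.points) >= 3:
                self.closed = True
            return self.closed

    teacher = BorderTeacher()
    for x, y in [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]:  # e.g. detected marker poses
        teacher.add_point(x, y)
    if teacher.close():
        pass  # e.g. hand teacher.points to a map-update step such as the earlier polygon sketch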

    Mobile Pointing Task in the Physical World: Balancing Focus and Performance while Disambiguating

    We address the problem of mobile distal selection of physical objects when pointing at them in augmented environments. We focus on the disambiguation step needed when several objects are selected with a rough pointing gesture. A usual disambiguation technique forces users to switch their focus from the physical world to a list displayed on a handheld device's screen. In this paper, we explore the balance between changes of users' focus and performance. We present two novel interaction techniques that allow users to maintain their focus in the physical world. Both use a cycling mechanism, performed with a wrist-rolling gesture for P2Roll or a finger-sliding gesture for P2Slide. A user experiment showed that keeping users' focus in the physical world outperforms techniques that require users to switch their focus to a digital representation distant from the physical objects, when disambiguating up to 8 objects.
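
    The cycling principle behind P2Roll can be sketched as mapping a continuous wrist-roll angle onto the list of candidates captured by the rough pointing gesture; the parameters below (roll range, quantisation) are assumptions for illustration, not the authors' values.

    # Minimal sketch: roll-angle-driven cycling through ambiguous candidates.
    def candidate_from_roll(candidates, roll_deg, roll_range_deg=90.0):
        """Return the candidate currently highlighted for a given wrist-roll angle.

        candidates     : objects captured by the rough pointing gesture
        roll_deg       : current wrist roll relative to the start of disambiguation
        roll_range_deg : total roll excursion mapped over all candidates (assumed)
        """
        if not candidates:
            return None
        # Clamp the roll to the usable range, then quantise it into len(candidates) bins.
        clamped = min(max(roll_deg, 0.0), roll_range_deg)
        step = roll_range_deg / len(candidates)
        return candidates[min(int(clamped / step), len(candidates) - 1)]

    # Example: 4 ambiguous lamps; a 50-degree roll highlights the third one.
    print(candidate_from_roll(["lamp A", "lamp B", "lamp C", "lamp D"], 50.0))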

    Control Framework for Hand-Arm Coordination

    Co-ordination of multiple manipulators requires cooperation at several levels of the control hierarchy. A distributed processing environment with no hardware dependencies, except at the motor-servo level, would provide a flexible architecture for coordination. A system along these lines is being built to control an articulated hand and an arm. The four levels of control envisaged are a task decomposition level, a planning level, a scheduling level and a server level. The hand will carry both force and tactile sensors; feedback from these is used to provide adaptive control in grasping tasks. The processing of the sensory information is performed by independent processes, with analyzed information being sent to the relevant layer of the system. The manipulators are also controlled by individual processes. All processes can open communication channels, with an active process sending or receiving commands or data. We describe the scope of the system, the current setup, and future lines of development.
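
    Purely as a modern, illustrative sketch of the process-and-channel structure described here (the original system naturally used different tooling), independent sensor and manipulator processes can exchange analysed data and commands over an explicit communication channel.

    # Minimal sketch: sensor process feeding an adaptive grasp controller process.
    import multiprocessing as mp

    def tactile_sensor(out_queue):
        # Independent sensor process: analyses raw readings and forwards results.
        for pressure in [0.1, 0.4, 0.9]:          # stand-in for real tactile data
            out_queue.put(("tactile", pressure))
        out_queue.put(("tactile", None))          # sentinel: sensor finished

    def hand_controller(in_queue):
        # Manipulator process: adapts grasp behaviour based on incoming sensor data.
        while True:
            source, value = in_queue.get()
            if value is None:
                break
            print(f"grasp servo target adjusted from {source} reading {value}")

    if __name__ == "__main__":
        channel = mp.Queue()
        procs = [mp.Process(target=tactile_sensor, args=(channel,)),
                 mp.Process(target=hand_controller, args=(channel,))]
        for p in procs:
            p.start()
        for p in procs:
            p.join()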

    Touchless Typing using Head Movement-based Gestures

    Physical contact-based typing interfaces are not suitable for people with upper-limb disabilities such as quadriplegia. This paper therefore proposes a touchless typing interface that makes use of an on-screen QWERTY keyboard and a front-facing smartphone camera mounted on a stand. The keys of the keyboard are grouped into nine color-coded clusters. Users point to the letters they want to type simply by moving their head. The head movements of the users are recorded by the camera, and the recorded gestures are then translated into a cluster sequence. The translation module is implemented using CNN-RNN, Conv3D, and a modified GRU-based model that uses pre-trained embeddings rich in head-pose features. The performance of these models was evaluated under four different scenarios on a dataset of 2234 video sequences collected from 22 users. The modified GRU-based model outperforms the standard CNN-RNN and Conv3D models in three of the four scenarios. The results are encouraging and suggest promising directions for future research. Comment: The two lead authors contributed equally. The dataset and code are available upon request; please contact the last author.
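
    To make the gesture-to-cluster translation concrete, here is a minimal GRU classifier over per-frame head-pose features with nine output clusters. The layer sizes, feature dimension and sequence length are assumptions for illustration; this is not the authors' released model.

    # Minimal sketch: GRU mapping a head-pose feature sequence to one of nine clusters.
    import torch
    import torch.nn as nn

    class GestureToCluster(nn.Module):
        def __init__(self, feature_dim=6, hidden_dim=64, num_clusters=9):
            super().__init__()
            self.gru = nn.GRU(feature_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, num_clusters)

        def forward(self, x):                 # x: (batch, frames, feature_dim)
            _, last_hidden = self.gru(x)      # last_hidden: (1, batch, hidden_dim)
            return self.head(last_hidden[-1]) # cluster logits: (batch, num_clusters)

    # Example: one 30-frame gesture with 6 head-pose features per frame.
    model = GestureToCluster()
    logits = model(torch.randn(1, 30, 6))
    predicted_cluster = logits.argmax(dim=-1)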

    Designing Disambiguation Techniques for Pointing in the Physical World

    Several ways of selecting physical objects exist, including touching and pointing at them. Allowing the user to interact at a distance by pointing at physical objects can be challenging when the environment contains a large number of interactive physical objects, possibly occluded by other everyday items. Previous pointing techniques have highlighted the need for disambiguation techniques. Addressing this challenge, this paper contributes a design space that organizes, along groups and axes, a set of options enabling designers to (1) describe, (2) classify, and (3) design disambiguation techniques. First, we have not yet found a technique in the literature that our design space cannot describe. Second, every technique corresponds to a different path along the axes of our design space. Third, the design space allows the definition of several new paths/solutions that have not yet been explored. We illustrate this generative power with the example of one such technique, Physical Pointing Roll (P2Roll).
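
    The core idea of describing each technique as one choice per axis can be sketched with a small data structure; the axes and options below are hypothetical stand-ins, not the paper's actual groups and axes.

    # Minimal sketch: a technique as a path (one option per axis) through a design space.
    from itertools import product

    DESIGN_SPACE = {                       # hypothetical axes for illustration only
        "feedback_location": ["physical world", "handheld screen"],
        "cycling_input":     ["wrist roll", "finger slide", "button press"],
        "candidate_order":   ["spatial", "by distance"],
    }

    def describe(technique):
        """A technique is a dict mapping every axis to one of its options."""
        assert technique.keys() == DESIGN_SPACE.keys()
        return " / ".join(f"{axis}: {technique[axis]}" for axis in DESIGN_SPACE)

    p2roll_like = {"feedback_location": "physical world",
                   "cycling_input": "wrist roll",
                   "candidate_order": "spatial"}
    print(describe(p2roll_like))

    # Enumerate every path through the (hypothetical) space to spot unexplored combinations.
    all_paths = [dict(zip(DESIGN_SPACE, combo)) for combo in product(*DESIGN_SPACE.values())]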
    • …