195 research outputs found

    An Inertial Device-based User Interaction with Occlusion-free Object Handling in a Handheld Augmented Reality

    Augmented Reality (AR) is a technology that merges virtual objects with real environments in real time. In AR, the interaction between the end-user and the AR system has long been a central topic. Handheld AR is a newer approach that delivers enriched 3D virtual objects when a user looks through the device's video camera. The most widely adopted handheld devices today are smartphones, which combine powerful processors, cameras for capturing still images and video, and a range of sensors capable of tracking the user's location, orientation, and motion. These modern smartphones offer a sophisticated platform for implementing handheld AR applications. However, handheld displays typically reuse interaction metaphors developed for head-mounted displays, which may depend on hardware that is inappropriate for handheld use. This paper therefore proposes a real-time inertial device-based interaction technique for 3D object manipulation, and describes the methods used for selection, holding, translation, and rotation. The technique addresses a limitation of 3D object manipulation by letting the user hold the device with both hands, without stretching out one hand to manipulate the 3D object. The paper also recaps previous work in AR and handheld AR, and presents experimental results that suggest new metaphors for manipulating 3D objects with handheld devices.
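
    A minimal sketch of what such an inertial interaction loop could look like, assuming a hypothetical IMU sample dictionary, an object held as numpy position/velocity/orientation state, and a rotate/translate mode switch; the field names and the 60 Hz rate are illustrative assumptions, not the paper's actual design:

```python
# Sketch only: map device IMU motion onto the currently held virtual object,
# so both hands can stay on the device. All names are hypothetical.
import numpy as np

DT = 1.0 / 60.0  # assumed sensor/render rate (s)

def manipulation_step(imu_sample, obj, mode):
    """Apply one frame of device motion to the currently held 3D object."""
    if mode == "rotate":
        # Integrate gyroscope rates (rad/s) into the object's orientation.
        obj["euler"] += np.asarray(imu_sample["gyro"]) * DT
    elif mode == "translate":
        # Double-integrate linear acceleration (m/s^2); a real system would
        # also filter drift (e.g. complementary or Kalman filtering).
        obj["vel"] += np.asarray(imu_sample["accel"]) * DT
        obj["pos"] += obj["vel"] * DT
    return obj
```

    Driving rotation from gyroscope rates and translation from integrated acceleration is one way to realize the two-handed grip the paper targets; the drift filtering noted in the comments would be essential in practice.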

    Learning Manipulation under Physics Constraints with Visual Perception

    Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. In this work, we consider the problem of autonomous block stacking and explore solutions to learning manipulation under the physics constraints inherent to the task, using visual perception. Inspired by intuitive physics in humans, we first present an end-to-end learning-based approach that predicts stability directly from appearance, and contrast it with a more traditional model-based approach using explicit 3D representations and physical simulation. We study the model's behavior alongside an accompanying human subject test. The model is then integrated into a real-world robotic system to guide the placement of a single wood block into the scene without collapsing the existing tower structure. To further automate consecutive block stacking, we present an alternative approach in which the model learns the physics constraint through interaction with the environment, bypassing the dedicated physics learning of the first part of this work. In particular, we are interested in tasks that require the agent to reach a goal state that may differ for every new trial. We therefore propose a deep reinforcement learning framework that learns policies for stacking tasks parametrized by a target structure.
    Comment: arXiv admin note: substantial text overlap with arXiv:1609.04861, arXiv:1711.00267, arXiv:1604.0006
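
    As a rough illustration of a policy "parametrized by a target structure", the following PyTorch sketch conditions a Q-network on both the current scene image and a goal-structure image; the architecture, grayscale input format, and discrete placement-action space are assumptions, since the abstract does not specify the network:

```python
# Sketch of a goal-conditioned Q-network: the target structure enters as an
# extra input channel, so the learned policy depends on the goal of the trial.
import torch
import torch.nn as nn

class GoalConditionedQNet(nn.Module):
    def __init__(self, n_actions: int = 16):
        super().__init__()
        # Shared encoder over the stacked (scene, goal) image channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, n_actions),  # one Q-value per placement action
        )

    def forward(self, scene: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # scene, goal: (B, 1, H, W) grayscale images; concatenating on the
        # channel axis makes the policy explicitly goal-parametrized.
        x = torch.cat([scene, goal], dim=1)
        return self.head(self.encoder(x))
```

    Feeding the goal as an input rather than retraining per goal is what lets a single policy handle a target state that "may differ for every new trial".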

    Robot Jenga: Autonomous and Strategic Block Extraction

    This paper describes our successful implementation of a robot that autonomously and strategically removes multiple blocks from an unstable Jenga tower. We present an integrated strategy for perception, planning, and control that achieves repeatable performance in this challenging physical domain. In contrast to previous implementations, we rely only on low-cost, readily available system components and use strategic algorithms to resolve system uncertainty. We present a three-stage planner for block extraction that considers block selection, extraction order, and a physics-based simulation that evaluates removability. Existing vision techniques are combined in a novel sequence to identify and track blocks within the tower. We discuss our approach following experimental results on a 5-DOF robot manipulator.
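
    The three-stage planner could be organized along the following lines; this is a hedged sketch with a stand-in load heuristic and a caller-supplied simulation callback, not the authors' actual selection rules:

```python
# Sketch of a selection -> ordering -> simulated-removability pipeline.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Block:
    x: float    # horizontal offset of the block's centre (m)
    layer: int  # layer index from the bottom of the tower
    top: bool   # blocks in the unfinished top layer may not be removed

def plan_extraction(blocks: List[Block], com_x: float,
                    is_safe: Callable[[Block], bool], k: int = 3) -> List[Block]:
    # Stage 1: block selection -- rule out blocks the game rules forbid.
    candidates = [b for b in blocks if not b.top]
    # Stage 2: extraction order -- stand-in heuristic: blocks far from the
    # tower's centre of mass are assumed to carry less load.
    candidates.sort(key=lambda b: -abs(b.x - com_x))
    # Stage 3: physics-based removability -- keep only blocks whose simulated
    # extraction leaves the tower standing, and return the k best.
    return [b for b in candidates if is_safe(b)][:k]
```

    Separating the cheap heuristic ordering from the expensive simulation check mirrors the staged structure the abstract describes: simulate only the candidates worth considering.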

    Robotic hand augmentation drives changes in neural body representation

    Humans have long been fascinated by the opportunities afforded through augmentation. This vision not only depends on technological innovations but also critically relies on our brain's ability to learn, adapt, and interface with augmentation devices. Here, we investigated whether successful motor augmentation with an extra robotic thumb can be achieved and what its implications are for the neural representation and function of the biological hand. Able-bodied participants were trained to use an extra robotic thumb (called the Third Thumb) over 5 days, including both lab-based and unstructured daily use. We challenged participants to complete normally bimanual tasks using only the augmented hand and examined their ability to develop hand-robot interactions. Participants were tested on a variety of behavioral and brain imaging tests, designed to interrogate the augmented hand's representation before and after the training. Training improved Third Thumb motor control, dexterity, and hand-robot coordination, even when cognitive load was increased or when vision was occluded. It also resulted in an increased sense of embodiment over the Third Thumb. Consequently, augmentation influenced key aspects of hand representation and motor control. Third Thumb usage weakened natural kinematic synergies of the biological hand. Furthermore, brain decoding revealed a mild collapse of the augmented hand's motor representation after training, even when the Third Thumb was not worn. Together, our findings demonstrate that motor augmentation can be readily achieved, with potential for flexible use, reduced cognitive reliance, and an increased sense of embodiment. Yet augmentation may incur changes to the biological hand representation. Such neurocognitive consequences are crucial for successful implementation of future augmentation technologies.

    Research And Development Of Industrial Integrated Robotic Workcell And Robotrun Software For Academic Curriculum

    Robotic automation is taking over laborious tasks once performed by workers across industry. The increasing demand for trained robotics engineers to implement and maintain industrial robots has led to the development of various courses in academia. Michigan Tech is a FANUC Authorized Certified Education Training Center for industrial robot training. This report discusses the research and development of an integrated robotic workcell consisting of three Fanuc robots, an Allen Bradley programmable logic controller (PLC), a Mini-Mover belt conveyor, and a Fanuc iR-vision system. The workcell lets students explore an environment similar to industry and is intended for laboratory hands-on activities in two robotics courses: Real-time Robotic Systems and Industrial Robotic Vision System. To complement hands-on activities and to meet the need to teach robotics to students without access to physical robots, an open-source robotic simulation package, RobotRun, has been created in collaboration with a faculty member and students from the Computer Science department. The software's features and a few training examples are also presented.
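
    For a flavor of the coordination logic such a workcell involves, here is a small state-machine sketch in Python; all I/O names are hypothetical, and a real cell would implement this in the PLC's ladder logic using Fanuc's own interfaces rather than application code:

```python
# Sketch: a supervisor gating a conveyor, a vision trigger, and a robot pick.
from enum import Enum, auto

class CellState(Enum):
    WAIT_PART = auto()
    LOCATE = auto()
    PICK = auto()

def step(state: CellState, io: dict) -> CellState:
    """Advance the workcell one scan cycle, returning the next state."""
    if state is CellState.WAIT_PART and io["part_at_sensor"]:
        io["conveyor_run"] = False       # stop the belt under the camera
        return CellState.LOCATE
    if state is CellState.LOCATE and io["vision_found_part"]:
        io["robot_pick_cmd"] = True      # hand the found pose to the robot
        return CellState.PICK
    if state is CellState.PICK and io["robot_done"]:
        io["conveyor_run"] = True        # release the belt for the next part
        return CellState.WAIT_PART
    return state
```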

    Editorial for the Special Issue Recognition Robotics

    Perception of the environment is an essential skill for robotic applications that interact with their surroundings. Alongside perception often comes the ability to recognize objects, people, or dynamic situations. This skill is of paramount importance in many use cases, from industrial to social robotics.