
    Robot graphic simulation testbed

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (a graphical simulation system) were improved and extended by taking advantage of advanced graphics-workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high-resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems; design and testing of automation concepts demand modeling of the affected processes, their interactions, and the proposed control systems. The automation testbed was designed to facilitate studies of Space Station automation concepts.
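
    As a rough illustration of the broad-phase test a simulation testbed like this might run before any exact solid-model check (a Python sketch, not ROBOSIM's actual code; all names are hypothetical), consider an axis-aligned bounding-box overlap test:

        # Sketch of a broad-phase collision check: axis-aligned bounding
        # boxes (AABBs) are a common cheap filter before exact tests.
        from dataclasses import dataclass

        @dataclass
        class AABB:
            min_corner: tuple  # (x, y, z)
            max_corner: tuple  # (x, y, z)

        def aabb_overlap(a: AABB, b: AABB) -> bool:
            """Boxes overlap only if their extents intersect on every axis."""
            return all(
                a.min_corner[i] <= b.max_corner[i] and b.min_corner[i] <= a.max_corner[i]
                for i in range(3)
            )

        # Example: a robot link box against a truss-segment box.
        link = AABB((0.0, 0.0, 0.0), (1.0, 0.2, 0.2))
        truss = AABB((0.5, 0.1, 0.0), (2.0, 0.3, 0.3))
        print(aabb_overlap(link, truss))  # True: candidates for an exact check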

    Human-Robot Collaboration Enabled By Real-Time Vision Tracking

    The number of robotic systems in the world is growing rapidly. However, most industrial robots are isolated in caged environments for the safety of users. There is an urgent need for human-in-the-loop collaborative robotic systems, since robots excel at precise, repetitive tasks but lack the cognitive ability and soft skills of humans. A key challenge in filling this need is enabling a robot to interpret its human co-worker's motion and intention. This research addresses that challenge by developing a collaborative human-robot interface through innovations in computer vision, robotics, and system integration. Specifically, the work integrates a holistic framework of cameras, motion sensors, and a 7-degree-of-freedom robotic manipulator, controlled by vision data processing and motion planning algorithms implemented in the open-source robotics middleware Robot Operating System (ROS).
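
    The abstract names the integration pattern but not its code. A minimal sketch of what such a bridge node could look like in rospy follows; the topic names, frame, and naive follow-the-hand goal are assumptions for illustration, not the thesis's actual interface:

        #!/usr/bin/env python
        # Sketch: a ROS node that consumes tracked human poses and
        # republishes a motion goal for the manipulator.
        import rospy
        from geometry_msgs.msg import PoseStamped

        def on_human_pose(msg: PoseStamped) -> None:
            # In the real system, vision processing and motion planning
            # would decide here how the 7-DOF arm should respond.
            goal = PoseStamped()
            goal.header.stamp = rospy.Time.now()
            goal.header.frame_id = "base_link"  # assumed robot base frame
            goal.pose = msg.pose                # naive "follow the hand" goal
            goal_pub.publish(goal)

        if __name__ == "__main__":
            rospy.init_node("human_tracking_bridge")
            goal_pub = rospy.Publisher("/arm/goal_pose", PoseStamped, queue_size=1)
            rospy.Subscriber("/vision/human_pose", PoseStamped, on_human_pose)
            rospy.spin()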

    Multi-modal interface for offline robot programming

    This thesis presents an approach for improving offline robot programming using input methods based on natural human skills. The approach focuses on teaching basic assembly and manipulation operations using a pair of industrial robots in an existing simulation environment, and is meant to be extended in future work, which is also proposed in this thesis. Given the available resources, an Add-In was developed for the simulation and offline programming software RobotStudio. This Add-In combines human pose, a graphical user interface, and optionally speech to teach the robot a sequence of targets which, together with the simulation environment, are used to automatically generate instructions. Two sensors, the Kinect and the Leap Motion, were evaluated against published references in order to select the more suitable one for this implementation. Execution of the programmed instructions was evaluated in simulation.
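
    A hedged sketch of the teaching loop's core logic, shown in Python for brevity (a real RobotStudio Add-In would be written in C# against the RobotStudio SDK, and the RAPID formatting below is simplified; function names and parameters are hypothetical):

        def record_targets(pose_stream, confirm):
            """pose_stream yields (x, y, z, rx, ry, rz) tracked hand poses;
            confirm(pose) returns True when the user accepts the current
            pose, e.g. via a GUI button or a speech command."""
            targets = []
            for pose in pose_stream:
                if confirm(pose):
                    targets.append(pose)
            return targets

        def to_rapid_instructions(targets, speed="v100", zone="z10", tool="tool0"):
            """Emit MoveL instructions in the style of ABB RAPID. Real
            robtargets also carry orientation quaternions and
            configuration data, elided here as '...'."""
            return [
                f"MoveL [[{x:.1f},{y:.1f},{z:.1f}],...], {speed}, {zone}, {tool};"
                for (x, y, z, *_rot) in targets
            ]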

    A computer-based training system combining virtual reality and multimedia

    Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special-purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component that may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality also serves as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world it represents to give the student a more intuitive way to interact with all forms of information about the system.
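
    A minimal sketch of that navigation idea, under the assumption that scene parts are keyed by name; all part names and file paths below are illustrative, not from the paper:

        # The same 3D model that drives the virtual environment doubles as
        # an index into the multimedia: selecting a part looks up its
        # linked documentation.
        multimedia_index = {
            "valve_assembly": ["videos/valve_service.mp4", "docs/valve_manual.pdf"],
            "control_panel":  ["videos/startup_sequence.mp4"],
            "coolant_pump":   ["docs/pump_schematic.pdf", "audio/pump_overview.wav"],
        }

        def on_part_selected(part_name: str) -> list:
            """Called when the trainee picks a part in the virtual scene;
            returns the multimedia items linked to that model part."""
            return multimedia_index.get(part_name, [])

        print(on_part_selected("valve_assembly"))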

    Graphics Technology in Space Applications (GTSA 1989)

    This document constitutes the proceedings of the Graphics Technology in Space Applications (GTSA 1989) conference, held at the NASA Lyndon B. Johnson Space Center in Houston, Texas, on April 12-14, 1989. The papers in these proceedings were generally published as received from the authors, with minimal modification and editing. Information contained in the individual papers is not to be construed as officially endorsed by NASA.

    CONTROLLING OF AN INDUSTRIAL ROBOTIC ARM

    Most industrial robots are still programmed using the typical teaching process through the robot teach pendant. This paper proposes an accelerometer-based system for controlling an industrial robot using two low-cost, small 3-axis wireless accelerometers. The accelerometers are attached to the human operator's arms, capturing their behavior (gestures and postures). An artificial neural network (ANN) trained with a back-propagation algorithm was used to recognize arm gestures and postures, which are then used as input to control the robot. The aim is for the robot to start moving almost at the same time as the user begins a gesture or posture (low response time). The results show that the system allows intuitive control of an industrial robot. However, the achieved recognition rate for gestures and postures (92%) should be improved in the future while preserving the system's response time (160 milliseconds). Finally, the results of tests performed with an industrial robot are presented and discussed.
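
    As a sketch of the recognition stage, a small feed-forward network trained by back-propagation can be stood up with scikit-learn's MLPClassifier standing in for the paper's ANN; the window length, feature layout, class count, and synthetic data below are all assumptions:

        # Classify short accelerometer windows into gesture/posture labels.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        # Each sample: a window from two 3-axis accelerometers, flattened.
        # 10 time steps x 6 axes = 60 features (placeholder synthetic data).
        X = rng.normal(size=(200, 60))
        y = rng.integers(0, 4, size=200)  # e.g. 4 gesture/posture classes

        # MLPClassifier trains a feed-forward net via back-propagation.
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
        clf.fit(X, y)

        # At run time, each incoming window is classified and the predicted
        # gesture is forwarded to the robot controller with minimal delay.
        window = rng.normal(size=(1, 60))
        print(clf.predict(window))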