
    Effects of Control Device and Task Complexity on Performance and Task Shedding During a Robotic Arm Task

    The use of robotic arms across domains is increasing, but the relationship between control features and performance is not fully understood. The goal of this research was to investigate differences in task performance between two control devices at high and low task complexity when participants can shed tasks to automation. In this experiment, 40 undergraduates (24 female) used two control devices, a Leap Motion controller and an Xbox controller, to teleoperate a robotic arm in a high- or low-complexity peg placement task. Simultaneously, participants were tasked with scanning images for tanks. During the experiment, participants had the option to shed the peg task to imperfect automation. Analyses indicated a significant main effect of control device on task completion rate and time to first grasp of the peg, with completion rate higher and time lower when using the Leap. However, participants made significantly more errors with the Leap Motion controller than with the Xbox controller. Participants shed tasks similarly, and at similar times, with both control devices. The 2 x 2 mixed ANOVAs partially supported the proposed hypotheses. The results of this study indicate that control device affects performance on a robotic arm task. The Leap Motion controller supports a higher task completion rate and quicker peg grasps at both high and low task complexity when compared with the Xbox controller. This supports the extension of Control Order Theory into three-dimensional space and suggests that the Leap Motion controller can be implemented in some domains. However, the criticality and frequency of errors should be carefully considered.

    Safe, Remote-Access Swarm Robotics Research on the Robotarium

    This paper describes the development of the Robotarium -- a remotely accessible, multi-robot research facility. The impetus behind the Robotarium is that multi-robot testbeds constitute an integral and essential part of the multi-agent research cycle, yet they are expensive, complex, and time-consuming to develop, operate, and maintain. These resource constraints, in turn, limit access for large groups of researchers and students. The Robotarium remedies this by providing users with remote access to a state-of-the-art multi-robot test facility. This paper details the design and operation of the Robotarium and connects these to the particular considerations one must take into account when making complex hardware remotely accessible. In particular, safety must be built in at the design phase without overly constraining which coordinated control programs users can upload and execute, which calls for minimally invasive safety routines with provable performance guarantees.
    Comment: 13 pages, 7 figures, 3 code samples, 72 references

    Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration

    We propose a technique for multi-task learning from demonstration that trains the controller of a low-cost robotic arm to accomplish several complex picking and placing tasks, as well as non-prehensile manipulation. The controller is a recurrent neural network that uses raw images as input and generates robot arm trajectories, with its parameters shared across the tasks. The controller also combines VAE-GAN-based reconstruction with autoregressive multimodal action prediction. Our results demonstrate that it is possible to learn complex manipulation tasks, such as picking up a towel, wiping an object, and returning the towel to its previous position, entirely from raw images with direct behavior cloning. We show that weight sharing and reconstruction-based regularization substantially improve generalization and robustness, and that training on multiple tasks simultaneously increases the success rate on all tasks.
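To make the direct behavior cloning idea concrete, here is a deliberately tiny sketch (illustrative only; the paper uses a recurrent VAE-GAN controller on raw images, not a linear model, and these demonstrations are synthetic): a single set of parameters is regressed against expert actions pooled from several tasks, so the weights are shared across tasks.

```python
import random

random.seed(0)

def predict(w, b, x):
    # Linear controller: action = W @ x + b, written without numpy.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

# Hypothetical demonstrations: (observation, expert action) pairs pooled
# from several tasks so that one set of weights is shared across tasks.
demos = [([random.random() for _ in range(4)],
          [random.random() for _ in range(2)]) for _ in range(64)]

w = [[0.0] * 4 for _ in range(2)]   # shared weights, 2 action dimensions
b = [0.0, 0.0]
lr = 0.1

for _ in range(200):                # SGD on the mean-squared imitation loss
    for x, a in demos:
        y = predict(w, b, x)
        err = [yi - ai for yi, ai in zip(y, a)]
        for i in range(2):
            b[i] -= lr * err[i]
            for j in range(4):
                w[i][j] -= lr * err[i] * x[j]

loss = sum((yi - ai) ** 2
           for x, a in demos
           for yi, ai in zip(predict(w, b, x), a)) / len(demos)
```

The paper's contribution lies in what replaces this linear map (a recurrent image-conditioned network with reconstruction-based regularization), but the training signal, regression onto demonstrated actions, is the same.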

    The Penn Jerboa: A Platform for Exploring Parallel Composition of Templates

    We have built a 12-DOF, passively compliant, legged, tailed biped actuated by four brushless DC motors. We anticipate that this machine will achieve varied modes of quasistatic and dynamic balance, enabling a broad range of locomotion tasks including sitting, standing, walking, hopping, running, turning, leaping, and more. Achieving this diversity of behavior with a single under-actuated body requires a correspondingly diverse array of controllers, motivating our interest in compositional techniques that promote mixing and reuse of relatively few base constituents to achieve a combinatorially growing array of available choices. Here we report on the development of one important example of such a behavioral programming method, the construction of a novel monopedal sagittal-plane hopping gait through parallel composition of four decoupled 1-DOF base controllers. For this example behavior, the legs are locked in phase and the body is fastened to a boom to restrict motion to the sagittal plane. The platform's locomotion is powered by the hip motor, which adjusts leg touchdown angle in flight and balance in stance, along with a tail motor that adjusts body shape in flight and drives energy into the passive leg shank spring during stance. The motor control signals arise from the application in parallel of four simple, completely decoupled 1-DOF feedback laws that provably stabilize in isolation four corresponding 1-DOF abstract reference plants. Each of these abstract 1-DOF closed-loop dynamics represents some simple but crucial component of the locomotion task at hand. We present a partial proof of correctness for this parallel composition of template reference systems along with data from the physical platform suggesting these templates are anchored, as evidenced by the correspondence of their characteristic motions with a suitably transformed image of traces from the physical platform.
    Comment: Technical Report to accompany: A. De and D. Koditschek, "Parallel composition of templates for tail-energized planar hopping," in 2015 IEEE International Conference on Robotics and Automation (ICRA), May 2015. v2: Used plain LaTeX article class; corrected gap radius and specific force/torque numbers
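As a rough illustration of the parallel-composition idea described above, the following sketch (not the authors' code; the gains, coordinate names, and targets are all hypothetical) applies four fully decoupled 1-DOF PD laws, one per template coordinate, and collects their torque commands independently.

```python
# Illustrative sketch of parallel composition: each 1-DOF feedback law
# sees only its own coordinate and contributes its own control signal.

def pd_law(kp, kd):
    """Return a 1-DOF PD feedback law u = -kp*e - kd*de."""
    def law(error, error_rate):
        return -kp * error - kd * error_rate
    return law

# One law per template coordinate (names and gains are hypothetical):
# leg touchdown angle, pitch balance, tail shape, shank-spring energy.
laws = {
    "touchdown": pd_law(kp=8.0, kd=0.5),
    "balance":   pd_law(kp=12.0, kd=1.0),
    "tail":      pd_law(kp=5.0, kd=0.3),
    "energy":    pd_law(kp=2.0, kd=0.1),
}

def compose(state, targets):
    """Apply each decoupled law to its own coordinate; no cross-coupling."""
    return {name: laws[name](state[name][0] - targets[name], state[name][1])
            for name in laws}

state = {k: (0.1, 0.0) for k in laws}     # (value, rate) per coordinate
targets = {k: 0.0 for k in laws}
torques = compose(state, targets)
```

The hard part the paper addresses is not this bookkeeping but proving that the composed laws, each stable for its own abstract plant, remain stable on the coupled physical body.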

    Semi-automatic Design for Disassembly Strategy Planning: An Augmented Reality Approach

    The mounting attention to environmental issues requires adopting better disassembly procedures at the product's End of Life. Planning and assessing different disassembly strategies in the early stages of the design process can improve the development of sustainable products with an approach oriented to easy disposal and recycling. Nowadays many Computer Aided Process Planning software packages provide optimized assembly or disassembly sequences, but they are mainly based on a time and cost compression approach, neglecting the human factor. The environment we developed is based upon the integration of a CAD system, an Augmented Reality tool, a Leap Motion Controller device, see-through glasses, and an algorithm for evaluating disassembly strategies: this approach guarantees a more effective interaction with the real and virtual 3D assembly than an approach relying only on CAD-based disassembly sequence planning. In this way, the operator can not only test automatic disassembly sequences in a more natural and intuitive way, but can also propose different strategies to improve ergonomics. The methodology has been tested in a real case study to evaluate the strengths and criticalities of this approach.

    Exploring Alternative Control Modalities for Unmanned Aerial Vehicles

    Unmanned aerial vehicles (UAVs), commonly known as drones, are defined by the International Civil Aviation Organization (ICAO) as aircraft without a human pilot on board. They are currently utilized primarily in the defense and security sectors but are moving towards the general market in surprisingly powerful and inexpensive forms. While drones are presently restricted to non-commercial recreational use in the USA, it is expected that they will soon be widely adopted for both commercial and consumer use. Potentially, UAVs can revolutionize various business sectors including private security, agricultural practices, product transport, and perhaps even aerial advertising. Business Insider foresees that 12% of the expected $98 billion cumulative global spending on aerial drones through the following decade will be for business purposes.[28] At the moment, most drones are controlled by some sort of classic joystick or multitouch remote controller. While drone manufacturers have improved the overall controllability of their products, most drones shipped today are still quite challenging for inexperienced users to pilot. In order to help mitigate these controllability challenges and flatten the learning curve, gesture controls can be utilized to improve the piloting of UAVs. The purpose of this study was to develop and evaluate an improved and more intuitive method of flying UAVs by supporting the use of hand gestures and other non-traditional control modalities. The goal was to employ and test an end-to-end UAV system that provides an easy-to-use control interface for novice drone users. The expectation was that with gesture-based navigation, novice users would have an enjoyable and safe experience, quickly learning how to navigate a drone with ease and avoiding losing or damaging the vehicle while on the initial learning curve.
    During the course of this study we learned that while this approach offers considerable promise, a number of technical difficulties make the problem much harder than anticipated. This thesis details our approach to the problem, analyzes the user data we collected, and summarizes the lessons learned.

    The development of a human-robot interface for industrial collaborative system

    Industrial robots have been identified as one of the most effective solutions for optimising output and quality within many industries. However, a number of manufacturing applications involve complex tasks and inconstant components which prohibit the use of fully automated solutions in the foreseeable future. A breakthrough in robotic technologies and changes in safety legislation have supported the creation of robots that coexist with and assist humans in industrial applications. It has been broadly recognised that human-robot collaborative systems would be a realistic solution as an advanced production system with a wide range of applications and high economic impact. This type of system can utilise the best of both worlds: the robot can perform simple tasks that require high repeatability, while the human performs tasks that require judgement and the dexterity of the human hands. Robots in such systems will operate as “intelligent assistants”. In a collaborative working environment, robot and human share the same working area and interact with each other. This level of interface requires effective ways of communication and collaboration to avoid unwanted conflicts. This project aims to create a user interface for an industrial collaborative robot system through the integration of current robotic technologies. The robotic system is designed for seamless collaboration with a human in close proximity. The system is capable of communicating with the human via the exchange of gestures, as well as visual signals which operators can observe and comprehend at a glance. The main objective of this PhD is to develop a Human-Robot Interface (HRI) for communication with an industrial collaborative robot during collaboration in proximity. The system is developed in conjunction with a small-scale collaborative robot system which has been integrated using off-the-shelf components.
    The system should be capable of receiving input from the human user via an intuitive method as well as indicating its status to the user effectively. The HRI was developed using a combination of hardware integration and software development. The software and the control framework were developed in a way that is applicable to other industrial robots in the future. The developed gesture command system is demonstrated on a heavy-duty industrial robot.

    Hand-Gesture Based Programming of Industrial Robot Manipulators

    Nowadays, industrial robot manipulators and manufacturing processes are associated as never before. Robot manipulators execute repetitive tasks with increased accuracy and speed, features necessary for industries that manufacture products in large quantities while reducing production time. Although robot manipulators play a significant role in enhancing productivity within industries, their programming process is an important drawback. Traditional programming methodologies require robot programming experts and are time consuming. This thesis work aims to develop an application for programming industrial robot manipulators without traditional programming methodologies, exploiting the intuitiveness of human hand gestures. The development of input devices for intuitive Human-Machine Interaction makes it possible to capture such gestures. Hence, robot manipulator programming experts can be replaced by task experts, and the integration of intuitive means of interaction can also reduce programming time. The components used to capture the operators’ hand gestures are a data glove and a precise hand-tracking device. The robot manipulator imitates the motion that the human operator performs with the hand, in terms of position. Inverse kinematics are applied so that robot manipulators can be programmed independently of their structure and manufacturer, and the possibility of optimizing the programmed robot paths is investigated. Finally, a Human-Machine Interface contributes to the programming process by presenting important information about its progress and the status of the integrated components.
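The role inverse kinematics plays in decoupling the programmed motion from a specific robot's structure can be illustrated with a standard 2-link planar arm (a textbook example, not the thesis code; the link lengths and target are hypothetical): the hand position is converted to joint angles, and only this conversion depends on the robot's geometry.

```python
import math

def ik_2link(x, y, l1=0.4, l2=0.3):
    """Analytic inverse kinematics for a 2-link planar arm (elbow-down):
    return (shoulder, elbow) angles whose tip reaches (x, y)."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)                               # elbow angle
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def fk_2link(q1, q2, l1=0.4, l2=0.3):
    """Forward kinematics, used here to verify the IK solution."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

# A tracked hand position (hypothetical, in metres) becomes joint targets.
q1, q2 = ik_2link(0.5, 0.2)
x, y = fk_2link(q1, q2)
```

A real manipulator has six or more joints and typically uses a numerical IK solver, but the division of labour is the same: the gesture pipeline produces Cartesian targets, and the IK layer absorbs the robot-specific kinematics.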

    The design and evaluation of an ergonomic contactless gesture control system for industrial robots

    In industrial human-robot collaboration, variability commonly exists in the operation environment and the components, which induces uncertainty and errors that require frequent manual intervention for rectification. Conventional teach pendants can be physically demanding to use and require user training prior to operation. Thus, a more effective control interface is required. In this paper, the design and evaluation of a contactless gesture control system using Leap Motion is described. The design process involves the use of the RULA human factors analysis tool. Separately, an exploratory usability test was conducted to compare three usability aspects between the developed gesture control system and an off-the-shelf conventional touchscreen teach pendant. This paper focuses on the user-centred design methodology of the gesture control system. The novelties of this research are the use of human factors analysis tools in the human-centred development process, as well as a gesture control design that enables users to control an industrial robot’s motion by its joints and its tool centre point position. The system has potential for use as an input device for industrial robot control in human-robot collaboration settings. The developed gesture control system targets applications in system recovery and error correction in flexible manufacturing environments shared between humans and robots. The system allows operators to control an industrial robot without the requirement of significant training.
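The kind of mapping such a contactless interface needs, from a tracked palm displacement to a tool-centre-point jog command, can be sketched as follows. This is entirely hypothetical: the paper's actual gesture set, gains, and device API are not reproduced here, and the dead zone is one common way (among several) to reject hand tremor.

```python
def palm_to_jog(palm_mm, deadzone_mm=20.0, gain=0.002):
    """Map a palm displacement from a home pose (mm, per axis) to a TCP
    velocity command (m/s). Displacements inside the dead zone are
    ignored; beyond it, velocity grows linearly from zero."""
    cmd = []
    for d in palm_mm:
        if abs(d) < deadzone_mm:
            cmd.append(0.0)                       # tremor rejection
        else:
            sign = 1.0 if d > 0 else -1.0
            cmd.append(gain * (d - sign * deadzone_mm))
    return cmd

# Hypothetical reading: hand 50 mm right, 10 mm up, 100 mm toward the user.
vx, vy, vz = palm_to_jog([50.0, -10.0, -100.0])
```

Subtracting the dead-zone width (rather than merely gating on it) keeps the commanded velocity continuous at the dead-zone boundary, which avoids a velocity jump as the hand crosses it.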

    Automatic Differentiation of Rigid Body Dynamics for Optimal Control and Estimation

    Many algorithms for control, optimization, and estimation in robotics depend on derivatives of the underlying system dynamics, e.g. to compute linearizations, sensitivities, or gradient directions. However, we show that when dealing with rigid body dynamics, these derivatives are difficult to derive analytically and to implement efficiently. To overcome this issue, we extend the modelling tool `RobCoGen' to be compatible with Automatic Differentiation. Additionally, we propose how to automatically obtain the derivatives and generate highly efficient source code. We highlight the flexibility and performance of the approach in two application examples. First, we show a trajectory optimization example for the quadrupedal robot HyQ, which employs auto-differentiation on the dynamics including a contact model. Second, we present a hardware experiment in which a 6-DoF robotic arm avoids a randomly moving obstacle in a go-to task by fast, dynamic replanning.
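The core mechanism of Automatic Differentiation can be demonstrated on a far smaller system than RobCoGen's full rigid-body dynamics: a 1-DOF pendulum differentiated with forward-mode dual numbers. This Python sketch is illustrative only (the paper's toolchain generates efficient C++ source, and robotics AD frameworks use more sophisticated machinery); the point is that writing the dynamics once yields exact derivatives without manual analytic work.

```python
import math

class Dual:
    """x + eps*dx with eps**2 == 0: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule happens automatically in the dot component.
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def dsin(x):
    # sin lifted to dual numbers: d/dx sin(x) = cos(x).
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def pendulum_accel(theta, g=9.81, length=1.0):
    """qdd = -(g/length) * sin(theta), written once and never
    differentiated by hand."""
    return -(g / length) * dsin(theta)

# Seeding dot = 1 yields d(qdd)/d(theta) alongside the value itself.
out = pendulum_accel(Dual(0.3, 1.0))
analytic = -9.81 * math.cos(0.3)     # hand-derived derivative for comparison
```

The same principle scales to full rigid-body dynamics: every primitive operation in the dynamics code propagates derivative information, so linearizations for trajectory optimization come out exact to machine precision.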