
    Piano-Playing Robotic Arm

    This project explores the intersection of robotics and music. The study focuses on the expressive aspects of human performance and how to translate and represent them in a robotic system. By analyzing live performances by multiple performers and interviewing professional musicians, the team built an understanding of human gesture in performance. This understanding was demonstrated through a robotic piano-playing system comprising an industrial arm and a custom-built hand. To interpret and perform music, the team applied machine learning techniques, including training a recurrent neural network (RNN) to analyze audio signals and reproduce the musical input.
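    The recurrent analysis described above can be sketched as a single hidden-state update applied frame by frame. The sketch below is a minimal, illustrative Elman-style RNN step in pure Python; the weight shapes, the use of tanh, and the per-frame "audio features" are assumptions for illustration, not details taken from the project.

```python
import math

def rnn_step(x, h, Wxh, Whh, bh):
    """One Elman-RNN update: h' = tanh(Wxh @ x + Whh @ h + bh).
    x: input features for one audio frame; h: previous hidden state."""
    return [math.tanh(sum(Wxh[i][j] * x[j] for j in range(len(x)))
                      + sum(Whh[i][k] * h[k] for k in range(len(h)))
                      + bh[i])
            for i in range(len(h))]

# Run a short sequence of hypothetical per-frame features through the
# recurrence, carrying the hidden state forward between frames.
Wxh = [[0.5, -0.2], [0.1, 0.3]]   # 2 hidden units, 2 input features
Whh = [[0.2, 0.0], [0.0, 0.2]]
bh = [0.0, 0.1]
h = [0.0, 0.0]
for frame in [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]:
    h = rnn_step(frame, h, Wxh, Whh, bh)
```

    In a trained system the hidden state would feed an output layer predicting notes or control signals; here the weights are fixed, so the loop only shows how state is propagated across frames.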

    Manipulation Planning for Forceful Human-Robot-Collaboration

    This thesis addresses the problem of manipulation planning for forceful human-robot collaboration. In particular, the focus is on the scenario where a human applies a sequence of changing external forces through forceful operations (e.g. cutting a circular piece off a board) on an object that is grasped by a cooperative robot. We present a range of planners that 1) enable the robot to stabilize and position the object under the human-applied forces by exploiting supports from both object-robot and object-environment contacts; 2) improve task efficiency by minimizing the configuration and grasp changes required by the changing external forces; and 3) improve human comfort during the forceful interaction by optimizing defined comfort criteria. We first focus on the case of using only robotic grasps, where the robot must grasp and regrasp the object multiple times to keep it stable under the changing external forces. We introduce a planner that generates an efficient manipulation plan by intelligently deciding when the robot should change its grasp on the object as the human applies forces, and by choosing subsequent grasps that minimize the number of regrasps required in the long term. The planner searches for such an efficient plan by first finding a minimal sequence of grasp configurations that keep the object stable under the changing forces, and then generating connecting trajectories to switch between the planned configurations, i.e. planning regrasps. We perform the search for such a grasp (configuration) sequence by sampling stable configurations for the external forces, building an operation graph from these stable configurations, and then searching the operation graph to minimize the number of regrasps.
    We solve the problem of bimanual regrasp planning under the assumption of no support surface, enabling the robot to regrasp an object in the air by finding intermediate configurations at which both bimanual and unimanual grasps can hold the object stable under gravity. We present a variety of experiments showing the performance of our planner, particularly in minimizing the number of regrasps for forceful manipulation tasks and in planning stable regrasps. We then explore the problem of using both object-environment and object-robot contacts, which enlarges the set of stable configurations and thus boosts the robot's capability to stabilize the object under external forces. We present a planner that intelligently exploits the stabilization capabilities of both the environment and the robot within a unified planning framework to search for a minimal number of stable contact configurations. A major computational bottleneck in this planner is the static stability analysis of a large number of candidate configurations. We introduce a containment relation between contact configurations to efficiently prune the stability-checking process. We present a set of real-robot and simulated experiments illustrating the effectiveness of the proposed framework, together with a detailed analysis of the proposed containment relation, particularly its effect on planning efficiency.
    We present a planning algorithm to further improve the cooperative robot's behaviour with respect to human comfort during forceful human-robot interaction. In particular, we are interested in empowering the robot to grasp and position the object not only to ensure the object's stability against the human-applied forces, but also to improve the human's experience and comfort during the interaction. We model human comfort as the muscular activation level required to apply a desired external force, together with the human's spatial perception, i.e. the so-called peripersonal-space comfort, during the interaction. We propose to maximize both comfort metrics to optimize the robot and object configuration so that the human can perform a forceful operation comfortably. We present a set of human-robot drilling and cutting experiments which verify the effectiveness of the proposed metrics in improving overall comfort and the HRI experience without compromising force stability. In addition to the above planning work, we present a conic formulation that approximates the distribution of a forceful operation in the wrench space with a polyhedral cone, enabling the planner to efficiently assess the stability of a system configuration even in the presence of the force uncertainties inherent in human-applied forceful operations. We also develop a graphical user interface with which human users can easily specify forceful tasks, i.e. sequences of forceful operations on selected objects, in an interactive manner. The user interface ties together human task specification, on-demand manipulation planning and robot-assisted fabrication. We present a set of human-robot experiments using the interface that demonstrate the feasibility of our system. In short, this thesis presents a series of planners for object manipulation under changing external forces. We show that contacts between the object, the robot and the environment enable the robot to manipulate an object under external forces, and that making the most of these contacts can eliminate redundant changes during manipulation, e.g. regrasps, thus improving task efficiency and smoothness. We also show the necessity of optimizing human comfort when planning forceful human-robot manipulation tasks. We believe the work presented here can be a key component of a human-robot collaboration framework.
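    The grasp-sequence search described in the abstract — sample stable grasps per force stage, then minimize regrasps across stages — can be sketched as a dynamic program over a layered graph, where each layer is one external-force stage and switching grasps between layers costs one regrasp. This is an illustrative simplification of the thesis's operation-graph search; the grasp ids and stability sets below are stand-ins for sampled configurations.

```python
def min_regrasps(stable):
    """stable[t] is the set of grasp ids that keep the object stable
    under the external force applied at stage t. Returns the minimum
    number of regrasps needed to survive all stages, or None if some
    stage has no stable grasp."""
    if not stable or not all(stable):
        return None
    dp = {g: 0 for g in stable[0]}  # regrasps so far, per current grasp
    for stage in stable[1:]:
        cheapest = min(dp.values())  # best cost if we allow one regrasp
        # keep the same grasp for free if it stays stable, else regrasp (+1)
        dp = {g: min(dp.get(g, float("inf")), cheapest + 1) for g in stage}
    return min(dp.values())

# Three force stages: grasp 'B' survives stages 0-1, then one regrasp to 'C'.
plan_cost = min_regrasps([{"A", "B"}, {"B", "C"}, {"C"}])
# plan_cost == 1
```

    The actual planner additionally samples the stable sets from geometry and force analysis and plans the connecting regrasp trajectories; the DP above only captures the minimization over the operation graph.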

    Design and test of a Displacement Workspace Mapping Station for articular joints

    In 2003, 267,000 Americans received total knee replacements, which prohibit high-impact athletics for the remainder of the patient's life. A better understanding of the movement and constraint of the knee is necessary to provide more realistic motion in joint prosthetics, or possibly to eliminate the need for them. Fixed Orientation Displacement Workspaces (FODWs) can be applied to study the relationship between the passive constraint system and the six degree-of-freedom (DOF) movement of the human knee. A FODW is the volume of possible positions the tibia/fibula can occupy relative to a fixed femur without changing the relative orientation of the bones. Theoretical models of the FODW provided a promising snapshot of knee kinematics. A Displacement Workspace Test Station (DWTS) for mapping FODWs was built. An in vitro articular joint completes the loop between a strain-gauge-based six-axis load cell and a 6-DOF manipulandum mounted to a fixed reference frame. The joint is manipulated by hand while a C++ program, Armtalk, runs applications that sample and filter both the manipulandum position/orientation and the load cell output signals at over 500 Hz. Armtalk automatically stores raw data points at 2 Hz or upon a user foot-pedal signal. Forces and moments acting at the joint, along with its angular orientation, are added to each raw data point by spreadsheet algorithms. The algorithms select the points that represent a particular FODW according to a user-specified range of acceptable joint forces, moments and bone orientations. The Cartesian coordinates of the individual FODW data points are input into a NURBS-based CAD program for visualization. The DWTS has a positional accuracy of 0.2286 mm, a capacity of 200 N, and a compliance of 0.075 mm/kN. A 2-DOF test verified the Armtalk application and established the DWTS angular accuracy as 0.008°. To calibrate the load cell, moment and force scaling factors of 0.00922 in·lb/unit and 0.00554 lb/unit were calculated, respectively.
    The spreadsheet algorithms successfully reduced the data in a 6-DOF test. The CAD program modeled workspaces from the 2- and 6-DOF tests with 1.3% volumetric accuracy. The apparatus is ready to map FODWs of articular joints.
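    The point-selection step described above — keeping only samples whose joint loads fall inside a user-specified window — can be sketched as a simple filter over raw data points. The sample layout and thresholds below are illustrative assumptions, not values or code from the DWTS spreadsheet.

```python
def select_fodw_points(samples, f_max, m_max):
    """Keep the Cartesian positions of samples whose resultant force
    and moment magnitudes at the joint fall within the user-specified
    limits, approximating a passively constrained (unloaded) pose.
    Each sample is (x, y, z, fx, fy, fz, mx, my, mz)."""
    kept = []
    for x, y, z, fx, fy, fz, mx, my, mz in samples:
        force = (fx**2 + fy**2 + fz**2) ** 0.5   # resultant force, N
        moment = (mx**2 + my**2 + mz**2) ** 0.5  # resultant moment, N*m
        if force <= f_max and moment <= m_max:
            kept.append((x, y, z))
    return kept

samples = [
    (1.0, 2.0, 3.0, 0.5, 0.0, 0.0, 0.01, 0.0, 0.0),  # lightly loaded: keep
    (1.1, 2.0, 3.0, 9.0, 0.0, 0.0, 0.01, 0.0, 0.0),  # force too high: drop
]
boundary = select_fodw_points(samples, f_max=1.0, m_max=0.1)
# boundary == [(1.0, 2.0, 3.0)]
```

    The surviving (x, y, z) coordinates are what would be passed to the CAD program to visualize the workspace volume; the real pipeline also filters on bone orientation, which is omitted here.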

    Intuitive Instruction of Industrial Robots: A Knowledge-Based Approach

    With more advanced manufacturing technologies, small and medium-sized enterprises can compete with low-wage labor by providing customized and high-quality products. For small production series, robotic systems can provide a cost-effective solution. However, for robots to be able to perform on par with human workers in manufacturing industries, they must become flexible and autonomous in their task execution and swift and easy to instruct. This will enable small businesses with short production series or highly customized products to use robot coworkers without consulting expert robot programmers. The objective of this thesis is to explore programming solutions that can reduce the programming effort of sensor-controlled robot tasks. The robot motions are expressed using constraints, and multiple simple constrained motions can be combined into a robot skill. The skill can be stored in a knowledge base together with a semantic description, which enables reuse and reasoning. The main contributions of the thesis are 1) development of ontologies for knowledge about robot devices and skills, 2) a user interface that provides simple programming of dual-arm skills for non-experts and experts, 3) a programming interface for task descriptions in unstructured natural language in a user-specified vocabulary, and 4) an implementation where low-level code is generated from the high-level descriptions. The resulting system greatly reduces the number of parameters exposed to the user, is simple to use for non-experts and reduces the programming time for experts by 80%. The representation is described on a semantic level, which means that the same skill can be used on different robot platforms. The research is presented in seven papers, the first describing the knowledge representation and the second the knowledge-based architecture that enables skill sharing between robots.
    The third paper presents the translation from high-level instructions to low-level code for force-controlled motions. The following two papers evaluate the simplified programming prototype for non-expert and expert users. The last two present how program statements are extracted from unstructured natural language descriptions.
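    The extraction step mentioned last — turning unstructured natural-language descriptions into program statements via a user-specified vocabulary — can be sketched as a vocabulary lookup over sentences. The vocabulary, skill names, and matching strategy below are hypothetical placeholders; the actual extraction described in the papers is considerably more sophisticated.

```python
def extract_statements(description, vocabulary):
    """Map sentences in an unstructured task description to skill
    invocations using a user-specified verb -> skill vocabulary."""
    statements = []
    for sentence in description.lower().split("."):
        for verb, skill in vocabulary.items():
            if verb in sentence:
                statements.append(skill)
    return statements

# Hypothetical user vocabulary mapping verbs to named skills.
vocab = {"pick": "PickSkill", "insert": "InsertSkill", "tighten": "TightenSkill"}
program = extract_statements(
    "Pick the gearbox part. Insert it into the fixture. Tighten the screws.",
    vocab,
)
# program == ["PickSkill", "InsertSkill", "TightenSkill"]
```

    In the thesis's system, each extracted statement would then be grounded against the knowledge base and compiled down to low-level, sensor-controlled motion code; the sketch stops at the statement level.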