
    Working and Learning with Knowledge in the Lobes of a Humanoid's Mind

    Humanoid-class robots must have sufficient dexterity to assist people and work in an environment designed for human comfort and productivity. This dexterity, in particular the ability to use tools, requires a cognitive understanding of self and the world that exceeds contemporary robotics. Our hypothesis is that the sense-think-act paradigm that has proven so successful for autonomous robots is missing one or more key elements that humanoids will need to meet their full potential as autonomous human assistants. This key ingredient is knowledge. The presented work includes experiments conducted on the Robonaut system, a joint NASA and Defense Advanced Research Projects Agency (DARPA) project, and includes collaborative efforts with a DARPA Mobile Autonomous Robot Software technical program team of researchers at NASA, MIT, USC, NRL, UMass and Vanderbilt. The paper reports on results in the areas of human-robot interaction (human tracking, gesture recognition, natural language, supervised control), perception (stereo vision, object identification, object pose estimation), autonomous grasping (tactile sensing, grasp reflex, grasp stability) and learning (human instruction, task-level sequences, and sensorimotor association).

    Human-Robot Control Strategies for the NASA/DARPA Robonaut

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human-rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.

    Advancing Robotic Control for Space Exploration Using Robonaut 2

    Robonaut 2, or R2, arrived on the International Space Station (ISS) in February 2011 and is currently being tested in preparation for its role initially as an Intra-Vehicular Activity (IVA) tool and eventually as a robot that performs Extra-Vehicular Activities (EVA). Robonaut 2 is a state-of-the-art dexterous anthropomorphic robotic torso designed for assisting astronauts. R2 features increased force sensing, greater range of motion, higher bandwidth, and improved dexterity over its predecessor. Robonaut 2 is unique in its ability to safely allow humans in its workspace and to perform significant tasks in a workspace designed for humans. The current operational paradigm involves either the crew or the ground control team running semi-autonomous scripts on the robot as both the astronaut and the ground team monitor R2 and the data it produces. While this is appropriate for the check-out phase of operations, the future plans for R2 will stress the current operational framework. The approach described here outlines a suite of operational modes to be developed for Robonaut 2. These operational modes include teleoperation, shared control, directed autonomy, and supervised autonomy, and they cover a spectrum of human involvement in controlling R2.
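Purely as an illustration of how such a spectrum of modes might be encoded (the ordering and the human-involvement mapping below are assumptions for exposition, not R2 flight software):

```python
from enum import Enum

class OperationalMode(Enum):
    """The four modes named in the abstract, ordered by increasing autonomy."""
    TELEOPERATION = 0        # operator drives every motion directly
    SHARED_CONTROL = 1       # operator input blended with autonomous guidance
    DIRECTED_AUTONOMY = 2    # operator issues goals; robot plans the motion
    SUPERVISED_AUTONOMY = 3  # robot runs scripts; operator monitors

def human_involvement(mode: OperationalMode) -> float:
    """Illustrative human-involvement fraction (1.0 = full manual control)."""
    return 1.0 - mode.value / 3.0
```

The point of ordering the modes on one axis is that an operations team can move along it as trust in the robot grows, rather than treating each mode as an unrelated control system.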

    Modeling and Classifying Six-Dimensional Trajectories for Teleoperation Under a Time Delay

    Within the context of teleoperating the JSC Robonaut humanoid robot under 2-10 second time delays, this paper explores the technical problem of modeling and classifying human motions represented as six-dimensional (position and orientation) trajectories. A dual-path research agenda is reviewed, which explored both deterministic approaches and stochastic approaches using Hidden Markov Models. Finally, recent results are shown from a new model which represents the fusion of these two research paths. Questions are also raised about the possibility of automatically generating autonomous actions by reusing the same predictive models of human behavior as the source of autonomous control. This approach changes the role of teleoperation from being a stand-in for autonomy into the first data-collection step for developing generative models capable of autonomous control of the robot.
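The abstract does not spell out either approach; as a minimal sketch of the deterministic path, six-dimensional trajectories could be classified against labeled templates with dynamic time warping (the distance metric, template set, and nearest-template rule here are illustrative assumptions, not the paper's model):

```python
import math

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two trajectories of 6-D samples."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])  # Euclidean over (x, y, z, roll, pitch, yaw)
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(query, templates):
    """Return the label of the template trajectory nearest to `query`."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Time warping matters here because operator motions under delay vary in speed from trial to trial; DTW compares shapes rather than timestamps. The stochastic path in the paper would replace the template distance with a per-class likelihood from a Hidden Markov Model.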

    Anthropomorphic Robot Design and User Interaction Associated with Motion

    Though in its original concept a robot was conceived to have some human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble human form in some way have continued to be introduced; they are called anthropomorphic robots. The fact that the user interface to all robots is now highly mediated means that the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Interaction with them is often mainly cognitive, because they are not necessarily kinematically intricate enough for complex physical interaction; their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of the user interface and the anthropomorphic form of the robot. But anthropomorphic kinematics and dynamics imply that the impact of the design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction.
    In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to 1) improve the user's direct manual control over robot limbs and body positions, 2) improve the user's ability to detect anomalous robot behavior that could signal malfunction, and 3) enable users to better infer the intent of robot movement. These three benefits are inherent implications of the anthropomorphic form, but they need to be recognized by designers as part of anthropomorphic design and explicitly enhanced to maximize their beneficial impact. Examples of such enhancements are provided in this report. If implemented, these benefits can help reduce the risk of Inadequate Design of Human and Automation Robotic Integration (HARI) associated with the HARI-01 gap by providing efficient and dexterous operator control over robots and by improving operator ability to detect malfunctions and understand the intention of robot movement.

    Human factors and telerobotics: tools and approaches for designing remote robotic workstation displays

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, February 2002. Includes bibliographical references (v. 2, leaves 297-300).
    A methodology is created for designing and testing an intuitive synthesized telerobotic workstation display configuration for controlling a high-degree-of-freedom dexterous manipulator for use on the International Space Station. With the construction and maintenance of the International Space Station, the number of Extravehicular Activity (EVA) hours is expected to increase by a factor of four over the current Space Shuttle missions, resulting in higher demands on the EVA crewmembers and EVA crew systems. One approach to utilizing EVA resources more effectively while increasing crew safety and efficiency is to perform routine and high-risk EVA tasks telerobotically. NASA's Johnson Space Center is developing a state-of-the-art dexterous robotic manipulator: an anthropomorphic telerobot called Robonaut is being constructed that is capable of performing all of the tasks required of an EVA-suited crewmember. Robonaut is comparable in size to a suited crewmember and consists of two 7-DOF arms, two 12-DOF hands, a 6+ DOF "stinger tail", and a 2+ DOF stereo camera platform. Current robotic workstations are insufficient for controlling highly dexterous manipulators, which require full-immersion operator telepresence. The Robonaut workstation must be designed to allow an operator to intuitively control numerous degrees of freedom simultaneously, in varying levels of supervisory control and for all types of EVA tasks. This effort critically reviewed previous research into areas including telerobotic interfaces, human-machine interactions, microgravity physiology, supervisory control, force feedback, virtual reality, and manual control. A methodology is developed for designing and evaluating integrated interfaces for highly dexterous and multi-functional telerobots.
    In addition, a classification of telerobotic tasks is proposed. Experiments were conducted with subjects performing EVA tasks with Space Station hardware using Robonaut and a Robonaut simulation (also under development). Results indicate that Robonaut simulation subject performance matches Robonaut performance. The simulation can be used for training operators for full-immersion teleoperation and for developing and evaluating future telerobotic workstations. A baseline amount of Situation Awareness time was determined and reduced using the display design iteration.
    By Jennifer Lisa Rochlis, Ph.D.

    Graphite immobilisation in glass composite materials

    Irradiated graphite is a problematic nuclear waste stream and currently raises significant concern worldwide in identifying its long-term disposal route. This thesis describes the use of glass materials for the immobilisation of irradiated graphite, prepared by microwave, conventional and spark plasma sintering methods. Several potential glass compositions, namely iron phosphate, aluminoborosilicate, calcium aluminosilicate, alkali borosilicate and obsidian, were considered for the immobilisation of various loadings of graphite simulating irradiated graphite. The properties of the samples produced using different processing methods are compared selectively. An investigation of microwave processing using an iron phosphate glass composition revealed that full reaction of the raw materials and formation of a glass melt occur, with consequent removal of porosity, at 8 minutes of microwave processing. When graphite is present, iron phosphate crystalline phases are formed, with much higher levels of residual porosity (up to 43 %) than in the samples prepared using conventional sintering under argon. It is found that graphite reacts with the microwave field when in powder form, but this reaction is minimised when the graphite is incorporated into a pellet, and that the graphite also impedes sintering of the glass. Mössbauer spectroscopy indicates that reduction of iron occurs with concomitant graphite oxidation. The production of graphite-glass samples from various powdered glass compositions by the conventional sintering method still resulted in high porosity, averaging 6-17 % for graphite loadings of 20-25 wt%. Due to the use of pre-made glasses and controlled sintering parameters, the loss of graphite from the total mass is reduced compared to the microwaved samples; the average mass loss is < 0.8 %.
    The complication of iron oxidation and reduction is present in all the iron-containing base glasses considered, and this increases the total porosity of the graphite-glass samples. It is concluded that the presence of iron in the raw materials or base glasses used as an encapsulation medium for the immobilisation of irradiated graphite waste is not advisable. The production of glass and graphite-glass samples based on a calcium aluminosilicate composition by the spark plasma sintering method is found to be highly suitable for the immobilisation of irradiated graphite wastes. The advantages of the method include a short processing time (< 40 minutes), improved sintering transport mechanisms, limited graphite oxidation, low porosity (1-4 %) and acceptable tensile strength (2-7 MPa). The most promising samples prepared using the spark plasma sintering method were loaded with 30-50 wt% graphite.

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Robot teleoperation systems face a common set of challenges including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands due to the difficulty in decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks with two subjects implanted with intracortical brain-computer interfaces controlling a seven degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulation with novel objects in densely cluttered environments.
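A common arbitration scheme in shared control is a linear blend of the user's command and the autonomous command; the paper's actual policy is richer, but a minimal sketch (with the blending parameter `alpha` standing in for the "adjustable level of assistance") might look like:

```python
def arbitrate(u_user, u_auto, alpha):
    """Linearly blend user and autonomous velocity commands, element-wise.

    alpha = 0.0 gives pure user control; alpha = 1.0 gives full assistance.
    Illustrative only: the paper's arbitration function is not specified here.
    """
    alpha = min(max(alpha, 0.0), 1.0)  # clamp to the valid assistance range
    return [(1.0 - alpha) * u + alpha * a for u, a in zip(u_user, u_auto)]
```

In a BCI setting the appeal of this form is that `alpha` can be raised when the decoded command is noisy or the task is hard, and lowered to preserve the operator's feeling of control.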

    Teaching a robot manipulation skills through demonstration

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2004. Includes bibliographical references (p. 127-129).
    An automated software system has been developed to allow robots to learn a generalized motor skill from demonstrations given by a human operator. Data is captured using a teleoperation suit as a task is performed repeatedly on Leonardo, the Robotic Life group's anthropomorphic robot, in different parts of his workspace. Stereo vision and tactile feedback data are also captured. Joint and end-effector motions are measured through time, and an improved Mean Squared Velocity [MSV] analysis is performed to segment motions into possible goal-directed streams. Further combinatorial selection of subsets of markers allows final episodic boundary selection and time alignment of tasks. The task trials are then analyzed spatially using radial basis functions [RBFs] to interpolate demonstrations to span his workspace, using the object position as the motion blending parameter. An analysis of the motions in the object coordinate space [with the origin defined at the object] and absolute world-coordinate space [with the origin defined at the base of the robot], and of motion variances in both coordinate frames, leads to a measure [referred to here as objectivity] of how much any part of an action is absolutely oriented and how much is object-based. A secondary RBF solution, using end-effector paths in the object coordinate frame, provides precise end-effector positioning relative to the object. The objectivity measure is used to blend between these two solutions, using the initial RBF solution to preserve quality of motion and the secondary end-effector objective RBF solution to increase the robot's capability to engage objects accurately and robustly.
    By Jeff Lieberman, S.M.
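The RBF blending over object position can be sketched as plain Gaussian-kernel interpolation: fit one weight per demonstration so the interpolant reproduces each demonstrated pose exactly, then blend for a new object position. The kernel width `sigma` and array shapes are assumptions for illustration, not the thesis's actual pipeline:

```python
import numpy as np

def rbf_fit(centers, values, sigma=1.0):
    """Fit Gaussian RBF weights so the interpolant passes through each demo.

    centers: (n, d) demonstrated object positions; values: (n, k) poses.
    """
    C = np.asarray(centers, float)
    dists = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2)
    Phi = np.exp(-np.square(dists) / (2.0 * sigma**2))  # (n, n) kernel matrix
    return np.linalg.solve(Phi, np.asarray(values, float))  # (n, k) weights

def rbf_eval(x, centers, weights, sigma=1.0):
    """Blend the demonstrated values for a new object position x."""
    C = np.asarray(centers, float)
    phi = np.exp(-np.square(np.linalg.norm(C - np.asarray(x, float), axis=1))
                 / (2.0 * sigma**2))
    return phi @ weights
```

Because the Gaussian kernel matrix over distinct centers is positive definite, the solve always succeeds and the interpolant is exact at every demonstration, which matches the thesis's requirement that each recorded trial be reproduced at its own object position.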