4,101 research outputs found
NASA Center for Intelligent Robotic Systems for Space Exploration
NASA's program for the civilian exploration of space is a challenge to scientists and engineers to help maintain and further develop the United States' position of leadership in a focused sphere of space activity. Such an ambitious plan requires the contribution and further development of many scientific and technological fields. One research area essential for the success of these space exploration programs is Intelligent Robotic Systems: a class of autonomous and semi-autonomous machines that can perform human-like functions with or without human interaction. They are fundamental for activities too hazardous for humans or too distant or complex for remote telemanipulation. To meet this challenge, Rensselaer Polytechnic Institute (RPI) has established an Engineering Research Center for Intelligent Robotic Systems for Space Exploration (CIRSSE). The Center was created with a five-year, $5.5 million grant from NASA, awarded on a proposal submitted by a team from the Robotics and Automation Laboratories. The Robotics and Automation Laboratories of RPI are the result of the 1987 merger of the Robotics and Automation Laboratory of the Department of Electrical, Computer, and Systems Engineering (ECSE) and the Research Laboratory for Kinematics and Robotic Mechanisms of the Department of Mechanical Engineering, Aeronautical Engineering, and Mechanics (ME, AE, & M). This report is an examination of the activities centered at CIRSSE.
Human-Machine Interface for Remote Training of Robot Tasks
Regardless of their industrial or research application, the streamlining of
robot operations is limited by the need for experienced users to be near the actual
hardware. Be it massive open online robotics courses, crowd-sourcing of robot
task training, or remote research on massive robot farms for machine learning,
the need to create an apt remote Human-Machine Interface is quite prevalent.
The paper at hand proposes a novel solution to the programming/training of
remote robots employing an intuitive and accurate user-interface which offers
all the benefits of working with real robots without imposing delays and
inefficiency. The system includes: a vision-based 3D hand detection and gesture
recognition subsystem, a simulated digital twin of a robot as visual feedback,
and the "remote" robot learning/executing trajectories using dynamic motion
primitives. Our results indicate that the system is a promising solution to the
problem of remote training of robot tasks.
Comment: Accepted at the IEEE International Conference on Imaging Systems and
Techniques - IST201
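The abstract states that the remote robot learns and executes trajectories using dynamic motion primitives. As a rough illustration of how a single-DOF discrete DMP encodes a demonstrated trajectory and reproduces it toward a goal, a generic sketch (this is not the paper's implementation; the class name, gains, and basis count are all illustrative choices):

```python
import numpy as np

class DMP1D:
    """Minimal discrete dynamic movement primitive for a single DOF.
    Gains follow the common alpha = 4 * beta critically damped choice."""

    def __init__(self, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
        self.n_basis, self.alpha, self.beta, self.alpha_s = n_basis, alpha, beta, alpha_s
        # Gaussian basis centers spread along the decaying canonical phase s.
        self.c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
        self.h = n_basis / self.c          # widths widen as the phase decays
        self.w = np.zeros(n_basis)

    def fit(self, x, dt):
        """Learn forcing-term weights that reproduce a demonstration x."""
        self.x0, self.g = x[0], x[-1]
        v = np.gradient(x, dt)
        a = np.gradient(v, dt)
        s = np.exp(-self.alpha_s * dt * np.arange(len(x)))  # canonical phase
        # Forcing term the spring-damper must add to track the demonstration.
        f_target = a - self.alpha * (self.beta * (self.g - x) - v)
        psi = np.exp(-self.h * (s[:, None] - self.c) ** 2)  # (T, n_basis)
        scale = s * (self.g - self.x0)
        # Locally weighted regression, one weight per basis function.
        self.w = (psi.T @ (scale * f_target)) / (psi.T @ (scale ** 2) + 1e-10)

    def rollout(self, dt, steps, g=None):
        """Integrate the DMP forward, optionally toward a new goal g."""
        g = self.g if g is None else g
        x, v, s, traj = self.x0, 0.0, 1.0, []
        for _ in range(steps):
            psi = np.exp(-self.h * (s - self.c) ** 2)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * s * (g - self.x0)
            v += (self.alpha * (self.beta * (g - x) - v) + f) * dt
            x += v * dt
            s -= self.alpha_s * s * dt
            traj.append(x)
        return np.array(traj)
```

Because the forcing term decays with the canonical phase, the rollout always converges to the goal, and passing a new `g` generalizes the learned shape to new targets.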
Learning to Navigate Cloth using Haptics
We present a controller that allows an arm-like manipulator to navigate
deformable cloth garments in simulation through the use of haptic information.
The main challenge of such a controller is to avoid getting tangled in, tearing
or punching through the deforming cloth. Our controller aggregates force
information from a number of haptic-sensing spheres all along the manipulator
for guidance. Based on haptic forces, each individual sphere updates its target
location, and the conflicts that arise between this set of desired positions
are resolved by solving an inverse kinematics problem with constraints.
Reinforcement learning is used to train the controller for a single
haptic-sensing sphere, where a training run is terminated (and thus penalized)
when large forces are detected due to contact between the sphere and a
simplified model of the cloth. In simulation, we demonstrate successful
navigation of a robotic arm through a variety of garments, including an
isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two
baseline controllers: one without haptics and another that was trained on
large forces between the sphere and cloth, but without early termination.
Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A.
Related publications: http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
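The controller described above arbitrates among conflicting per-sphere targets through constrained inverse kinematics. One common way to realize that arbitration step is to stack every sphere's positional Jacobian and solve a single damped least-squares problem; a minimal sketch under that assumption (the paper's exact formulation may differ, and all function names and gains here are illustrative):

```python
import numpy as np

def update_targets(targets, forces, gain=0.01):
    """Each haptic-sensing sphere nudges its target location along the
    contact force it measures, retreating from high cloth resistance."""
    return targets + gain * forces

def resolve_joint_update(jacobians, p_current, p_desired, damping=0.1):
    """Arbitrate among conflicting sphere targets: stack every sphere's
    positional Jacobian and solve one damped least-squares problem for a
    single joint update that best serves all targets at once."""
    J = np.vstack(jacobians)                 # (3 * n_spheres, n_joints)
    e = (p_desired - p_current).reshape(-1)  # stacked position errors
    lhs = J @ J.T + damping ** 2 * np.eye(J.shape[0])
    return J.T @ np.linalg.solve(lhs, e)
```

The damping term keeps the joint update bounded even when sphere targets directly contradict each other, which is exactly the conflict situation the abstract describes.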
An intelligent, free-flying robot
The ground-based demonstration of the extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) which have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and checkout; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.
Man-machine cooperation in advanced teleoperation
Teleoperation experiments at JPL have shown that advanced features in a telerobotic system are a necessary condition for good results, but that they are not sufficient to assure consistently good performance by the operators. Two or three operators are normally used during training and experiments to maintain the desired performance. An alternative to this multi-operator control station is a man-machine interface embedding computer programs that can perform some of the operator's functions. In this paper we present our first experiments with these concepts, in which we focused on the areas of real-time task monitoring and interactive path planning. In the first case, when performing a known task, the operator has an automatic aid for setting control parameters and camera views. In the second case, an interactive path planner ranks different path alternatives so that the operator can make the correct control decision. The monitoring function has been implemented with a neural network performing real-time task segmentation. The interactive path planner was implemented for redundant manipulators to specify arm configurations across the desired path and satisfy geometric, task, and performance constraints.
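A path ranker of the kind described, scoring alternatives by a weighted combination of geometric, task, and performance costs so the operator can choose among the best candidates, might look like the following sketch (illustrative only; this is not JPL's implementation, and the cost names are assumptions):

```python
def rank_paths(candidates, weights):
    """Order candidate paths best-first by a weighted sum of per-constraint
    costs (e.g. geometric clearance, task error, performance measures).
    Each candidate is a dict carrying a 'costs' mapping of named penalties."""
    def total_cost(path):
        return sum(weights[name] * value for name, value in path["costs"].items())
    return sorted(candidates, key=total_cost)
```

Presenting the sorted list rather than a single answer keeps the human in the loop: the planner advises, but the operator still makes the final control decision, as the abstract emphasizes.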
Trajectory Deformations from Physical Human-Robot Interaction
Robots are finding new applications where physical interaction with a human
is necessary: manufacturing, healthcare, and social tasks. Accordingly, the
field of physical human-robot interaction (pHRI) has leveraged impedance
control approaches, which support compliant interactions between human and
robot. However, a limitation of traditional impedance control is that, despite
provisions for the human to modify the robot's current trajectory, the human
cannot affect the robot's future desired trajectory through pHRI. In this
paper, we present an algorithm for physically interactive trajectory
deformations which, when combined with impedance control, allows the human to
modulate both the actual and desired trajectories of the robot. Unlike related
works, our method explicitly deforms the future desired trajectory based on
forces applied during pHRI, but does not require constant human guidance. We
present our approach and verify that this method is compatible with traditional
impedance control. Next, we use constrained optimization to derive the
deformation shape. Finally, we describe an algorithm for real time
implementation, and perform simulations to test the arbitration parameters.
Experimental results demonstrate reduction in the human's effort and
improvement in the movement quality when compared to pHRI with impedance
control alone.
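The paper derives its deformation shape from constrained optimization; a simplified stand-in that captures the core idea, pushing future desired waypoints along the applied human force with a smooth window that tapers to zero so the original goal is preserved, could look like this (the raised-cosine window, gain, and horizon are illustrative assumptions, not the paper's derived shape):

```python
import numpy as np

def deform_trajectory(waypoints, k, force, horizon=20, gain=0.005):
    """Smoothly deform the desired waypoints after index k in the direction
    of the applied human force, tapering to zero by the end of the window so
    the trajectory still reaches its original goal."""
    traj = waypoints.copy()
    n = min(horizon, len(traj) - k)
    # Raised-cosine window: full effect at k, fading to zero at k + n.
    shape = 0.5 * (1.0 + np.cos(np.pi * np.arange(n) / n))
    traj[k:k + n] += gain * shape[:, None] * force
    return traj
```

Because only the desired trajectory inside the window moves, the human shapes where the robot is going without having to guide it continuously, which is the property the abstract highlights.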
A Framework to Illustrate Kinematic Behavior of Mechanisms by Haptic Feedback
The kinematic properties of mechanisms are well known to researchers and
teachers. The theory based on the study of Jacobian matrices allows us to
explain, for example, singular configurations. However, in many cases, the
physical meaning of such properties is difficult to explain to students. The
aim of this article is to use haptic feedback to convey to the user the
meaning of different kinematic indices. The framework uses a Phantom Omni
and a serial and a parallel mechanism, each with two degrees of freedom. The
end-effector of both mechanisms can be moved either with a classical mouse or
with the Phantom Omni, with or without force feedback.
Space robotics: Recent accomplishments and opportunities for future research
The Langley Guidance, Navigation, and Control Technical Committee (GNCTC) was one of six technical committees created in 1991 by the Chief Scientist, Dr. Michael F. Card. During the kickoff meeting Dr. Card charged the chairmen to: (1) establish a cross-Center committee; (2) support at least one workshop in a selected discipline; and (3) prepare a technical paper on recent accomplishments in the discipline and on opportunities for future research. The Guidance, Navigation, and Control Committee was formed and selected to focus on the discipline of space robotics. This report is a summary of the committee's assessment of recent accomplishments and opportunities for future research. The report is organized as follows. First is an overview of the data sources used by the committee. Next is a description of technical needs identified by the committee, followed by recent accomplishments. A discussion of opportunities for future research ends the main body of the report; it includes the primary recommendation of the committee that NASA establish a national space facility for the development of space automation and robotics, one element of which is a telerobotic research platform in space. References 1 and 2 are the proceedings of two workshops sponsored by the committee during its June 1991 through May 1992 term. The focus of the committee for the June 1992 - May 1993 term will be to further define the recommended platform in space and to add an additional discipline covering aircraft-related GN&C issues. To that end, members performing aircraft-related research will be added to the committee. (A preliminary assessment of future opportunities in aircraft-related GN&C research has been included as appendix A.)
Remote lab of robotic manipulators through an open access ROS-based platform
Research, training, and learning in robotic
systems are difficult for institutions that do not have
an appropriate equipment infrastructure, mainly due to the
high investment required to acquire these systems. Possible
alternatives are the use of robotic simulation platforms and the
creation of remote robotic environments available for different
users. The latter option surpasses the former by
allowing users to handle real robotic systems during
the training process. However, technical challenges appear in
the management of the supporting infrastructure to use the
robotic systems, namely in terms of access, safety, security,
communication, and programming aspects. Having this in mind,
this paper presents an approach for the remote operation of real
robotic manipulators under a virtual robotics laboratory. To this
end, an open access and safe web-based platform was developed
for the remote control of robotic manipulators, being validated
through the remote control of a real UR3 manipulator. This platform
contributes to the research and training in robotic systems
among different research centers and educational institutions that
have limited access to these technologies. Furthermore, students
and researchers can use this educational tool, which differs from
traditional robotic simulators by offering a virtual experience that
connects real manipulators worldwide through the Internet.
The authors are grateful to the Foundation for Science and Technology
(FCT, Portugal) for financial support through national funds FCT/MCTES
(PIDDAC) to CeDRI (UIDB/05757/2020 and UIDP/05757/2020), and SusTEC
(LA/P/0007/2021).
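Among the safety and security aspects the platform must manage, a typical building block is a server-side guard that validates remote joint commands before forwarding them to the real manipulator. A minimal sketch (the limits and step threshold below are illustrative assumptions, not UR3 specifications or the platform's actual code):

```python
def validate_command(q_target, q_limits, max_step, q_current):
    """Accept a remote joint-space command only if every target angle lies
    within its joint limits AND the jump from the current configuration is
    small enough to execute safely; reject it otherwise."""
    for qi, (lo, hi) in zip(q_target, q_limits):
        if not (lo <= qi <= hi):
            return False  # joint limit violated
    # Reject large discontinuities that a real arm cannot track safely.
    return all(abs(qt - qc) <= max_step for qt, qc in zip(q_target, q_current))
```

Running such checks on the server rather than in the remote user's browser keeps an untrusted or laggy Internet client from ever commanding an unsafe motion on the shared hardware.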