Stanford Aerospace Robotics Laboratory research overview
Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free-flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators has been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.
Space robot simulator vehicle
A Space Robot Simulator Vehicle (SRSV) was constructed to model a free-flying robot capable of doing construction, manipulation and repair work in space. The SRSV is intended as a test bed for development of dynamic and static control methods for space robots. The vehicle is built around a two-foot-diameter air-cushion vehicle that carries batteries, power supplies, gas tanks, a computer, reaction jets and radio equipment. It is fitted with one or two two-link manipulators, which may be of many possible designs, including flexible-link versions. Both the vehicle body and its first arm are nearly complete. Inverse dynamic control of the robot's manipulator has been successfully simulated using equations generated by the dynamic simulation package SDEXACT. In this mode, the position of the manipulator tip is controlled not by fixing the vehicle base through thruster operation, but by controlling the manipulator joint torques to achieve the desired tip motion while allowing for the free motion of the vehicle base. One of the primary goals is to minimize use of the thrusters in favor of intelligent control of the manipulator. Ways to reduce the computational burden of control are described.
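The inverse dynamic mode described here can be sketched for a planar two-link arm in zero gravity. The link masses, lengths, and inertias below are illustrative placeholders, not SRSV values, and this fixed-base core omits the base-coupling terms a free-flying model (as generated by SDEXACT) would add:

```python
import numpy as np

def inverse_dynamics_2link(q, qd, qdd, m=(1.0, 0.8), l=(0.4, 0.3),
                           lc=(0.2, 0.15), I=(0.02, 0.01)):
    """Joint torques for a planar 2-link arm in zero gravity:
    tau = M(q) qdd + C(q, qd) qd  (no gravity term)."""
    m1, m2 = m; l1, _ = l; lc1, lc2 = lc; I1, I2 = I
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    # Mass matrix
    M11 = I1 + I2 + m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2)
    M12 = I2 + m2*(lc2**2 + l1*lc2*c2)
    M22 = I2 + m2*lc2**2
    M = np.array([[M11, M12], [M12, M22]])
    # Coriolis/centrifugal matrix
    h = -m2*l1*lc2*s2
    C = np.array([[h*qd[1], h*(qd[0]+qd[1])], [-h*qd[0], 0.0]])
    return M @ qdd + C @ qd
```

At rest with zero commanded acceleration the required torque is zero, since the drag-free planar environment contributes no gravity term.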
Development of a micromanipulation system with force sensing
This article provides an in-depth account of our ongoing effort to develop an open-architecture micromanipulation system with force-sensing capabilities. The major requirement for performing any micromanipulation task effectively is to ensure controlled motion of the actuators with nanometer accuracy and low overshoot, even under the influence of disturbances. Moreover, to achieve high dexterity in manipulation, control of the interaction forces is required. In micromanipulation, control of interaction forces necessitates force sensing in the milli-Newton range with nano-Newton resolution. In this paper, we present a position controller based on a discrete-time sliding mode control architecture along with a disturbance observer. Experimental verification of this controller is demonstrated for 100, 50 and 10 nanometer step inputs applied to PZT stages. Our results indicate that position tracking accuracies of 10 nanometers, without any overshoot and with low steady-state error, are achievable. Furthermore, the paper includes experimental verification of force sensing with nano-Newton resolution using a piezoresistive cantilever end-effector. Experimental results are compared to theoretical estimates of the change in attractive forces as a function of decreasing distance and of the pull-off force between a silicon tip and a glass surface, respectively. Good agreement between the experimental data and the theoretical estimates has been demonstrated.
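A minimal numerical sketch of a discrete-time sliding mode position loop with a one-step disturbance observer, of the general kind described. The plant coefficients and disturbance value are invented for illustration, not identified PZT-stage parameters:

```python
def simulate_pzt_step(r=100.0, n=60, a=0.95, b=0.5, d=2.0, q_reach=0.5):
    """Track a step reference r (think nanometers) for the nominal model
    x[k+1] = a*x[k] + b*u[k] + d, where d is an unknown constant
    disturbance. Surface s = r - x; reaching law s[k+1] = (1-q_reach)*s[k]."""
    x = x_prev = u_prev = 0.0
    xs = []
    for _ in range(n):
        d_hat = x - a * x_prev - b * u_prev      # observer: exact after one sample
        s = r - x
        x_next_des = r - (1.0 - q_reach) * s     # where the reaching law wants x
        u = (x_next_des - a * x - d_hat) / b     # equivalent control + compensation
        x_prev, u_prev = x, u
        x = a * x + b * u + d                    # true plant (d unseen by controller)
        xs.append(x)
    return xs
```

For a constant disturbance the estimate becomes exact after one sample, so with these values the error contracts geometrically and the step response reaches the reference without overshoot.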
Autonomous Mechanical Assembly on the Space Shuttle: An Overview
The space shuttle will be equipped with a pair of 50 ft. manipulators used to handle payloads and to perform mechanical assembly operations. Although current plans call for these manipulators to be operated by a human teleoperator, the possibility of using results from robotics and machine intelligence to automate this shuttle assembly system was investigated. The major components of an autonomous mechanical assembly system are examined, along with the technology base upon which they depend. The state of the art in advanced automation is also assessed.
Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping
The young infant explores its body, its sensorimotor system, and the
immediately accessible parts of its environment, over the course of a few
months creating a model of peripersonal space useful for reaching and grasping
objects around it. Drawing on constraints from the empirical literature on
infant behavior, we present a preliminary computational model of this learning
process, implemented and evaluated on a physical robot. The learning agent
explores the relationship between the configuration space of the arm, sensing
joint angles through proprioception, and its visual perceptions of the hand and
grippers. The resulting knowledge is represented as the peripersonal space
(PPS) graph, where nodes represent states of the arm, edges represent safe
movements, and paths represent safe trajectories from one pose to another. In
our model, the learning process is driven by intrinsic motivation. When
repeatedly performing an action, the agent learns the typical result, but also
detects unusual outcomes, and is motivated to learn how to make those unusual
results reliable. Arm motions typically leave the static background unchanged,
but occasionally bump an object, changing its static position. The reach action
is learned as a reliable way to bump and move an object in the environment.
Similarly, once a reliable reach action is learned, it typically makes a
quasi-static change in the environment, moving an object from one static
position to another. The unusual outcome is that the object is accidentally
grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically
with the hand. Learning to make grasps reliable is more complex than for
reaches, but we demonstrate significant progress. Our current results are steps
toward autonomous sensorimotor learning of motion, reaching, and grasping in
peripersonal space, based on unguided exploration and intrinsic motivation.
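The PPS-graph idea can be illustrated with a toy graph and breadth-first search. The node labels below are stand-ins for stored arm configurations (joint-angle vectors), not the paper's actual data:

```python
from collections import deque

# Toy PPS graph: nodes name arm poses learned during exploration,
# edges are movements the agent has verified as safe.
pps_graph = {
    "home":       ["reach_lo", "reach_mid"],
    "reach_lo":   ["home", "grasp_zone"],
    "reach_mid":  ["home", "grasp_zone", "lift"],
    "grasp_zone": ["reach_lo", "reach_mid"],
    "lift":       ["reach_mid"],
}

def safe_trajectory(graph, start, goal):
    """Breadth-first search: a path in the PPS graph is a sequence of
    safe movements taking the arm from one pose to another."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Every hop in a returned path was individually verified safe during exploration, so the whole trajectory inherits that guarantee.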
Integration of vision and force sensors for grasping
This paper describes a set of methods that can be used to integrate real-time external vision sensing with internal force and position sensing to estimate the contact forces applied by the fingers of a hand. Estimating these forces and contacts is essential to performing dextrous manipulation tasks. Most robotic hands are either sensorless or lack the ability to accurately and robustly report position and force information relating to contact. By adding external vision sensing, we can complement any internal sensors to more accurately estimate forces and contact positions. Experiments are described that use real-time visual trackers in conjunction with internal strain gauges and a new tactile sensor to accurately estimate finger contacts and applied forces for a three-fingered robotic hand.
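One way such complementary sensing can work: vision locates the contact point on the finger, which fixes the contact Jacobian, and strain-gauge joint torques then yield the contact force via tau = J^T f. A minimal planar sketch, with geometry and values that are illustrative rather than the paper's hand:

```python
import numpy as np

def contact_jacobian(q, contact_dist, l1=0.05):
    """Planar 2-link finger. Vision supplies the contact location as a
    distance along link 2, which determines the Jacobian at that point."""
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([
        [-l1*s1 - contact_dist*s12, -contact_dist*s12],
        [ l1*c1 + contact_dist*c12,  contact_dist*c12],
    ])

def contact_force(tau, q, contact_dist):
    """Recover the planar contact force from joint torques: tau = J^T f."""
    J = contact_jacobian(q, contact_dist)
    return np.linalg.solve(J.T, tau)
```

The vision estimate does real work here: without the contact location, the Jacobian, and hence the force, is not determined by the strain gauges alone.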
Scaled bilateral teleoperation using discrete-time sliding mode controller
In this paper, the design of a discrete-time sliding-mode controller based on Lyapunov theory is presented along with a robust disturbance observer and is applied to a piezostage for high-precision motion. A linear model of the piezostage was used with nominal parameters to compensate the disturbance acting on the system in order to achieve nanometer accuracy. The effectiveness of the controller and disturbance observer is validated in terms of closed-loop position performance for nanometer references. The control structure has been applied to a scaled bilateral structure for the custom-built telemicromanipulation setup. A piezoresistive atomic force microscope cantilever with a built-in Wheatstone bridge is utilized to sense the nanonewton-level interaction forces between the piezoresistive probe tip and the environment. Experimental results are provided for the nanonewton-range force sensing, and good agreement between the experimental data and the theoretical estimates has been demonstrated. Force/position tracking and transparency between the master and the slave have been clearly demonstrated after the necessary scaling.
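The scaling at the heart of such a scheme can be sketched abstractly: master motion is scaled down into the slave's micro-workspace, and tip forces are scaled up before being reflected to the operator. The gains below are arbitrary illustrations, not the paper's values:

```python
def scaled_bilateral(x_master, f_slave, alpha=1e-6, beta=1e6):
    """Slave position command and reflected master force under position
    scale alpha and force scale beta. With ideal tracking the operator
    feels the environment impedance scaled by alpha*beta (here unity)."""
    x_slave_ref = alpha * x_master   # e.g. millimetres at the master -> nanometres at the slave
    f_master = beta * f_slave        # nanonewtons at the tip -> hand-scale force cues
    return x_slave_ref, f_master
```

Choosing alpha*beta = 1, as here, preserves the perceived environment impedance across the scale change, which is one common notion of transparency.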
Uncalibrated Dynamic Mechanical System Controller
An apparatus and method for enabling an uncalibrated, model-independent controller for a mechanical system using a dynamic quasi-Newton algorithm which incorporates velocity components of any moving system parameter(s) is provided. In the preferred embodiment, tracking of a moving target by a robot having multiple degrees of freedom is achieved using an uncalibrated, model-independent visual servo control. Model-independent visual servo control is defined as using visual feedback to control a robot's servomotors without a precisely calibrated kinematic robot model or camera model. A processor updates a Jacobian and a controller provides control signals such that the robot's end effector is directed to a desired location relative to a target on a workpiece.
Georgia Tech Research Corporation
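The Jacobian-update loop can be sketched with a Broyden (quasi-Newton) estimator driving a synthetic linear "camera" map. The patent's dynamic algorithm additionally folds in target-velocity terms for moving targets; this static-target toy omits them, and all names and values below are illustrative:

```python
import numpy as np

def broyden_visual_servo(target, q0, true_map, n_iters=200, gain=0.2):
    """Uncalibrated visual servoing: no camera or robot model is given;
    the image Jacobian J is estimated online from observed motion by a
    Broyden rank-one update."""
    q = np.asarray(q0, float)
    y = true_map(q)                          # 'camera' observation
    J = np.eye(len(y))                       # crude initial Jacobian guess
    for _ in range(n_iters):
        e = y - target
        dq = -gain * np.linalg.solve(J, e)   # servo step from the estimated J
        q = q + dq
        y_new = true_map(q)
        dy = y_new - y
        denom = float(dq @ dq)
        if denom > 1e-12:
            J = J + np.outer(dy - J @ dq, dq) / denom   # Broyden update
        y = y_new
    return q, y

# Synthetic stand-in for the unknown robot+camera mapping
A = np.array([[2.0, 0.3], [-0.4, 1.5]])
```

Because the Jacobian is learned from the robot's own motion, no kinematic or camera calibration ever enters the loop, which is the defining property of the method.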
A Sonomyography-based Muscle Computer Interface for Individuals with Spinal Cord Injury
Impairment of hand functions in individuals with spinal cord injury (SCI)
severely disrupts activities of daily living. Recent advances have enabled
rehabilitation assisted by robotic devices to augment the residual function of
the muscles. Traditionally, non-invasive electromyography-based peripheral
neural interfaces have been utilized to sense volitional motor intent to drive
robotic assistive devices. However, the dexterity and fidelity of control that
can be achieved with electromyography-based control have been limited due to
inherent limitations in signal quality. We have developed and tested a
muscle-computer interface (MCI) utilizing sonomyography to provide control of a
virtual cursor for individuals with motor-incomplete spinal cord injury. We
demonstrate that individuals with SCI successfully gained control of a virtual
cursor by utilizing contractions of muscles of the wrist joint. The
sonomyography-based interface enabled control of the cursor at multiple graded
levels demonstrating the ability to achieve accurate and stable endpoint
control. Our sonomyography-based muscle-computer interface can enable dexterous
control of upper-extremity assistive devices for individuals with
motor-incomplete SCI.
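Graded-level endpoint control of the kind described can be sketched as mapping a normalized sonomyography signal onto discrete cursor levels. The normalization bounds and number of levels here are illustrative, not the study's calibration:

```python
def cursor_level(signal, rest, max_contraction, n_levels=4):
    """Map a raw sonomyography measure (e.g. muscle deformation) to one
    of n_levels graded cursor positions in [0, 1], after normalizing
    against the user's rest and maximal-contraction calibration values."""
    span = max_contraction - rest
    u = min(max((signal - rest) / span, 0.0), 1.0)   # normalize and clamp
    level = round(u * (n_levels - 1))                # snap to nearest graded level
    return level / (n_levels - 1)
```

Quantizing to a few calibrated levels trades continuous resolution for the stable, repeatable endpoint control the abstract emphasizes.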