
    Hectospec, the MMT's 300 Optical Fiber-Fed Spectrograph

    The Hectospec is a 300 optical fiber fed spectrograph commissioned at the MMT in the spring of 2004. A pair of high-speed six-axis robots moves the 300 fiber buttons between observing configurations within ~300 s and to an accuracy of ~25 microns. The optical fibers run for 26 m between the MMT's focal surface and the bench spectrograph, which operates at R~1000-2000. A second, high-dispersion bench spectrograph offering R~5,000, Hectochelle, is also available. The system throughput, including all losses in the telescope optics, fibers, and spectrograph, peaks at ~10% at the grating blaze in 1" FWHM seeing. Correcting for aperture losses at the 1.5"-diameter fiber entrance aperture, the system throughput peaks at ~17%. Hectospec has proven to be a workhorse instrument at the MMT. Hectospec and Hectochelle together have been scheduled for 1/3 of the available nights since their commissioning, and Hectospec has returned ~60,000 reduced spectra for 16 scientific programs during its first year of operation.
    Comment: 68 pages, 28 figures, to appear in December 2005 PAS

    An intelligent real time 3D vision system for robotic welding tasks

    MARWIN is a top-level robot control system designed for automatic robot welding tasks. It extracts welding parameters and calculates robot trajectories directly from CAD models, which are then verified by real-time 3D scanning and registration. MARWIN's 3D computer vision provides a user-centred robot environment in which a task is specified by the user simply confirming and/or adjusting suggested parameters and welding sequences. The focus of this paper is on describing a mathematical formulation for fast 3D reconstruction using structured light, together with the mechanical design and testing of the 3D vision system, and on showing how such technologies can be exploited in robot welding tasks.
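    The core of structured-light 3D reconstruction such as MARWIN's is triangulation between a camera ray and a known projected light plane. The sketch below is a minimal 1-stripe illustration of that geometry, not the paper's actual formulation; the pinhole-camera and light-plane parameterization (focal length in pixels, baseline along +x, plane tilt theta) are assumptions for the example.

```python
import numpy as np

def triangulate_stripe(u, v, f, baseline, theta):
    """Triangulate the 3D point imaged at pixel (u, v) by a pinhole camera
    with focal length f (in pixels), illuminated by a vertical light plane
    cast from a projector offset by `baseline` along +x and tilted by
    `theta` from the optical axis. Returns (x, y, z) in the camera frame,
    in the same units as `baseline`. Illustrative sketch only."""
    # Camera ray through the pixel: (x, y, z) = z * (u/f, v/f, 1)
    # Light plane through the projector: x = baseline - z * tan(theta)
    # Intersecting the two gives the depth z:
    z = f * baseline / (u + f * np.tan(theta))
    return z * u / f, z * v / f, z
```

    Scanning the stripe (or a coded pattern) across the scene and repeating this intersection per illuminated pixel yields the dense point cloud used for registration against the CAD model.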

    Improving the Accuracy of Industrial Robots by offline Compensation of Joints Errors

    The use of industrial robots in many fields of industry, such as prototyping, pre-machining, and end milling, is limited by their poor accuracy, and robot joints are mainly responsible for it. The flexibility of the robot joints and the kinematic errors in their transmission systems produce a significant position error at the end-effector. This paper presents these two types of joint error. Identification methods are presented, with experimental validation on a 6-axis industrial robot, a STAUBLI RX 170 BH. An offline correction method used to improve the accuracy of this robot is validated experimentally.
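    Once the two joint-error sources are identified, offline compensation amounts to biasing the commanded joint angles by the predicted errors so the joints settle at the desired values. The sketch below illustrates that idea only; the linear-stiffness deflection model and the sinusoidal transmission-error model are assumptions for the example, not the paper's identified models.

```python
import numpy as np

def corrected_command(q_desired, stiffness, gravity_torque, err_amp, err_phase):
    """Offline joint-error compensation sketch: offset the commanded joint
    angles by the predicted errors, evaluated at the desired pose, so the
    actual joints settle at q_desired. Two error sources are modeled:
    elastic deflection of the flexible joints under load (tau / k), and a
    kinematic transmission error, here assumed periodic in the joint angle."""
    deflection = gravity_torque(q_desired) / stiffness        # elastic joint deflection
    kin_err = err_amp * np.sin(q_desired + err_phase)         # transmission error
    return q_desired - (deflection + kin_err)
```

    Because the correction is computed offline from the identified model, it adds no sensors or feedback loops to the controller; the trade-off is that unmodeled load changes are not compensated.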

    Reliable vision-guided grasping

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems; it differs from other eye-in-hand visual servoing problems, such as tracking, in that the distance to the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction, which exploits region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image, so at a higher level the assumptions are verified using slower, more reliable methods. This hierarchy provides robust error recovery: when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of the robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
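    The speed/reliability hierarchy described above can be sketched as a fast region-of-interest search around a predicted feature position, with a fall-back to a slower, more reliable full-frame search when the fast path's assumptions fail. This is an illustrative two-level sketch under assumed interfaces (peak-finding callables, a pixel-coordinate predictor), not the paper's implementation.

```python
import numpy as np

def track_feature(frame, find_in_roi, find_full_frame, predictor, roi_half=20):
    """One cycle of a two-level vision hierarchy: search a small
    region-of-interest window centred on the predicted feature location
    (fast, but assumption-laden); if that fails, fall back to a slower,
    more reliable full-frame search. Returns ((u, v), level_used)."""
    u, v = predictor()                               # predicted pixel position
    u0, v0 = max(0, u - roi_half), max(0, v - roi_half)
    hit = find_in_roi(frame[v0:v + roi_half, u0:u + roi_half])
    if hit is not None:                              # fast path succeeded
        return (hit[0] + u0, hit[1] + v0), "roi"
    return find_full_frame(frame), "full"            # error recovery: next level up
```

    The information reduction is in the slicing: per cycle, only a (2*roi_half)^2 window is processed instead of the whole image, and the full frame is touched only on failure.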

    Capaciflector-based virtual force control and centering

    This report presents a novel concept of force control, called virtual force control. The virtual force concept avoids the sudden step transition from position control to contact force control, which otherwise produces a contact force disturbance when a robot end-effector makes contact with the environment. A virtual force/position control scheme consists of two loops: a force control loop and a position control loop. While the position control loop regulates free motion, the force control loop regulates the contact force after contact with the environment is made, as well as the virtual force measured in the virtual environment by a range sensor called the capaciflector. After presenting the concept of virtual force control, the report introduces a centering scheme in which the virtual force controller is employed to measure three points on a cone so that its center can be located. Experimental results of a one-degree-of-freedom virtual force control scheme applied to berthing an orbital replaceable unit are reported and compared with those of conventional pure contact force control.
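    The two-loop idea can be illustrated in one degree of freedom: before contact, the range sensor reading is mapped to a "virtual force" that grows as the surface approaches, so the force loop takes over smoothly instead of at a step. The gains, the linear virtual-force ramp, and the standoff distance below are assumptions for the sketch, not the report's tuned controller.

```python
def control_step(x, x_goal, range_reading, f_contact, f_desired,
                 kp=2.0, kf=0.05, k_virtual=50.0, standoff=0.01):
    """One 1-DOF cycle of a virtual force/position scheme (illustrative).
    The position loop drives free motion; the force loop acts on the sum
    of the measured contact force and a virtual force ramped up by the
    range sensor (capaciflector) before contact, avoiding a step
    transition between the two loops. Returns a velocity command."""
    f_virtual = k_virtual * max(0.0, standoff - range_reading)
    f_total = f_contact + f_virtual          # virtual + real contact force
    if f_total > 0.0:
        return kf * (f_desired - f_total)    # force loop engaged
    return kp * (x_goal - x)                 # position loop: free motion
```

    Because f_virtual rises continuously from zero inside the standoff distance, the controller hands over to the force loop before any real contact force exists, which is exactly the disturbance-avoidance property the report targets.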

    Development of robots and application to industrial processes

    An algorithm is presented for using a robot system with a single camera to position a slender object in three-dimensional space for insertion into a hole; for example, an electrical pin-type termination into a connector hole. The algorithm relies on a control-configured end effector to achieve the required horizontal translations and rotational motion, and it does not require camera calibration. A force sensor in each fingertip is integrated with the vision system to allow the robot to teach itself new reference points when different connectors and pins are used. Variability in the grasped orientation and position of the pin can be accommodated with the sensor system. Performance tests show that the system is feasible. More work is needed to determine more precisely the effects of lighting levels and lighting direction.
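    Working without camera calibration typically means estimating the image-to-motion relationship online rather than from a camera model. The 1-DOF sketch below illustrates that general idea, a small probing motion to measure the gain, then iterative nulling of the image error; it is not the paper's algorithm, and the callable interfaces (observe, move) are assumptions for the example.

```python
def uncalibrated_align(observe, move, max_iters=10, test_step=1e-3, tol=1e-6):
    """Calibration-free visual alignment sketch (1-DOF): instead of using
    a calibrated camera model, measure the image-to-motion gain online
    with a small test move, then iteratively null the observed image
    error. observe() returns the image-plane error in pixels; move(dx)
    commands a relative robot motion. Returns the final image error."""
    e0 = observe()
    move(test_step)                          # small probing motion
    gain = (observe() - e0) / test_step      # pixels per unit of motion
    for _ in range(max_iters):
        err = observe()
        if abs(err) < tol:
            break
        move(-err / gain)                    # Newton-style correction
    return observe()
```

    The probing step plays the role that self-taught reference points play in the paper: the mapping from image to motion is recovered from the robot's own motions, so no offline camera calibration is needed.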