
    Video guidance sensor for autonomous capture

    A video-based sensor has been developed specifically for the close-range maneuvering required in the last phase of autonomous rendezvous and capture. The system is a combination of target and sensor, the target being a modified version of the standard target used by astronauts with the Remote Manipulator System (RMS). As currently configured, the system works well for autonomous docking maneuvers from approximately forty feet in to soft-docking and capture. The sensor was developed specifically to track and calculate its position and attitude relative to a target consisting of three retro-reflective spots, equally spaced, with the center spot mounted on a pole. This target configuration was chosen for its sensitivity to small amounts of relative pitch and yaw and because it could be produced with a small modification to the standard RMS target already in use by NASA.
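The pitch/yaw sensitivity of this target layout comes from the pole: tilting the target displaces the image of the pole-mounted center spot away from the midpoint of the two in-plane spots. A minimal sketch of that geometry, under assumed dimensions and a small-angle model (all names and parameters hypothetical, not the flight algorithm):

```python
import math

def target_pitch_yaw(left, center, right, pole_len, spot_spacing):
    """Estimate relative pitch and yaw (radians) of a 3-spot docking
    target from image-plane centroids (x, y pairs in the same units).

    The outer spots lie in the target plane; the center spot sits on a
    pole of length `pole_len` normal to that plane, so any tilt of the
    target shifts the center spot's image away from the midpoint of
    the outer spots. Hypothetical geometry, face-on approximation.
    """
    # Midpoint of the two in-plane spots.
    mx = (left[0] + right[0]) / 2.0
    my = (left[1] + right[1]) / 2.0
    # Image scale: the known outer-spot separation (2 * spot_spacing)
    # maps image units to target units.
    scale = (2.0 * spot_spacing) / math.hypot(right[0] - left[0],
                                              right[1] - left[1])
    # Displacement of the pole-mounted spot, in target units.
    dx = (center[0] - mx) * scale
    dy = (center[1] - my) * scale
    # A tilt of angle a displaces the pole tip by ~pole_len * sin(a).
    yaw = math.asin(max(-1.0, min(1.0, dx / pole_len)))
    pitch = math.asin(max(-1.0, min(1.0, dy / pole_len)))
    return pitch, yaw
```

A face-on target (center spot imaged exactly at the midpoint) yields zero pitch and yaw; any lateral shift of the center spot reads as yaw, any vertical shift as pitch.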

    Design and fabrication of an autonomous rendezvous and docking sensor using off-the-shelf hardware

    NASA Marshall Space Flight Center (MSFC) has developed and tested an engineering model of an automated rendezvous and docking sensor system composed of a video camera ringed with laser diodes at two wavelengths and a standard remote manipulator system target that has been modified with retro-reflective tape and 830 and 780 nm optical filters. TRW has provided additional engineering analysis, design, and manufacturing support, resulting in a robust, low-cost, automated rendezvous and docking sensor design. We have addressed the issue of space qualification using off-the-shelf hardware components. We have also addressed the performance problems of increased signal-to-noise ratio, increased range, increased frame rate, graceful degradation through component redundancy, and improved range calibration. Next year, we will build a breadboard of this sensor. The phenomenology of the background scene of a target vehicle as viewed against earth and space backgrounds under various lighting conditions will be simulated using the TRW Dynamic Scene Generator Facility (DSGF). Solar illumination angles of the target vehicle and candidate docking target ranging from eclipse to full sun will be explored. The sensor will be transportable for testing at the MSFC Flight Robotics Laboratory (EB24) using the Dynamic Overhead Telerobotic Simulator (DOTS).

    Autoguidance video sensor for docking

    The Automated Rendezvous and Docking (ARAD) system is composed of two parts. The first is the sensor, which consists of a video camera ringed with laser diodes of two wavelengths. The second is a standard Remote Manipulator System (RMS) target, as used on the Orbiter, that has been modified with three circular pieces of retro-reflective tape covered by optical filters matched to one of the laser-diode wavelengths. The sensor is on the chase vehicle and the target is on the target vehicle. The ARAD system works by pulsing the laser diodes of one wavelength and taking a picture; the laser diodes of the second wavelength are then pulsed and a second picture is taken. One picture is subtracted from the other and the resulting picture is thresholded. All adjacent pixels above threshold are blobbed together and their X and Y centroids calculated. All blob centroids are checked to distinguish the target from noise. The three target spots are then windowed and tracked. From the three target-spot centroids, the roll, yaw, pitch, range, azimuth, and elevation are computed, and a guidance routine can guide the chase vehicle to dock with the target vehicle in the correct orientation.
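The subtract/threshold/blob pipeline described above can be sketched in a few lines. This is a hypothetical minimal implementation (plain nested lists, 4-connected flood fill), not flight code:

```python
def find_target_spots(img_on, img_off, threshold):
    """Subtract the picture taken at the non-reflected wavelength from
    the picture taken at the reflected wavelength, threshold the
    difference, group adjacent above-threshold pixels into blobs, and
    return each blob's (x, y) centroid. Images are equal-sized lists
    of rows of ints."""
    h, w = len(img_on), len(img_on[0])
    diff = [[img_on[r][c] - img_off[r][c] for c in range(w)] for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for r in range(h):
        for c in range(w):
            if diff[r][c] > threshold and not seen[r][c]:
                # Flood-fill the 4-connected blob containing (r, c).
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and diff[ny][nx] > threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Centroid of the blob's pixel coordinates.
                cx = sum(x for _, x in pixels) / len(pixels)
                cy = sum(y for y, _ in pixels) / len(pixels)
                centroids.append((cx, cy))
    return centroids
```

In the full system the resulting centroid list would be screened for the three-spot target pattern before windowing and tracking.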

    Video Guidance Sensors Using Remotely Activated Targets

    Four updated video guidance sensor (VGS) systems have been proposed. As described in a previous NASA Tech Briefs article, a VGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. The VGS provides relative position and attitude (6-DOF) information between the VGS and its target. In the original intended application, the two vehicles would be spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In the first two of the four VGS systems as now proposed, the tracked vehicle would include active targets that would light up on command from the tracking vehicle, and a video camera on the tracking vehicle would be synchronized with, and would acquire images of, the active targets. The video camera would also acquire background images during the periods between target illuminations. The images would be digitized and the background images would be subtracted from the illuminated-target images. Then the position and orientation of the tracked vehicle relative to the tracking vehicle would be computed from the known geometric relationships among the positions of the targets in the image, the positions of the targets relative to each other and to the rest of the tracked vehicle, and the position and orientation of the video camera relative to the rest of the tracking vehicle. The major difference between the first two proposed systems and prior active-target VGS systems lies in the techniques for synchronizing the flashing of the active targets with the digitization and processing of image data. In the prior active-target VGS systems, synchronization was effected, variously, by use of either a wire connection or the Global Positioning System (GPS). 
In three of the proposed VGS systems, the synchronizing signal would be generated on, and transmitted from, the tracking vehicle. In the first proposed VGS system, the tracking vehicle would transmit a pulse of light. Upon reception of the pulse, circuitry on the tracked vehicle would activate the target lights. During the pulse, the target image acquired by the camera would be digitized. When the pulse was turned off, the target lights would be turned off and the background video image would be digitized. The second proposed system would function similarly to the first proposed system, except that the transmitted synchronizing signal would be a radio pulse instead of a light pulse. In this system, the signal receptor would be a rectifying antenna. If the signal contained sufficient power, the output of the rectifying antenna could be used to activate the target lights, making it unnecessary to include a battery or other power supply for the targets on the tracked vehicle.
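The pulse-synchronized capture scheme can be modeled as a tiny state machine: frames captured while the pulse is on contain scene plus targets, frames captured between pulses contain scene only, and subtraction isolates the targets. A toy sketch (1-D "frames", all names hypothetical):

```python
class SyncedTargetImager:
    """Toy model of the light-pulse synchronization scheme: the
    tracked vehicle lights its targets only while the tracking
    vehicle's pulse is on, so lit frames contain scene + targets and
    unlit frames contain scene only."""

    def __init__(self):
        self.lit_frame = None
        self.background = None

    def capture(self, pulse_on, scene, targets):
        # Target lights follow the received pulse; the camera
        # digitizes whichever frame type the pulse state indicates.
        frame = [s + t if pulse_on else s for s, t in zip(scene, targets)]
        if pulse_on:
            self.lit_frame = frame
        else:
            self.background = frame

    def target_image(self):
        # Background subtraction leaves only the target signal.
        return [a - b for a, b in zip(self.lit_frame, self.background)]
```

Because both frame types see the same background, the subtraction cancels sunlight and scene clutter regardless of lighting conditions, which is the point of synchronizing the flashes with digitization.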

    Control Software for Advanced Video Guidance Sensor

    Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends the image-data-processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g., acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
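The first processor's control flow (validate command, handle mode transitions locally, forward image-processing work) amounts to a small state machine. A sketch under assumed command and mode names, which are illustrative, not the actual AVGS command set:

```python
# Hypothetical command set and mode names for illustration only.
VALID_COMMANDS = {"STANDBY", "ACQUIRE", "TRACK"}

class AvgsController:
    """Sketch of the first-processor subprogram: power-up self-test,
    transition to standby, then command validation and mode control."""

    def __init__(self):
        self.mode = "SELF_TEST"
        self.power_up()

    def power_up(self):
        # Basic functionality tests would run here; on success the
        # software transitions to standby and awaits commands.
        self.mode = "STANDBY"

    def handle_command(self, cmd):
        # Check the command for correctness before acting on it.
        if cmd not in VALID_COMMANDS:
            return {"status": "REJECTED", "mode": self.mode}
        # Non-image-data-processing control: mode transitions.
        self.mode = {"STANDBY": "STANDBY",
                     "ACQUIRE": "ACQUISITION",
                     "TRACK": "TRACKING"}[cmd]
        # The image-data-processing parts of the command would be
        # forwarded to the second processor here; we report the result
        # that would go back to the external source.
        return {"status": "OK", "mode": self.mode}
```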

    Laser Range and Bearing Finder with No Moving Parts

    A proposed laser-based instrument would quickly measure the approximate distance and approximate direction to the closest target within its field of view. The instrument would not contain any moving parts, and its mode of operation would not entail scanning of its field of view. Typically, the instrument would be used to locate a target at a distance on the order of meters to kilometers. The instrument would be best suited for use in an uncluttered setting in which the target is the only or, at worst, the closest object in the vicinity; for example, it could be used aboard an aircraft to detect and track another aircraft flying nearby. The proposed instrument would include a conventional time-of-flight or echo-phase-shift laser range finder, but unlike most other range finders, this one would not generate a narrow cylindrical laser beam; instead, it would generate a conical laser beam spanning the field of view. The instrument would also include a quadrant detector, optics to focus the light returning from the target onto the quadrant detector, and circuitry to synchronize the acquisition of the quadrant-detector output with the arrival of laser light returning from the nearest target. A quadrant detector constantly gathers information from the entire field of view, without scanning; its output is a direct measure of the position of the target-return light spot on the focal plane and is thus a measure of the direction to the target. The instrument should be able to operate at a repetition rate high enough to enable it to track a rapidly moving target. Of course, a target that is not sufficiently reflective could not be located by this instrument. Preferably, retroreflectors should be attached to the target to make it sufficiently reflective.
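The standard way a quadrant detector yields spot position (and hence bearing) is by differencing the four quadrant signals and normalizing by their sum. A sketch, assuming a conventional quadrant labeling (this convention is an assumption, not from the source):

```python
def spot_position(a, b, c, d):
    """Normalized (x, y) position of the return-light spot on a
    quadrant detector from the four quadrant signals. Assumed
    convention: a = upper right, b = upper left, c = lower left,
    d = lower right. Returns a dimensionless imbalance in [-1, 1]
    per axis; scaled by the optics, this gives the target bearing."""
    total = a + b + c + d
    x = ((a + d) - (b + c)) / total   # right minus left
    y = ((a + b) - (c + d)) / total   # top minus bottom
    return x, y
```

A centered spot illuminates all four quadrants equally and reads (0, 0); the synchronization circuitry's job is to sample `a..d` only while return light from the nearest target is arriving, so clutter behind the target does not bias this ratio.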

    Optoelectronic Sensor System for Guidance in Docking

    The Video Guidance Sensor (VGS) system is an optoelectronic sensor that provides automated guidance between two vehicles. In the original intended application, the two vehicles would be spacecraft docking together, but the basic principles of design and operation of the sensor are applicable to aircraft, robots, vehicles, or other objects that may be required to be aligned for docking, assembly, resupply, or precise separation. The system includes a sensor head containing a monochrome charge-coupled-device video camera and pulsed laser diodes mounted on the tracking vehicle, and passive reflective targets on the tracked vehicle. The lasers illuminate the targets, and the resulting video images of the targets are digitized. Then, from the positions of the digitized target images and known geometric relationships among the targets, the relative position and orientation of the vehicles are computed. As described thus far, the VGS system is based on the same principles as those of the system described in "Improved Video Sensor System for Guidance in Docking" (MFS-31150), NASA Tech Briefs, Vol. 21, No. 4 (April 1997), page 9a. However, the two systems differ in the details of design and operation. The VGS system is designed to operate with the target completely visible within a relative-azimuth range of +/-10.5deg and a relative-elevation range of +/-8deg. The VGS acquires and tracks the target within that field of view at any distance from 1.0 to 110 m and at any relative roll, pitch, and/or yaw angle within +/-10deg. The VGS produces sets of distance and relative-orientation data at a repetition rate of 5 Hz. The software of this system also accommodates the simultaneous operation of two sensors for redundancy.
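The operating envelope quoted above reduces to a simple acceptance check. The numeric limits below are taken directly from the text; the function itself is an illustrative helper, not part of the VGS software:

```python
# Acquisition/tracking envelope quoted in the text.
LIMITS = {
    "azimuth_deg": 10.5,
    "elevation_deg": 8.0,
    "attitude_deg": 10.0,   # roll, pitch, and yaw
    "range_min_m": 1.0,
    "range_max_m": 110.0,
}

def in_vgs_envelope(range_m, azimuth, elevation, roll, pitch, yaw):
    """True if the relative state lies inside the VGS acquisition and
    tracking envelope (angles in degrees, range in meters)."""
    return (LIMITS["range_min_m"] <= range_m <= LIMITS["range_max_m"]
            and abs(azimuth) <= LIMITS["azimuth_deg"]
            and abs(elevation) <= LIMITS["elevation_deg"]
            and all(abs(a) <= LIMITS["attitude_deg"]
                    for a in (roll, pitch, yaw)))
```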

    Global Positioning System Synchronized Active Light Autonomous Docking System

    The Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) automatically docks a chase vehicle with a target vehicle. The system comprises at least one active light-emitting target operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights that flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. The sensor's operation is synchronized with the flash frequency of the lights by a synchronization means comprising first and second internal clocks, operatively connected to the active light target and the visual tracking sensor, respectively, to provide their timing control signals. The synchronization means further includes first and second Global Positioning System receivers, operatively connected to the first and second internal clocks, respectively, which repeatedly provide simultaneous synchronization pulses to the internal clocks. In addition, the GPSSALADS includes a docking process controller operatively attached to the chase vehicle and responsive to the visual tracking sensor, which produces commands for the guidance and propulsion system of the chase vehicle.

    Video Guidance Sensor and Time-of-Flight Rangefinder

    A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diode(s) on the tracking vehicle, a video camera on the tracking vehicle acquires images of the targets in the reflected laser light, the video images are digitized, and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior VGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. 
The additional timing and control signals generated by the FPGA would cause the VGS to alternate between an imaging (direction-finding) mode and a time-of-flight (range-finding) mode and would govern operation in the range-finding mode.
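The range-finding mode rests on the basic time-of-flight relation: the laser pulse travels to the target and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_s):
    """Time-of-flight range in meters: the pulse covers the distance
    to the target twice, so one-way range is c * t / 2."""
    return C * round_trip_s / 2.0
```

At the kilometer-scale distances mentioned above, a round trip takes on the order of ten microseconds (a 20-microsecond round trip corresponds to roughly 3 km), which sets the timing resolution the FPGA's control signals must support.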

    Video Guidance Sensor System With Integrated Rangefinding

    A video guidance sensor system for use, e.g., in automated docking of a chase vehicle with a target vehicle. The system includes an integrated rangefinder subsystem that uses time-of-flight measurements to measure range. The rangefinder subsystem includes a pair of matched photodetectors for respectively detecting the output laser beam and the return laser beam, a buffer memory for storing the photodetector outputs, and a digitizer connected to the buffer memory and including dual amplifiers and analog-to-digital converters. A digital signal processor processes the digitized output to produce a range measurement.
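One plausible way the DSP could extract range from the two buffered waveforms is to find the sample lag that best aligns the digitized return pulse with the digitized outgoing pulse, then convert that lag to a round-trip time. This brute-force cross-correlation sketch is an assumption about the processing, not the patented implementation:

```python
def range_from_waveforms(outgoing, returned, sample_rate_hz):
    """Estimate range (meters) from digitized outgoing and returned
    pulse waveforms captured by the two matched photodetectors. The
    lag maximizing the correlation gives the round-trip delay in
    samples; half the round-trip time times c gives the range."""
    c = 299_792_458.0  # speed of light, m/s
    best_lag, best_score = 0, float("-inf")
    # Slide the outgoing pulse template along the returned waveform.
    for lag in range(len(returned) - len(outgoing) + 1):
        score = sum(o * returned[lag + i] for i, o in enumerate(outgoing))
        if score > best_score:
            best_lag, best_score = lag, score
    round_trip_s = best_lag / sample_rate_hz
    return c * round_trip_s / 2.0
```

A real DSP implementation would interpolate between samples and use calibration data to remove fixed electronic delays, but the lag-to-range conversion is the same.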