    A fast lightstripe rangefinding system with smart VLSI sensor

    The focus of this research is to build a compact, high-performance lightstripe rangefinder using a Very Large Scale Integration (VLSI) smart photosensor array. Rangefinding, the measurement of the three-dimensional profile of an object or scene, is a critical component of many robotic applications, and many techniques have therefore been developed. Of these, lightstripe rangefinding is one of the most widely used and reliable techniques available. Though practical, the conventional lightstripe technique is severely limited in the speed at which it can sample range data. A conventional lightstripe rangefinder operates in a step-and-repeat manner: a stripe source is projected on an object, a video image is acquired, range data are extracted from the image, the stripe is stepped, and the process repeats. Range acquisition is therefore limited by the time needed to grab the video images, which increases linearly with the desired horizontal resolution. During the acquisition of a range image, the objects in the scene being scanned must remain stationary, so the long scene sampling time of step-and-repeat rangefinders limits their application. The proposed fast range sensor is based on a modification of this basic lightstripe ranging technique described by Sato and Kida. This technique does not require sampling images at a series of stripe positions to build a range map. Rather, an entire range image is acquired in parallel while the stripe source is swept continuously across the scene, so the total time to acquire the range image data is independent of the range map resolution. The target rangefinding system will acquire 1,000 range images of 100 x 100 points per second with 0.5 percent range accuracy, and will be compact and rugged enough to be mounted on the end effector of a robot arm to aid in object manipulation and assembly tasks.
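
    To make the continuous-sweep idea concrete, the following is a minimal Python sketch of how per-pixel stripe-crossing timestamps could be turned into a range image by ray/plane triangulation. It assumes a pinhole camera, a projector offset along the x-axis, and a linear angular sweep; the function and parameter names are hypothetical, not the actual interface of the system.

        import numpy as np

        def range_image(t_cross, theta0, omega, baseline, focal_len):
            # t_cross: (H, W) timestamps at which each smart-sensor cell saw the
            # stripe's peak intensity (all cells record in parallel during one sweep).
            # theta0, omega: stripe-plane angle at t = 0 (rad) and sweep rate (rad/s),
            # assuming a linear sweep; baseline: camera-projector separation (m);
            # focal_len: camera focal length in pixel units.
            H, W = t_cross.shape
            u = np.arange(W) - W / 2.0          # pixel columns about the optical axis
            theta = theta0 + omega * t_cross    # stripe angle when each pixel fired
            # Camera ray x = (u / f) * z meets light plane z = (baseline - x) * tan(theta):
            return baseline / (1.0 / np.tan(theta) + u[None, :] / focal_len)

    Because every cell records its own crossing time during a single sweep, acquisition time depends only on the sweep duration, not on the number of stripe positions, which is what makes the total time independent of the range map resolution.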

    A simple 5-DOF walking robot for space station application

    Robots on the NASA space station have a potential range of applications, from assisting astronauts during EVA (extravehicular activity), to replacing astronauts in the performance of simple, dangerous, and tedious tasks, to performing routine tasks such as inspections of structures and utilities. To provide a vehicle for demonstrating the pertinent technologies, a simple robot is being developed for locomotion and basic manipulation on the proposed space station. In addition to the robot, an experimental testbed was developed, including a 1/3-scale (1.67-meter modules) truss and a gravity compensation system to simulate a zero-gravity environment. The robot comprises two flexible links connected by a rotary joint, with a 2-degree-of-freedom wrist joint and gripper at each end. The grippers screw into threaded holes in the nodes of the space station truss, enabling the robot to walk by alternately shifting the base of support from one foot (gripper) to the other. Present efforts are focused on mechanical design, application of sensors, and development of control algorithms for lightweight, flexible structures. Long-range research will emphasize development of human interfaces to permit a range of control modes from teleoperated to semiautonomous, as well as coordination of robot/astronaut and multiple-robot teams.
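
    The walking cycle described above can be summarized as a simple alternating-gripper sequence. The Python sketch below is purely illustrative; the class and its behavior are hypothetical placeholders, not the project's actual control software.

        class Gripper:
            # Hypothetical stand-in for one of the robot's screw-in feet.
            def __init__(self, name, node):
                self.name, self.node = name, node

        def walk(gripper_a, gripper_b, path_nodes):
            # One gripper stays screwed into a truss node as the base of support
            # while the free end swings to the next threaded node.
            base, free = gripper_a, gripper_b
            for node in path_nodes:
                print(f"{free.name}: unscrew from node {free.node}")
                print(f"{free.name}: swing about {base.name} to node {node}")
                free.node = node
                print(f"{free.name}: screw into node {node}")
                base, free = free, base   # the base of support alternates feet

        walk(Gripper("A", 0), Gripper("B", 1), [2, 3, 4])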

    Discriminative Cluster Analysis

    Visual Odometry by Multi-frame Feature Integration

    This paper presents a novel stereo-based visual odometry approach that provides state-of-the-art results in real time, both indoors and outdoors. Our proposed method follows the procedure of computing optical flow and stereo disparity to minimize the re-projection error of tracked feature points. However, instead of following the traditional approach of performing this task using only consecutive frames, we propose a novel and computationally inexpensive technique that uses the whole history of the tracked feature points to compute the motion of the camera. In our technique, which we call multi-frame feature integration, the features measured and tracked over all past frames are integrated into a single, improved estimate. An augmented feature set, composed of the improved estimates, is added to the optimization algorithm, improving the accuracy of the computed motion and reducing ego-motion drift. Experimental results show that the proposed approach reduces pose error by up to 65% with a negligible additional computational cost of 3.8%. Furthermore, our algorithm outperforms all other known methods on the KITTI Vision Benchmark data set.
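
    As a rough illustration of multi-frame feature integration, the Python sketch below fuses every past observation of a tracked feature into one improved 3D estimate (here with an incremental mean) and builds the re-projection residuals that a nonlinear least-squares solver would minimize over the camera pose. The fusion rule and all names are assumptions for illustration, not the paper's exact formulation.

        import numpy as np

        class IntegratedFeature:
            # Fuses all past observations of one tracked feature into a single,
            # improved 3D estimate; an incremental mean stands in for the fusion.
            def __init__(self, p_world):
                self.estimate = np.asarray(p_world, dtype=float)
                self.n_obs = 1

            def update(self, p_world):
                self.n_obs += 1
                self.estimate += (np.asarray(p_world, dtype=float) - self.estimate) / self.n_obs

        def residuals(pose, features, measurements, project):
            # Re-projection residuals of the integrated (multi-frame) estimates
            # against the current image measurements; a solver such as
            # Gauss-Newton would minimize their squared sum over `pose`.
            return np.concatenate([project(pose, f.estimate) - uv
                                   for f, uv in zip(features, measurements)])

    Since each track contributes only one fused point to the augmented feature set, the extra optimization cost stays small, consistent with the few-percent overhead reported above.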

    First results in terrain mapping for a roving planetary explorer

    To perform planetary exploration without human supervision, a fully autonomous rover must be able to model its environment while exploring its surroundings. The researchers present a new algorithm that constructs a geometric terrain representation from a single range image. The representation is an elevation map that includes uncertainty, unknown areas, and local features. By virtue of working in spherical-polar space, the algorithm is independent of the desired map resolution and of the orientation of the sensor, unlike other algorithms that work in Cartesian space. They also describe new methods for evaluating regions of the constructed elevation maps to support legged locomotion over rough terrain.
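
    The sketch below illustrates the elevation-map idea in Python: range measurements taken in the sensor's spherical-polar frame are converted to Cartesian points and binned into grid cells, with empty cells left marked as unknown. For clarity this simplified version bins in Cartesian space, whereas the algorithm described above works in spherical-polar space precisely to stay independent of map resolution and sensor orientation; the grid parameters and names are illustrative assumptions.

        import numpy as np

        def elevation_map(r, az, el, cell=0.1, extent=10.0):
            # r, az, el: flat arrays of range (m), azimuth and elevation (rad),
            # one triple per sensor ray, in the sensor's spherical-polar frame.
            x = r * np.cos(el) * np.cos(az)
            y = r * np.cos(el) * np.sin(az)
            z = r * np.sin(el)
            n = int(2 * extent / cell)
            height = np.full((n, n), np.nan)       # NaN marks unknown cells
            i = ((x + extent) / cell).astype(int)
            j = ((y + extent) / cell).astype(int)
            ok = (i >= 0) & (i < n) & (j >= 0) & (j < n) & np.isfinite(r)
            for ii, jj, zz in zip(i[ok], j[ok], z[ok]):
                # Keep the highest return per cell; unvisited cells stay unknown.
                if np.isnan(height[ii, jj]) or zz > height[ii, jj]:
                    height[ii, jj] = zz
            return height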