
    Reflexive obstacle avoidance for kinematically-redundant manipulators

    Dexterous telerobots incorporating 17 or more degrees of freedom, operating under coordinated, sensor-driven computer control, will play important roles in future space operations. They will also be used on Earth in assignments like firefighting, construction and battlefield support. A real-time, reflexive obstacle avoidance system, seen as a functional requirement for such massively redundant manipulators, was developed using arm-mounted proximity sensors to control manipulator pose. The project involved a review and analysis of alternative proximity sensor technologies for space applications, the development of a general-purpose algorithm for synthesizing sensor inputs, and the implementation of a prototypical system for demonstration and testing. A 7-degree-of-freedom Robotics Research K-2107HR manipulator outfitted with ultrasonic proximity sensors served as the testbed, and Robotics Research's standard redundant motion control algorithm was modified so that an object detected by the sensor arrays located at the elbow effectively applies a force to the manipulator elbow, normal to the axis. The arm is repelled by detected objects, causing the robot to steer around obstacles in the workspace automatically while continuing to move its tool along the commanded path without interruption. The mathematical approach formulated for synthesizing sensor inputs can be employed for redundant robots of any kinematic configuration.
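    The behaviour described is a potential-field-style repulsion resolved through the arm's redundant degrees of freedom. The abstract gives only a verbal description, so the Python sketch below is one plausible reading: the function name, its parameters and the inverse-distance force law are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def repulsive_elbow_force(sensor_dirs, sensor_ranges, arm_axis,
                          influence_dist=0.5, gain=10.0):
    """Synthesize proximity-sensor readings into a single repulsive
    force on the elbow, projected normal to the arm axis.

    sensor_dirs   -- (N, 3) unit vectors from the elbow toward each detection
    sensor_ranges -- (N,) measured distances in metres (np.inf = no detection)
    arm_axis      -- (3,) unit vector along the arm link at the elbow
    """
    force = np.zeros(3)
    for d, r in zip(sensor_dirs, sensor_ranges):
        if r < influence_dist:
            # Push away from the obstacle, growing as the object approaches.
            force -= gain * (1.0 / r - 1.0 / influence_dist) * d
    # Remove the component along the arm axis so the repulsion only
    # reshapes the arm's pose (self-motion) and does not disturb the
    # tool's motion along the commanded path.
    force -= np.dot(force, arm_axis) * arm_axis
    return force
```

    Projecting out the axial component mirrors the behaviour the abstract describes: the elbow is pushed sideways around obstacles while the tool continues along its commanded path.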

    A high resolution full-field range imaging system for robotic devices

    There has been considerable effort by many researchers to develop a high-resolution full-field range imaging system. Traditionally these systems rely on a homodyne technique that modulates the illumination source and shutter speed at some high frequency. Such systems must be calibrated to account for changing ambient light conditions, and generally cannot provide better than single-centimeter range resolution, and even then only over a range of a few meters. We present a system, tested to the proof-of-concept stage, that is being developed for use on a range of mobile robots. The system has the potential for real-time, sub-millimeter range resolution, with minimal power and space requirements.
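    As context for these resolution claims: homodyne systems recover range from the phase shift of the modulation envelope, so for a fixed phase-measurement precision the achievable range precision scales inversely with the modulation frequency, while the unambiguous range shrinks in proportion. A minimal sketch of the relationship (function names and example figures are illustrative, not from the paper):

```python
from math import pi

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(phase_rad, mod_freq_hz):
    """Convert a measured modulation-envelope phase shift to distance.
    The light travels out and back, hence the factor of 2 (4*pi total)."""
    return C * phase_rad / (4 * pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Maximum distance before the phase wraps past 2*pi."""
    return C / (2 * mod_freq_hz)

# For a fixed phase-measurement precision, range precision improves
# with modulation frequency while the ambiguity interval shrinks:
for f in (10e6, 100e6):
    print(f"{f/1e6:.0f} MHz: ambiguity {unambiguous_range(f):.1f} m, "
          f"1 mrad of phase = {phase_to_range(1e-3, f)*1e3:.2f} mm")
```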

    Characterization of modulated time-of-flight range image sensors

    A number of full-field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity-modulated at a frequency between 10 and 100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high-frequency modulation response of these image sensors using a picosecond laser pulser. The characterization results allow optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for, either by modifying the sensor hardware or through post-processing of the acquired range measurements.
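    The synchronous sampling described above is commonly implemented as the four-bucket (four-step) algorithm, with samples taken at relative phase offsets of 0°, 90°, 180° and 270°. The sketch below shows the standard element-wise recovery of phase, amplitude and offset per pixel; it is the textbook formulation, not necessarily the exact pipeline of the sensors characterized in this paper:

```python
import numpy as np

def four_bucket_demodulate(a0, a1, a2, a3, mod_freq_hz):
    """Recover distance, amplitude and offset from four samples taken at
    0, 90, 180 and 270 degrees of the modulation period (per pixel).
    Arguments may be full images; the maths is element-wise."""
    phase = np.arctan2(a3 - a1, a0 - a2)           # envelope phase shift
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)   # active-light amplitude
    offset = 0.25 * (a0 + a1 + a2 + a3)            # ambient + DC component
    c = 299_792_458.0
    distance = c * np.mod(phase, 2 * np.pi) / (4 * np.pi * mod_freq_hz)
    return distance, amplitude, offset
```

    Deficiencies in waveform shape or duty cycle, as noted above, show up as harmonics that violate this formulation's sinusoidal assumption, which is one reason the characterization data matter.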

    3D Cameras: 3D Computer Vision of Wide Scope

    Of all the human senses, vision gathers the most information we receive. Evolution has optimized our visual system to negotiate three-dimensional, even cluttered, environments. For perceiving 3D information, the human brain uses three important principles: stereo vision, motion parallax, and a-priori knowledge of how an object's perspective appearance depends on its distance. These tasks have posed a challenge to computer vision for decades. Today the most common techniques for 3D sensing are based on CCD or CMOS cameras, laser scanners, or 3D time-of-flight cameras. Even though evolution favours passive stereo vision, three problems remain for passive 3D perception compared with the two active vision systems mentioned above. First, the computation is demanding, since correspondences between two images taken from different viewpoints have to be found. Second, distances to structureless surfaces cannot be measured if the perspective projection of the object is larger than the camera's field of view; this is often called the aperture problem. Finally, a passive visual sensor has to cope with shadowing effects and changes in illumination over time. That is why mostly active vision systems like laser scanners are used for mapping purposes, e.g. [Thrun et al., 2000], [Wulf & Wagner, 2003], [Surmann et al., 2003]. But these approaches are usually not applicable to tasks involving environment dynamics. Due to this restriction, 3D cameras [CSEM SA, 2007], [PMDTec, 2007] have attracted attention since their invention nearly a decade ago. Their distance measurements are also based on a time-of-flight principle, but with an important difference: instead of serially sampling laser beams to acquire distance data point-wise, the entire scene is measured in parallel with modulated light. This principle allows for higher frame rates and thus enables the consideration of environment dynamics. The first part of this chapter discusses the physical principles of 3D sensors commonly used in the robotics community for typical problems like mapping and navigation. The second part concentrates on 3D cameras, their assets, drawbacks and perspectives. Based on these examining parts, solutions are then discussed for common problems occurring in dynamic environments with changing lighting conditions. Finally, the last part of the chapter shows how 3D cameras can be applied to mapping, object localization and feature tracking tasks.
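    The computational burden attributed to passive stereo lies in the correspondence search; once a disparity has been found, depth itself follows from trivial triangulation. A one-function reminder of this, assuming a rectified pinhole pair (the numbers are illustrative, not from the chapter):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Classic rectified-stereo triangulation: depth = f * B / d.
    Finding the disparity d requires a per-pixel correspondence search
    along the epipolar line, which is the costly step the text refers to."""
    if disparity_px <= 0:
        raise ValueError("no valid correspondence (zero disparity)")
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, baseline 0.12 m, disparity 14 px -> 6.0 m
print(stereo_depth(700, 0.12, 14))
```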

    Design and construction of a configurable full-field range imaging system for mobile robotic applications

    Mobile robotic devices rely critically on extrospection sensors to determine the range to objects in the robot’s operating environment. These give the robot the ability both to navigate safely around obstacles and to map its environment, and hence facilitate path planning and navigation. There is a requirement for a full-field range imaging system that can determine the range to any obstacle in a camera lens’ field of view accurately and in real time. This paper details the development of a portable full-field ranging system whose bench-top version has demonstrated sub-millimetre precision. However, that precision required non-real-time acquisition rates and expensive hardware. By iterative replacement of components, a portable, modular and inexpensive version of this full-field ranger has been constructed, capable of real-time operation with a user-defined trade-off in precision.
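    The abstract does not quantify the precision/speed trade-off, but for shot-noise-limited range imagers precision typically improves with the square root of the integration time per range image. A toy model of such a user-defined trade-off (the scaling law is an assumption for illustration, not a result from the paper):

```python
from math import sqrt

def range_std_mm(base_std_mm, base_frame_hz, frame_hz):
    """Toy shot-noise model: halving the frame rate doubles the
    integration time per range image, improving precision by sqrt(2).
    base_std_mm is the precision measured at base_frame_hz."""
    return base_std_mm * sqrt(frame_hz / base_frame_hz)

# e.g. 5 mm at 30 fps -> ~1.3 mm at 2 fps (slower but more precise)
print(f"{range_std_mm(5.0, 30.0, 2.0):.2f} mm")
```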

    Range imager performance comparison in homodyne and heterodyne operating modes

    Range imaging cameras measure depth simultaneously for every pixel in a given field of view. In most implementations the basic operating principles are the same: a scene is illuminated with an intensity-modulated light source, and the reflected signal is sampled using a gain-modulated imager. Previously we presented a unique heterodyne range imaging system that employed a bulky and power-hungry image intensifier as the high-speed gain-modulation mechanism. In this paper we present a new range imager using an internally modulated image sensor that is designed to operate in heterodyne mode, but can also operate in homodyne mode. We discuss homodyne and heterodyne range imaging, and the merits of the various types of hardware used to implement these systems. Following this we describe in detail the hardware and firmware components of our new ranger. We experimentally compare the two operating modes and demonstrate that heterodyne operation is less sensitive to some of the limitations suffered in homodyne mode, resulting in better linearity and ranging precision. We conclude by showing various qualitative examples that demonstrate the system’s three-dimensional measurement performance.
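    In heterodyne operation the sensor gain is modulated at a slightly different frequency from the illumination, so each pixel's correlation sweeps through every phase offset at the low beat frequency, and the range-dependent phase can be extracted with a single-bin DFT over whole beat cycles. A minimal sketch of that demodulation (the array shapes and the function itself are assumptions for illustration, not the authors' firmware):

```python
import numpy as np

def heterodyne_phase(frames, frames_per_beat):
    """Recover per-pixel phase from a heterodyne range-image sequence.

    frames          -- (T, H, W) stack covering whole beat cycles; each
                       pixel traces a sinusoid at the beat frequency
    frames_per_beat -- samples per beat period (T must be a multiple)
    """
    t = np.arange(frames.shape[0])
    # Correlate against one cycle of the beat frequency (single-bin DFT).
    ref = np.exp(-2j * np.pi * t / frames_per_beat)
    bin_ = np.tensordot(ref, frames, axes=(0, 0))
    return np.angle(bin_)  # phase shift, proportional to range
```

    Because every phase offset is visited each beat cycle, fixed waveform imperfections tend to average out, which is consistent with the better linearity reported for heterodyne mode.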

    Efficient and Fast Implementation of Embedded Time-of-Flight Ranging System Based on FPGAs
