5 research outputs found

    A novel approach to robot vision using a hexagonal grid and spiking neural networks

    Many robots use range data to obtain an almost 3-dimensional description of their environment. Feature-driven segmentation of range images has been used primarily for 3D object recognition, and hence the accuracy of the detected features is a prominent issue. Inspired by the structure and behaviour of the human visual system, we present an approach to feature extraction in range data using spiking neural networks and a biologically plausible hexagonal pixel arrangement. Standard digital images are converted into a hexagonal pixel representation and then processed using a spiking neural network with hexagonally shaped receptive fields; this approach is a step towards developing a robotic eye that closely mimics the human eye. The performance is compared with receptive fields implemented on standard rectangular images. Results illustrate that hexagonally shaped receptive fields improve performance over standard rectangular-shaped receptive fields.
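    The abstract does not specify how square-pixel images are converted to a hexagonal representation; a common approximation offsets alternate rows by half a pixel. A minimal sketch of that resampling step (the function name and the nearest-pair interpolation are assumptions, not the paper's actual method):

    ```python
    import numpy as np

    def to_hex_grid(img):
        """Approximate a hexagonal lattice from a square-pixel image.

        Odd rows are shifted half a pixel horizontally by averaging each
        pixel with its right neighbour (linear interpolation at x + 0.5),
        so every pixel centre sits between the two centres above it.
        """
        hex_img = img.astype(float).copy()
        # Shift odd rows by half a pixel: average adjacent columns.
        hex_img[1::2, :-1] = 0.5 * (img[1::2, :-1] + img[1::2, 1:])
        return hex_img

    img = np.arange(16, dtype=float).reshape(4, 4)
    hex_img = to_hex_grid(img)
    ```

    Even rows are left untouched, so the output keeps the original shape while approximating the staggered pixel centres of a hexagonal grid.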

    3D-deflectometry: fast nanotopography measurement for the semiconductor industry


    Design of an Active Stereo Vision 3D Scene Reconstruction System Based on the Linear Position Sensor Module

    Active vision systems and passive vision systems currently exist for three-dimensional (3D) scene reconstruction. Active systems use a laser that interacts with the scene. Passive systems implement stereo vision, using two cameras and geometry to reconstruct the scene. Each type of system has advantages and disadvantages in resolution, speed, and scene depth. It may be possible to combine the advantages of both systems with new hardware technologies such as position sensitive devices (PSDs) and field programmable gate arrays (FPGAs) to create a real-time, mid-range 3D scene reconstruction system. Active systems usually reconstruct long-range scenes so that a measurable amount of time can pass for the laser to travel to the scene and back. Passive systems usually reconstruct close-range scenes but must overcome the correspondence problem. If PSDs are placed in a stereo vision configuration and a laser is directed at the scene, the correspondence problem can be eliminated. The laser can scan the entire scene as the PSDs continually pick up points, and the scene can be reconstructed. By eliminating the correspondence problem, much of the computation time of stereo vision is removed, allowing larger scenes, possibly at mid-range, to be modeled. To give good resolution at a real-time frame rate, points would have to be recorded very quickly. PSDs are analog devices that give the position of a light spot and have very fast response times. The cameras in the system can be replaced by PSDs to help achieve real-time refresh rates and better resolution. A contribution of this thesis is to design a 3D scene reconstruction system by placing two PSDs in a stereo vision configuration and to use FPGAs to perform calculations to achieve real-time frame rates of mid-range scenes. The linear position sensor module (LPSM) made by Noah Corp is based on a PSD and outputs a position in terms of voltage.
The LPSM is characterized for this application by testing it with lasers of different powers while also varying environment variables such as background light, scene type, and scene distance. It is determined that the LPSM is sensitive to red-wavelength lasers. When the laser is reflected off diffuse surfaces, the laser must output at least 500 mW to be picked up by the LPSM and the scene must be within 15 inches, or the power intensity will not meet the intensity requirements of the LPSM. The establishment of these performance boundaries is a contribution of the thesis, along with characterizing and testing the LPSM as a vision sensor in the proposed scene reconstruction system. Once performance boundaries are set, the LPSM is used to model calibrated objects. LPSM sensitivity to power intensity changes seems to cause considerable error. The change in power appears to be a function of depth due to the dispersion of the laser beam. The model is improved by using a correction factor to find the position of the light spot. Using a better-focused laser may improve the results. Another option is to place two PSDs in the same configuration and test whether the intensity problem is intrinsic to all PSDs or unique to the LPSM.
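    Once each PSD reports the laser-spot position directly, the correspondence-free stereo measurement described above reduces to standard triangulation. A minimal sketch of that depth computation under a rectified pinhole model (the function name, units, and geometry are assumptions, not the thesis's actual FPGA implementation):

    ```python
    def triangulate_depth(x_left, x_right, baseline, focal_length):
        """Depth of a laser spot from two 1-D position readings.

        x_left, x_right: spot positions on two sensors (same units as
        focal_length) separated by `baseline`. With rectified sensors the
        standard stereo relation applies: Z = f * b / disparity. Because
        the laser produces a single bright spot, there is no
        correspondence search -- each PSD reports the position directly.
        """
        disparity = x_left - x_right
        if disparity == 0:
            raise ValueError("zero disparity: point at infinity")
        return focal_length * baseline / disparity

    # Example: f = 8 mm, baseline = 100 mm, disparity = 2 mm -> Z = 400 mm
    z = triangulate_depth(5.0, 3.0, 100.0, 8.0)
    ```

    The depth-dependent intensity error the thesis reports would enter here as noise on `x_left` and `x_right`, which is why a position correction factor improves the model.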

    High-speed acquisition of range images

    We introduce a smart image sensor, the PSD-chip, designed for sheet-of-light range imaging. The sensor area consists of an array of position sensitive detector (PSD)-strips. The on-chip signal processing electronics is built up from both analog and digital circuitry. Our aim is to be able to record 1.5 million range values (rangels) per second at a 12-bits resolution. © 1996 IEEE
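    For context, sheet-of-light range imaging recovers one depth profile per laser-plane position: each PSD strip reports where the lit profile falls on it, and intersecting the viewing ray with the known laser plane gives the range. A minimal sketch of that geometry (the camera-centred layout and symbols are assumptions; the chip's on-chip analog/digital processing is not modelled):

    ```python
    import math

    def sheet_of_light_range(u, focal_length, baseline, laser_angle_rad):
        """Range from sheet-of-light triangulation for one PSD strip.

        A laser plane, tilted `laser_angle_rad` from the optical axis and
        passing through a point `baseline` to the side of the camera,
        illuminates the scene. The strip reports the image offset `u` of
        the lit profile. Intersecting the viewing ray x = (u/f) * Z with
        the plane x = b - Z * tan(theta) gives:

            Z = b / (u/f + tan(theta))
        """
        return baseline / (u / focal_length + math.tan(laser_angle_rad))

    # With f = 8 mm, b = 100 mm, theta = 45 deg, a profile on the optical
    # axis (u = 0) lies at Z = 100 mm.
    z = sheet_of_light_range(0.0, 8.0, 100.0, math.pi / 4)
    ```

    Scanning the laser plane across the scene and reading every strip at each position is what lets an array of PSD strips approach the sensor's stated goal of 1.5 million range values per second.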

    The PSD chip - high-speed acquisition of range images
