    A versatile sensor interface for programmable vision systems-on-chip

    This paper describes an optical sensor interface designed for a programmable mixed-signal vision chip. The chip has been designed and manufactured in a standard 0.35 μm n-well CMOS technology with one poly layer and five metal layers. It contains a digital shell for control and data interchange, and a central array of 128 × 128 identical cells, each cell corresponding to a pixel. The die size is 11.885 × 12.230 mm² and the cell size is 75.7 μm × 73.3 μm. Each cell contains 198 transistors dedicated to functions such as processing, storage, and sensing. The system is oriented to real-time, single-chip image acquisition and processing. Since each pixel performs the basic functions of sensing, processing, and storage, data transfers are fully parallel (image-wide). The programmability of the processing functions enables the realization of complex image processing functions through the sequential application of simpler operations. This paper provides a general overview of the system architecture and functionality, with special emphasis on the optical interface. European Commission IST-1999-19007; Office of Naval Research (USA) N00014021088.
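
    As a rough software illustration of the pixel-parallel processing style described above, the sketch below composes a more complex operation from a sequence of simple local steps applied to every cell of a 128 × 128 array at once. The 3 × 3 kernels, the step sequence, and the random test frame are assumptions for illustration, not the chip's actual instruction set.

```python
import numpy as np

# Software model of pixel-parallel processing on a 128x128 cell array: every
# pixel applies the same local operation simultaneously (SIMD style), and a
# more complex function is built by sequencing simple steps.

def conv3x3(img, kernel):
    """Apply a 3x3 neighbourhood operation to every pixel (replicated borders)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

rng = np.random.default_rng(0)
frame = rng.random((128, 128))                       # stand-in for the sensed image

smooth = conv3x3(frame, np.full((3, 3), 1 / 9))      # step 1: low-pass
edges = np.abs(conv3x3(smooth, np.array([[0, 1, 0],
                                         [1, -4, 1],
                                         [0, 1, 0]], float)))   # step 2: Laplacian
binary = (edges > edges.mean() + edges.std()).astype(np.uint8)  # step 3: threshold
print("edge pixels:", int(binary.sum()))
```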

    ACE 16k based stand-alone system for real-time pre-processing tasks

    This paper describes the design of a programmable stand-alone system for real-time vision pre-processing tasks. The system's architecture has been implemented and tested using an ACE16k chip and a Xilinx xc4028xl FPGA. The ACE16k chip consists essentially of an array of 128 × 128 identical, locally interacting mixed-signal processing units that operate in accordance with single instruction, multiple data (SIMD) computing architectures; the chip has been designed for high-speed image pre-processing tasks requiring moderate accuracy levels (7 bits). The input images are acquired using the optical input capabilities of the ACE16k chip and, after being processed according to a programmed algorithm, are displayed in real time on a TFT screen. The system is designed to store and run different algorithms and to allow changes and improvements. Its main board includes a digital core, implemented on a Xilinx 4028-series FPGA, which comprises a custom programmable control unit, a digital monochrome PAL video generator, and an image memory selector. Video SRAM chips are included to store and access images processed by the ACE16k. Two daughter boards hold the program SRAM, and a video DAC-mixer card is used to generate the composite analog video signal. European Commission IST2001-38097; Ministerio de Ciencia y Tecnología TIC2003-09817-C02-01; Office of Naval Research (USA) N00014021088.
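
    The control flow described above (acquire a frame, process it according to a stored algorithm, hand the result to the display path) can be sketched in software as follows. The function names, the two-step example program, and the 7-bit value range are assumptions; the actual FPGA control unit, PAL generator, and memory selector are hardware blocks not reproduced here.

```python
import numpy as np

# Sketch of the acquire / process / display loop sequenced by the control unit.
# acquire_frame, the two-step PROGRAM and the 7-bit range are illustrative only.

def acquire_frame():
    """Stand-in for the ACE16k optical input (7-bit accuracy)."""
    return np.random.default_rng(1).integers(0, 128, size=(128, 128), dtype=np.uint8)

def threshold(img, level=64):
    return np.where(img > level, 127, 0).astype(np.uint8)

def invert(img):
    return (127 - img).astype(np.uint8)

PROGRAM = [threshold, invert]            # stored algorithm stepped through per frame
video_sram = np.zeros((128, 128), np.uint8)

for _ in range(3):                       # a few iterations of the processing loop
    frame = acquire_frame()
    for op in PROGRAM:
        frame = op(frame)
    video_sram[:] = frame                # the memory selector would feed this to the video DAC
print("last frame mean:", float(video_sram.mean()))
```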

    Optical processing for distributed sensors in control of flexible spacecraft

    The recent potential of distributed image processing is discussed, with emphasis on applications in the control of flexible spacecraft. Devices are currently being developed at NASA and in universities and industry that allow the real-time processing of holographic images. Within five years, it is expected that holographic images may be added or subtracted in real time at optical accuracy. Images are stored and processed in crystal media, and the accuracy of their storage and processing is dictated by the grating level of laser holograms; it is far greater than that achievable using current analog-to-digital, pixel-oriented image digitizing and computing techniques. Processors using image processing algebra can conceptually be designed to mechanize Fourier transforms, least-squares lattice filters, and other complex control system operations. Thus, actuator command inputs derived from complex control laws involving distributed holographic images can be generated by such an image processor. Plans are presented for the development of a Conjugate Optics Processor for control of a flexible object.
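
    The optical processor itself cannot be rendered in code, but the image-algebra idea it is meant to mechanize can be sketched digitally: subtract a reference image of the structure from the current image, project the difference onto assumed mode shapes, and form actuator commands from the modal amplitudes. The grid size, mode shapes, and gains below are purely illustrative assumptions.

```python
import numpy as np

# Digital analogue of the image-algebra control idea: image subtraction followed
# by projection onto structural mode shapes. All quantities are assumed.

rng = np.random.default_rng(7)
N = 64
x = np.linspace(0, np.pi, N)
modes = np.stack([np.outer(np.sin(k * x), np.sin(k * x)).ravel() for k in (1, 2, 3)])
modes /= np.linalg.norm(modes, axis=1, keepdims=True)

reference = np.zeros(N * N)                        # image of the undeformed structure
true_amplitudes = np.array([0.8, -0.3, 0.1])       # hidden structural deflection
current = reference + true_amplitudes @ modes + 0.01 * rng.normal(size=N * N)

difference = current - reference                   # "image subtraction" step
amplitudes = modes @ difference                    # projection onto mode shapes
gains = np.array([2.0, 1.5, 1.0])                  # assumed control gains
actuator_commands = -gains * amplitudes
print("estimated modal amplitudes:", np.round(amplitudes, 3))
print("actuator commands:", np.round(actuator_commands, 3))
```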

    Accuracy of a video odometry system for trains

    Reliable Data Systems is developing a video-based odometry system that enables trains to measure velocities and distances travelled without the need for trackside infrastructure. A camera is fixed in the cab, taking images of the track immediately ahead at rates in the range 25–50 frames per second. The images in successive frames are ‘unwarped’ to provide a plan view of the track and then matched, to produce an ‘optical flow’ that measures the distance travelled. The Study Group was asked to investigate ways of putting bounds on the accuracy of such a system and to suggest any improvements that might be made. The work performed during the week followed three strands: (a) an understanding of how deviations from the camera’s calibrated position lead to errors in the train’s calculated position and velocity; (b) development of models for the train suspension, designed to place bounds on these deviations; and (c) an assessment of the performance of the associated image processing algorithms.
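
    A minimal sketch of the frame-matching step is given below: it estimates the shift between two already-unwarped plan-view frames by phase correlation and converts pixels and frames into metres per second. The metres-per-pixel calibration and the frame rate are assumed values, and this is not Reliable Data Systems' actual algorithm.

```python
import numpy as np

# Estimate the frame-to-frame shift of unwarped plan-view images by phase
# correlation, then convert to speed. Calibration values are assumed.

def phase_correlation_shift(a, b):
    """Return the integer (dy, dx) translation that best aligns b onto a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                     # wrap large shifts to negative offsets
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(2)
prev = rng.random((256, 256))                    # previous unwarped plan-view frame
curr = np.roll(prev, shift=7, axis=0)            # simulate 7 px of travel along the track

dy, _ = phase_correlation_shift(prev, curr)
METRES_PER_PIXEL = 0.004                         # assumed camera calibration
FPS = 50                                         # within the quoted 25-50 fps range
print(f"estimated speed: {abs(dy) * METRES_PER_PIXEL * FPS:.2f} m/s")
```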

    Real time wavefront control system for the Large Synoptic Survey Telescope (LSST)

    The LSST is an integrated, ground-based survey system designed to conduct a decade-long time-domain survey of the optical sky. It consists of an 8-meter-class wide-field telescope, a 3.2 Gpixel camera, and an automated data processing system. In order to realize the scientific potential of the LSST, its optical system has to provide excellent and consistent image quality across the entire 3.5-degree field of view. The purpose of the Active Optics System (AOS) is to optimize the image quality by controlling the surface figures of the telescope mirrors and maintaining the relative positions of the optical elements. The basic challenge of the wavefront-sensor feedback loop for an LSST-type three-mirror telescope is the near degeneracy of the influence function linking optical degrees of freedom to the measured wavefront errors. Our approach to mitigating this problem is modal control, in which a limited number of modes (combinations of optical degrees of freedom) are operated at the sampling rate of the wavefront sensing, while the control bandwidth for the barely observable modes is significantly lower. The paper presents a control strategy based on linear approximations to the system, and the verification of this strategy against system requirements by simulations using more complete, non-linear models of the LSST optics and the curvature wavefront sensors.
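
    The modal-control idea can be illustrated with a small linear sketch: take a sensitivity matrix linking optical degrees of freedom to wavefront measurements, split its singular modes into well-observed and barely observable sets, and apply a much lower gain to the latter. The matrix dimensions, gains, and noise level below are assumptions, not LSST's actual sensitivity data.

```python
import numpy as np

# Modal control with a near-degenerate influence matrix: wavefront errors y
# relate to optical degrees of freedom x through y = A x; the SVD of A separates
# well-observed modes (full gain) from barely observable ones (low gain).

rng = np.random.default_rng(3)
n_wfs, n_dof = 40, 10
A = rng.normal(size=(n_wfs, n_dof))
A[:, -2:] = 1e-3 * rng.normal(size=(n_wfs, 2))      # two nearly unobservable directions

U, s, Vt = np.linalg.svd(A, full_matrices=False)
gains = np.where(s > 0.1 * s.max(), 0.5, 0.01)      # strong vs. weak mode gains

x_true = rng.normal(size=n_dof)                     # hidden misalignment state
y = A @ x_true + 0.01 * rng.normal(size=n_wfs)      # measured wavefront errors

modal_coeffs = (U.T @ y) / s                        # reconstruct modal amplitudes
correction = Vt.T @ (gains * modal_coeffs)          # apply per-mode gains
print("residual after one step:", float(np.linalg.norm(A @ (x_true - correction))))
```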

    Eco-intelligent monitoring for fouling detection in clean-in-place

    Clean-in-place (CIP) is a widely used technique for cleaning industrial equipment without disassembly. Cleaning protocols are currently defined arbitrarily from offline measurements, which can lead to excessive consumption of resources (water and chemicals) and downtime, further increasing environmental impacts. An optical monitoring system has been developed to assist eco-intelligent CIP process control and improve resource efficiency. The system includes a UV optical fouling monitor designed for real-time image acquisition and processing. The output of the monitoring is such that it can support further intelligent decision-support tools for automatic cleaning assessment during CIP phases. This system reduces energy and water consumption whilst minimising non-productive time, the largest economic cost of CIP.
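
    A hedged sketch of the kind of metric such a monitor could feed to a decision-support tool is shown below: the fraction of the inspected surface whose UV-image intensity exceeds a fouling threshold, tracked over successive rinse cycles. The threshold, stopping criterion, and synthetic images are assumptions; the paper does not specify its algorithm.

```python
import numpy as np

# Fraction of the inspected surface classified as fouled in a normalised UV
# image, tracked across rinse cycles; threshold and acceptance level are assumed.

def fouling_coverage(uv_image, threshold=0.3):
    """Return the fraction of pixels whose intensity exceeds the fouling threshold."""
    return float((uv_image > threshold).mean())

rng = np.random.default_rng(4)
# Synthetic frames whose fouling signal fades as cleaning progresses.
frames = [np.clip(rng.random((240, 320)) * (1.0 - 0.2 * t), 0.0, 1.0) for t in range(5)]

for t, frame in enumerate(frames):
    coverage = fouling_coverage(frame)
    clean_enough = coverage < 0.05               # assumed acceptance level
    print(f"rinse cycle {t}: coverage {coverage:.1%}, stop CIP: {clean_enough}")
```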

    Real-Time Vision System for License Plate Detection and Recognition on FPGA

    Rapid development of the Field Programmable Gate Array (FPGA) offers an alternative way to provide acceleration for computationally intensive tasks such as digital signal and image processing. Its ability to perform parallel processing shows its potential for implementing a high-speed vision system. Among the numerous applications of computer vision, this paper focuses on the hardware implementation of one that is commercially known as Automatic Number Plate Recognition (ANPR). Morphological operations and Optical Character Recognition (OCR) algorithms have been implemented on a Xilinx Zynq-7000 All-Programmable SoC to realize the functions of an ANPR system. Test results have shown that the designed and implemented processing pipeline, which consumed 63% of the logic resources, is capable of delivering results with a relatively low error rate. Most importantly, the computation time satisfies the real-time requirement for many ANPR applications.
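
    The morphological stage of an ANPR pipeline can be sketched in software as below: binarise the frame, apply a morphological opening to suppress speckle, and keep connected regions whose shape looks plate-like. The structuring-element size and the aspect-ratio test are assumptions, the OCR stage is not shown, and the paper's FPGA pipeline itself is not reproduced here.

```python
import numpy as np
from scipy import ndimage

# Morphological stage of a plate detector: binarise, open to suppress speckle,
# then keep connected regions with a plate-like aspect ratio. Parameters assumed.

rng = np.random.default_rng(5)
img = 0.6 * rng.random((120, 200))            # synthetic frame: dark background
img[50:70, 40:160] = 1.0                      # bright, plate-like rectangle
img[10, 10] = img[100, 180] = 1.0             # isolated bright speckle

binary = img > 0.8
opened = ndimage.binary_opening(binary, structure=np.ones((3, 9)))  # removes the speckle
labels, n = ndimage.label(opened)

for region in range(1, n + 1):
    ys, xs = np.nonzero(labels == region)
    h, w = ys.max() - ys.min() + 1, xs.max() - xs.min() + 1
    if 2.0 < w / h < 8.0 and w > 40:          # assumed plate aspect-ratio/size test
        print(f"candidate plate: rows {ys.min()}-{ys.max()}, cols {xs.min()}-{xs.max()}")
```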

    Astronomical Image Compression Techniques Based on ACC and KLT Coder

    This paper deals with the compression of image data in astronomy applications. Astronomical images have specific properties: high grayscale bit depth, large size, noise occurrence, and special processing algorithms. They belong to the class of scientific images, and their processing and compression are quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as the source of the test signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and for searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of the statistical properties of the image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed: the multimedia format JPEG2000 and HCOMPRESS, which was designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.
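
    The first method (KLT with uniform quantization in the spectral domain) can be sketched compactly: learn a KLT basis from 8 × 8 image blocks, quantize the transform coefficients uniformly, and measure the reconstruction error. The block size, the number of retained components, the quantization step, and the synthetic 16-bit frame are assumed values, not the paper's settings.

```python
import numpy as np

# KLT of 8x8 blocks with uniform quantization of the spectral coefficients,
# applied to a synthetic 16-bit frame standing in for a real BOOTES image.

rng = np.random.default_rng(6)
image = rng.integers(0, 65536, size=(256, 256)).astype(np.float64)

B = 8                                               # block size
blocks = image.reshape(256 // B, B, 256 // B, B).swapaxes(1, 2).reshape(-1, B * B)

mean = blocks.mean(axis=0)
cov = np.cov(blocks - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)              # KLT basis learned from the data
basis = eigvecs[:, ::-1][:, :16]                    # keep the 16 strongest components

step = 256.0                                        # uniform quantization step
coeffs = (blocks - mean) @ basis
quantized = np.round(coeffs / step)

reconstructed = (quantized * step) @ basis.T + mean
rmse = np.sqrt(np.mean((reconstructed - blocks) ** 2))
print(f"RMSE after KLT + uniform quantization: {rmse:.1f} ADU")
```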

    Design and Implementation of a Scalable Hardware Platform for High Speed Optical Tracking

    Optical tracking has been an important subject of research for several decades. Optical tracking systems are used in a wide range of areas, including the military, medicine, industry, and entertainment. In this thesis a complete hardware platform that targets high-speed optical tracking applications is presented. The implemented hardware system contains three main components: a high-speed camera equipped with a 1.3-megapixel image sensor capable of operating at 500 frames per second, a CameraLink grabber able to interface three cameras, and an FPGA + dual-DSP based image processing platform. The hardware system is designed using a modular approach, and the flexible architecture enables the construction of a scalable optical tracking system that allows a large number of cameras to be used in the tracking environment. One of the greatest challenges in a multi-camera optical tracking system is the huge amount of image data that must be processed in real time. In this thesis, a study of FPGA-based high-speed image processing is performed, and the FPGA implementation of a number of image processing operators is described. How to exploit different levels of parallelism in the algorithms to achieve high processing throughput is explained in detail. This thesis also presents a new single-pass blob analysis algorithm; with an optimized FPGA implementation, the geometrical features of a large number of blobs can be calculated in real time. At the end of the thesis, a prototype design that integrates all the implemented hardware and software modules is demonstrated to prove the usability of the proposed optical tracking system.
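
    The single-pass blob analysis idea can be illustrated in software: one raster scan assigns provisional labels, records label equivalences with union-find, and accumulates per-label area and centroid sums, so blob features are merged per equivalence class without re-reading the image. This is a generic rendering of the idea, not the thesis' optimized FPGA design.

```python
import numpy as np

# Single raster scan assigns provisional labels, records equivalences with
# union-find, and accumulates area and coordinate sums per label; blob features
# are then merged per equivalence class without re-reading the image.

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]               # path halving
        i = parent[i]
    return i

def blob_features(binary):
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent, area, sy, sx = [0], [0], [0], [0]       # index 0 is the background
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            if up == 0 and left == 0:               # start a new provisional label
                lbl = len(parent)
                parent.append(lbl); area.append(0); sy.append(0); sx.append(0)
            else:
                lbl = max(up, left)
                if up and left and up != left:      # record label equivalence
                    parent[find(parent, up)] = find(parent, left)
            labels[y, x] = lbl
            area[lbl] += 1; sy[lbl] += y; sx[lbl] += x
    blobs = {}
    for lbl in range(1, len(parent)):               # merge statistics per root label
        r = find(parent, lbl)
        a, cy, cx = blobs.get(r, (0, 0, 0))
        blobs[r] = (a + area[lbl], cy + sy[lbl], cx + sx[lbl])
    return [(a, cy / a, cx / a) for a, cy, cx in blobs.values()]

test = np.zeros((8, 10), bool)
test[1:3, 1:4] = True                               # first blob
test[5:7, 6:9] = True                               # second blob
for a, cy, cx in blob_features(test):
    print(f"area={a}, centroid=({cy:.1f}, {cx:.1f})")
```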