
    Development of an image converter of radical design

    A long-term investigation of thin-film sensors, monolithic photo-field-effect transistors, and epitaxially diffused phototransistors and photodiodes, aimed at producing an acceptable all-solid-state, electronically scanned imaging system, led to an advanced engineering-model camera that employs a 200,000-element phototransistor array (organized in a matrix of 400 rows by 500 columns) to achieve resolution comparable to commercial television. The full investigation is described for the period July 1962 through July 1972 and covers the following broad topics in detail: (1) sensor monoliths; (2) fabrication technology; (3) functional theory; (4) system methodology; and (5) deployment profile. A summary of the work and conclusions is given, along with extensive schematic diagrams of the final solid-state imaging system product.

    Design Of Neural Network Circuit Inside High Speed Camera Using Analog CMOS 0.35 µm Technology

    Analog VLSI on-chip learning neural networks represent a mature technology for a large number of applications involving industrial as well as consumer appliances. This is particularly the case when low power consumption, small size, and/or very high speed are required. This approach exploits the computational features of neural networks, the implementation efficiency of analog VLSI circuits, and the adaptation capabilities of the on-chip learning feedback scheme. High-speed video cameras are powerful tools for investigating, for instance, biomechanics or the movements of mechanical parts in manufacturing processes. In recent years, the use of CMOS sensors instead of CCDs has enabled the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this paper, we propose a high-speed smart camera based on a CMOS sensor with an embedded analog neural network.
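
    The abstract does not detail the learning circuitry; the sketch below is a minimal behavioral model, assuming a delta-rule (gradient) update, of how an on-chip feedback loop might adapt weights toward a target response. All names and constants are illustrative.

        import numpy as np

        # Behavioral sketch of an on-chip learning feedback loop (assumption:
        # the paper's circuit is not specified in this abstract; a delta-rule
        # update is used purely to illustrate analog weight adaptation).
        def adapt_step(w, x, target, lr=0.1):
            """One feedback step: nudge weights so the output tracks the target."""
            y = np.tanh(w @ x)                   # tanh models a sigmoidal analog stage
            err = target - y                     # error signal fed back to the updater
            w = w + lr * err * (1.0 - y**2) * x  # gradient of squared error through tanh
            return w, y

        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.1, size=4)
        x = np.array([0.5, -0.2, 0.8, 0.1])
        for _ in range(500):
            w, y = adapt_step(w, x, target=0.7)
        print(round(float(y), 3))  # converges toward 0.7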

    Synthetic retina for AER systems development

    Neuromorphic engineering tries to mimic biology in information processing. Address-Event Representation (AER) is a neuromorphic communication protocol for spiking neurons in different layers. AER bio-inspired image sensors are called “retinas”. These sensors measure visual information from real life without using frames, and generate corresponding events. In this paper we provide an alternative to these image sensors, based on a cheap FPGA, that takes images from an analog video source (composite video signal), digitizes them, and generates AER streams for testing purposes. Junta de Andalucía P06-TIC-01417; Ministerio de Educación y Ciencia TEC2006-11730-C03-0
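
    As a rough software analogue of the frame-to-AER conversion just described (the abstract does not give the FPGA's actual encoding; the rate-coded scheme, event budget, and frame below are assumptions):

        import numpy as np

        # Illustrative model of frame-to-AER conversion (assumption: a simple
        # intensity-ordered, rate-coded scheme; not the paper's actual encoding).
        def frame_to_aer(frame, max_events_per_pixel=15):
            """Emit (row, col) address events; brighter pixels fire more events."""
            counts = (frame.astype(np.float64) / 255.0 * max_events_per_pixel).astype(int)
            events = []
            for n in range(1, max_events_per_pixel + 1):
                ys, xs = np.nonzero(counts >= n)   # pixels still owing an event this pass
                events.extend(zip(ys.tolist(), xs.tolist()))
            return events

        frame = np.array([[0, 128], [255, 64]], dtype=np.uint8)
        print(frame_to_aer(frame))  # (1, 0) fires most often, (0, 0) never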

    From Vision Sensor to Actuators, Spike Based Robot Control through Address-Event-Representation

    One field of neuroscience is neuroinformatics, whose aim is to develop auto-reconfigurable systems that mimic the human body and brain. In this paper we present a neuro-inspired, spike-based mobile robot: from cheap commercial vision sensors whose output is converted into spike information, through spike filtering for object recognition, to spike-based motor control models. A two-wheel mobile robot powered by DC motors is autonomously controlled to follow a line drawn on the floor. This spiking system has been developed around the well-known Address-Event-Representation mechanism to communicate between the different neuro-inspired layers of the system. The RTC lab has developed all the components presented in this work, from the vision sensor to the robot platform and the FPGA-based platforms for AER processing. Ministerio de Ciencia e Innovación TEC2006-11730-C03-02; Junta de Andalucía P06-TIC-0141
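
    A minimal sketch of how spike counts could drive differential steering for line following; the function, gains, and left/right split below are illustrative assumptions, not the paper's actual motor model.

        # Toy spike-driven steering: steer toward the side of the visual field
        # that emits more line events (assumption: events arrive as (row, col)
        # AER addresses; constants are illustrative).
        def wheel_commands(events, sensor_width=128, base_speed=0.6, gain=0.004):
            """Map left/right event counts to (left wheel, right wheel) speeds."""
            left = sum(1 for (y, x) in events if x < sensor_width // 2)
            right = len(events) - left
            turn = gain * (right - left)          # positive -> line is to the right
            return base_speed + turn, base_speed - turn

        events = [(10, 100), (11, 101), (12, 99), (13, 40)]  # mostly right-side events
        print(wheel_commands(events))  # left wheel speeds up, robot turns right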

    Design and development of four to sixteen channel video multiplexers

    Video multiplexer series were successfully designed and built for prototyping and evaluation, in terms of both hardware and software. The hardware platform was designed to accommodate up to sixteen color video input channels for time-lapse or real-time recording on a single video cassette recorder. This product implements four modes of operation: Live, Record, Playback, and Menu. Menu mode is not a full mode of operation; it is a series of on-screen programming menus that appears within the Live, Record, and Playback modes and enables the user to program the machine for specific applications. For video encoding, a new video-capture processor, the Bt819 from Brooktree, was chosen to minimize the cost and system overhead of adding video input and capture to PC video/graphics systems. This device employs the firm's time-tested digital Ultralock technology to generate the required number of pixels per line using a fixed-frequency clock. On-chip pixel buffering and image scaling are provided for the QUAD picture on a monitor. Inter-Integrated Circuit (I2C) communication was chosen to talk to this chip directly. The video syncs were generated by a PIC microcontroller programmed in assembly language, designed for a 13.5 MHz (74 ns) clock rate following the NTSC CCIR-601 digital video standard. The alarm package design was based on linked-list programming and was tested on four separate video signals.
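
    A quick check of the quoted timing: a 13.5 MHz sample clock gives the 74 ns period mentioned above, and CCIR-601's 858 total samples per line (for 525-line systems) recovers the NTSC line duration.

        # Verify the CCIR-601 timing figures quoted in the abstract.
        f_clk = 13.5e6                        # CCIR-601 luma sampling rate, Hz
        t_pixel = 1.0 / f_clk                 # -> ~74.07 ns per sample
        samples_per_line = 858                # total samples per line, 525/60 systems
        t_line = samples_per_line * t_pixel   # -> ~63.56 us, the NTSC line period

        print(f"pixel period: {t_pixel * 1e9:.2f} ns")
        print(f"line period:  {t_line * 1e6:.2f} us")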

    A Wide Linear Dynamic Range Image Sensor Based on Asynchronous Self-Reset and Tagging of Saturation Events

    We report a high dynamic range (HDR) image sensor with a linear response that overcomes some of the limitations of sensors whose pixels use self-reset operation. It operates similarly to an active pixel sensor, but its pixels have a novel asynchronous event-based overflow detection mechanism. Whenever the pixel voltage at the integration capacitance reaches a programmable threshold, the pixel self-resets and asynchronously sends out an event indicating this. At the end of the integration period, the voltage at the integration capacitance is digitized and read out. Combining this information with the number of events fired by each pixel, it is possible to render linear HDR images. Event operation is transparent to the final user. There is no limit on the number of self-resets of each pixel. The output data format is compatible with frame-based devices. The sensor was fabricated in the AMS 0.18-μm HV technology. A detailed system description and experimental results are provided in this paper. The sensor can render images with an intra-scene dynamic range of up to 130 dB with linear outputs. The pixel pitch is 25 μm and the sensor power consumption is 58.6 mW. Universidad de Cádiz PR2016-072; Ministerio de Economía y Competitividad TEC2015-66878-C3-1-R; Junta de Andalucía TIC 2012-2338; Office of Naval Research (USA) N00014141035
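
    A behavioral sketch of the linear HDR reconstruction described above: the total integrated signal is the number of full swings counted by events plus the digitized residual. The digital-number scale and function name are illustrative assumptions, not the paper's signal chain.

        # Combine per-pixel event count with the end-of-frame ADC residual
        # (assumption: a 12-bit ADC spanning one full integration swing).
        def reconstruct(n_events, residual_dn, full_scale_dn=4095):
            """Total signal = full swings counted by self-reset events + residual."""
            return n_events * (full_scale_dn + 1) + residual_dn

        # A pixel that self-reset 12 times and ended the frame at 1000 DN:
        print(reconstruct(12, 1000))  # 50152 DN, far beyond the single-ramp range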

    CMOS Vision Sensors: Embedding Computer Vision at Imaging Front-Ends

    CMOS Image Sensors (CIS) are key for imaging technologies. These chips are conceived for capturing optical scenes focused on their surface, and for delivering electrical images, commonly in digital format. CISs may incorporate intelligence; however, their smartness basically concerns calibration, error correction, and other similar tasks. The term CVISs (CMOS Vision Sensors) defines another class of sensor front-ends which are aimed at performing vision tasks right at the focal plane. They have been running under names such as computational image sensors, vision sensors, and silicon retinas, among others. CVISs and CISs are similar regarding physical implementation. However, while the inputs of both CISs and CVISs are images captured by photo-sensors placed at the focal plane, CVISs' primary outputs may not be images but either image features or even decisions based on the spatial-temporal analysis of the scenes. We may hence state that CVISs are more “intelligent” than CISs, as they focus on information instead of on raw data. Actually, CVIS architectures capable of extracting and interpreting the information contained in images, and prompting reaction commands thereof, have been explored for years in academia, and industrial applications are recently ramping up. One of the challenges for CVIS architects is incorporating computer vision concepts into the design flow. The endeavor is ambitious because the imaging and computer vision communities are rather disjoint groups talking different languages. The Cellular Nonlinear Network Universal Machine (CNNUM) paradigm, proposed by Profs. Chua and Roska, defined an adequate framework for such conciliation, as it is particularly well suited for hardware-software co-design [1]-[4]. This paper overviews CVIS chips that were conceived and prototyped at IMSE Vision Lab over the past twenty years. Some of them fit the CNNUM paradigm while others are tangential to it. All of them employ per-pixel mixed-signal processing circuitry to achieve sensor-processing concurrency in the quest for fast operation with a reduced energy budget. Junta de Andalucía TIC 2012-2338; Ministerio de Economía y Competitividad TEC 2015-66878-C3-1-R and TEC 2015-66878-C3-3-
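
    A minimal numerical model of the per-pixel, focal-plane processing idea (the kernel, array size, and function below are illustrative assumptions; real CVIS chips implement such neighbourhood operations with concurrent per-pixel analog circuitry, not sequential code):

        import numpy as np

        # Model focal-plane processing as a per-pixel interaction with the
        # 4 nearest neighbours, applied "simultaneously" across the array.
        def focal_plane_edge_map(img):
            """Laplacian-style edge extraction before readout."""
            p = np.pad(img.astype(np.float64), 1, mode="edge")
            return (4 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]
                    - p[1:-1, :-2] - p[1:-1, 2:])

        img = np.zeros((6, 6)); img[:, 3:] = 1.0   # vertical step edge
        print(focal_plane_edge_map(img))            # nonzero only along the edge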