
    Zooplankton visualization system: design and real-time lossless image compression

    In this thesis, I present the design of a small, self-contained, underwater plankton imaging system. The design rests on an embedded PC architecture following the PC/104-Plus standard to meet the compact size and low-power requirements. I developed a simple graphical user interface, running on a real-time operating system, to control the imaging system. I also address how a real-time image compression scheme implemented on an FPGA chip speeds up the system's image transfers. Since lossless compression is required to retain all image details, I began with an established compression scheme, SPIHT, and later proposed a new scheme that suits the imaging system's requirements. I provide an estimate of the total resources required and propose suitable FPGA chips for implementing the compression scheme. Finally, I present various parallel designs by which the FPGA chip can be integrated into the imaging system.
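
    The abstract names SPIHT as the starting point; SPIHT codes the bitplanes of a reversible wavelet decomposition. As an illustration of that enabling building block only (not the thesis's proposed scheme), the sketch below implements one level of the integer LeGall 5/3 lifting transform, whose exact invertibility is what makes wavelet-based compression lossless:

```python
import numpy as np

def legall53_forward(x):
    """One level of the reversible LeGall 5/3 lifting transform (1-D).

    Splits an even-length signal into low-pass (approximation) and
    high-pass (detail) halves using integer arithmetic only, so the
    inverse reconstructs the input bit-exactly -- the property that
    lossless coders such as SPIHT rely on.  Circular boundary handling
    via np.roll keeps the example short.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd - floor((left_even + right_even) / 2)
    odd -= (even + np.roll(even, -1)) >> 1
    # Update: approx = even + floor((left_detail + right_detail + 2) / 4)
    even += (odd + np.roll(odd, 1) + 2) >> 2
    return even, odd

def legall53_inverse(approx, detail):
    """Exact inverse of legall53_forward (undo update, then predict)."""
    even = approx - ((detail + np.roll(detail, 1) + 2) >> 2)
    odd = detail + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# Round trip on a random "scanline": reconstruction is bit-exact.
line = np.random.randint(0, 256, size=64)
a, d = legall53_forward(line)
assert np.array_equal(legall53_inverse(a, d), line)
```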

    Real-Time Computational Gigapixel Multi-Camera Systems

    Standard cameras are designed to faithfully mimic the human eye and visual system. In recent years, commercially available cameras have become more complex and offer higher image resolutions than ever before. However, the quality of conventional imaging methods is limited by several parameters, such as the pixel size, the lens system, and the diffraction limit. Rapid technological advancements, the increase in available computing power, and the introduction of Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs) open new possibilities in the computer vision and computer graphics communities. Researchers are now focusing on utilizing the immense computational power of modern processing platforms to create imaging systems with novel or significantly enhanced capabilities compared to standard ones. One popular type of computational imaging system offering new possibilities is the multi-camera system. This thesis focuses on FPGA-based multi-camera systems that operate in real time. The aim of the multi-camera systems presented in this thesis is to offer wide field-of-view (FOV) video coverage at high frame rates. The wide FOV is achieved by constructing a panoramic image from the images acquired by the multi-camera system. Two new real-time computational imaging systems that provide new functionalities and better performance than conventional cameras are presented. Each camera system's design and implementation are analyzed in detail, built, and tested under real-time conditions. Panoptic is a miniaturized, low-cost multi-camera system that reconstructs a 360-degree view in real time. Since it is easily portable, it can capture the complete surrounding light field in dynamic environments, such as when mounted on a vehicle or a flying drone. The second system, GigaEye II, is a modular high-resolution imaging system that introduces the concept of distributed image processing to real-time camera systems. This thesis explains in detail how such a concept can be used efficiently in real-time computational imaging systems. The purpose of computational imaging systems in the form of multi-camera systems does not end with real-time panoramas: the application scope of these cameras is vast. They can be used in 3D cinematography, for broadcasting live events, or for immersive telepresence. The final chapter of this thesis presents three potential applications of these systems: object detection and tracking, high dynamic range (HDR) imaging, and observation of multiple regions of interest. Object detection and tracking, and observation of multiple regions of interest, are extremely useful capabilities for surveillance systems, for the security and defense industries, and for the fast-growing autonomous vehicle industry. High dynamic range imaging, on the other hand, is becoming a common option in consumer cameras, and the presented method allows instantaneous capture of HDR video. Finally, the thesis concludes with a discussion of real-time multi-camera systems, their advantages, their limitations, and future predictions.
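
    To make the panorama-construction idea concrete, here is a toy numpy sketch of weighted compositing of pre-warped camera views into a shared panoramic frame. It illustrates the blending principle only and is not the Panoptic or GigaEye II pipeline; the offsets, weights, and two-camera setup are invented for the example:

```python
import numpy as np

def blend_into_panorama(shape, views):
    """Naive panorama compositing by weighted accumulation.

    `views` is a list of (image, x_offset, weight_map) tuples whose
    pixel coordinates have already been warped into the panorama
    frame.  Overlapping regions are resolved by normalized weighted
    averaging -- the same per-pixel principle multi-camera systems
    apply in hardware, here in its simplest software form.
    """
    acc = np.zeros(shape, dtype=np.float64)    # weighted intensity sum
    wsum = np.zeros(shape, dtype=np.float64)   # total weight per pixel
    for img, x0, w in views:
        h, wd = img.shape
        acc[:h, x0:x0 + wd] += w * img
        wsum[:h, x0:x0 + wd] += w
    return acc / np.maximum(wsum, 1e-9)        # avoid divide-by-zero

# Two "cameras" with a 20-pixel horizontal overlap, feathered at the seam.
cam0 = np.full((120, 100), 80.0)
cam1 = np.full((120, 100), 120.0)
ramp = np.linspace(1.0, 0.0, 100)
pano = blend_into_panorama(
    (120, 180),
    [(cam0, 0, ramp), (cam1, 80, ramp[::-1])],
)
```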

    CMOS SPAD-based image sensor for single photon counting and time of flight imaging

    The facility to capture the arrival of a single photon is the fundamental limit to the detection of quantised electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and temporal precision is the pinnacle of photo-sensing. The creation of high-spatial-resolution, single-photon-sensitive, time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology offers numerous benefits across a wide field of applications. These CMOS devices are suitable replacements for high-sensitivity charge-coupled device (CCD) technology (electron-multiplied or electron-bombarded), at significantly lower cost and with comparable performance in low-light or high-speed scenarios. For example, with temporal resolution on the order of nanoseconds to picoseconds, detailed three-dimensional (3D) pictures can be formed by measuring the time of flight (TOF) of a light pulse. High-frame-rate imaging of single photons can yield new capabilities in super-resolution microscopy, and the imaging of quantum effects such as the entanglement of photons may be realised. The goal of this research project is the development of such an image sensor by exploiting single photon avalanche diodes (SPADs) in an advanced imaging-specific 130 nm front-side-illuminated (FSI) CMOS technology. SPADs have three key combined advantages over other imaging technologies: single photon sensitivity, picosecond temporal resolution, and the facility to be integrated in standard CMOS technology. Analogue techniques are employed to create an efficient and compact imager that is scalable to mega-pixel arrays. A SPAD-based image sensor is described with 320 by 240 pixels at a pitch of 8 μm and an optical efficiency, or fill factor, of 26.8%. Each pixel comprises a SPAD with a hybrid analogue counting and memory circuit that makes novel use of a low-power charge transfer amplifier. Global-shutter single photon counting images are captured; these exhibit photon-shot-noise-limited statistics with unprecedentedly low input-referred noise, equivalent to 0.06 electrons. The CMOS image sensor (CIS) trends of shrinking pixels, increasing array sizes, decreasing read noise, fast readout, and oversampled image formation are projected towards the formation of binary single photon imagers, or quanta image sensors (QIS). In a binary digital image capture mode, the image sensor offers a look-ahead to the properties and performance of future QISs, with 20,000 binary frames per second readout at a bit error rate of 1.7 × 10⁻³. The bit density, or cumulative binary intensity, against exposure of this image sensor follows the shape of the famous Hurter and Driffield densitometry curves of photographic film. Oversampled time-gated binary image capture is demonstrated, capturing 3D TOF images with 3.8 cm precision over a 60 cm range.
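
    Two quantities in this abstract have simple closed forms: the photon flux recovered from oversampled binary frames (assuming ideal Poisson statistics, the standard quanta-image-sensor model) and the distance recovered from a pulsed TOF measurement, d = c·t/2. A small illustrative computation, with numbers chosen for the example rather than taken from the reported sensor:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def flux_from_binary_frames(ones, total_frames):
    """Maximum-likelihood photon flux (photons/frame) from binary frames.

    Under ideal Poisson statistics a binary pixel reads 0 with
    probability exp(-lambda), so lambda = -ln(1 - k/N).  This is the
    textbook quanta-image-sensor estimator, not a figure taken from
    the sensor described above.
    """
    k = np.asarray(ones, dtype=np.float64)
    p1 = np.clip(k / total_frames, 0.0, 1.0 - 1e-12)  # guard ln(0)
    return -np.log1p(-p1)

def tof_distance(round_trip_seconds):
    """Distance from a pulsed time-of-flight measurement: d = c * t / 2."""
    return C * round_trip_seconds / 2.0

# 20,000 binary frames in which a pixel fired 12,600 times:
lam = flux_from_binary_frames(12_600, 20_000)  # ~0.99 photons/frame
# A 4 ns round trip corresponds to roughly the 60 cm range quoted above:
d = tof_distance(4e-9)                         # ~0.5996 m
```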

    Matrix Transform Imager Architecture for On-Chip Low-Power Image Processing

    Camera-on-a-chip systems have tried to include carefully chosen signal processing units for better functionality and performance, and to broaden the applications they can be used for. Image processing sensors have become possible due to advances in CMOS active pixel sensors (APS) and neuromorphic focal plane imagers. Some of the advantages of these systems are compact size, high speed and parallelism, low power dissipation, and dense system integration. One can envision using these chips for portable and inexpensive video cameras on hand-held devices like personal digital assistants (PDAs) or cell phones. In neuromorphic modeling of the retina, it would be highly desirable to have processing capabilities at the focal plane while retaining the density of typical APS imager designs. Unfortunately, these two goals have been mostly incompatible. We introduce our MAtrix Transform Imager Architecture (MATIA), which uses analog floating-gate devices to make it possible to build computational imagers with high pixel densities. The core imager performs computations at the pixel plane, but still has a fill factor of 46 percent, comparable to the high fill factors of APS imagers. The processing is performed continuously on the image via programmable matrix operations that can operate on the entire image or on blocks within the image. The resulting data-flow architecture can directly perform all kinds of block matrix image transforms. Since the imager operates in the subthreshold region and thus has low power consumption, this architecture can be used as a low-power front end for any system that utilizes these computations. Various compression algorithms (e.g., JPEG) that use block matrix transforms can be implemented using this architecture. Since MATIA can be used for gradient computations, inexpensive image tracking devices can be implemented with it. Other applications of this architecture range from stand-alone universal transform imager systems to systems that compute stereoscopic depth.
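
    The block matrix transform at the heart of MATIA is the operation Y = T·A·Tᵀ applied tile by tile. Below is a minimal numerical sketch using the 8×8 DCT (the JPEG transform mentioned above) as the programmable matrix; this is a software reference model, not the analog floating-gate implementation:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix T, so Y = T @ A @ T.T transforms
    an n-by-n block A -- the kind of block matrix operation a transform
    imager such as MATIA evaluates at the focal plane."""
    k = np.arange(n)
    T = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    T *= np.sqrt(2.0 / n)
    T[0] *= np.sqrt(0.5)
    return T

def block_transform(image, T):
    """Apply Y = T A T^T independently to each n-by-n tile of `image`."""
    n = T.shape[0]
    h, w = image.shape
    out = np.empty_like(image, dtype=np.float64)
    for i in range(0, h, n):
        for j in range(0, w, n):
            out[i:i + n, j:j + n] = T @ image[i:i + n, j:j + n] @ T.T
    return out

# JPEG-style use: 8x8 DCT of every block of a test image.
img = np.random.rand(64, 64)
coeffs = block_transform(img, dct_matrix(8))
# Orthonormality means the inverse transform is just the transpose:
recon = block_transform(coeffs, dct_matrix(8).T)
assert np.allclose(recon, img)
```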

    CMOS design of an “on-chip” vision system for very high-speed applications

    This thesis presents architectures, circuits, and chips for the design of CMOS vision sensors with embedded parallel processing. The thesis reports two chips, namely the Q-Eye chip and the Eye-RIS_VSoC chip, and two vision systems realized using these chips together with additional off-chip circuitry, such as FPGAs: the Eye-RIS_v1 system and the Eye-RIS_v2 system. The chips and systems reported in the thesis are conceived to perform vision tasks at very high speed and with moderate power consumption. The proposed vision systems are also compact and therefore advantageous in terms of SWaP factors when compared with conventional architectures consisting of a standard image sensor followed by digital processors. The key to these advantages in terms of SWaP and speed lies in the use of sensor-processors, rather than mere sensors, at the front-end interface of vision systems. These sensor-processors embed mixed-signal programmable processors inside the pixel. They are therefore able both to acquire images and to pre-process them: extracting features, removing redundant information, and reducing the data throughput for later processing. The core of the thesis is the sensor-processor Q-Eye, which is used as the front end in the Eye-RIS systems. This sensor-processor embeds a processing architecture composed of mixed-signal processors distributed per pixel. Its pixels are thus complex multi-functional structures: they are programmable, incorporate memories, and interact with their neighbors to carry out a variety of operations, including: linear convolutions with programmable masks; time- and signal-controlled diffusions (by means of a resistive grid embedded in the focal plane); image arithmetic; signal-dependent program flow; conversion between gray-scale and binary image domains; logic operations on binary images; and mathematical morphology on binary images. Compared with previous multi-function pixels and sensor-processors, the Q-Eye offers, among others, the following advantages: higher image quality and better performance of the functionalities embedded on chip; higher operation speed and better management of the energy budget; and more versatility for integration into industrial vision systems. In fact, the Eye-RIS systems are the first industrial vision systems equipped with the following characteristics: parallel, distributed, and progressive processing; reliable, robust mixed-signal processors with controlled errors; and distributed programmability. The thesis includes detailed descriptions of the architecture and circuits used in the Q-Eye pixel, in the Q-Eye chip itself, and in the vision systems built on this chip. Several examples of the chips and systems in operation are also included.
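
    As a rough software analogue of a Q-Eye per-pixel program, the following numpy sketch chains three of the operations listed above: a linear neighborhood operation with a programmable mask, gray-scale-to-binary conversion, and binary morphological erosion. It mirrors the data flow only; the chip performs these steps in mixed-signal circuitry at the focal plane:

```python
import numpy as np

def filter3x3(img, mask):
    """Neighborhood-weighted sum with a programmable 3x3 mask,
    zero-padded (correlation; identical to convolution for the
    symmetric masks used here) -- the kind of operation each pixel
    performs against its 8 neighbors."""
    H, W = img.shape
    p = np.pad(img.astype(np.float64), 1)
    out = np.zeros((H, W), dtype=np.float64)
    for di in range(3):
        for dj in range(3):
            out += mask[di, dj] * p[di:di + H, dj:dj + W]
    return out

def erode3x3(binary):
    """Binary erosion with a 3x3 structuring element (zero-padded)."""
    H, W = binary.shape
    p = np.pad(binary, 1, constant_values=False)
    out = np.ones((H, W), dtype=bool)
    for di in range(3):
        for dj in range(3):
            out &= p[di:di + H, dj:dj + W]
    return out

# A per-pixel "program": smooth, threshold to binary, clean up by erosion.
frame = np.random.rand(64, 64)
smoothed = filter3x3(frame, np.full((3, 3), 1.0 / 9.0))
binary = smoothed > 0.5        # gray-scale to binary conversion
cleaned = erode3x3(binary)     # mathematical morphology on the binary image
```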

    Trade-off analysis of modes of data handling for earth resources (ERS), volume 1

    Data handling requirements are reviewed for Earth observation missions, along with likely technology advances. Parametric techniques for synthesizing potential systems are developed. Major tasks include: (1) review of the sensors under development and of extensions or improvements to these sensors; (2) development of mission models spanning land, ocean, and atmosphere observations; (3) summary of data handling requirements, including the frequency of coverage, timeliness of dissemination, and geographic relationships between points of collection and points of dissemination; (4) review of data routing to establish ways of getting data from the collection point to the user; (5) on-board data processing; (6) the communications link; and (7) ground data processing. A detailed synthesis of three specific missions is included.

    Robust real-time control of an adaptive optics system

    This research contributes to the understanding of the limitations encountered when designing a robust real-time control system for Adaptive Optics (AO). One part of the research is a new method for evaluating a Shack-Hartmann wavefront sensor (SHWFS) to enhance overall performance. The method is based on a Field Programmable Gate Array (FPGA) using Connected Component Labeling (CCL) for blob detection. An FPGA is used because the resulting delay is crucial for overall AO performance, and its parallelism greatly accelerates the evaluation. The developed algorithm does not rely on a fixed assignment of the camera sensor area to the lenslet array, which maximizes the dynamic range. In addition to the SHWFS evaluation, a new rapid control prototyping (RCP) system based on a hard real-time, RTAI-patched Linux kernel has been developed. This system includes the required hardware, e.g. the analog output cards and an FPGA-based frame grabber. From a Simulink model, accelerated C/C++ code is generated automatically which uses the available parallel features of the processor. A further contribution is the application of a robust control scheme using H-infinity methods to design a controller while considering the uncertainty of the identified model. For synthesizing the controller, a special optimization technique called non-smooth mu-synthesis is utilized, which minimizes the H-infinity norm while coping with pre-specified controller structures. Depending on the pre-specified controller structure, the resulting controller can be computationally costly, but the RCP approach is designed to cope with this problem. Based on simulations and experiments, the validity of the identified models of the AO setup is confirmed, and the enhanced performance of the new RCP setup is demonstrated.
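
    The SHWFS evaluation step that the thesis moves onto the FPGA can be sketched in software: threshold the sensor frame, label connected components (the CCL step), and take each blob's intensity-weighted centroid as the spot position. Below is a scipy-based sketch of this principle, not the FPGA datapath; the synthetic two-spot frame is invented for the example:

```python
import numpy as np
from scipy import ndimage

def shwfs_spot_centroids(frame, threshold):
    """Locate Shack-Hartmann spots by connected component labeling.

    Pixels above `threshold` are grouped into blobs with
    8-connectivity; each blob's intensity-weighted centroid is
    returned as a spot position.  Like the thesis's algorithm, this
    needs no fixed lenslet-to-sensor-area assignment, so spot motion
    is limited only by the sensor area rather than by a search window.
    """
    mask = frame > threshold
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))
    centroids = ndimage.center_of_mass(frame, labels, list(range(1, n + 1)))
    return np.asarray(centroids)           # one (row, col) per spot

# Synthetic frame with two Gaussian spots.
y, x = np.mgrid[0:64, 0:64]
frame = (np.exp(-((y - 16.0)**2 + (x - 20.0)**2) / 8.0)
         + np.exp(-((y - 40.0)**2 + (x - 45.0)**2) / 8.0))
print(shwfs_spot_centroids(frame, 0.1))    # approx. [[16, 20], [40, 45]]
```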