
    Foveated Sampling Architectures for CMOS Image Sensors

    Electronic imaging technologies face the challenge of power consumption when transmitting large amounts of image data from the acquisition imager to display or processing devices. This is especially a concern for portable applications, and becomes more prominent in increasingly high-resolution, high-frame-rate imagers. New sampling techniques are therefore needed to minimize the transmitted data while maximizing the conveyed image information. From this point of view, two approaches are proposed and implemented in this thesis: a system-level approach, in which the classical 1D row-sampling CMOS imager is modified into a 2D ring-sampling pyramidal architecture using the same standard three-transistor (3T) active pixel sensor (APS); and a device-level approach, in which the classical orthogonal architecture is preserved while the APS device structure is altered to design an expandable multiresolution image sensor. A new scanning scheme is suggested for the pyramidal image sensor, resulting in an intrascene foveated dynamic range (FDR) similar in profile to that of the human eye: the inner rings of the imager have a higher dynamic range than the outer rings. The pyramidal imager transmits the sampled image through 8 parallel output channels, allowing higher frame rates. The human eye is known to be less sensitive to oblique contrast; exploiting this, we demonstrate that the obliquely distributed fixed pattern noise (FPN) of the pyramidal architecture is perceived less readily than the orthogonal FPN distribution of classical CMOS imagers. The multiresolution image sensor is based on averaging regions of low interest within frame-sampled image kernels: one averaged pixel is read from each kernel, while pixels in the region of interest are kept at full resolution. This significantly reduces the transferred data and increases the frame rate. Such an architecture allows programmability and expandability for multiresolution imaging applications.
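
    A minimal software sketch of the kernel-averaging readout described above is given below. It is not taken from the thesis: the function name, kernel size, and ROI handling are illustrative assumptions, and the averaging that the sensor performs on-chip is emulated here in software.

```python
import numpy as np

def multiresolution_readout(frame, kernel=4, roi=None):
    """Emulate the kernel-averaging readout: every kernel x kernel block
    outside the region of interest is reduced to one averaged sample,
    while ROI pixels are kept at full resolution."""
    h, w = frame.shape
    # One averaged sample per kernel (the low-interest background)
    low = frame[:h - h % kernel, :w - w % kernel]
    low = low.reshape(h // kernel, kernel, w // kernel, kernel).mean(axis=(1, 3))
    if roi is None:
        return low, None
    r0, r1, c0, c1 = roi
    # Full-resolution window read out separately
    return low, frame[r0:r1, c0:c1]

# 64x64 frame, 4x4 kernels, 16x16 ROI: 256 + 256 samples are transferred
# instead of 4096, roughly an 8x reduction in data.
frame = np.random.rand(64, 64)
background, roi_pixels = multiresolution_readout(frame, kernel=4, roi=(24, 40, 24, 40))
```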

    Matrix Transform Imager Architecture for On-Chip Low-Power Image Processing

    Camera-on-a-chip systems have tried to include carefully chosen signal processing units for better functionality and performance, and to broaden the range of applications they can serve. Image processing sensors have become possible due to advances in CMOS active pixel sensors (APS) and neuromorphic focal-plane imagers. Advantages of these systems include compact size, high speed and parallelism, low power dissipation, and dense system integration. One can envision using these chips for portable and inexpensive video cameras in hand-held devices such as personal digital assistants (PDAs) or cell phones. In neuromorphic modeling of the retina, it is desirable to have processing capabilities at the focal plane while retaining the density of typical APS imager designs; unfortunately, these two goals have been largely incompatible. We introduce our MAtrix Transform Imager Architecture (MATIA), which uses analog floating-gate devices to make computational imagers with high pixel densities possible. The core imager performs computations at the pixel plane but still has a fill factor of 46 percent, comparable to the high fill factors of APS imagers. The processing is performed continuously on the image via programmable matrix operations that can operate on the entire image or on blocks within the image. The resulting data-flow architecture can directly perform a wide range of block matrix image transforms. Since the imager operates in the subthreshold region and thus has low power consumption, this architecture can be used as a low-power front end for any system that utilizes these computations. Various compression algorithms (e.g. JPEG) that use block matrix transforms can be implemented using this architecture. Since MATIA can be used for gradient computations, inexpensive image-tracking devices can also be built on it. Other applications range from stand-alone universal transform imager systems to systems that compute stereoscopic depth.
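
    The block matrix operation at the heart of MATIA can be illustrated in software. The sketch below applies a separable transform Y = A X B^T to each image block, using the JPEG 8x8 DCT as the example basis; on the chip the transform coefficients would be stored as analog floating-gate weights, whereas here everything is computed numerically, and the function names are illustrative.

```python
import numpy as np

def block_transform(image, A, B=None, block=8):
    """Apply the separable transform Y = A @ X @ B.T to every block X of
    the image. With A = B = the DCT-II basis this is the 8x8 block DCT
    used by JPEG."""
    if B is None:
        B = A
    h, w = image.shape
    out = np.zeros((h, w))
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            X = image[r:r + block, c:c + block]
            out[r:r + block, c:c + block] = A @ X @ B.T
    return out

# Orthonormal 8x8 DCT-II basis matrix
N = 8
k = np.arange(N)
A = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
A[0, :] /= np.sqrt(2)

image = np.random.rand(64, 64)
coefficients = block_transform(image, A)   # JPEG-style block DCT coefficients
```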

    Linear array CMOS detectors for laser Doppler blood flow imaging

    Laser Doppler blood flow imaging is well established as a tool for clinical research. The technique has considerable potential as an aid to diagnosis and treatment in a number of situations. However, to make widespread clinical use of a blood flow imager feasible, a number of refinements are required to make the device easy to use, accurate and safe. Existing LDBF systems are either 2D imaging systems or single-point scanning systems. 2D imaging systems can offer fast image acquisition and hence a high frame rate, but they require high laser power to illuminate the entire target area with sufficient intensity. Single-point scanning systems allow lower laser power to be used, but building up an image of flow in skin requires mechanical scanning of the laser, which results in a long image acquisition time and makes the system awkward to use. The new approach developed here scans a line along the target and images that line with a 1D sensor array. Only one axis of mechanical scanning is then required, reducing the required scanning speed, and the laser power is vastly reduced compared with a 2D system. This approach lends itself well to integrated CMOS detectors: the smaller pixel count means that a linear sensor array can be implemented on an IC with integrated processing while keeping the overall IC size, and hence cost, lower than for equivalent 2D imaging systems. A number of front-end and processing circuits are investigated for their suitability for this application by simulating a range of possible designs, including several logarithmic pixels, active pixel sensors and opamp-based linear front-ends. Where possible, previously fabricated ICs using similar sensors were tested in a laser Doppler flowmetry system to verify the simulation results. A first prototype IC (known as BVIPS1) implements a 64x1 array of buffered logarithmic pixels, chosen for their combination of sufficient gain and bandwidth and compact size. The IC uses the available space to include two front-end circuits per pixel, allowing other circuits to be prototyped; this permits a linear, opamp-based front-end to be tested as well. Both designs can detect changes in blood flow despite significant discrepancies between simulated and measured IC performance. However, the signal-to-noise ratio of the flux readings is poor, and the logarithmic pixel array suffers from high fixed pattern noise, and from noise and distortion that make vein location impossible. A second prototype IC (BVIPS2) consists of dual 64x1 arrays and integrated processing. The sensor arrays are a logarithmic array, which addresses the problems of the first IC by using alternative, individually selectable front-ends for each pixel to reduce fixed pattern noise, and an array of opamp-based linear detectors. Simulation and initial testing show that this design operates as intended and partially overcomes the problems found on the previous IC: it exhibits reduced fixed pattern noise and better spatial detection of blood flow changes, although significant noise remains.
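
    For context, the flux value that such a detector array must ultimately deliver is conventionally derived from the photocurrent spectrum. The sketch below shows this standard estimate (first moment of the AC power spectrum over the Doppler band, normalised by the squared DC level); the band limits, sampling rate and function names are generic assumptions rather than parameters of the BVIPS ICs.

```python
import numpy as np

def ldf_flux(photocurrent, fs, f_lo=20.0, f_hi=20e3):
    """Estimate laser Doppler perfusion (flux) for one pixel as the first
    moment of the AC power spectrum within the Doppler band, normalised
    by the squared mean (DC) photocurrent."""
    dc = photocurrent.mean()
    ac = photocurrent - dc
    power = np.abs(np.fft.rfft(ac)) ** 2
    freqs = np.fft.rfftfreq(len(ac), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sum(freqs[band] * power[band]) / dc ** 2

# Process a simulated capture from a 64x1 linear array
fs = 40e3                                         # samples per second per pixel
samples = 1.0 + 1e-3 * np.random.randn(64, 1024)  # DC level plus Doppler fluctuations
flux_profile = np.array([ldf_flux(row, fs) for row in samples])
```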

    Real-Time Computational Gigapixel Multi-Camera Systems

    Standard cameras are designed to faithfully mimic the human eye and visual system. In recent years, commercially available cameras have become more complex and offer higher image resolutions than ever before. However, the quality of conventional imaging methods is limited by several parameters, such as the pixel size, the lens system, and the diffraction limit. Rapid technological advancements, the increase in available computing power, and the introduction of Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs) open new possibilities for the computer vision and computer graphics communities. Researchers are now focusing on utilizing the immense computational power offered by modern processing platforms to create imaging systems with novel or significantly enhanced capabilities compared to standard cameras. One popular type of computational imaging system offering new possibilities is the multi-camera system. This thesis focuses on FPGA-based multi-camera systems that operate in real time. The aim of the multi-camera systems presented in this thesis is to offer wide field-of-view (FOV) video coverage at high frame rates. The wide FOV is achieved by constructing a panoramic image from the images acquired by the multi-camera system. Two new real-time computational imaging systems that provide new functionalities and better performance than conventional cameras are presented. Each camera system design and implementation is analyzed in detail, built, and tested under real-time conditions. Panoptic is a miniaturized, low-cost multi-camera system that reconstructs a 360-degree view in real time. Since it is easily portable, it provides a means to capture the complete surrounding light field in dynamic environments, such as when mounted on a vehicle or a flying drone. The second system, GigaEye II, is a modular high-resolution imaging system that introduces the concept of distributed image processing to real-time camera systems. This thesis explains in detail how such a concept can be used efficiently in real-time computational imaging systems. The purpose of computational imaging systems in the form of multi-camera systems does not end with real-time panoramas; the application scope of these cameras is vast. They can be used in 3D cinematography, for broadcasting live events, or for immersive telepresence. The final chapter of this thesis presents three potential applications of these systems: object detection and tracking, high dynamic range (HDR) imaging, and observation of multiple regions of interest. Object detection and tracking and the observation of multiple regions of interest are highly desirable capabilities for surveillance systems, for the security and defense industries, and for the fast-growing autonomous vehicle industry. High dynamic range imaging, on the other hand, is becoming a common option in consumer cameras, and the presented method allows instantaneous capture of HDR video. Finally, the thesis concludes with a discussion of real-time multi-camera systems, their advantages and limitations, and predictions for the future.
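
    As a rough illustration of the HDR application mentioned above, the sketch below fuses aligned, linearised frames captured at different exposures into a relative radiance map with a classic weighted average. It is not the method implemented on the presented camera systems; the weighting function, exposure times and function names are illustrative assumptions, and frame alignment and per-camera calibration are assumed to be handled upstream.

```python
import numpy as np

def fuse_hdr(frames, times):
    """Merge aligned, linearised frames taken at different exposure times
    into a relative radiance map using a weighted average of
    (pixel value / exposure time)."""
    frames = np.asarray(frames, dtype=float)       # (n, h, w), values in [0, 1]
    times = np.asarray(times, dtype=float)         # (n,) exposure times
    # Hat weighting: trust mid-range pixels, down-weight dark or saturated ones
    weights = 1.0 - 2.0 * np.abs(frames - 0.5)
    numerator = np.sum(weights * frames / times[:, None, None], axis=0)
    denominator = np.sum(weights, axis=0) + 1e-9
    return numerator / denominator

# Three simulated exposures of the same scene
times = (0.25, 1.0, 4.0)
scene = np.random.rand(480, 640) * 0.5
frames = [np.clip(scene * t, 0.0, 1.0) for t in times]
radiance = fuse_hdr(frames, times)
```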

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary to simulate an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, the selection of algorithms used for each component, and the lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore, a simulated target path is generated using Bézier curves, which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and to increase resolution. The only information available to a fully implemented system for calculating the target position are the sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze its impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capability and address the potential size, weight, and power requirements of realistic implementation approaches.
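
    To make the recovery geometry concrete, the sketch below estimates a target position from several sensors' positions and line-of-sight directions using a standard least-squares ray intersection. This is a generic stand-in rather than the estimator developed in the dissertation; the function names, the two-sensor example and the coordinate values are illustrative assumptions.

```python
import numpy as np

def triangulate(sensor_positions, lines_of_sight):
    """Least-squares intersection of rays: each sensor contributes its
    position p and a unit line-of-sight direction d toward the target.
    Minimises the summed squared distance of the estimate to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(sensor_positions, lines_of_sight):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two orbital sensors observing a target at (7000, 0, 0) km (illustrative numbers)
target = np.array([7000.0, 0.0, 0.0])
sensors = [np.array([6800.0, 1000.0, 0.0]), np.array([6800.0, -1000.0, 500.0])]
rays = [target - s for s in sensors]     # directions recovered from the images
estimate = triangulate(sensors, rays)    # ~ [7000, 0, 0]
```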