
    Design and characterization of a CMOS 3-D image sensor based on single photon avalanche diodes

    The design and characterization of an imaging system is presented for depth information capture of arbitrary three-dimensional (3-D) objects. The core of the system is an array of 32 × 32 rangefinding pixels that independently measure the time-of-flight of a ray of light as it is reflected back from the objects in a scene. A single cone of pulsed laser light illuminates the scene, so no complex mechanical scanning or expensive optical equipment is needed. Millimetric depth accuracy can be reached thanks to the rangefinder's optical detectors, which enable picosecond time discrimination. The detectors, based on a single photon avalanche diode operating in Geiger mode, utilize avalanche multiplication to enhance light detection. On-pixel high-speed electrical amplification can therefore be eliminated, greatly simplifying the array and potentially reducing its power dissipation. Optical power requirements on the light source can also be significantly relaxed, thanks to the array's sensitivity to single-photon events. A number of standard performance measurements conducted on the imager are discussed in the paper. The 3-D imaging system was also tested on real 3-D subjects, including human facial models, demonstrating the suitability of the approach.
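    The depth conversion at the heart of such a rangefinder can be sketched in a few lines: depth is half the photon round-trip time multiplied by the speed of light, which is why picosecond time discrimination translates into millimetric accuracy. The snippet below is a minimal illustration of that relation only; the function name and the histogram-peak input are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def depth_from_tof(round_trip_time_s):
        """Convert per-pixel photon round-trip times to depth in meters.

        round_trip_time_s: scalar or array of time-of-flight values in
        seconds, e.g. the peak bin of a per-pixel timing histogram
        accumulated over many laser pulses (a hypothetical pipeline).
        """
        # The pulse travels to the object and back, so halve the round trip.
        return C * np.asarray(round_trip_time_s) / 2.0

    # ~75 ps of timing resolution corresponds to roughly 11 mm of depth:
    print(depth_from_tof(75e-12) * 1e3)  # ~11.2 (mm)
    ```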

    The PANOPTIC Camera: A Plenoptic Sensor with Real-Time Omnidirectional Capability

    A new biologically inspired vision sensor made of one hundred “eyes” is presented, which is suitable for real-time acquisition and processing of 3-D image sequences. This device, named the Panoptic camera, consists of a layered arrangement of approximately 100 classical CMOS imagers distributed over a hemisphere 13 cm in diameter. The Panoptic camera is a polydioptric system in which every imager has its own vision of the world and a distinct focal point, a specific feature of the Panoptic system. This enables the recording of 3-D information, such as omnidirectional stereoscopy or depth estimation, through dedicated signal processing. The algorithms dictating the image reconstruction of an omnidirectional observer located at any point inside the hemisphere are presented. A hardware architecture capable of handling these algorithms, with the flexibility to support additional image processing in real time, has been developed as a two-layer system based on FPGAs. The details of the hardware architecture, its internal blocks, the mapping of the algorithms onto those blocks, and the device calibration procedure are presented, along with imaging results.
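    One way to picture the reconstruction problem the abstract describes: each pixel of the virtual omnidirectional view must be interpolated from the physical imagers whose fields of view cover that direction. The sketch below illustrates the idea with a simple Gaussian angular weighting; the weighting scheme and function signature are assumptions for illustration, not the interpolation actually used in the Panoptic system.

    ```python
    import numpy as np

    def virtual_pixel(ray_dir, cam_axes, cam_samples, sigma=0.2):
        """Blend per-camera samples into one pixel of a virtual omnidirectional view.

        ray_dir:     unit 3-vector, viewing direction of the virtual observer
        cam_axes:    (N, 3) unit vectors, the optical axis of each physical imager
        cam_samples: (N,) intensities each imager records along ray_dir,
                     after its own (calibrated) projection step
        The Gaussian weight over angular distance is an illustrative choice.
        """
        cos_ang = np.clip(cam_axes @ ray_dir, -1.0, 1.0)
        ang = np.arccos(cos_ang)             # angle between ray and each camera axis
        w = np.exp(-(ang / sigma) ** 2)      # cameras aimed closer contribute more
        return float(w @ cam_samples / w.sum())
    ```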

    The Boston University Photonics Center annual report 2016-2017

    This repository item contains an annual report that summarizes activities of the Boston University Photonics Center in the 2016-2017 academic year. The report provides quantitative and descriptive information regarding photonics programs in education, interdisciplinary research, business innovation, and technology development. The Boston University Photonics Center (BUPC) is an interdisciplinary hub for education, research, scholarship, innovation, and technology development associated with practical uses of light.

    This has undoubtedly been the Photonics Center’s best year since I became Director 10 years ago. In the following pages, you will see highlights of the Center’s activities in the past year, including more than 100 notable scholarly publications in the leading journals in our field, and the attraction of more than 22 million dollars in new research grants/contracts. Last year I had the honor to lead an international search for the first recipient of the Moustakas Endowed Professorship in Optics and Photonics, in collaboration with ECE Department Chair Clem Karl. This professorship honors the Center’s most impactful scholar and one of the Center’s founding visionaries, Professor Theodore Moustakas. We are delighted to have awarded this professorship to Professor Ji-Xin Cheng, who joined our faculty this year.

    The past year also marked the launch of Boston University’s Neurophotonics Center, which will be allied closely with the Photonics Center. Leading that Center will be a distinguished new faculty member, Professor David Boas. David and I are together leading a new Neurophotonics NSF Research Traineeship Program that will provide $3M to promote graduate traineeships in this emerging new field. We had a busy summer hosting NSF Sites for Research Experiences for Undergraduates, Research Experiences for Teachers, and the BU Student Satellite Program. As a community, we emphasized the theme of “Optics of Cancer Imaging” at our annual symposium, hosted by Darren Roblyer. We entered a five-year second phase of NSF funding in our Industry/University Collaborative Research Center on Biophotonic Sensors and Systems, which has become the centerpiece of our translational biophotonics program. That I/UCRC continues to focus on advancing the health care and medical device industries.

    Advances in Infrared Detector Array Technology


    The NASA SBIR product catalog

    The purpose of this catalog is to assist small business firms in making the community aware of products emerging from their efforts in the Small Business Innovation Research (SBIR) program. It contains descriptions of some products that have advanced into Phase 3 and of others identified as prospective products. Both lists of products in this catalog are based on information supplied by NASA SBIR contractors in response to an invitation to be represented in this document. Generally, all products suggested by the small firms were included in order to meet the goals of information exchange for SBIR results. Of the 444 SBIR contractors NASA queried, 137 provided information on 219 products. The catalog presents the product information in the technology areas listed in the table of contents. Within each area, the products are listed in alphabetical order by product name and are given identifying numbers. Also included is an alphabetical listing of the companies whose products are described; this listing cross-references the product list and provides information on the business activity of each firm. In addition, there are three indexes: one listing firms by state, one listing the products according to the NASA Centers that managed the SBIR projects, and one listing the products by the relevant Technical Topics in NASA's annual program solicitation under which each SBIR project was selected.

    Robust deep learning for computational imaging through random optics

    Light scattering is a pervasive phenomenon that poses outstanding challenges in both coherent and incoherent imaging systems. Coherent light scattered from a complex medium exhibits a seemingly random speckle pattern that scrambles the useful information about the object. To date, there is no simple solution for inverting such complex scattering. Advancing the solution of inverse scattering problems could provide important insights into applications across many areas, such as deep tissue imaging, non-line-of-sight imaging, and imaging in degraded environments. In incoherent systems, on the other hand, the randomness of the scattering medium can be exploited to build lightweight, compact, and low-cost lensless imaging systems applicable to miniaturized biomedical and scientific imaging. The imaging capabilities of such computational imaging systems, however, are largely limited by ill-posed or ill-conditioned inverse problems, which typically cause imaging artifacts and degrade image resolution. Mitigating this issue with modern algorithms is therefore essential for pushing the limits of such lensless computational imaging systems. In this thesis, I focus on the problem of imaging through random optics and present two novel deep-learning (DL) based methodologies to overcome the challenges in coherent and incoherent systems: 1) the absence of a simple solution to the inverse scattering problem and the lack of robustness to scattering variations; and 2) the ill-posed inverse problem in diffuser-based lensless imaging.

    In the first part, I demonstrate the novel use of a deep neural network (DNN) to solve the inverse scattering problem in a coherent imaging system. I propose a 'one-to-all' deep learning technique that encapsulates a wide range of statistical variations so that the model is resilient to speckle decorrelations. I show for the first time, to the best of my knowledge, that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same macroscopic parameter. I then push the limit of robustness against a broader class of perturbations, including scatterer change, displacements, and system defocus up to 10X the depth of field.

    In the second part, I consider the utility of random light scattering to build a diffuser-based computational lensless imaging system and present a generally applicable novel DL framework to achieve fast and noise-robust color image reconstruction. I developed a diffuser-based computational funduscope that reconstructs important clinical features of a model eye. Experimentally, I demonstrated fundus image reconstruction over a large field of view (FOV) and robustness to refractive error using a constant point spread function. Next, I present a physics simulator-trained, adaptive DL framework to achieve fast and noise-robust color imaging. The physics simulator incorporates optical system modeling, the simulation of mixed Poisson-Gaussian noise, and color filter array induced artifacts in color sensors. The learning framework includes an adaptive multi-channel L2-regularized inversion module and a channel-attention enhancement network module. Both simulations and experiments show consistently better reconstruction accuracy and robustness to various noise levels under different light conditions compared with traditional L2-regularized reconstructions.

    Overall, this thesis investigated two major classes of problems in imaging through random optics. In the first part, my work explored a novel DL-based approach to the inverse scattering problem and paved the way to a scalable and robust deep learning approach to imaging through scattering media. In the second part, my work developed a broadly applicable adaptive learning-based framework for ill-conditioned image reconstruction and a physics-based simulation model for computational color imaging.
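    As a point of reference for the "adaptive multi-channel L2-regularized inversion module" mentioned above, the closed-form single-channel version of L2-regularized (Tikhonov) deconvolution with a known, constant point spread function can be sketched as follows; the parameter names and the single-channel scope are illustrative, not the thesis's exact module.

    ```python
    import numpy as np

    def l2_inversion(measurement, psf, lam=1e-2):
        """Closed-form Tikhonov (L2-regularized) deconvolution with a known PSF.

        Solves argmin_x ||h * x - y||^2 + lam * ||x||^2 in the Fourier
        domain -- a standard baseline for diffuser-based lensless imaging
        with a constant point spread function.
        """
        # Center the PSF at the origin, then transform both images.
        H = np.fft.fft2(np.fft.ifftshift(psf), s=measurement.shape)
        Y = np.fft.fft2(measurement)
        # Regularized inverse filter; lam damps ill-conditioned frequencies.
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
        return np.real(np.fft.ifft2(X))
    ```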

    Exploring information retrieval using image sparse representations: from circuit designs and acquisition processes to specific reconstruction algorithms

    New advances in the field of image sensors (especially in CMOS technology) tend to call into question the conventional methods used to acquire images. Compressive Sensing (CS) plays a major role here, especially in unclogging the analog-to-digital converters, which generally represent the bottleneck of this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The interest is twofold: CS both consistently reduces the amount of data to be converted and suppresses digital processing performed off the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration, as well as the intended applications, aims at reducing the power consumption related to these components (i.e., ADC and DSP represent 99% of the total power consumption). More broadly, the paradigm of CS questions, or at least extends, the Nyquist-Shannon sampling theory.

    This thesis shows developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Indeed, advances are presented in the fields of hyperspectral imaging, super-resolution, high dynamic range, high speed, and non-uniform sampling. In particular, three research axes have been deepened, aiming to design proper architectures and acquisition processes, with their associated reconstruction techniques, that take advantage of sparse image representations. How can the on-chip implementation of Compressive Sensing relax sensor constraints, improving the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and to improve the reconstructed image quality at a given compression ratio? Finally, how can CS overcome the physical limitations (i.e., spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved?

    A CMOS image sensor was developed and manufactured during this Ph.D. to validate concepts such as High Dynamic Range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisitions in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling has been developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of Liquid Crystal Devices to enable hyperspectral imaging combined with spatial super-resolution.

    The conclusion of this study can be summarized as follows: CS must now be considered a toolbox for more easily defining compromises between the different characteristics of a sensor: integration time, converter speed, dynamic range, resolution, and digital processing resources. However, while CS relaxes some hardware constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, requiring massive computational resources compared with so-called conventional techniques. The application field is wide, implying that for a targeted application, the constraints on both the sensor (encoder) and the decoder need to be accurately characterized.
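    For the decoder side discussed above, here is a minimal sketch of one standard CS reconstruction, ISTA for l1-regularized least squares, assuming a generic random measurement matrix Phi rather than the sensor's actual acquisition scheme; the step size and iteration count are illustrative.

    ```python
    import numpy as np

    def ista(y, Phi, lam=0.01, steps=500):
        """Recover a sparse x from compressed measurements y = Phi @ x.

        Plain ISTA (iterative soft thresholding) for
        argmin_x 0.5 * ||Phi x - y||^2 + lam * ||x||_1.
        """
        L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(Phi.shape[1])
        for _ in range(steps):
            z = x - Phi.T @ (Phi @ x - y) / L    # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    # Example: a 10-sparse signal recovered from 4x fewer measurements.
    rng = np.random.default_rng(0)
    n, m = 256, 64
    x_true = np.zeros(n)
    x_true[rng.choice(n, 10, replace=False)] = rng.normal(size=10)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    x_hat = ista(Phi @ x_true, Phi)
    ```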