
    Advanced photon counting techniques for long-range depth imaging

    The Time-Correlated Single-Photon Counting (TCSPC) technique has emerged as a candidate approach for Light Detection and Ranging (LiDAR) and active depth imaging applications. The work of this Thesis concentrates on the development and investigation of functional TCSPC-based long-range scanning time-of-flight (TOF) depth imaging systems. Although these systems have several different configurations and functions, all can facilitate depth profiling of remote targets at low light levels and with good surface-to-surface depth resolution. Firstly, a Superconducting Nanowire Single-Photon Detector (SNSPD) and an InGaAs/InP Single-Photon Avalanche Diode (SPAD) module were employed to develop kilometre-range TOF depth imaging systems at wavelengths of ~1550 nm. Secondly, a TOF depth imaging system at a wavelength of 817 nm, incorporating a Complementary Metal-Oxide-Semiconductor (CMOS) 32×32 Si-SPAD detector array, was developed. This system was used with structured illumination to examine the potential for covert, eye-safe and high-speed depth imaging. To improve the light coupling efficiency onto the detectors, the arrayed CMOS Si-SPAD detector chips were integrated with microlens arrays using flip-chip bonding technology, improving the fill factor by up to a factor of 15. Thirdly, a multispectral TCSPC-based full-waveform LiDAR system was developed using a tunable broadband pulsed supercontinuum laser source that provides simultaneous multispectral illumination at wavelengths of 531, 570, 670 and ~780 nm. The multispectral reflectance data acquired on a tree were used to determine physiological parameters, relating to biomass and foliage photosynthetic efficiency, as a function of the tree depth profile. Fourthly, depth images were estimated using spatial correlation techniques in order to reduce the aggregate number of photons required for depth reconstruction with low error. 
A depth imaging system was characterised and re-configured to reduce the effects of scintillation due to atmospheric turbulence. In addition, depth images were analysed in terms of spatial and depth resolution.
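The photon-counting principle behind these systems can be illustrated with a short sketch (a minimal illustration, not the Thesis instrumentation; the bin width, timing jitter and background level below are assumed values): a timing histogram of photon arrivals is built, and depth follows from the round-trip relation d = c·t/2.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_timestamps(timestamps_s, bin_width_s=100e-12):
    """Estimate target depth from TCSPC photon arrival times.

    A histogram of time-of-flight values is built; the bin with the
    most counts is taken as the return peak, and depth follows from
    d = c * t / 2 (round trip).
    """
    timestamps_s = np.asarray(timestamps_s)
    n_bins = int(np.ceil(timestamps_s.max() / bin_width_s)) + 1
    counts, edges = np.histogram(timestamps_s, bins=n_bins,
                                 range=(0.0, n_bins * bin_width_s))
    peak_time = edges[np.argmax(counts)] + bin_width_s / 2
    return C * peak_time / 2

# Simulated return from a target ~1 km away, with 50 ps timing
# jitter on the signal and uniformly distributed background counts.
rng = np.random.default_rng(0)
tof = 2 * 1000.0 / C                                  # round-trip time, 1 km
signal = rng.normal(tof, 50e-12, size=500)            # photons from target
background = rng.uniform(0.0, 10e-6, size=200)        # stray/dark counts
d = depth_from_timestamps(np.concatenate([signal, background]))
```

Because the background photons are spread over the whole timing range while the signal photons pile into a few bins, the histogram peak recovers the target depth even at low signal levels.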

    Proposed architecture and circuits for dynamic range enhancement in vision-systems-on-chip designed in deeply submicron CMOS technologies

    The work presented in this thesis proposes new techniques for extending the dynamic range of electronic image sensors. In particular, we have directed our studies toward providing this functionality on a single chip, that is, without any external hardware or software support, forming a system known as a Vision System on Chip (VSoC). The dynamic range of an electronic image sensor is defined as the ratio between the maximum and minimum measurable illumination. Two options arise for improving this figure. The first is to reduce the minimum measurable light by decreasing the noise in the image sensor. The second is to increase the maximum measurable light by extending the saturation limit of the sensor. Chronologically, our first option for improving the dynamic range was based on reducing the noise. Several approaches can improve the noise figure of merit of the system: reducing the noise by using a CIS technology, or using dedicated circuits such as calibration or auto-zeroing. However, the use of circuit techniques implies limitations that can only be resolved by using non-standard technologies specially designed for this purpose. The CIS technology used is aimed at improving the quality and capabilities of the photosensing process, such as sensitivity, noise, and colour imaging. To study the characteristics of the technology in more detail, a test chip was designed, which allows the best options for future pixels to be identified. Nevertheless, despite satisfactory overall behaviour, the dynamic range measurements indicated that improvement through CIS technology alone is very limited. That is, the improvement in the dark current of the sensor is not sufficient for our purpose. For a greater improvement of the dynamic range, circuits must be included inside the pixel. 
However, CIS technologies usually allow nothing more than NMOS transistors next to the photosensor, which implies a serious restriction on the circuitry that can be used. As a result, the design of a dynamic-range-enhanced image sensor in CIS technology was set aside in favour of a standard technology, which gives more flexibility to the pixel design. In standard technologies, it is possible to introduce a high degree of functionality using in-pixel circuits, which enables advanced techniques for extending the saturation limit of image sensors. Two options arise for this purpose: linear or compressive acquisition. If a linear acquisition is performed, a large amount of data is generated for each pixel. As an example, if the dynamic range of the scene is 120 dB, at least 20 bits/pixel would be needed for a binary representation of this dynamic range, since log2(10^(120/20)) = 19.93. This would require extensive resources to process such a large amount of data, and a large bandwidth to move it to the processing circuitry. To avoid these problems, high-dynamic-range image sensors usually opt for a compressive acquisition of the light. This implies two tasks to be performed: capturing and compressing the image. Image capture is carried out at the pixel level, in the photosensing device, while image compression can be carried out at the pixel level, at the system level, or by external post-processing. On the post-processing side, there is a field of research that studies the compression of high-dynamic-range scenes while preserving the details, producing a result appropriate for human perception on conventional low-dynamic-range displays. This is called Tone Mapping, and it usually employs only 8 bits/pixel for image representation, since this is the standard for low-dynamic-range images. 
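The 20-bit figure quoted above follows directly from the decibel definition of dynamic range; a quick check:

```python
import math

def bits_for_dynamic_range(dr_db):
    """Minimum bits/pixel for a linear binary representation of a
    scene with the given dynamic range (dr_db, in decibels).
    The intensity ratio is 10**(dr_db/20); the bit count is the
    ceiling of its base-2 logarithm."""
    ratio = 10 ** (dr_db / 20)
    return math.log2(ratio), math.ceil(math.log2(ratio))

# For a 120 dB scene: log2(10**(120/20)) = 19.93, so 20 bits/pixel.
exact, needed = bits_for_dynamic_range(120)
```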
Compressive-acquisition pixels, for their part, perform a compression that does not depend on the high-dynamic-range scene being captured, which implies low compression or a loss of detail and contrast. To avoid these disadvantages, this work presents a compressive-acquisition pixel that applies a tone-mapping technique, allowing already-compressed images to be captured in a way optimized to preserve detail and contrast while producing a greatly reduced amount of data. Tone-mapping techniques normally run as software post-processing on a computer, on images captured without compression, which contain a large amount of data. These techniques have traditionally belonged to the field of computer graphics because of the large computational effort they require. However, we have developed a new tone-mapping algorithm specially adapted to take advantage of in-pixel circuits, requiring little computation outside the pixel array, which enables a vision system on a single chip. The new tone-mapping algorithm, which is a mathematical concept that can be simulated in software, has also been implemented on a chip. This hardware implementation, however, requires some adaptations and advanced design techniques, which in themselves constitute another contribution of this work. Furthermore, owing to the new functionality, modifications of the typical methods used for characterization and image capture have been developed.
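As an illustration of the general idea (not the in-pixel algorithm developed in the thesis, which is not reproduced in this abstract), even a simple global logarithmic operator shows how a ~120 dB scene can be compressed into an 8-bit representation:

```python
import numpy as np

def log_tone_map(hdr, max_out=255):
    """Global logarithmic tone mapping (illustrative only).
    Compresses a high-dynamic-range image into 8 bits/pixel by
    normalising the log of each intensity to the log of the
    scene maximum."""
    hdr = np.asarray(hdr, dtype=np.float64)
    compressed = np.log1p(hdr) / np.log1p(hdr.max())
    return (compressed * max_out).astype(np.uint8)

# A synthetic scene spanning ~120 dB (a 1e6:1 intensity ratio)
# maps onto the full 8-bit range while keeping intensity order.
scene = np.array([[1.0, 1e2], [1e4, 1e6]])
ldr = log_tone_map(scene)
```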

    Design of A Saccadic Active Vision System

    Human vision is remarkable. By limiting the main concentration of high-acuity photoreceptors to the eye's central fovea region, we efficiently view the world by redirecting the fovea between points of interest using eye movements called saccades. Part I describes a saccadic vision system prototype design. The dual-resolution saccadic camera detects objects of interest in a scene by processing low-resolution image information; it then revisits salient regions in high resolution. The end product is a dual-resolution image in which background information is displayed in low resolution and salient areas are captured in high acuity. This lends itself to a resource-efficient active vision system. Part II describes CMOS image sensor designs for active vision. Specifically, this discussion focuses on methods to determine regions of interest and achieve high dynamic range on the sensor.
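The dual-resolution capture described above can be sketched as follows (a minimal illustration; the block size and the mean-intensity saliency test are assumptions, not the prototype's actual saliency detector):

```python
import numpy as np

def dual_resolution_image(scene, saliency_threshold=0.8, block=8):
    """Build a dual-resolution composite: blocks whose mean intensity
    exceeds a saliency threshold are 'revisited' and kept at full
    resolution (mimicking a saccade to that region), while all other
    blocks are replaced by their block average (low-resolution fill)."""
    h, w = scene.shape
    out = np.empty_like(scene, dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = scene[i:i+block, j:j+block]
            if patch.mean() > saliency_threshold:
                out[i:i+block, j:j+block] = patch         # high acuity
            else:
                out[i:i+block, j:j+block] = patch.mean()  # low-res fill
    return out

# A dim scene with one bright (salient) quadrant and one textured
# dim quadrant: the bright block survives intact, the dim texture
# collapses to its average.
scene = np.zeros((16, 16))
scene[:8, :8] = np.linspace(0.85, 0.95, 64).reshape(8, 8)   # salient
scene[8:, 8:] = np.linspace(0.0, 0.4, 64).reshape(8, 8)     # background
composite = dual_resolution_image(scene)
```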

    Amorphous silicon and 3D sensors applied to object detection

    Nowadays, existing 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS imagers, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position sensitive detectors, with the final goal of integrating them into 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, which were produced in the clean room at CENIMAT-CEMOP. During the first phase of this work, the fabrication and the study of the static and dynamic specifications of the sensors, as well as their conditioning in relation to existing scientific and technological knowledge, formed the starting point. Subsequently, relevant data acquisition and suitable signal processing electronics were assembled. Various prototypes were developed for the 32 and 128 line PSD array sensors. Appropriate optical solutions were integrated with the constructed prototypes, allowing the required experiments to be carried out and the results presented in this thesis to be achieved. All control, data acquisition and 3D rendering platform software was implemented for the existing systems. All these components were combined to form several integrated systems for the 32 and 128 line PSD 3D sensors. The performance of the 32 line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, and for microscopy applications, such as micro-object movement detection. Trials were also performed involving the 128 line PSD array sensor systems. 
Sensor channel non-linearities of approximately 4 to 7% were obtained. The overall results show the possibility of using a linear array of 32/128 1D line sensors based on amorphous silicon technology to render 3D profiles of objects. The system and setup presented allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that can be detected by the sensor system is approximately 350 μm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15º to 85º and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Simple and not-so-simple objects, such as a rubber and a plastic fork, can be rendered in 3D properly and accurately, also at high resolution, using this sensor and system platform. The nip structure sensor system can detect primary and even derived colours of objects through proper adjustment of the system integration time and by combining white, red, green and blue (RGB) light sources. A mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometre-scale objects using the 32 line PSD sensor system. This kind of setup makes it possible to detect whether a micro-object is moving, its dimensions, and its position in two dimensions, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of < 2 µm.
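The position readout of a 1D lateral-effect PSD, on which such line sensors are based, follows a standard two-terminal current ratio (a generic sketch, not the specific CENIMAT-CEMOP device model; the reported 3-7% channel non-linearity is not modelled here):

```python
def psd_position(i_a, i_b, length_mm):
    """Spot position on a 1D lateral-effect position sensitive
    detector from its two terminal photocurrents i_a and i_b.
    The standard relation x = (L/2) * (I_B - I_A) / (I_A + I_B)
    gives the position relative to the sensor centre, independent
    of the total light intensity."""
    return (length_mm / 2) * (i_b - i_a) / (i_a + i_b)

# A spot centred on a 10 mm sensor produces equal currents:
x_centre = psd_position(1.0, 1.0, length_mm=10.0)  # -> 0.0 mm
# All current at terminal B puts the spot at the +L/2 edge:
x_edge = psd_position(0.0, 2.0, length_mm=10.0)    # -> 5.0 mm
```

Because the ratio cancels the total photocurrent, the readout is an analogue position measurement that needs no per-pixel digitisation, which is what allows the high scan rates described above.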

    LUCI onboard Lagrange, the Next Generation of EUV Space Weather Monitoring

    LUCI (Lagrange eUv Coronal Imager) is a solar imager in the Extreme UltraViolet (EUV) being developed as part of the Lagrange mission, a mission designed to be positioned at the L5 Lagrangian point to monitor space weather from its source on the Sun, through the heliosphere, to the Earth. LUCI will use an off-axis two-mirror design equipped with an EUV-enhanced active pixel sensor. This type of detector has advantages that promise to be very beneficial for monitoring the source of space weather in the EUV. LUCI will also have a novel off-axis wide field of view, designed to observe the solar disk, the lower corona, and the extended solar atmosphere close to the Sun-Earth line. LUCI will provide solar coronal images at a 2-3 minute cadence in a pass-band centred on 19.5 nm. Observations made through this pass-band allow for the detection and monitoring of semi-static coronal structures such as coronal holes, prominences, and active regions, as well as transient phenomena such as solar flares, limb Coronal Mass Ejections (CMEs), EUV waves, and coronal dimmings. The LUCI data will complement EUV solar observations provided by instruments located along the Sun-Earth line, such as PROBA2-SWAP, SUVI-GOES and SDO-AIA, as well as provide unique observations to improve space weather forecasts. Together with a suite of other remote-sensing and in-situ instruments onboard Lagrange, LUCI will provide science-quality operational observations for space weather monitoring.

    NASA Tech Briefs, June 2012

    Topics covered include: iGlobe Interactive Visualization and Analysis of Spatial Data; Broad-Bandwidth FPGA-Based Digital Polyphase Spectrometer; Small Aircraft Data Distribution System; Earth Science Datacasting v2.0; Algorithm for Compressing Time-Series Data; Onboard Science and Applications Algorithm for Hyperspectral Data Reduction; Sampling Technique for Robust Odorant Detection Based on MIT RealNose Data; Security Data Warehouse Application; Integrated Laser Characterization, Data Acquisition, and Command and Control Test System; Radiation-Hard SpaceWire/Gigabit Ethernet-Compatible Transponder; Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager; High-Voltage, Low-Power BNC Feedthrough Terminator; SpaceCube Mini; Dichroic Filter for Separating W-Band and Ka-Band; Active Mirror Predictive and Requirement Verification Software (AMP-ReVS); Navigation/Prop Software Suite; Personal Computer Transport Analysis Program; Pressure Ratio to Thermal Environments; Probabilistic Fatigue Damage Program (FATIG); ASCENT Program; JPL Genesis and Rapid Intensification Processes (GRIP) Portal; Data::Downloader; Fault Tolerance Middleware for a Multi-Core System; DspaceOgreTerrain 3D Terrain Visualization Tool; Trick Simulation Environment 07; Geometric Reasoning for Automated Planning; Water Detection Based on Color Variation; Single-Layer, All-Metal Patch Antenna Element with Wide Bandwidth; Scanning Laser Infrared Molecular Spectrometer (SLIMS); Next-Generation Microshutter Arrays for Large-Format Imaging and Spectroscopy; Detection of Carbon Monoxide Using Polymer-Composite Films with a Porphyrin-Functionalized Polypyrrole; Enhanced-Adhesion Multiwalled Carbon Nanotubes on Titanium Substrates for Stray Light Control; Three-Dimensional Porous Particles Composed of Curved, Two-Dimensional, Nano-Sized Layers for Li-Ion Batteries; and Ultra-Lightweight Nanocomposite Foams and Sandwich Structures for Space Structure Applications.