Metasurface-enhanced Light Detection and Ranging Technology
Deploying advanced imaging solutions to robotic and autonomous systems by
mimicking human vision requires the simultaneous acquisition of multiple fields
of view, namely the peripheral and foveal regions. The low-resolution
peripheral field provides coarse scene exploration that directs the eye to
focus on a highly resolved foveal region for sharp imaging. Among 3D computer vision techniques,
Light Detection and Ranging (LiDAR) is currently considered at the industrial
level for robotic vision. LiDAR is an imaging technique that emits pulses of
light at optical frequencies and times their return to sense the scene and recover
three-dimensional ranging information. Notwithstanding the efforts on LiDAR
integration and optimization, commercially available devices have slow frame
rate and low image resolution, notably limited by the performance of mechanical
or slow solid-state deflection systems. Metasurfaces (MS) are versatile optical
components that can distribute the optical power in desired regions of space.
Here, we report on an advanced LiDAR technology that uses ultrafast low FoV
deflectors cascaded with large area metasurfaces to achieve large FoV and
simultaneous peripheral and central imaging zones. This technology achieves MHz
frame rates for 2D imaging, and up to kHz rates for 3D imaging, with an extremely large
FoV (up to 150° on both the vertical and horizontal scanning axes). The
use of this disruptive LiDAR technology with advanced learning algorithms
offers perspectives to improve further the perception capabilities and
decision-making process of autonomous vehicles and robotic systems.Comment: 25pages, 18 figures. Including supplementary material
Development of a common interface for the configuration and data acquisition of time-of-flight cameras
The aim of this final degree thesis is the development of a common application programming interface (API) that allows the configuration of, and image acquisition from, different depth cameras without the need to know each camera's own functions.
Time-of-flight (ToF) cameras make it possible to obtain depth information about the environment with high precision by analysing the changes undergone by an emitted light signal when it is reflected from different objects. However, each manufacturer provides its own functions and applications. The API developed in this work gives access to camera configuration and data acquisition without requiring knowledge of the manufacturer's libraries, and also allows new cameras to be incorporated in a simple way.
Grado en Ingeniería en Electrónica y Automática Industrial (Bachelor's degree in Industrial Electronics and Automation Engineering)
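One way such a vendor-neutral depth-camera API might be structured is an abstract base class that each manufacturer-specific driver implements. All class and method names below are hypothetical illustrations of the idea, not the thesis's actual API:

```python
# Hypothetical sketch of a common depth-camera interface: application code
# talks to the abstract DepthCamera class, and each vendor SDK is wrapped
# in its own concrete driver.  Names here are illustrative only.
from abc import ABC, abstractmethod

class DepthCamera(ABC):
    @abstractmethod
    def configure(self, **settings) -> None:
        """Apply camera settings without exposing the vendor SDK."""

    @abstractmethod
    def get_depth_frame(self) -> list:
        """Return one depth frame as a row-major list of distances."""

class FakeToFCamera(DepthCamera):
    """Stand-in driver used here instead of a real vendor SDK."""
    def __init__(self):
        self.settings = {}

    def configure(self, **settings) -> None:
        self.settings.update(settings)

    def get_depth_frame(self) -> list:
        # A real driver would call the manufacturer's library here.
        return [1.0, 1.2, 0.9, 1.1]

cam: DepthCamera = FakeToFCamera()
cam.configure(integration_time_us=500)
print(cam.get_depth_frame())
```

Adding support for a new camera then reduces to writing one more subclass, while application code written against `DepthCamera` stays unchanged.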
A study on high-precision, high-speed range imaging with time-of-flight CMOS image sensors
Tohoku University, Doctor of Engineering (博士(工学)) thesis
CMOS SPAD-based image sensor for single photon counting and time of flight imaging
The ability to capture the arrival of a single photon is the fundamental limit to the detection of quantised
electromagnetic radiation. An image sensor capable of capturing a picture with this ultimate optical and
temporal precision is the pinnacle of photo-sensing. The creation of high spatial resolution, single photon
sensitive, and time-resolved image sensors in complementary metal oxide semiconductor (CMOS) technology
offers numerous benefits in a wide field of applications. These CMOS devices will be suitable to replace high
sensitivity charge-coupled device (CCD) technology (electron-multiplied or electron bombarded) with
significantly lower cost and comparable performance in low-light or high-speed scenarios. For example, with
temporal resolution on the order of nanoseconds to picoseconds, detailed three-dimensional (3D) pictures can be
formed by measuring the time of flight (TOF) of a light pulse. High frame rate imaging of single photons can
yield new capabilities in super-resolution microscopy. Also, the imaging of quantum effects such as the
entanglement of photons may be realised.
The goal of this research project is the development of such an image sensor by exploiting single photon
avalanche diodes (SPAD) in advanced imaging-specific 130nm front side illuminated (FSI) CMOS technology.
SPADs have three key combined advantages over other imaging technologies: single photon sensitivity,
picosecond temporal resolution and the facility to be integrated in standard CMOS technology. Analogue
techniques are employed to create an efficient and compact imager that is scalable to mega-pixel arrays. A
SPAD-based image sensor is described with 320 by 240 pixels at a pitch of 8μm and an optical efficiency or
fill-factor of 26.8%. Each pixel comprises a SPAD with a hybrid analogue counting and memory circuit that
makes novel use of a low-power charge transfer amplifier. Global shutter single photon counting images are
captured. These exhibit photon-shot-noise-limited statistics with unprecedentedly low input-referred noise,
equivalent to 0.06 electrons.
The CMOS image sensor (CIS) trends of shrinking pixels, increasing array sizes, decreasing read noise, fast
readout and oversampled image formation are projected towards the formation of binary single photon imagers
or quanta image sensors (QIS). In a binary digital image capture mode, the image sensor offers a look-ahead to
the properties and performance of future QISs, with 20,000 binary frames per second readout at a bit error
rate of 1.7 × 10⁻³. The bit density, or cumulative binary intensity, versus exposure response of this image
sensor follows the shape of the famous Hurter and Driffield densitometry curves of photographic film.
Oversampled time-gated binary image capture is demonstrated, capturing 3D TOF images with 3.8 cm
precision over a 60 cm range.
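The film-like response of a binary pixel has a simple statistical origin: under Poisson illumination with mean H photons per exposure, the probability that a one-bit frame records at least one photon is 1 − e^(−H), which traces an S-shaped curve against log exposure. The sketch below illustrates this textbook relation; it is not the sensor's measured data:

```python
# Ideal binary (one-bit) pixel under Poisson light: the fraction of frames
# that register at least one photon is D = 1 - exp(-H), where H is the mean
# photon count per exposure.  Against log exposure this gives the S-shaped
# response reminiscent of Hurter and Driffield film curves.
import math

def bit_density(mean_photons: float) -> float:
    """Fraction of binary frames that register >= 1 photon."""
    return 1.0 - math.exp(-mean_photons)

for h in (0.1, 1.0, 3.0):
    print(f"H={h}: D={bit_density(h):.3f}")
```

At low exposure the density grows almost linearly with H, while at high exposure it saturates towards 1, mirroring the toe and shoulder of a photographic characteristic curve.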
Miniature high dynamic range time-resolved CMOS SPAD image sensors
Since their integration in complementary metal oxide (CMOS) semiconductor technology in 2003,
single photon avalanche diodes (SPADs) have inspired a new era of low cost high integration
quantum-level image sensors. Their unique feature of discerning single photon detections, their ability
to retain temporal information on every collected photon and their amenability to high speed image
sensor architectures makes them prime candidates for low light and time-resolved applications.
From the biomedical field of fluorescence lifetime imaging microscopy (FLIM) to extreme physical
phenomena such as quantum entanglement, all the way to time of flight (ToF) consumer applications
such as gesture recognition and more recently automotive light detection and ranging (LIDAR), huge
steps in detector and sensor architectures have been made to address the design challenges of pixel
sensitivity and functionality trade-off, scalability and handling of large data rates.
The goal of this research is to explore the hypothesis that given the state of the art CMOS nodes and
fabrication technologies, it is possible to design miniature SPAD image sensors for time-resolved
applications with a small pixel pitch while maintaining both sensitivity and built-in functionality.
Three key approaches are pursued to that purpose: leveraging the innate area reduction of logic gates
and finer design rules of advanced CMOS nodes to balance the pixel’s fill factor and processing
capability, smarter pixel designs with configurable functionality and novel system architectures that
lift the processing burden off the pixel array and mediate data flow.
Two pathfinder SPAD image sensors were designed and fabricated: a 96 × 40 planar front side
illuminated (FSI) sensor with 66% fill factor at 8.25μm pixel pitch in an industrialised 40nm process
and a 128 × 120 3D-stacked backside illuminated (BSI) sensor with 45% fill factor at 7.83μm pixel
pitch. Both designs rely on a digital, configurable, 12-bit ripple counter pixel allowing for time-gated
shot noise limited photon counting. The FSI sensor was operated as a quanta image sensor (QIS)
achieving an extended dynamic range in excess of 100 dB by utilising triple exposure windows and in-pixel
data compression, which reduces data rates by a factor of 3.75. The stacked sensor is the first
demonstration of a wafer scale SPAD imaging array with a 1-to-1 hybrid bond connection.
Characterisation results of the detector and sensor performance are presented.
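A generic way multi-exposure dynamic-range extension works is sketched below: each pixel is read with several exposure windows, and the longest unsaturated reading is rescaled to a common reference. This is a textbook illustration under assumed values (12-bit saturation, 1 : 0.25 : 0.0625 exposure ratios); the sensor's actual triple-exposure combination may differ:

```python
# Generic multi-exposure HDR combination: take the longest exposure that
# did not saturate and scale it back by its relative exposure.  Textbook
# sketch only; saturation level and ratios are assumed, not from the thesis.

FULL_WELL = 4095  # 12-bit counter limit (assumption for illustration)

def combine_exposures(counts, exposure_ratios):
    """counts[i] was measured with relative exposure exposure_ratios[i],
    listed longest-first.  Returns an estimate in units of the longest
    exposure."""
    for c, r in zip(counts, exposure_ratios):
        if c < FULL_WELL:          # first unsaturated reading
            return c / r           # rescale by its relative exposure
    return counts[-1] / exposure_ratios[-1]  # all saturated: best effort

# Bright pixel: long and medium exposures saturate, the short one does not.
print(combine_exposures([4095, 4095, 1000], [1.0, 0.25, 0.0625]))
```

Because only the selected reading (plus a short tag identifying which window it came from) needs to leave the pixel, schemes of this kind also act as a form of in-pixel data compression.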
Two other time-resolved 3D-stacked BSI SPAD image sensor architectures are proposed. The first is a
fully integrated 5-wire interface system on chip (SoC), with built-in power management and off-focal
plane data processing and storage for high dynamic range as well as autonomous video rate operation.
Preliminary images and bring-up results of the fabricated 2mm² sensor are shown. The second is a
highly configurable design capable of simultaneous multi-bit oversampled imaging and programmable
region of interest (ROI) time correlated single photon counting (TCSPC) with on-chip histogram
generation. The 6.48μm pitch array has been submitted for fabrication. In-depth design details of both
architectures are discussed.
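The essence of TCSPC histogram generation mentioned above can be sketched in a few lines: photon arrival times measured relative to the laser pulse are accumulated into time bins, and the resulting histogram traces the optical waveform (a fluorescence decay, a LIDAR return). The bin width and count below are illustrative assumptions, not the chip's parameters:

```python
# Software sketch of what on-chip TCSPC histogramming computes: bin photon
# arrival timestamps (relative to the excitation pulse) into a histogram.
# Bin width and bin count are illustrative, not the sensor's parameters.

BIN_WIDTH_PS = 50      # histogram bin width, picoseconds (assumed)
NUM_BINS = 8

def tcspc_histogram(arrival_times_ps):
    hist = [0] * NUM_BINS
    for t in arrival_times_ps:
        b = int(t // BIN_WIDTH_PS)
        if 0 <= b < NUM_BINS:  # times outside the window are discarded
            hist[b] += 1
    return hist

print(tcspc_histogram([10, 60, 75, 120, 130, 140, 390]))
```

Accumulating the histogram on-chip, as the proposed architecture does, means only the compact binned counts need to be read out instead of every raw timestamp, which is what tames the data rates of large SPAD arrays.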