
    Frequency-modulated continuous-wave LiDAR compressive depth-mapping

    We present an inexpensive architecture for converting a frequency-modulated continuous-wave LiDAR system into a compressive-sensing-based depth-mapping camera. Instead of raster scanning to obtain depth maps, compressive sensing is used to significantly reduce the number of measurements. Ideally, our approach requires two difference detectors. Due to the large flux entering the detectors, the signal amplification from heterodyne detection, and the effects of background subtraction from compressive sensing, the system can obtain higher signal-to-noise ratios than detector-array-based schemes while scanning a scene faster than is possible through raster scanning. Moreover, by efficiently storing only 2m data points from m<n measurements of an n-pixel scene, we can easily extract depths by solving only two linear equations with efficient convex-optimization methods.
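As background for the FMCW ranging principle underlying this architecture (a sketch, not code from the paper; all names and example numbers are illustrative), the target range follows from the beat frequency between the emitted and returned linear chirps:

```python
# Range from an FMCW beat frequency, assuming a linear (sawtooth) chirp:
# the beat frequency satisfies f_beat = 2 * B * R / (c * T),
# so the range is R = c * T * f_beat / (2 * B).
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz: float, bandwidth_hz: float, chirp_period_s: float) -> float:
    """Target range in metres for a measured beat frequency."""
    return C * chirp_period_s * f_beat_hz / (2.0 * bandwidth_hz)

# Example: a 1 GHz chirp over 100 us; a 10 m target produces a ~667 kHz beat.
f_beat = 2 * 1e9 * 10.0 / (C * 100e-6)
print(round(fmcw_range(f_beat, 1e9, 100e-6), 6))  # → 10.0
```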

    An overview of lidar imaging systems for autonomous vehicles

    Lidar imaging systems are one of the hottest topics in the optronics industry. The need to sense the surroundings of every autonomous vehicle has pushed forward a race to decide the final solution to be implemented. However, the diversity of state-of-the-art approaches leaves large uncertainty about which solution will ultimately dominate. Furthermore, the performance data for each approach often come from different manufacturers and developers, who usually have some interest in the dispute. In this paper, we intend to overcome this situation by providing an introductory, neutral overview of the technology behind lidar imaging systems for autonomous vehicles and its current state of development. We start with the main single-point measurement principles in use, which are then combined with different imaging strategies, also described in the paper. An overview of the features of the light sources and photodetectors most frequently used in practice in lidar imaging systems is also presented. Finally, a brief section on pending issues for lidar development in autonomous vehicles is included, presenting some of the problems that still need to be solved before any implementation may be considered final. The reader is provided with a detailed bibliography containing both relevant books and state-of-the-art papers for further progress in the subject.

    A solid-state block-based pulsed laser illuminator for SPAD-based time-of-flight 3-D range imaging

    A laser diode illuminator for single-photon avalanche diode (SPAD) based pulsed time-of-flight 3-D range imaging is presented. The illuminator supports a block-based illumination scheme. It consists of a 16-element custom-designed common-anode quantum-well (QW) laser diode bar working in the enhanced gain-switching regime and lasing at ~810 nm. The laser diode elements are separately addressable and driven with GaN drivers producing 5–10 A/~2 ns current pulses from a supply voltage of 90 V. Cylindrical optics is used to produce a total illumination field of view of 40°×10° (FWHM) in 16 separately addressable blocks. With a laser pulsing frequency of 256 kHz and a laser pulse energy of ~8.5 nJ, the average optical illumination power of the transmitter is 2.2 mW.
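The quoted average optical power follows directly from the two numbers given above, since average power is pulse energy times repetition rate; a quick arithmetic check:

```python
# Average optical power = pulse energy x pulse repetition rate.
pulse_energy_j = 8.5e-9  # ~8.5 nJ per pulse
rep_rate_hz = 256e3      # 256 kHz pulsing frequency

avg_power_w = pulse_energy_j * rep_rate_hz
print(f"{avg_power_w * 1e3:.2f} mW")  # → 2.18 mW, i.e. ~2.2 mW
```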

    Guided direct time-of-flight Lidar for self-driving vehicles

    Self-driving vehicles demand efficient and reliable depth-sensing technologies. Lidar, with its capacity for long-distance, high-precision measurement, is a crucial component in this pursuit. However, conventional mechanical scanning implementations suffer from reliability, cost, and frame-rate limitations. Solid-state lidar solutions have emerged as a promising alternative, but the vast amount of photon data processed and stored in conventional direct time-of-flight (dToF) sensing prevents long-distance operation unless power-intensive partial-histogram approaches are used. This research introduces a pioneering ‘guided’ dToF approach, harnessing external guidance from other onboard sensors to narrow down the depth search space for a power- and data-efficient solution. The approach centres on a dToF sensor in which the exposed time window of independent pixels can be dynamically adjusted. A pair of vision cameras is used in this demonstrator to provide the guiding depth estimates. The implemented guided dToF demonstrator successfully captures a dynamic outdoor scene at 3 fps with distances up to 75 m. Compared to a conventional full-histogram approach, on-chip data is reduced by over 25 times, while the total laser cycles per frame are reduced by at least 6 times compared to any partial-histogram approach. The capability of guided dToF to mitigate multipath reflections is also demonstrated. For self-driving vehicles, where a wealth of sensor data is already available, guided dToF opens new possibilities for efficient solid-state lidar.
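The core idea of narrowing each pixel's exposed time window around an external depth estimate can be sketched as follows (illustrative only; the function names and the symmetric ± margin are assumptions, not the paper's implementation):

```python
C = 299_792_458.0  # speed of light, m/s

def exposure_window(depth_est_m: float, margin_m: float) -> tuple[float, float]:
    """Round-trip time window (seconds) during which to expose a dToF
    pixel, centred on a guiding depth estimate from another sensor."""
    t_near = 2.0 * max(depth_est_m - margin_m, 0.0) / C
    t_far = 2.0 * (depth_est_m + margin_m) / C
    return t_near, t_far

# A stereo-camera guess of 50 m with a 5 m margin exposes the pixel
# for only ~67 ns instead of the ~500 ns a full 75 m sweep would need.
t0, t1 = exposure_window(50.0, 5.0)
print(round((t1 - t0) * 1e9, 1))  # → 66.7 (window width in ns)
```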

    Single-photon counting lidar for long-range three-dimensional imaging

    Single-photon time-of-flight (ToF) distance ranging lidar is a candidate technology for high-resolution depth imaging for use, for example, from airborne platforms. This approach enables low average power pulsed laser sources to be used while allowing imaging from significantly longer target ranges compared to analogue imaging. The recent availability of Geiger-mode (Gm) arrays has revolutionised photon-counting lidar as they provide single-photon full-frame data in short acquisition times. This thesis presents work on the opto-mechanical design, tolerance analysis and performance evaluation of a re-configurable single-photon counting lidar which can accommodate either a single-element single-photon avalanche photodiode (SPAD) or a 32 × 32 Gm-array. By incorporating an inter-changeable lens, the two configurations were designed to provide identical pixel resolution for both the single-pixel system and the Gm-array configurations in order to permit a performance comparison to be conducted. This is the first time that such a comparison has been reported and the lidar is one of the earliest to assess the performance of a short-wave infra-red (SWIR) Gm-array. Both detection configurations used InGaAs/InP SPAD detectors and operated at a wavelength of 1550 nm. The main benefits of operating within the SWIR band include reduced solar background, lower atmospheric loss, improved covertness, as well as improved laser eye-safety thresholds. The system estimates target range by measuring the ToF using time-correlated single-photon counting (TCSPC) and was used to produce high-resolution three-dimensional images of targets at between 800 m and 10.5 km range. The single-element system has the potential to provide improved depth resolution over the array due to a smaller timing jitter but requires longer acquisition times due to the need for two-dimensional scanning. 
The acquisition time of the array configuration can be up to three orders of magnitude shorter than that of the single-element configuration, but it requires significantly higher average laser power. The Gm-array also provided simultaneous estimates of angle-of-arrival and intensity fluctuations, from which an estimate of the strength of atmospheric turbulence, comparable to that from dedicated instruments, could be obtained. This demonstrated that Gm-arrays provide a new means of high-speed turbulence measurement, with time intervals much shorter than those offered by existing scintillometers.
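In TCSPC ranging as described above, the target range is taken from the peak of the photon arrival-time histogram; a minimal sketch, with the bin width and synthetic arrival times invented for illustration:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_tcspc(arrival_times_s, bin_width_s):
    """Estimate target range from single-photon arrival times:
    histogram the times, take the modal bin, convert ToF to distance."""
    n_bins = int(np.ceil(max(arrival_times_s) / bin_width_s)) + 1
    counts, edges = np.histogram(arrival_times_s, bins=n_bins,
                                 range=(0.0, n_bins * bin_width_s))
    peak_centre = edges[np.argmax(counts)] + bin_width_s / 2.0
    return C * peak_centre / 2.0  # halve for the round trip

# Synthetic data: signal photons near 6.67 us (a ~1 km target)
# buried in uniform background counts.
rng = np.random.default_rng(0)
tof = 2.0 * 1000.0 / C
times = np.concatenate([rng.normal(tof, 1e-9, 500),
                        rng.uniform(0.0, 10e-6, 2000)])
print(round(range_from_tcspc(times, 10e-9)))  # ~1000 m
```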

    Computational Imaging and Artificial Intelligence: The Next Revolution of Mobile Vision

    Signal capture stands at the forefront of perceiving and understanding the environment, and imaging therefore plays a pivotal role in mobile vision. Recent explosive progress in Artificial Intelligence (AI) has shown great potential for developing advanced mobile platforms with new imaging devices. Traditional imaging systems based on the "capture images first, process afterwards" mechanism cannot meet this unprecedented demand. By contrast, Computational Imaging (CI) systems are designed to capture high-dimensional data in an encoded manner to provide more information for mobile vision systems. Thanks to AI, CI can now be used in real systems by integrating deep-learning algorithms into the mobile vision platform to achieve a closed loop of intelligent acquisition, processing, and decision making, thus leading to the next revolution of mobile vision. Starting from the history of mobile vision using digital cameras, this work first introduces the advances of CI in diverse applications and then conducts a comprehensive review of current research topics combining CI and AI. Motivated by the fact that most existing studies only loosely connect CI and AI (usually using AI to improve the performance of CI, with only limited works deeply connecting them), we propose a framework to deeply integrate CI and AI, using the example of self-driving vehicles with high-speed communication, edge computing, and traffic planning. Finally, we look ahead to the future of CI plus AI by investigating new materials, brain science, and new computing techniques to shed light on new directions for mobile vision systems.

    Sound Processing for Autonomous Driving

    Nowadays, a variety of intelligent systems for autonomous driving have been developed and have already shown a very high level of capability. One prerequisite for autonomous driving is an accurate and reliable representation of the environment around the vehicle. Current systems rely on cameras, RADAR, and LiDAR to capture the visual environment and to locate and track other traffic participants. Human drivers, in addition to vision, have hearing, and use a great deal of auditory information to understand the environment alongside visual cues. In this thesis, we present a sound-signal-processing system for auditory environment representation. Sound propagation is less affected by occlusion than the other sensing modalities and in some situations is less sensitive to weather conditions such as snow, ice, fog, or rain. Various audio processing algorithms provide the detection and classification of audio signals specific to certain types of vehicles, as well as their localization. First, the ambient sound is classified into fourteen major categories covering traffic objects and the actions they perform. Additionally, three specific types of emergency-vehicle sirens are classified. Second, each object is localized using a combined localization algorithm based on time difference of arrival and amplitude. The system is evaluated on real data, with a focus on reliable detection and accurate localization of emergency vehicles. In the third stage, the sound source is visualized on the image from the autonomous vehicle's camera system. For this purpose, a method for camera-to-microphone calibration has been developed. The presented approaches and methods have great potential to increase the accuracy of environment perception and, consequently, to improve the reliability and safety of autonomous driving systems in general.
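The time-difference-of-arrival principle used for localization can be sketched for a single two-microphone pair under a far-field approximation (the geometry, names, and numbers are illustrative, not the thesis implementation):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def doa_angle_deg(delta_t_s: float, mic_spacing_m: float) -> float:
    """Far-field direction of arrival from the inter-microphone delay:
    sin(theta) = c * dt / d, with theta measured from broadside."""
    s = SPEED_OF_SOUND * delta_t_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))

# A 0.5 ms delay across a 0.35 m baseline puts the source ~29.3°
# off broadside.
print(round(doa_angle_deg(0.5e-3, 0.35), 1))  # → 29.3
```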

    Proof-of-concept of a single-point Time-of-Flight LiDAR system and guidelines towards integrated high-accuracy timing, advanced polarization sensing and scanning with a MEMS micromirror

    Master's dissertation in Engineering Physics (specialization in Devices, Microsystems and Nanotechnologies). The core focus of the work reported herein is the realization of a functional Light Detection and Ranging (LiDAR) sensor to validate the direct Time-of-Flight (ToF) ranging concept, and the acquisition of critical knowledge on the pivotal aspects limiting the sensor's performance, towards forthcoming improvements aimed at a realistic sensor for automotive applications. Accordingly, the ToF LiDAR system is implemented through an architecture encompassing both optical and electronic functions, and is subsequently characterized under a sequence of test procedures usually applied in benchmarking LiDAR sensors. The design employs a hybrid edge-emitting laser diode (pulsed at 6 kHz, 46 ns temporal FWHM, 7 ns rise time; 919 nm wavelength with 5 nm FWHM), a PIN photodiode to detect the back-reflected radiation, a transamplification stage, and two Time-to-Digital Converters (TDCs) with leading-edge discrimination electronics to mark the transit time between emission and detection events. Furthermore, a flexible modular design is adopted using two separate Printed Circuit Boards (PCBs) comprising the transmitter (TX) and the receiver (RX), i.e. detection and signal processing. The overall output beam divergence is 0.4°×1° and an optical peak power of 60 W (87% overall throughput) is realized. The sensor is tested indoors from 0.56 to 4.42 metres, and the distance is directly estimated from the pulses' transit time. The precision within these working distances ranges from 4 cm to 7 cm, reflected in a Signal-to-Noise Ratio (SNR) between 12 dB and 18 dB.
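The direct-ToF distance estimate described above reduces to halving the measured round-trip time; as a quick illustrative check of the working range (names assumed, not the dissertation's code):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(transit_time_s: float) -> float:
    """Distance from the measured pulse round-trip (transit) time."""
    return C * transit_time_s / 2.0

# The 0.56 m and 4.42 m test distances correspond to round-trip
# times of roughly 3.7 ns and 29.5 ns.
for d in (0.56, 4.42):
    t = 2.0 * d / C
    print(f"{d} m -> {t * 1e9:.1f} ns")
```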
The design requires a calibration procedure to correct systematic errors in the range measurements, induced by two sources: the timing offset due to architecture-inherent differences in the optoelectronic paths, and a supplementary bias resulting from the discrimination technique, which introduces an intensity dependence and is known as time-walk. The calibrated system achieves a mean accuracy of 1 cm. Two distinct target materials are used for characterization and performance evaluation: a metallic automotive paint and a diffuse material. This selection is representative of two extremes of actual LiDAR applications. The optical and electronic characterization is thoroughly detailed, including the recognition of good agreement between empirical observations and simulations in ZEMAX, for the optical design, and in a SPICE software package, for the electrical subsystem. The foremost limitation of the implemented design is identified as an outcome of leading-edge discrimination. A Constant Fraction Discriminator targeting sub-millimetre accuracy is proposed to replace the previous signal-processing element; this modification is mandatory to virtually eliminate the aforementioned systematic bias in range sensing due to the intensity dependence. A further crucial addition is a scanning mechanism to supply the Field-of-View (FOV) required for automotive use. The opto-electromechanical guidelines to interface a MEMS micromirror scanner, achieving a 46°×17° FOV, with the LiDAR sensor are furnished. Ultimately, a proof of principle for the use of polarization in material classification for advanced processing is carried out, aiming to complement the ToF measurements. The original design is modified to include a variable wave retarder, allowing the detection of orthogonal linear polarization states using a single detector.
The material classification with polarization sensing is tested with the previously referred materials, culminating in an 87% and 11% degree of linear polarization retention from the metallic paint and the diffuse material, respectively, computed by Stokes-parameter calculus. The procedure was independently validated under the same conditions with a micro-polarizer camera (92% and 13% polarization retention).
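The degree-of-linear-polarization figures above come from Stokes-parameter calculus; a minimal sketch using only the two orthogonal linear intensities the modified detector measures (this assumes the S2 Stokes component is negligible, a simplification of the full treatment; the example intensities are invented):

```python
def dolp_from_orthogonal(i_parallel: float, i_perpendicular: float) -> float:
    """Degree of linear polarization from two orthogonal linear
    intensity measurements: S0 = Ipar + Iperp, S1 = Ipar - Iperp,
    DoLP = |S1| / S0 (assuming S2 ~ 0)."""
    s0 = i_parallel + i_perpendicular
    s1 = i_parallel - i_perpendicular
    return abs(s1) / s0

# A strongly polarization-preserving return (e.g. a metallic paint)
# versus a depolarizing diffuse return (illustrative numbers only).
print(round(dolp_from_orthogonal(0.935, 0.065), 2))  # → 0.87
print(round(dolp_from_orthogonal(0.555, 0.445), 2))  # → 0.11
```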