Fast online 3D reconstruction of dynamic scenes from individual single-photon detection events
In this paper, we present an algorithm for online 3D reconstruction of
dynamic scenes using individual times of arrival (ToA) of photons recorded by
single-photon detector arrays. One of the main challenges in 3D imaging using
single-photon Lidar is the integration time required to build ToA histograms
and reconstruct reliable 3D profiles in the presence of non-negligible ambient
illumination. This long integration time also prevents the analysis of rapid
dynamic scenes using existing techniques. We propose a new method which does
not rely on the construction of ToA histograms but allows, for the first time,
individual detection events to be processed online, in a parallel manner in
different pixels, while accounting for the intrinsic spatiotemporal structure
of dynamic scenes. We adopt a Bayesian approach: a model is constructed to
capture the dynamics of the 3D profile, and an approximate inference scheme
based on assumed density filtering is proposed, yielding a fast and robust
reconstruction algorithm able to efficiently process the thousands to millions
of frames typically recorded using single-photon detectors. The
performance of the proposed method, able to process hundreds of frames per
second, is assessed using a series of experiments conducted with static and
dynamic 3D scenes and the results obtained pave the way to a new family of
real-time 3D reconstruction solutions.
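As a rough illustration of the histogram-free idea above, one can maintain a Gaussian belief over each pixel's depth and update it with every individual detection event, treating each photon as either signal or uniform background. The Python sketch below shows a moment-matched, assumed-density-filtering style update; the Gaussian pulse model, all names, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import math

def adf_update(mu, var, toa, sigma_pulse, t_max, p_signal=0.5):
    """One assumed-density-filtering step: the Gaussian depth belief
    N(mu, var) is updated with a single photon time of arrival `toa`,
    modelled as signal (Gaussian around the depth, width sigma_pulse)
    with prior probability p_signal, or uniform background on [0, t_max]."""
    s2 = var + sigma_pulse ** 2
    # likelihood of the event under each hypothesis
    lik_sig = math.exp(-0.5 * (toa - mu) ** 2 / s2) / math.sqrt(2 * math.pi * s2)
    lik_bkg = 1.0 / t_max
    w = p_signal * lik_sig / (p_signal * lik_sig + (1 - p_signal) * lik_bkg)
    # Kalman-style update under the signal hypothesis
    gain = var / s2
    mu_sig = mu + gain * (toa - mu)
    var_sig = var - gain * var
    # moment-match the two-component mixture back to a single Gaussian
    mu_new = (1 - w) * mu + w * mu_sig
    var_new = (1 - w) * (var + mu ** 2) + w * (var_sig + mu_sig ** 2) - mu_new ** 2
    return mu_new, var_new
```

Because each event updates only two scalars per pixel, such a rule is naturally parallel across the detector array, which is the property the abstract emphasises.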
Robust 3D Reconstruction of Dynamic Scenes From Single-Photon Lidar Using Beta-Divergences
In this paper, we present a new algorithm for fast, online 3D reconstruction
of dynamic scenes using times of arrival of photons recorded by single-photon
detector arrays. One of the main challenges in 3D imaging using single-photon
lidar in practical applications is the presence of strong ambient illumination
which corrupts the data and can jeopardize the detection of peaks/surfaces in
the signals. This background noise complicates not only the observation model
classically used for 3D reconstruction but also the estimation procedure, which
requires iterative methods. In this work, we consider a new similarity measure
for robust depth estimation, which allows us to use a simple observation model
and a non-iterative estimation procedure while being robust to
mis-specification of the background illumination model. This choice leads to a
computationally attractive depth estimation procedure without significant
degradation of the reconstruction performance. This new depth estimation
procedure is coupled with a spatio-temporal model to capture the natural
correlation between neighboring pixels and successive frames for dynamic scene
analysis. The resulting online inference process is scalable and well suited
for parallel implementation. The benefits of the proposed method are
demonstrated through a series of experiments conducted with simulated and real
single-photon lidar videos, allowing the analysis of dynamic scenes at 325 m
observed under extreme ambient illumination conditions.
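To illustrate the spirit of such a robustified similarity measure, the sketch below estimates the pulse position using density-power (beta-divergence style) weights that exponentially suppress photons far from the peak, so a mis-specified uniform background barely biases the result. This is a hypothetical, simplified stand-in (it uses a few fixed-point refinements, whereas the paper advertises a non-iterative procedure); all names and values are assumptions.

```python
import numpy as np

def robust_depth(toas, sigma, beta=0.5):
    """Pulse-position estimate from photon times of arrival using
    beta-divergence-inspired weights: photons far from the current
    estimate receive near-zero weight, so background counts have
    little influence on the weighted mean."""
    t0 = np.median(toas)            # crude, robust initialisation
    for _ in range(3):              # a few fixed-point refinements
        w = np.exp(-beta * (toas - t0) ** 2 / (2 * sigma ** 2))
        t0 = np.sum(w * toas) / np.sum(w)
    return t0
```

On data mixing a narrow return with scattered background counts, this weighted estimate stays near the peak while the plain sample mean is pulled away by the background.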
A Sketching Framework for Reduced Data Transfer in Photon Counting Lidar
Single-photon lidar has become a prominent tool for depth imaging in recent
years. At the core of the technique, the depth of a target is measured by
constructing a histogram of time delays between emitted light pulses and
detected photon arrivals. A major data processing bottleneck arises on the
device when either the number of photons per pixel is large or the resolution
of the time stamp is fine, as both the space requirement and the complexity of
the image reconstruction algorithms scale with these parameters. We solve this
limiting bottleneck of existing lidar techniques by sampling the characteristic
function of the time of flight (ToF) model to build a compressive statistic, a
so-called sketch of the time delay distribution, which is sufficient to infer
the spatial distance and intensity of the object. The size of the sketch scales
with the degrees of freedom of the ToF model (number of objects) and not,
fundamentally, with the number of photons or the time stamp resolution.
Moreover, the sketch is highly amenable for on-chip online processing. We show
theoretically that the loss of information for compression is controlled and
the mean squared error of the inference quickly converges towards the optimal
Cram\'er-Rao bound (i.e. no loss of information) for modest sketch sizes. The
proposed compressed single-photon lidar framework is tested and evaluated on
real life datasets of complex scenes where it is shown that a compression rate
of up to 150 is achievable in practice without sacrificing the overall
resolution of the reconstructed image.
Imaging through obscurants using time-correlated single-photon counting in the short-wave infrared
Single-photon time-of-flight (ToF) light detection and ranging (LiDAR) systems have
emerged in recent years as a candidate technology for high-resolution depth imaging in
challenging environments, such as long-range imaging and imaging in scattering media.
This Thesis investigates the potential of two ToF single-photon depth imaging systems
based on the time-correlated single-photon (TCSPC) technique for imaging targets in
highly scattering environments. The high sensitivity and picosecond timing resolution
afforded by the TCSPC technique offers high-resolution depth profiling of remote targets
while maintaining low optical power levels. Both systems comprised a pulsed picosecond
laser source with an operating wavelength of 1550 nm, and employed InGaAs/InP SPAD
detectors. The main benefits of operating in the shortwave infrared (SWIR) band include
improved atmospheric transmission, reduced solar background, as well as increased laser
eye-safety thresholds over visible band sensors.
Firstly, a monostatic scanning transceiver unit was used in conjunction with a
single-element Peltier-cooled InGaAs/InP SPAD detector to attain sub-centimetre
resolution three-dimensional images of long-range targets obscured by camouflage
netting or in high levels of scattering media. Secondly, a bistatic system, which employed
a 32 × 32 pixel format InGaAs/InP SPAD array, was used to obtain rapid depth profiles
of targets which were flood-illuminated by a higher power pulsed laser source. The
performance of this system was assessed in indoor and outdoor scenarios in the presence
of obscurants and high ambient background levels.
Bespoke image processing algorithms were developed to reconstruct both the depth and
intensity images for data with very low signal returns and short data acquisition times,
illustrating the practicality of TCSPC-based LiDAR systems for real-time image
acquisition in the SWIR wavelength region, even in the photon-starved regime.
The Defence Science and Technology Laboratory (Dstl) National PhD Scheme.
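A standard building block for such depth reconstructions is matched filtering of the TCSPC timing histogram against the instrument response function (IRF), then converting the best-matching delay to range. The sketch below is a generic illustration of this step only, not the bespoke algorithms developed in the Thesis; all names and the example bin width are assumptions.

```python
import numpy as np

def matched_filter_depth(hist, irf, bin_width):
    """Estimate target range from a TCSPC timing histogram by
    cross-correlating it with the IRF and converting the bin offset
    of the best match into a one-way distance."""
    xcorr = np.correlate(hist, irf, mode="full")
    lag = int(np.argmax(xcorr)) - (len(irf) - 1)  # bin offset of best match
    c = 3.0e8                                     # speed of light (m/s)
    return 0.5 * c * lag * bin_width              # round trip -> one way
```

With a hypothetical 1 ns bin width, a return pulse 40 bins after the trigger corresponds to a 6 m stand-off; in the photon-starved regime the correlation pools the few detected counts across the whole IRF, which is what makes the approach usable at short acquisition times.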
Bayesian image reconstruction and adaptive scene sampling in single-photon LiDAR imaging
Three-dimensional multispectral Light Detection And Ranging (LiDAR) used
with time-correlated Single-Photon (SP) detection has emerged as a key imaging
modality for high-resolution depth imaging due to its high sensitivity and
excellent surface-to-surface resolution. This has enabled depth imaging through
adverse conditions, with a prime role in numerous applications. However, several practical
challenges currently limit the use of LiDAR in real-world conditions. Large data
volume constitutes a major challenge for multispectral SP-LiDAR imaging due to
the acquisition of millions of events per second that are usually gathered in large
histogram cubes. This challenge is more evident when the useful signal photons are
attenuated and the background noise is amplified as a result of imaging through a
scattering environment such as underwater or fog. Another challenge is the
detection of multiple surfaces per pixel, which usually occurs when imaging through
semi-transparent materials (e.g., windows, camouflage) or in long-range profiling.
This thesis proposes robust and fast computational solutions to improve the acquisition and processing of LiDAR data while quantifying uncertainty in high-dimensional data. A smart task-based sampling framework
is proposed to improve the acquisition process and reduce data volume. In addition,
the processing was improved using a Bayesian approach to different types of inverse
problems (e.g., spectral classification and scene reconstruction). The contributions
of this thesis enable fast and robust 3D reconstruction of complex scenes, paving
the way for the extensive use of single-photon imaging in real-world applications.
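One simple way to realise a task-based sampling rule, offered here purely as a hypothetical illustration, is to direct the next batch of measurements at the pixels where the posterior depth variance is largest, i.e. where the current reconstruction is most uncertain. The names and the variance-map input below are assumptions, not the framework proposed in the thesis.

```python
import numpy as np

def select_next_pixels(var_map, budget):
    """Adaptive scene sampling sketch: return the (row, col) indices of
    the `budget` pixels with the highest posterior depth variance, so
    the next dwell times are spent where uncertainty is greatest."""
    flat = var_map.ravel()
    idx = np.argpartition(flat, -budget)[-budget:]  # top-`budget` pixels
    return np.unravel_index(idx, var_map.shape)
```

Repeating this select-measure-update loop concentrates acquisition on informative regions, which is how such schemes reduce overall data volume.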