Bayesian Based Unrolling for Reconstruction and Super-resolution of Single-Photon Lidar Systems
Deploying 3D single-photon Lidar imaging in real world applications faces
several challenges due to imaging in high noise environments and with sensors
having limited resolution. This paper presents a deep learning algorithm based
on unrolling a Bayesian model for the reconstruction and super-resolution of 3D
single-photon Lidar. The resulting algorithm combines the advantages of
statistical and learning-based frameworks, providing accurate estimates with
improved network interpretability. Compared to existing learning-based
solutions, the proposed architecture requires a reduced number of trainable
parameters, is more robust to noise and mismodelling of the system impulse
response function, and provides richer information about the estimates
including uncertainty measures. Experiments on synthetic and real data show
competitive performance in terms of inference quality and computational
complexity compared to state-of-the-art algorithms. This short paper is
based on contributions published in [1] and [2]. Comment: Presented in ISCS2
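The unrolling idea can be illustrated with a toy iteration. The sketch below is a hypothetical NumPy illustration, not the authors' architecture: the function name, the box-filter prior, and the `prior_weight` parameter are all assumptions. One "layer" upsamples a low-resolution depth map to the super-resolved grid and shrinks it toward a local prior mean, with the shrinkage weight standing in for a trainable parameter of the unrolled network.

```python
import numpy as np

def unrolled_sr_step(depth_lr, prior_weight, scale=2):
    """One hypothetical unrolled iteration: upsample a low-resolution
    depth map (nearest-neighbour) and apply a Bayesian-style shrinkage
    toward a local prior mean (3x3 box average)."""
    # Nearest-neighbour upsampling to the super-resolved grid.
    depth_hr = np.kron(depth_lr, np.ones((scale, scale)))
    # Local prior mean via a 3x3 box filter with reflect padding.
    padded = np.pad(depth_hr, 1, mode="reflect")
    prior = sum(
        padded[i:i + depth_hr.shape[0], j:j + depth_hr.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    # Convex combination of data and prior; prior_weight plays the role
    # of a trainable parameter in the unrolled network.
    return (1 - prior_weight) * depth_hr + prior_weight * prior
```

In a trained unrolled network, several such steps would be stacked, each with its own learned weight, which is what keeps the parameter count small relative to generic deep architectures.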
Shape from Projections via Differentiable Forward Projector for Computed Tomography
In computed tomography, the reconstruction is typically obtained on a voxel
grid. In this work, however, we propose a mesh-based reconstruction method. For
tomographic problems, 3D meshes have mostly been studied to simulate data
acquisition rather than for reconstruction, where a 3D mesh entails the inverse
process of estimating shapes from projections. In this paper, we propose a
differentiable forward model for 3D meshes that bridges the gap between the
forward model for 3D surfaces and optimization. We view the forward projection
as a rendering process, and make it differentiable by extending recent work in
differentiable rendering. We use the proposed forward model to reconstruct 3D
shapes directly from projections. Experimental results for single-object
problems show that the proposed method outperforms traditional voxel-based
methods on noisy simulated data. We also apply the proposed method to electron
tomography images of nanoparticles to demonstrate its applicability to real
data.
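The key ingredient above is a forward projector that is smooth in the shape parameters, so gradients can flow back from the projections to the geometry. The paper achieves this by extending differentiable rendering for meshes; the sketch below is a much simpler stand-in (an assumption, not the paper's projector): 2D points are projected onto a 1D detector and splatted with a Gaussian kernel, making the sinogram row differentiable with respect to the point coordinates.

```python
import numpy as np

def soft_projection(points, angle, detector, sigma=0.5):
    """Hypothetical differentiable forward projector: project 2D points
    onto a 1D detector at a given angle, splatting each point with a
    Gaussian kernel so the result is smooth in the point coordinates."""
    # Coordinate of each point along the detector axis.
    direction = np.array([np.cos(angle), np.sin(angle)])
    coords = points @ direction                        # (n_points,)
    # Soft (differentiable) binning onto detector positions.
    weights = np.exp(-((detector[None, :] - coords[:, None]) ** 2)
                     / (2 * sigma ** 2))
    return weights.sum(axis=0)                         # (n_bins,)
```

Because every operation is smooth, the same construction works inside an automatic-differentiation framework, which is what enables estimating shapes directly from projections by gradient-based optimization.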
A Bayesian Based Deep Unrolling Algorithm for Single-Photon Lidar Systems
Deploying 3D single-photon Lidar imaging in real world applications faces
multiple challenges including imaging in high noise environments. Several
algorithms have been proposed to address these issues based on statistical or
learning-based frameworks. Statistical methods provide rich information about
the inferred parameters but are limited by the assumed model correlation
structures, while deep learning methods show state-of-the-art performance but
limited inference guarantees, preventing their extended use in critical
applications. This paper unrolls a statistical Bayesian algorithm into a new
deep learning architecture for robust image reconstruction from single-photon
Lidar data, i.e., the algorithm's iterative steps are converted into neural
network layers. The resulting algorithm combines the advantages of
statistical and learning-based frameworks, providing accurate estimates with
improved network interpretability. Compared to existing learning-based
solutions, the proposed architecture requires a reduced number of trainable
parameters, is more robust to noise and mismodelling effects, and provides
richer information about the estimates, including uncertainty measures.
Experiments on synthetic and real data show competitive performance in terms of
inference quality and computational complexity compared to state-of-the-art
algorithms.
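The conversion of iterative steps into network layers can be made concrete with a minimal sketch, under stated assumptions: the function below unrolls plain gradient descent for a 1D deconvolution problem (a stand-in for the paper's Bayesian algorithm, which it is not), where each loop iteration corresponds to one layer and the per-iteration step size would become a trainable parameter.

```python
import numpy as np

def unrolled_deconvolution(y, h, step_sizes):
    """Hypothetical unrolling of gradient descent for 1D deconvolution
    y = h * x: each iteration becomes a 'layer' whose step size would be
    a trainable parameter in the learned network."""
    x = np.zeros_like(y)
    h_flip = h[::-1]                      # adjoint of the convolution
    for step in step_sizes:               # one loop iteration = one layer
        residual = np.convolve(x, h, mode="same") - y
        x = x - step * np.convolve(residual, h_flip, mode="same")
    return x
```

Training then amounts to learning the list of step sizes (and, in richer unrollings, the priors) end to end, which is why such architectures need far fewer parameters than generic networks while retaining the interpretability of the underlying iteration.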