Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning
Limited-angle tomography of strongly scattering quasi-transparent objects is
a challenging, highly ill-posed problem with practical implications in medical
and biological imaging, manufacturing, automation, and environmental and food
security. Regularizing priors are necessary to reduce artifacts by improving
the condition of such problems. Recently, it was shown that one effective way
to learn the priors for strongly scattering yet highly structured 3D objects,
e.g. layered and Manhattan, is by a static neural network [Goy et al., Proc.
Natl. Acad. Sci. 116, 19848-19856 (2019)]. Here, we present a radically
different approach where the collection of raw images from multiple angles is
viewed analogously to a dynamical system driven by the object-dependent forward
scattering operator. The sequence index in angle of illumination plays the role
of discrete time in the dynamical system analogy. Thus, the imaging problem
turns into a problem of nonlinear system identification, which also suggests
dynamical learning as a better fit for regularizing the reconstructions. We devised
a recurrent neural network (RNN) architecture with a novel split-convolutional
gated recurrent unit (SC-GRU) as the fundamental building block. Through
comprehensive comparison of several quantitative metrics, we show that the
dynamic method improves upon previous static approaches with fewer artifacts
and better overall reconstruction fidelity.
Comment: 12 pages, 7 figures, 2 tables
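The core idea above — treating the angle-of-illumination index as discrete time and letting a gated recurrence accumulate evidence across views — can be illustrated with a minimal dense GRU in numpy. This is only a sketch: the weight names (Wz, Wr, Wh), dimensions, and random inputs are assumptions for illustration, and the paper's SC-GRU replaces these dense matrix products with split convolutions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal dense GRU cell. Illustrative stand-in only: the paper's
    SC-GRU uses split-convolutional gates instead of dense products."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.1
        # One weight matrix per gate, acting on [input; hidden state].
        self.Wz = rng.normal(0, scale, (hid_dim, in_dim + hid_dim))
        self.Wr = rng.normal(0, scale, (hid_dim, in_dim + hid_dim))
        self.Wh = rng.normal(0, scale, (hid_dim, in_dim + hid_dim))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)        # update gate
        r = sigmoid(self.Wr @ xh)        # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

# Each illumination angle plays the role of one "time step":
# feed the (flattened) raw image for angle t into the recurrence.
cell = GRUCell(in_dim=64, hid_dim=32)
h = np.zeros(32)
for t in range(8):  # e.g. 8 illumination angles
    x_t = np.random.default_rng(t).normal(size=64)  # stand-in for a raw image
    h = cell.step(x_t, h)
# h now summarizes all views; a decoder would map it to the 3D reconstruction.
```

After the loop, `h` is the hidden state that aggregates information from every view; in the full architecture a decoder network would map this state to the reconstructed object.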
SyntCities: A Large Synthetic Remote Sensing Dataset for Disparity Estimation
Studies in recent years have demonstrated the outstanding performance of deep learning for computer vision tasks in the remote sensing field, such as disparity estimation. However, available datasets mostly focus on close-range applications like autonomous driving or robot manipulation. To reduce the domain gap during training, we present SyntCities, a synthetic dataset resembling aerial imagery of urban areas. The pipeline used to render the images is based on 3-D modeling, which avoids acquisition costs, provides subpixel-accurate dense ground truth and simulates different illumination conditions. The dataset additionally provides multiclass semantic maps and can be converted to point cloud format to benefit a wider research community. We focus on the task of disparity estimation and evaluate the performance of traditional semiglobal matching and state-of-the-art architectures, trained with SyntCities and other datasets, on real aerial and satellite images. A comparison with the widely used SceneFlow dataset is also presented. Strategies using a mixture of both real and synthetic samples are studied as well. Results show significant improvements in terms of accuracy for the disparity maps.
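For readers unfamiliar with the disparity-estimation task evaluated above, the sketch below shows a naive SAD (sum of absolute differences) block matcher in numpy — a toy stand-in for the semiglobal-matching baseline, not the paper's method. The function name, window size, and synthetic image pair are assumptions for illustration; real SGM additionally aggregates costs along scanline paths.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=8, win=3):
    """Naive per-pixel SAD block matching along horizontal scanlines.
    Toy illustration only; semiglobal matching adds path-wise
    cost aggregation on top of a matching cost like this one."""
    H, W = left.shape
    half = win // 2
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            # Cost of shifting the right-image window by each candidate d.
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic rectified pair: the right image is the left shifted by 4 pixels,
# so the true disparity is 4 everywhere (away from the wrap-around border).
rng = np.random.default_rng(0)
left = rng.random((32, 64))
right = np.roll(left, -4, axis=1)
d = block_match_disparity(left, right)
```

On this synthetic pair the matcher recovers the constant disparity of 4 in the interior; the zero border comes from the window and search-range margins.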