Differentiable SAR Renderer and SAR Target Reconstruction
Forward modeling of wave scattering and radar imaging mechanisms is the key
to information extraction from synthetic aperture radar (SAR) images. Like
inverse graphics in the optical domain, an inherently integrated forward-inverse
approach would be promising for advanced SAR information retrieval and target
reconstruction. This paper presents such an attempt at inverse graphics for
SAR imagery. A differentiable SAR renderer (DSR) is developed that
reformulates the mapping and projection algorithm of the SAR imaging mechanism in
the differentiable form of probability maps. First-order gradients of the
proposed DSR are then analytically derived which can be back-propagated from
rendered image/silhouette to the target geometry and scattering attributes. A
3D inverse target reconstruction algorithm from SAR images is devised. Several
simulation and reconstruction experiments are conducted, including targets with
and without background, using both synthesized data and real inverse SAR
(ISAR) data measured by ground radar. Results demonstrate the efficacy of the
proposed DSR and its inverse approach.
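The central idea, a renderer whose pixels are differentiable functions of the target geometry so that image-space losses can drive 3D reconstruction, can be illustrated with a toy example. The sketch below is not the paper's DSR (which models the SAR mapping and projection mechanism); it merely splats points into a soft optical silhouette with PyTorch and recovers point positions by gradient descent, assuming only standard autograd.

```python
import torch

def soft_silhouette(points, size=64, sigma=1.5):
    # Splat 3D points onto a 2D probability map via an orthographic projection;
    # the output image is differentiable w.r.t. the point coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(size, dtype=torch.float32),
        torch.arange(size, dtype=torch.float32),
        indexing="ij",
    )
    px = (points[:, 0] * 0.5 + 0.5) * (size - 1)   # x in [-1, 1] -> pixel column
    py = (points[:, 1] * 0.5 + 0.5) * (size - 1)   # y in [-1, 1] -> pixel row
    d2 = (xs[None] - px[:, None, None]) ** 2 + (ys[None] - py[:, None, None]) ** 2
    splats = torch.exp(-d2 / (2 * sigma ** 2))
    # Probabilistic union of the per-point splats gives a soft silhouette map.
    return 1.0 - torch.prod(1.0 - splats, dim=0)

# Toy inverse reconstruction: recover point positions from a target silhouette.
torch.manual_seed(0)
target_pts = torch.rand(32, 3) * 2 - 1
target_img = soft_silhouette(target_pts).detach()

est_pts = torch.nn.Parameter(torch.rand(32, 3) * 2 - 1)
opt = torch.optim.Adam([est_pts], lr=0.02)
for _ in range(200):
    opt.zero_grad()
    loss = ((soft_silhouette(est_pts) - target_img) ** 2).mean()
    loss.backward()   # image-space error back-propagates to the 3D geometry
    opt.step()
```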
A Framework for SAR-Optical Stereogrammetry over Urban Areas
Currently, numerous remote sensing satellites provide a huge volume of
diverse earth observation data. As these data show different features regarding
resolution, accuracy, coverage, and spectral imaging ability, fusion techniques
are required to integrate the different properties of each sensor and produce
useful information. For example, synthetic aperture radar (SAR) data can be
fused with optical imagery to produce 3D information using stereogrammetric
methods. The main focus of this study is to investigate the possibility of
applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical
image pairs. For this purpose, the applicability of semi-global matching is
investigated in this unconventional multi-sensor setting. To support the image
matching by reducing the search space and accelerating the identification of
correct, reliable matches, the possibility of establishing an epipolarity
constraint for VHR SAR-optical image pairs is investigated as well. In
addition, it is shown that the absolute geolocation accuracy of VHR optical
imagery with respect to VHR SAR imagery such as provided by TerraSAR-X can be
improved by a multi-sensor block adjustment formulation based on rational
polynomial coefficients. Finally, the feasibility of generating point clouds
with a median accuracy of about 2 m is demonstrated, confirming the potential
of 3D reconstruction from SAR-optical image pairs over urban areas.
Comment: This is the pre-acceptance version; to read the final version, please
go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect.
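Semi-global matching, whose applicability to the multi-sensor setting the study investigates, aggregates pixel-wise matching costs along 1D paths with small penalties for disparity changes. The NumPy sketch below shows one such aggregation pass along a single scanline; the cost volume, the penalties P1/P2, and the single path direction are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def aggregate_scanline(cost, p1=0.5, p2=2.0):
    # cost: (width, ndisp) matching costs along one epipolar line.
    # Implements the SGM recurrence for a single left-to-right path:
    #   L(x, d) = C(x, d) + min(L(x-1, d),
    #                           L(x-1, d-1) + P1, L(x-1, d+1) + P1,
    #                           min_k L(x-1, k) + P2) - min_k L(x-1, k)
    w, nd = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        best_prev = prev.min()
        lo = np.concatenate(([np.inf], prev[:-1])) + p1   # from disparity d-1
        hi = np.concatenate((prev[1:], [np.inf])) + p1    # from disparity d+1
        jump = np.full(nd, best_prev + p2)                # large disparity jump
        L[x] = cost[x] + np.minimum.reduce([prev, lo, hi, jump]) - best_prev
    return L

# Usage: sum such aggregated costs over several path directions, then take the
# argmin over the disparity axis to obtain the disparity map.
```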
MVF-Net: Multi-View 3D Face Morphable Model Regression
We address the problem of recovering the 3D geometry of a human face from a
set of facial images in multiple views. While recent studies have shown
impressive progress in 3D Morphable Model (3DMM) based facial reconstruction,
the settings are mostly restricted to a single view. There is an inherent
drawback in the single-view setting: the lack of reliable 3D constraints can
cause unresolvable ambiguities. In this paper, we explore 3DMM-based shape
recovery in a different setting, where a set of multi-view facial images are
given as input. A novel approach is proposed to regress 3DMM parameters from
multi-view inputs with an end-to-end trainable Convolutional Neural Network
(CNN). Multi-view geometric constraints are incorporated into the network by
establishing dense correspondences between different views leveraging a novel
self-supervised view alignment loss. The main ingredient of the view alignment
loss is a differentiable dense optical flow estimator that can backpropagate
the alignment errors between an input view and a synthetic rendering from
another input view, which is projected to the target view through the 3D shape
to be inferred. Through minimizing the view alignment loss, better 3D shapes
can be recovered such that the synthetic projections from one view to another
can better align with the observed image. Extensive experiments demonstrate the
superiority of the proposed method over other 3DMM methods.
Comment: 2019 Conference on Computer Vision and Pattern Recognition.
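The view alignment loss rests on differentiably warping one view toward another and penalizing the photometric residual. A minimal PyTorch sketch of such a warping loss is given below; the dense flow argument is a generic stand-in for the correspondences the paper obtains by projecting a rendering through the inferred 3D shape, so this illustrates only the mechanism, not MVF-Net's exact formulation.

```python
import torch
import torch.nn.functional as F

def view_alignment_loss(src_img, tgt_img, flow):
    # src_img, tgt_img: (B, C, H, W) images; flow: (B, 2, H, W) dense
    # correspondences in pixels mapping target coordinates into the source view.
    b, _, h, w = src_img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    gx = xs[None] + flow[:, 0]
    gy = ys[None] + flow[:, 1]
    # Normalise sampling coordinates to [-1, 1] as grid_sample expects.
    grid = torch.stack((2 * gx / (w - 1) - 1, 2 * gy / (h - 1) - 1), dim=-1)
    warped = F.grid_sample(src_img, grid, align_corners=True)
    # Photometric residual; being differentiable, the alignment error
    # back-propagates through the flow to whatever produced it.
    return (warped - tgt_img).abs().mean()
```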
Volumetric Calibration Refinement using masked back projection and image correlation superposition
This paper presents a new, reconstruction-based approach to refining a volumetric calibration. The technique performs a 2D cross-correlation between particle images on the sensor plane and a planar back projection of a tomographic reconstruction onto the same sensor plane to determine potential disparities between the initial camera calibration and the measurement. Additive superposition of the correlation maps from different sets of particle images reduces the influence of noise and ghost particles so that systematic errors in the calibration can be corrected. The different sections describe the theory, the principal processing steps, and the convergence of the procedure. Furthermore, the concept is validated by simulating the entire measurement chain in a synthetic comparison. The results show that disparities of over 9 pixels were corrected to an average of below 0.1 pixels during the refinement steps. Finally, the technique demonstrates its potential on measured data, where the number of outliers in the raw results is reduced after the volumetric calibration refinement.
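The disparity estimation step described above, cross-correlating recorded particle images with a back projection of the reconstruction and summing the correlation maps over many snapshots, can be sketched as follows. This is a simplified illustration using SciPy's correlate2d on whole image tiles; the masked back projection, interrogation-window layout, and sub-pixel peak fit of the actual method are not reproduced.

```python
import numpy as np
from scipy.signal import correlate2d

def ensemble_disparity(recorded, projected):
    # recorded, projected: lists of equally sized 2D arrays for one sensor
    # region (recorded particle image vs. back-projected reconstruction).
    acc = None
    for rec, proj in zip(recorded, projected):
        cmap = correlate2d(rec - rec.mean(), proj - proj.mean(), mode="same")
        # Additive superposition over snapshots suppresses random noise and
        # ghost-particle contributions relative to the true disparity peak.
        acc = cmap if acc is None else acc + cmap
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    centre = (acc.shape[0] // 2, acc.shape[1] // 2)
    # The offset of the ensemble correlation peak from the map centre is the
    # systematic disparity used to correct the volumetric calibration.
    return peak[0] - centre[0], peak[1] - centre[1]
```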
Semiautomated Skeletonization of the Pulmonary Arterial Tree in Micro-CT Images
We present a simple and robust approach that utilizes planar images at different angular rotations combined with unfiltered back-projection to locate the central axes of the pulmonary arterial tree. Two three-dimensional points are selected interactively by the user. The computer calculates a sub-volume unfiltered back-projection orthogonal to the vector connecting the two points and centered on the first point. Because more x-rays are absorbed at the thickest portion of the vessel, the darkest pixel in the unfiltered back-projection is assumed to be the center of the vessel, and the computer replaces the first point with this newly calculated point. A second back-projection is then calculated around the original point, orthogonal to a vector connecting the newly calculated first point and the user-determined second point. The darkest pixel within this reconstruction is determined, and the computer replaces the second point with its XYZ coordinates. Following a vector based on a moving average of previously determined three-dimensional points along the vessel's axis, the computer continues this skeletonization process until stopped by the user. The vessel diameter along the set of previously determined points is estimated using a method similar to the full width at half maximum algorithm. On all subsequent vessels, the process works the same way, except that at each point the distances between the current point and all previously determined points along other vessels are computed; if a distance is less than the previously estimated diameter, the vessels are assumed to branch. This user/computer interaction continues until the vascular tree has been skeletonized.
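A compact way to see the tracking loop described above is the sketch below. It assumes a hypothetical helper get_slice(point, normal) that returns the sub-volume unfiltered back-projection orthogonal to normal and centred on point, together with a to_world(i, j) function mapping slice pixels back to 3D coordinates; both stand in for the reconstruction machinery and are not part of the paper.

```python
import numpy as np

def track_centerline(get_slice, start, direction, step=1.0, n_steps=50, win=7):
    # Darkest-pixel centreline tracking: at each step, re-centre the current
    # estimate on the darkest pixel of an unfiltered back-projection taken
    # orthogonal to the current vessel direction.
    pts = [np.asarray(start, dtype=float)]
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(n_steps):
        guess = pts[-1] + step * d
        slice2d, to_world = get_slice(guess, d)
        c = np.array(slice2d.shape) // 2
        roi = slice2d[c[0] - win:c[0] + win + 1, c[1] - win:c[1] + win + 1]
        i, j = np.unravel_index(np.argmin(roi), roi.shape)   # darkest pixel
        pts.append(np.asarray(to_world(c[0] - win + i, c[1] - win + j), float))
        # Update the direction from a moving average over recent centre points.
        k = min(5, len(pts) - 1)
        d = pts[-1] - pts[-1 - k]
        d /= np.linalg.norm(d)
    return np.array(pts)
```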
Capturing Panoramic Depth Images with a Single Standard Camera
In this paper we present a panoramic depth imaging system. The system is mosaic-based, which means that we use a single rotating camera and assemble the captured images into a mosaic. Because the camera's optical center is offset from the rotational center of the system, we are able to capture the motion parallax effect, which enables stereo reconstruction. The camera rotates on a circular path with a step defined by an angle equivalent to one column of the captured image. The equation for depth estimation can be easily extracted from the system geometry. To find corresponding points on a stereo pair of panoramic images, the epipolar geometry needs to be determined. It can be shown that the epipolar geometry is very simple if the reconstruction is based on a symmetric pair of stereo panoramic images, which we obtain by taking symmetric columns to the left and right of the captured image's center column. Epipolar lines of such a symmetric pair of panoramic images are simply image rows. We focus mainly on the system analysis. The system performs well in the reconstruction of small indoor spaces.
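The abstract notes that the depth equation follows directly from the system geometry but does not state it. Under one common parameterization of symmetric-panorama stereo (camera optical centre at radius r from the rotation axis, symmetric columns at angle theta0 from the centre column, and 2*phi the rotation of the system between the two observations of the same point), planar triangulation by the law of sines gives the sketch below; this is a standard reading of the geometry, not necessarily the paper's exact notation.

```python
import math

def panoramic_depth(r, theta0, phi):
    # Distance of a scene point from the rotation centre for a symmetric
    # stereo panorama pair.
    #   r      : offset of the camera's optical centre from the rotation centre
    #   theta0 : angular offset of the symmetric columns from the centre column
    #   phi    : half the rotation angle between the two captures of the point
    #            (i.e. half the column disparity times the angular step)
    # Law of sines in the triangle (rotation centre, optical centre, point):
    #   l / sin(pi - theta0) = r / sin(theta0 - phi)
    return r * math.sin(theta0) / math.sin(theta0 - phi)

# Example: r = 0.3 m, columns 15 degrees off-centre, 10 degrees of rotation
# between the two observations -> phi = 5 degrees.
print(panoramic_depth(0.3, math.radians(15.0), math.radians(5.0)))
```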