Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery
In this paper we discuss the potential and challenges regarding SAR-optical
stereogrammetry for urban areas, using very-high-resolution (VHR) remote
sensing imagery. Since we do this mainly from a geometrical point of view, we
first analyze the height reconstruction accuracy to be expected for different
stereogrammetric configurations. Then, we propose a strategy for simultaneous
tie point matching and 3D reconstruction, which exploits an epipolar-like
search window constraint. To drive the matching and ensure some robustness, we
combine different established handcrafted similarity measures. For the
experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and
MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR
imagery is generally feasible with 3D positioning accuracies in the
meter-domain, although the matching of these strongly heterogeneous
multi-sensor data remains very challenging. Keywords: Synthetic Aperture Radar
(SAR), optical images, remote sensing, data fusion, stereogrammetry
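As a rough illustration of the matching strategy described above, the sketch below combines two established handcrafted similarity measures inside a one-dimensional epipolar-like search window. Normalized cross-correlation and histogram-based mutual information are chosen here as plausible stand-ins (the abstract does not enumerate the exact measures used); function names, weights, and window sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two patches."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def match_tie_point(opt_patch, sar_img, row, cols, half=10, w=(0.5, 0.5)):
    """Slide along a 1-D epipolar-like search window at a fixed row of the
    SAR image and return the column maximizing a weighted combination of
    similarity measures. Assumes interior coordinates (no border handling).
    Note: NCC and MI live on different scales; in practice each measure
    would be normalized before combination."""
    best_score, best_col = -np.inf, None
    for c in range(cols[0], cols[1]):
        cand = sar_img[row - half:row + half + 1, c - half:c + half + 1]
        if cand.shape != opt_patch.shape:
            continue
        score = w[0] * ncc(opt_patch, cand) + w[1] * mutual_information(opt_patch, cand)
        if score > best_score:
            best_score, best_col = score, c
    return best_col, best_score
```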
Transformer-based Multimodal Change Detection with Multitask Consistency Constraints
Change detection plays a fundamental role in Earth observation for analyzing
changes over time. However, recent studies have largely neglected multimodal
data, which offers significant practical and technical advantages over
single-modal approaches. This research focuses
on leveraging digital surface model (DSM) data and aerial images captured at
different times for detecting change beyond 2D. We observe that the current
change detection methods struggle with the multitask conflicts between semantic
and height change detection tasks. To address this challenge, we propose an
efficient Transformer-based network that learns shared representation between
cross-dimensional inputs through cross-attention. It adopts a consistency
constraint to establish the multimodal relationship, which involves obtaining
pseudo change through height change thresholding and minimizing the difference
between semantic and pseudo change within their overlapping regions. A
DSM-to-image multimodal dataset encompassing three cities in the Netherlands
was constructed. It lays a new foundation for beyond-2D change detection from
cross-dimensional inputs. Compared to five state-of-the-art change detection
methods, our model demonstrates consistent multitask superiority in terms of
semantic and height change detection. Furthermore, the consistency strategy can
be seamlessly adapted to the other methods, yielding promising improvements.
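The consistency constraint lends itself to a compact loss term. Below is a minimal PyTorch sketch, assuming single-channel change logits and a metric height-difference map; the threshold value, the use of sigmoid probabilities, and the precise definition of the "overlapping regions" are assumptions, not the paper's verbatim formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(sem_change_logits, height_change, tau=2.0):
    """Multitask consistency term: threshold the predicted height change to
    obtain a pseudo change mask, then penalize the difference between the
    semantic change probability and that mask inside their overlap.

    sem_change_logits: (B, 1, H, W) logits from the semantic change head.
    height_change:     (B, 1, H, W) predicted height difference (meters).
    tau:               height threshold in meters (an assumed value).
    """
    pseudo = (height_change.abs() > tau).float()   # pseudo change from height
    sem_prob = torch.sigmoid(sem_change_logits)    # semantic change probability
    # "Overlapping regions" is interpreted here as pixels both heads flag as
    # changed; the paper's exact definition may differ.
    overlap = (sem_prob > 0.5).float() * pseudo
    diff = F.l1_loss(sem_prob, pseudo, reduction="none")
    return (diff * overlap).sum() / overlap.sum().clamp(min=1.0)
```

Such a term would typically be added to the supervised semantic and height losses with a weighting coefficient.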
Guided patch-wise nonlocal SAR despeckling
We propose a new method for SAR image despeckling which leverages information
drawn from co-registered optical imagery. Filtering is performed by plain
patch-wise nonlocal means, operating exclusively on SAR data. However, the
filtering weights are computed by taking into account also the optical guide,
which is much cleaner than the SAR data, and hence more discriminative. To
avoid injecting optical-domain information into the filtered image, a
preliminary SAR-domain statistical test rejects any risky predictor.
Experiments on two SAR-optical datasets show that the proposed method
suppresses speckle very effectively, preserving structural details without
introducing visible filtering artifacts. Overall, the proposed method
compares favourably with all state-of-the-art despeckling filters, as well
as with our own previous optical-guided filter.
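A minimal sketch of the core idea, guided patch-wise nonlocal means, follows: weights are computed from the optical guide only, the averaging operates on SAR intensities, and a crude intensity-ratio test stands in for the paper's SAR-domain statistical test, whose actual statistic is not specified here. Patch and search radii, the smoothing parameter h, and the rejection threshold are illustrative assumptions.

```python
import numpy as np

def guided_nlm_pixel(sar, opt, r, c, patch=3, search=7, h=0.1, t=3.0):
    """Filter one SAR pixel by nonlocal means whose weights are computed
    from the co-registered optical guide only; the averaging itself uses
    SAR intensities. A crude pixel-ratio test (a stand-in for the paper's
    SAR-domain statistical test) rejects implausible predictors. Assumes
    (r, c) lies far enough from the image border."""
    p = patch
    ref_opt = opt[r - p:r + p + 1, c - p:c + p + 1]
    num = den = 0.0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand_opt = opt[rr - p:rr + p + 1, cc - p:cc + p + 1]
            if cand_opt.shape != ref_opt.shape:
                continue
            # Weight from the cleaner, more discriminative optical guide.
            d2 = ((ref_opt - cand_opt) ** 2).mean()
            w = np.exp(-d2 / (h * h))
            # SAR-domain rejection: drop predictor pixels whose intensity
            # ratio to the target is implausible under multiplicative speckle.
            ratio = sar[rr, cc] / max(sar[r, c], 1e-12)
            if ratio > t or ratio < 1.0 / t:
                continue
            num += w * sar[rr, cc]
            den += w
    return num / den if den > 0 else sar[r, c]
```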
Multitemporal Very High Resolution from Space: Outcome of the 2016 IEEE GRSS Data Fusion Contest
In this paper, the scientific outcomes of the 2016 Data Fusion Contest organized by the Image Analysis and Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society are discussed. The 2016 Contest was an open-topic competition based on a multitemporal and multimodal dataset, which included a temporal pair of very high resolution panchromatic and multispectral Deimos-2 images and a video captured by the Iris camera on board the International Space Station. The problems addressed and the techniques proposed by the participants in the Contest spanned a rather broad range of topics, mixing ideas and methodologies from remote sensing, video processing, and computer vision. In particular, the winning team developed a deep learning method to jointly address spatial scene labeling and temporal activity modeling using the available image and video data. The second-place team proposed a random field model to simultaneously perform coregistration of multitemporal data, semantic segmentation, and change detection. The methodological key ideas of both approaches and the main results of the corresponding experimental validation are discussed in this paper.
Three-dimensional imaging with multiple degrees of freedom using data fusion
This paper presents an overview of research work, together with some novel
strategies and results, on using data fusion in 3-D imaging with multiple
information sources. We examine a variety of approaches and applications,
such as 3-D imaging integrated with polarimetric and multispectral imaging,
photon-counting 3-D imaging at low photon-flux levels, and image fusion in
both multiwavelength 3-D digital holography and 3-D integral imaging. The
results demonstrate the benefits data fusion provides for different purposes,
including visualization enhancement under various conditions and improved
3-D reconstruction quality.