
    Potential of nonlocally filtered pursuit monostatic TanDEM-X data for coastline detection

    This article investigates the potential of nonlocally filtered pursuit monostatic TanDEM-X data for coastline detection, in comparison to conventional TanDEM-X data, i.e. image pairs acquired in repeat-pass or bistatic mode. For this task, an unsupervised coastline detection procedure based on scale-space representations and K-medians clustering, followed by morphological image post-processing, is proposed. Since this procedure exploits the clear discriminability between the "dark" and "bright" appearances of water and land surfaces, respectively, in both SAR amplitude and coherence imagery, TanDEM-X InSAR data acquired in pursuit monostatic mode is expected to be particularly beneficial. In addition, we investigate the benefit of using a nonlocal InSAR filter, instead of a conventional boxcar filter, for amplitude denoising and coherence estimation. Experiments carried out on real TanDEM-X pursuit monostatic data confirm our expectations and illustrate the advantage of this data configuration over conventional TanDEM-X products for automatic coastline detection.
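
    The clustering and post-processing chain can be pictured with a short sketch. The following Python snippet is an illustration only, not the authors' implementation: the scale-space stage is omitted, and the two-channel amplitude/coherence input, window sizes, and structuring elements are assumptions. The cluster with the brighter median center is taken to be land, and morphological opening and closing suppress speckle-sized islands and holes before the coastline is extracted as the land-mask outline.

    import numpy as np
    from scipy import ndimage


    def k_medians(features, k=2, n_iter=50, seed=0):
        """Plain K-medians on an (n_samples, n_features) array."""
        rng = np.random.default_rng(seed)
        centers = features[rng.choice(len(features), size=k, replace=False)].astype(float)
        for _ in range(n_iter):
            # L1 distance of every sample to every center, assign to the nearest
            d = np.abs(features[:, None, :] - centers[None, :, :]).sum(axis=2)
            labels = d.argmin(axis=1)
            new_centers = centers.copy()
            for j in range(k):
                members = features[labels == j]
                if len(members):
                    new_centers[j] = np.median(members, axis=0)
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return labels, centers


    def detect_coastline(amplitude, coherence):
        """Binary land mask and its outline from co-registered amplitude/coherence."""
        feats = np.stack([amplitude.ravel(), coherence.ravel()], axis=1).astype(float)
        labels, centers = k_medians(feats, k=2)
        # Land is the cluster whose center is brighter in amplitude and coherence
        land_cluster = centers.sum(axis=1).argmax()
        land = (labels == land_cluster).reshape(amplitude.shape)
        # Morphological post-processing removes speckle-sized islands and holes
        land = ndimage.binary_opening(land, structure=np.ones((5, 5)))
        land = ndimage.binary_closing(land, structure=np.ones((5, 5)))
        coastline = land ^ ndimage.binary_erosion(land)
        return land, coastline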

    Deep Image Translation With an Affinity-Based Change Prior for Unsupervised Multimodal Change Detection

    Image translation with convolutional neural networks has recently been used as an approach to multimodal change detection. Existing approaches train the networks by exploiting supervised information about the change areas, which, however, is not always available. A main challenge in the unsupervised setting is to prevent change pixels from affecting the learning of the translation function. We propose two new network architectures trained with loss functions weighted by priors that reduce the impact of change pixels on the learning objective. The change prior is derived in an unsupervised fashion from relational pixel information captured by domain-specific affinity matrices. Specifically, we use the vertex degrees associated with an absolute affinity difference matrix and demonstrate their utility in combination with cycle consistency and adversarial training. The proposed neural networks are compared with state-of-the-art algorithms. Experiments conducted on three real data sets show the effectiveness of our methodology.
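
    The change prior can be sketched as follows; this is a simplified illustration under assumed names and parameters (patch size, Gaussian affinity kernel), not the published implementation. Per-patch affinity matrices are computed in each modality, the vertex degrees of their absolute difference serve as a change prior, and the prior down-weights likely-change pixels in a translation loss.

    import numpy as np


    def affinity(patch, sigma=0.1):
        """Gaussian affinity matrix between all pixels of a (small) patch."""
        v = patch.astype(float).reshape(-1, 1)
        return np.exp(-((v - v.T) ** 2) / (2 * sigma ** 2))


    def change_prior(img_x, img_y, patch=20):
        """Per-pixel prior in [0, 1]; values near 1 suggest change."""
        h, w = img_x.shape
        prior = np.zeros((h, w))
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                px = img_x[i:i + patch, j:j + patch]
                py = img_y[i:i + patch, j:j + patch]
                diff = np.abs(affinity(px) - affinity(py))
                degrees = diff.sum(axis=1)            # vertex degrees
                degrees /= degrees.max() + 1e-12      # normalise within the patch
                prior[i:i + patch, j:j + patch] = degrees.reshape(px.shape)
        return prior


    def weighted_translation_loss(translated, target, prior):
        """Pixel-wise L2 loss in which likely-change pixels contribute less."""
        return np.mean((1.0 - prior) * (translated - target) ** 2)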

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Ultrasound acquisition is widespread in the biomedical field, due to its properties of low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allows the physician to monitor the evolution of the patient's disease and supports diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that quickly change over time. Denoising and super-resolution of US signals are relevant to improve the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work trades off computational cost against effectiveness, the proposed framework matches the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. Then, we introduce a novel framework for the real-time reconstruction of the non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target image (i.e., the high-resolution image). We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, with a slightly higher computational cost. For both denoising and super-resolution, we evaluate the compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
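
    The low-rank building block mentioned above can be illustrated with a minimal sketch. In the described framework the thresholds of the Singular Value Decomposition are learned and predicted; in this simplified stand-alone version (an illustration under assumptions, not the thesis code) the threshold is simply passed in and singular values below it are discarded.

    import numpy as np


    def svd_denoise(image, threshold):
        """Reconstruct the image keeping only singular values above `threshold`."""
        u, s, vt = np.linalg.svd(image.astype(float), full_matrices=False)
        s_kept = np.where(s >= threshold, s, 0.0)    # hard thresholding
        return (u * s_kept) @ vt


    # Toy usage: speckle-like multiplicative noise on a smooth synthetic image.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        clean = np.outer(np.linspace(0.0, 1.0, 128), np.linspace(1.0, 0.0, 128))
        noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
        denoised = svd_denoise(noisy, threshold=0.05 * np.linalg.norm(noisy))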

    Mixture of Latent Variable Models for Remotely Sensed Image Processing

    The processing of remotely sensed data is innately an inverse problem where properties of spatial processes are inferred from the observations based on a generative model. Meaningful data inversion relies on well-defined generative models that capture key factors in the relationship between the underlying physical process and the measurements. Unfortunately, as two mainstream data processing techniques, both mixture models and latent variable models (LVMs) are inadequate in describing the complex relationship between the spatial process and the remote sensing data. Mixture models, such as K-Means, Gaussian Mixture Model (GMM), Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA), characterize a class by statistics in the original space, ignoring the fact that a class can be better represented by discriminative signals in the hidden/latent feature space, while LVMs, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Sparse Representation (SR), seek representational signals in the whole image scene that involves multiple spatial processes, neglecting the fact that signal discovery for individual processes is more efficient. Although the combined use of mixture models and LVMs is required for remote sensing data analysis, there is still a lack of systematic exploration of this important topic in the remote sensing literature. Driven by the above considerations, this thesis therefore introduces a mixture of LVMs (MLVM) framework for combining mixture models and LVMs, under which three models are developed to address different aspects of remote sensing data processing: (1) a mixture of probabilistic SR (MPSR) is proposed for supervised classification of hyperspectral remote sensing imagery, considering that SR is an emerging and powerful technique for feature extraction and data representation; (2) a mixture model of K “Purified” means (K-P-Means) is proposed for spectral endmember estimation, a fundamental issue in remote sensing data analysis; and (3) a clustering-based PCA model is introduced for SAR image denoising. Under a unified optimization scheme, all models are solved via the Expectation-Maximization (EM) algorithm, by iteratively estimating the two groups of parameters, i.e., the labels of pixels and the latent variables. Experiments on simulated data and real remote sensing data demonstrate the advantages of the proposed models in the respective applications.
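
    As a rough illustration of the EM-style alternation between pixel/patch labels and latent variables, the sketch below alternates between assigning image patches to the per-cluster PCA basis that reconstructs them best and refitting those bases; each patch is then denoised by projection onto its cluster's principal subspace. This is an assumed simplification in the spirit of the clustering-based PCA model, not the author's code; patch size, cluster count, and component count are illustrative.

    import numpy as np


    def extract_patches(img, size=8):
        """Non-overlapping, flattened patches of a 2-D image."""
        h, w = img.shape
        return np.array([img[i:i + size, j:j + size].ravel()
                         for i in range(0, h - size + 1, size)
                         for j in range(0, w - size + 1, size)], dtype=float)


    def fit_pca(data, n_components):
        """Mean and leading principal axes of a (n_samples, dim) array."""
        mean = data.mean(axis=0)
        _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
        return mean, vt[:n_components]


    def cluster_pca_denoise(patches, k=3, n_components=4, n_iter=10, seed=0):
        rng = np.random.default_rng(seed)
        labels = rng.integers(0, k, size=len(patches))
        for _ in range(n_iter):
            # "M-step": a PCA basis (latent-variable model) per cluster
            models = []
            for j in range(k):
                members = patches[labels == j]
                if len(members) <= n_components:      # guard empty/tiny clusters
                    members = patches[rng.choice(len(patches), n_components + 1)]
                models.append(fit_pca(members, n_components))
            # "E-step": reassign each patch to the basis that reconstructs it best
            errors = []
            for mean, axes in models:
                recon = (patches - mean) @ axes.T @ axes + mean
                errors.append(((patches - recon) ** 2).sum(axis=1))
            labels = np.argmin(np.stack(errors), axis=0)
        # Denoise every patch by projecting it onto its cluster's subspace
        denoised = np.empty(patches.shape, dtype=float)
        for j, (mean, axes) in enumerate(models):
            sel = labels == j
            denoised[sel] = (patches[sel] - mean) @ axes.T @ axes + mean
        return denoised, labels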

    SAR Image Edge Detection: Review and Benchmark Experiments

    Edges are distinct geometric features crucial to higher-level object detection and recognition in remote-sensing processing, which is key for surveillance and for gathering up-to-date geospatial intelligence. Synthetic aperture radar (SAR) is a powerful form of remote sensing. However, edge detectors designed for optical images tend to perform poorly on SAR images due to the presence of strong speckle noise, which causes false positives (type I errors). Therefore, many researchers have proposed edge detectors tailored specifically to the characteristics of SAR images. Although these edge detectors may achieve effective results in their own evaluations, the comparisons tend to include a very limited number of (simulated) SAR images. As a result, the generalized performance of the proposed methods is not truly reflected, as real-world patterns are much more complex and diverse. From this emerges another problem: a quantitative benchmark is missing in the field. Hence, it is not currently possible to fairly evaluate any edge detection method for SAR images. Thus, in this paper, we aim to close the aforementioned gaps by providing an extensive experimental evaluation of edge detection on SAR images. To that end, we propose the first benchmark of SAR image edge detection methods, established by evaluating various freely available methods, including those considered to be the state of the art.
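
    As an example of the kind of speckle-aware operator such a benchmark covers, the sketch below implements a simple ratio-of-averages edge detector in the spirit of classical SAR edge detection; the window size and the exact formulation are illustrative assumptions, not taken from any evaluated method. Because the ratio of local means is invariant to the local intensity level, it is far less prone than a gradient detector to the false positives caused by multiplicative speckle.

    import numpy as np
    from scipy import ndimage


    def ratio_edge_strength(img, half=3):
        """Edge strength in [0, 1): one minus the smaller ratio of the mean
        intensities on the two sides of each pixel, horizontally and vertically."""
        img = img.astype(float) + 1e-12               # avoid division by zero
        win = 2 * half + 1
        local_mean = ndimage.uniform_filter(img, size=win)
        strengths = []
        for axis in (0, 1):
            # Means of two non-overlapping windows on either side of the pixel
            # (np.roll wraps around at the borders; good enough for a sketch)
            side_a = np.roll(local_mean, shift=half + 1, axis=axis)
            side_b = np.roll(local_mean, shift=-(half + 1), axis=axis)
            ratio = np.minimum(side_a / side_b, side_b / side_a)
            strengths.append(1.0 - ratio)
        return np.maximum(strengths[0], strengths[1])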

    Code-Aligned Autoencoders for Unsupervised Change Detection in Multimodal Remote Sensing Images

    Image translation with convolutional autoencoders has recently been used as an approach to multimodal change detection (CD) in bitemporal satellite images. A main challenge is the alignment of the code spaces by reducing the contribution of change pixels to the learning of the translation function. Many existing approaches train the networks by exploiting supervised information about the change areas, which, however, is not always available. We propose to extract relational pixel information captured by domain-specific affinity matrices at the input and use it to enforce alignment of the code spaces and reduce the impact of change pixels on the learning objective. A change prior is derived in an unsupervised fashion from pixel-pair affinities that are comparable across domains. To achieve code space alignment, we enforce pixels with similar affinity relations in the input domains to be correlated also in code space. We demonstrate the utility of this procedure in combination with cycle consistency. The proposed approach is compared with state-of-the-art machine learning and deep learning algorithms. Experiments conducted on four real and representative datasets show the effectiveness of our methodology.
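
    A minimal sketch of the code-space alignment idea, under stated assumptions (Gaussian affinity kernel, small pixel batches, NumPy in place of a deep learning framework), is given below: pixel pairs whose input-domain affinities agree across modalities are encouraged to have matching affinities in code space, while pairs flagged as likely change contribute less. It is not the authors' implementation.

    import numpy as np


    def gaussian_affinity(x, sigma=1.0):
        """Affinity matrix between rows of x (pixels as feature vectors)."""
        d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))


    def code_alignment_loss(feat_x, feat_y, code_x, code_y):
        """feat_*: (n_pixels, n_channels) inputs; code_*: (n_pixels, n_latent).
        Intended for small pixel batches, since affinities are n_pixels x n_pixels."""
        ax, ay = gaussian_affinity(feat_x), gaussian_affinity(feat_y)
        agreement = 1.0 - np.abs(ax - ay)      # high where the two domains agree
        cx, cy = gaussian_affinity(code_x), gaussian_affinity(code_y)
        # Pixel pairs that relate similarly at the input should also relate
        # similarly, and consistently across domains, in the shared code space.
        return np.mean(agreement * (cx - cy) ** 2)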