17 research outputs found

    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance, in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors, both active and passive, is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces its own noise types and artifacts into the observed image, which has led restoration techniques to evolve along different paths according to the sensor type. This review paper brings together advances in image restoration techniques, with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. It therefore provides a comprehensive, discipline-specific starting point, with sufficient detail and references, for readers at different levels (students, researchers, and senior researchers) who wish to investigate data restoration. Additionally, the review is accompanied by a toolbox that offers a platform for interested students and researchers to further explore restoration techniques and help move the community forward. The toolboxes are provided at https://github.com/ImageRestorationToolbox. Comment: This paper is under review in GRS.
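
    As a rough orientation (notation mine, not taken from the paper), restoration of the kind described here is typically posed as a regularized linear inverse problem:

        y = H x + n, \qquad \hat{x} = \arg\min_{x} \; \| y - H x \|_2^2 + \lambda\, R(x)

    where y is the observed image, x the unknown true image, H a sensor-dependent degradation operator (e.g., blur or striping), n noise, R(x) a regularizer such as total variation, and \lambda a trade-off parameter.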

    A Comparison of Image Denoising Methods

    The advancement of imaging devices and the countless images generated every day place increasingly high demands on image denoising, which remains a challenging task in terms of both effectiveness and efficiency. To improve denoising quality, numerous techniques have been proposed over the past decades, including various transforms, regularization terms, algebraic representations, and especially advanced deep neural network (DNN) architectures. Despite their sophistication, many methods fail to achieve desirable results for simultaneous noise removal and fine-detail preservation. In this paper, to investigate the applicability of existing denoising techniques, we compare a variety of denoising methods on both synthetic and real-world datasets across different applications. We also introduce a new benchmarking dataset, and the evaluations are performed from four perspectives: quantitative metrics, visual effects, human ratings, and computational cost. Our experiments demonstrate (i) the effectiveness and efficiency of representative traditional denoisers for various denoising tasks, (ii) that a simple matrix-based algorithm can produce results comparable to its tensor counterparts, and (iii) the notable achievements of DNN models, which exhibit impressive generalization ability and state-of-the-art performance on various datasets. Despite the progress of recent years, we also discuss shortcomings and possible extensions of existing techniques. Datasets, code, and results are publicly available and will be continuously updated at https://github.com/ZhaomingKong/Denoising-Comparison. Comment: In this paper, we intend to collect and compare various denoising methods to investigate their effectiveness, efficiency, applicability, and generalization ability with both synthetic and real-world experiments.
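
    The quantitative side of such a comparison can be illustrated with a minimal sketch; the denoisers, parameters, and metrics below are generic choices using SciPy and scikit-image, not the methods or protocol of the paper:

        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter
        from skimage import data, img_as_float
        from skimage.restoration import denoise_nl_means
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        # Clean reference image and a synthetic noisy observation (additive Gaussian noise).
        clean = img_as_float(data.camera())
        rng = np.random.default_rng(0)
        noisy = np.clip(clean + rng.normal(scale=0.1, size=clean.shape), 0.0, 1.0)

        # A few representative traditional denoisers with illustrative parameters.
        candidates = {
            "gaussian": gaussian_filter(noisy, sigma=1.0),
            "median": median_filter(noisy, size=3),
            "nl_means": denoise_nl_means(noisy, h=0.08, patch_size=5, patch_distance=6),
        }

        # Evaluate each result against the clean reference with PSNR and SSIM.
        for name, result in candidates.items():
            psnr = peak_signal_noise_ratio(clean, result, data_range=1.0)
            ssim = structural_similarity(clean, result, data_range=1.0)
            print(f"{name:10s}  PSNR: {psnr:5.2f} dB   SSIM: {ssim:.3f}")

    A full benchmark along the paper's lines would additionally time each call (computational cost), include DNN baselines, and collect human ratings alongside these metrics.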

    Column-Spatial Correction Network for Remote Sensing Image Destriping

    Stripe noise in multispectral remote sensing images, possibly resulting from instrument instability, slit contamination, and light interference, significantly degrades imaging quality and impairs high-level visual tasks. The local consistency of homogeneous regions in striped images is damaged because adjacent sensors apply different gains and offsets to the same ground object, which gives stripe noise its structural characteristics; these can be characterized by increased differences between columns of the remote sensing image. Destriping can therefore be viewed as a process of improving the local consistency of homogeneous regions and the global uniformity of the whole image. In recent years, convolutional neural network (CNN)-based models have been introduced to destriping tasks and have achieved advanced results owing to their powerful representation ability. To effectively leverage both CNNs and the structural characteristics of stripe noise, we propose a multi-scale column-spatial correction network (CSCNet) for remote sensing image destriping, in which the local structural characteristic of stripe noise and the global contextual information of the image are both explored at multiple feature scales. More specifically, a column-based correction module (CCM) and a spatial-based correction module (SCM) were designed to improve local consistency and global uniformity from the perspectives of column correction and full-image correction, respectively. Moreover, a feature fusion module based on the channel attention mechanism was created to obtain discriminative features derived from the different modules and scales. We compared the proposed model against both traditional and deep learning methods on simulated and real remote sensing images. The promising results indicate that CSCNet effectively removes image stripes and outperforms state-of-the-art methods in terms of qualitative and quantitative assessments.
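
    The fusion step described above, channel attention over features coming from different correction modules and scales, could look roughly like the following squeeze-and-excitation-style sketch in PyTorch; the module name, shapes, and reduction ratio are assumptions for illustration, not the published CSCNet design:

        import torch
        import torch.nn as nn

        class ChannelAttentionFusion(nn.Module):
            """Fuse two feature maps (e.g., column-corrected and spatially corrected
            features) and re-weight channels with a squeeze-and-excitation-style gate.
            A generic sketch, not the actual CSCNet fusion module."""

            def __init__(self, channels: int, reduction: int = 4):
                super().__init__()
                self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)
                self.gate = nn.Sequential(
                    nn.AdaptiveAvgPool2d(1),  # squeeze: global average per channel
                    nn.Conv2d(channels, channels // reduction, kernel_size=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels // reduction, channels, kernel_size=1),
                    nn.Sigmoid(),  # excitation: per-channel weights in (0, 1)
                )

            def forward(self, column_feats: torch.Tensor, spatial_feats: torch.Tensor) -> torch.Tensor:
                fused = self.merge(torch.cat([column_feats, spatial_feats], dim=1))
                return fused * self.gate(fused)  # emphasize discriminative channels

        # Example: fuse two 32-channel feature maps from a 64x64 patch.
        fusion = ChannelAttentionFusion(channels=32)
        a, b = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
        out = fusion(a, b)  # -> torch.Size([1, 32, 64, 64])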

    Geodesic Active Fields: A Geometric Framework for Image Registration

    Image registration is the concept of mapping homologous points in a pair of images. In other words, one is looking for an underlying deformation field that matches one image to a target image. The spectrum of applications of image registration is extremely large: it ranges from bio-medical imaging and computer vision to remote sensing, geographic information systems, and even consumer electronics. Mathematically, image registration is an ill-posed inverse problem, which means that the exact solution might not exist or may not be unique. To render the problem tractable, it is usual to write it as an energy minimization and to introduce additional regularity constraints on the unknown data. In the case of image registration, one often minimizes an image mismatch energy and adds a penalty on the regularity of the deformation field as a smoothness prior. Here, we focus on the registration of the human cerebral cortex. Precise cortical registration is required, for example, in statistical group studies in functional MR imaging, or in the analysis of brain connectivity. In particular, we work with spherical inflations of the extracted hemispherical surface and associated features, such as cortical mean curvature. Spatial mapping between cortical surfaces can then be achieved by registering the respective spherical feature maps. Despite the simplified spherical geometry, inter-subject registration remains a challenging task, mainly due to the complexity and inter-subject variability of the involved brain structures. In this thesis, we therefore present a registration scheme that takes the peculiarities of the spherical feature maps into particular consideration. First, we realize that we need an appropriate hierarchical representation, so as to coarsely align based on the important structures with greater inter-subject stability before taking smaller and more variable details into account. Based on arguments from brain morphogenesis, we propose an anisotropic scale-space of mean-curvature maps, built around the Beltrami framework. Second, inspired by concepts from vision-related elements of psycho-physical Gestalt theory, we hypothesize that anisotropic Beltrami regularization better suits the requirements of image registration than traditional Gaussian filtering. Different objects in an image should be allowed to move separately, and regularization should be limited to within the individual Gestalts. We render the regularization feature-preserving by limiting diffusion across edges in the deformation field, in clear contrast to indifferent linear smoothing. We do so by embedding the deformation field as a manifold in a higher-dimensional space and minimizing the associated Beltrami energy, which represents the hyper-area of this embedded manifold, as a measure of deformation field regularity. Further, instead of simply adding this regularity penalty to the image mismatch in lieu of the standard penalty, we propose to incorporate the local image mismatch as a weighting function into the Beltrami energy. The image registration problem is thus reformulated as a weighted minimal surface problem. This approach has several appealing aspects, including (1) invariance to re-parametrization and the ability to work with images defined on non-flat, Riemannian domains (e.g., curved surfaces, scale-spaces), and (2) intrinsic modulation of the local regularization strength as a function of the local image mismatch and/or noise level.
On a side note, we show that the proposed scheme can easily keep up with recent trends in image registration towards diffeomorphic and inverse-consistent deformation models. The proposed registration scheme, called Geodesic Active Fields (GAF), is non-linear and non-convex. We therefore propose an efficient optimization scheme based on splitting: data mismatch and deformation field regularity are optimized over two different deformation fields, which are constrained to be equal. The constraint is addressed using an augmented Lagrangian scheme, and the resulting optimization problem is solved efficiently by alternating minimization of simpler sub-problems. In particular, we show that the proposed method can easily compete with state-of-the-art registration methods, such as Demons. Finally, we provide an implementation of the fast GAF method on the sphere, so as to register the triangulated cortical feature maps. We build an automatic parcellation algorithm for the human cerebral cortex, which combines the delineations available on a set of atlas brains in a Bayesian approach, so as to automatically delineate the corresponding regions on a subject brain given its feature map. In a leave-one-out cross-validation study on 39 brain surfaces with 35 manually delineated gyral regions, we show that pairwise subject-atlas registration with the proposed spherical registration scheme significantly improves the individual alignment of cortical labels between subject and atlas brains and, consequently, that the estimated automatic parcellations after label fusion are of better quality.
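
    Schematically, the weighted minimal-surface formulation and the splitting described in this abstract can be sketched as follows; the notation is an illustration based on the abstract, not the thesis's exact formulation:

        E[u] \;=\; \int_{\Omega} f\big(I_1,\; I_2 \circ (\mathrm{id} + u)\big)\, \sqrt{\det G(u)}\; d\Omega

    where f is the local image mismatch acting as a weighting function and G(u) is the metric induced by embedding the deformation field u as a manifold in a higher-dimensional space, so that \int \sqrt{\det G}\, d\Omega measures the hyper-area of that manifold. The splitting then optimizes a data term D and a regularity term R over two fields constrained to agree, via an augmented Lagrangian:

        \min_{u,\,v} \; D(v) + R(u) \;\; \text{s.t.} \;\; u = v,
        \qquad
        \mathcal{L}_\mu(u, v, \lambda) \;=\; D(v) + R(u) + \langle \lambda,\, u - v \rangle + \tfrac{\mu}{2}\, \| u - v \|_2^2,

    minimized by alternating over u and v and updating the multiplier \lambda.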