
    Column-Spatial Correction Network for Remote Sensing Image Destriping

    Stripe noise in multispectral remote sensing images, possibly resulting from instrument instability, slit contamination, and light interference, significantly degrades imaging quality and impairs high-level visual tasks. The local consistency of homogeneous regions in striped images is damaged by the differing gains and offsets of adjacent sensors viewing the same ground object, which gives stripe noise its structural characteristics; this manifests as increased differences between columns of the image. Destriping can therefore be viewed as a process of improving both the local consistency of homogeneous regions and the global uniformity of the whole image. In recent years, convolutional neural network (CNN)-based models have been introduced to destriping tasks and have achieved advanced results, owing to their powerful representation ability. To effectively leverage both CNNs and the structural characteristics of stripe noise, we propose a multi-scale column-spatial correction network (CSCNet) for remote sensing image destriping, in which the local structural characteristics of stripe noise and the global contextual information of the image are both explored at multiple feature scales. More specifically, a column-based correction module (CCM) and a spatial-based correction module (SCM) were designed to improve local consistency and global uniformity from the perspectives of column correction and full-image correction, respectively. Moreover, a feature fusion module based on the channel attention mechanism was created to obtain discriminative features derived from different modules and scales. We compared the proposed model against both traditional and deep learning methods on simulated and real remote sensing images. The promising results indicate that CSCNet effectively removes image stripes and outperforms state-of-the-art methods in terms of qualitative and quantitative assessments.
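
    The column-wise gain/offset structure described above can be made concrete with a classical baseline that is far simpler than the proposed CSCNet: per-column moment matching against smoothed column statistics. The sketch below is a minimal NumPy illustration of that stripe model, not the authors' network; the reference window size is an arbitrary placeholder.

        import numpy as np

        def column_moment_matching(img, ref_window=15):
            """Classical per-column destriping baseline: match each column's mean
            and std to a local reference computed from neighbouring columns.
            Illustrates the gain/offset stripe model, NOT the CSCNet model."""
            img = img.astype(np.float64)
            col_mean = img.mean(axis=0)            # per-column offsets
            col_std = img.std(axis=0) + 1e-8       # per-column gains
            # Smooth the column statistics to estimate a stripe-free reference.
            kernel = np.ones(ref_window) / ref_window
            ref_mean = np.convolve(col_mean, kernel, mode="same")
            ref_std = np.convolve(col_std, kernel, mode="same")
            # Remove each column's own gain/offset, then apply the reference's.
            return (img - col_mean) / col_std * ref_std + ref_mean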

    Removing striping artifacts in light-sheet fluorescence microscopy: a review

    In recent years, light-sheet fluorescence microscopy (LSFM) has found broad application for imaging diverse biological samples, ranging from sub-cellular structures to whole animals, both in vivo and ex vivo, owing to its many advantages relative to point-scanning methods. By selectively illuminating single planes of the sample, LSFM achieves intrinsic optical sectioning and direct 2D image acquisition, with low out-of-focus fluorescence background, sample photo-damage and photo-bleaching. On the other hand, such an illumination scheme is prone to light absorption and scattering effects, which lead to uneven illumination and striping artifacts in the images, oriented along the light-sheet propagation direction. Several methods have been developed to address this issue, ranging from fully optical solutions to entirely digital post-processing approaches. In this work, we review these methods, outlining their advantages, performance and limitations.
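
    As a concrete example of the digital post-processing family surveyed here, stripes that run along the propagation direction concentrate their energy in a narrow band of the 2D Fourier plane, and damping that band suppresses them. The sketch below is a generic Fourier notch filter of this kind, not any specific method from the review; the stripe orientation (along the image x axis) and the mask widths are assumptions.

        import numpy as np

        def notch_destripe(img, band_halfwidth=2, dc_protect=10):
            """Suppress stripes running along the x axis by damping the narrow
            band of spatial frequencies they occupy (k_x ~ 0), while protecting
            the image's own low frequencies near the Fourier origin."""
            F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
            rows, cols = F.shape
            cy, cx = rows // 2, cols // 2
            mask = np.ones(F.shape, dtype=np.float64)
            # Damp the vertical line of coefficients where stripe energy lives.
            mask[:, cx - band_halfwidth:cx + band_halfwidth + 1] = 0.1
            # Restore the protected low-frequency block around the origin.
            mask[cy - dc_protect:cy + dc_protect + 1,
                 cx - band_halfwidth:cx + band_halfwidth + 1] = 1.0
            return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))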

    Evaluating visible derivative spectroscopy by varimax-rotated, principal component analysis of aerial hyperspectral images from the western basin of Lake Erie

    The Kent State University (KSU) spectral decomposition method provides information about the spectral signals present in multispectral and hyperspectral images. Pre-processing steps that enhance the signal-to-noise ratio (SNR) by 7.37–19.04 times enable extraction of the environmental signals captured by the National Aeronautics and Space Administration (NASA) Glenn Research Center's second-generation Hyperspectral Imager (HSI2) into multiple, independent components. We accomplished this by pre-processing Level 1 HSI2 data to remove stripes from the scene, followed by a combination of spectral and spatial smoothing to further increase the SNR and remove non-Lambertian features, such as waves. On average, the residual stochastic noise removed from the HSI2 images by this method is 5.43 ± 1.42%. The method also enables removal of a spectrally coherent residual atmospheric bias of 4.28 ± 0.48%, ascribed to incomplete atmospheric correction. The total noise isolated from signal by the method is thus…
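
    The pre-processing described above (destriping followed by spectral and spatial smoothing) can be sketched generically as below. This is an illustrative NumPy/SciPy pipeline in the same spirit, not the exact KSU procedure; the filter choices (Savitzky-Golay spectrally, a small mean filter spatially) and window sizes are assumptions.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from scipy.signal import savgol_filter

        def smooth_cube(cube, spectral_win=7, spectral_poly=2, spatial_size=3):
            """Smooth each pixel's spectrum with a Savitzky-Golay filter, then
            apply a small per-band spatial mean filter to raise the SNR.
            `cube` has shape (rows, cols, bands)."""
            cube = cube.astype(np.float64)
            spec = savgol_filter(cube, spectral_win, spectral_poly, axis=2)
            return uniform_filter(spec, size=(spatial_size, spatial_size, 1))

        def residual_noise_fraction(cube, smoothed):
            """Rough estimate of the noise removed by smoothing, expressed as a
            percentage of the retained signal (cf. the residual-noise figures
            quoted in the abstract)."""
            noise = cube.astype(np.float64) - smoothed
            return 100.0 * noise.std() / (smoothed.std() + 1e-12)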

    A General Destriping Framework for Remote Sensing Images Using Flatness Constraint

    This paper proposes a general destriping framework using flatness constraints, in which various regularization functions can be handled in a unified manner. Removing stripe noise, i.e., destriping, from remote sensing images is an essential task in terms of visual quality and subsequent processing. Most existing methods are designed by combining a particular image regularization with a stripe noise characterization that cooperates with that regularization, which prevents us from examining different regularizations to adapt to various target images. To resolve this, we formulate the destriping problem as a convex optimization problem involving a general form of image regularization and the flatness constraints, a newly introduced stripe noise characterization. This strong characterization enables us to consistently capture the nature of stripe noise, regardless of the choice of image regularization. For solving the optimization problem, we also develop an efficient algorithm based on a diagonally preconditioned primal-dual splitting algorithm (DP-PDS), which can automatically adjust the stepsizes. The effectiveness of our framework is demonstrated through destriping experiments, in which we comprehensively compare combinations of image regularizations and stripe noise characterizations using hyperspectral images (HSI) and infrared (IR) videos. Comment: submitted to IEEE Transactions on Geoscience and Remote Sensing.
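
    Schematically, and with symbols of our own choosing rather than the paper's notation, the idea can be written as the constrained problem

        \min_{u,\,s}\; R(u) \quad \text{subject to} \quad v = u + s, \qquad D_{\parallel}\, s = 0,

    where $v$ is the observed striped image, $u$ the restored image, $s$ the stripe component, $R$ a generic image regularizer (e.g., total variation), and $D_{\parallel}$ a finite-difference operator along the stripe direction, so that the flatness constraint $D_{\parallel}\, s = 0$ forces each stripe to be constant along its length. The paper's actual formulation is richer than this schematic and is solved with the DP-PDS algorithm mentioned above.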

    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to sensor type. This review paper brings together the advances of image restoration techniques, with a particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to investigate the vibrant topic of data restoration, by supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers in the field to further explore restoration techniques and fast-forward the community. The toolboxes are provided at https://github.com/ImageRestorationToolbox. Comment: This paper is under review in GRS

    Vicarious Methodologies to Assess and Improve the Quality of the Optical Remote Sensing Images: A Critical Review

    Over the past decade, the number of optical Earth-observing satellites performing remote sensing has increased substantially, dramatically increasing the capability to monitor the Earth. This increase is primarily driven by improved technology, miniaturization of components, and reduced manufacturing and launch costs. These satellites often lack the on-board calibrators that larger satellites use to ensure high-quality (e.g., radiometric, geometric, spatial) scientific measurements. To address this issue, this work presents the “best” vicarious image quality assessment and improvement techniques for optical satellites that lack on-board calibration systems. In this article, image quality categories are explored and essential quality parameters (e.g., absolute and relative calibration, aliasing, etc.) are identified. For each parameter, appropriate characterization methods are identified along with their specifications or requirements. Where multiple methods exist, recommendations are made based on the strengths and weaknesses of each method. Furthermore, processing steps are presented, including examples. Essentially, this paper provides a comprehensive study of the criteria that need to be assessed to evaluate remote sensing satellite data quality, and the best vicarious methodologies to evaluate identified quality parameters such as coherent noise, ground sample distance, etc.
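
    As a toy illustration of the vicarious flavour of such checks (this is not a specific method recommended in the review), a per-band signal-to-noise ratio can be estimated directly from the image itself over a manually chosen, radiometrically uniform region; the region coordinates below are placeholders.

        import numpy as np

        def uniform_site_snr(band, row_slice, col_slice):
            """Estimate SNR as mean/std over a uniform region of interest in a
            single band, a common image-based (vicarious-style) quality check."""
            roi = band[row_slice, col_slice].astype(np.float64)
            return roi.mean() / (roi.std() + 1e-12)

        # Hypothetical usage on one band stored in a 2D array `img`:
        # snr = uniform_site_snr(img, slice(100, 160), slice(220, 280))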

    Optical Coherence Tomography guided Laser-Cochleostomy

    Despite the high precision of lasers, it remains challenging to control laser-bone ablation without injuring the underlying critical structures. Providing axial resolution on the micrometre scale, OCT is a promising candidate for imaging microstructures beneath the bone surface and monitoring the ablation process. In this work, a bridge connecting these two technologies is established: a closed-loop control of laser-bone ablation under OCT monitoring has been successfully realised.

    Seeing in the dark – I. Multi-epoch alchemy

    Weak lensing by large-scale structure is an invaluable cosmological tool given that most of the energy density of the concordance cosmology is invisible. Several large ground-based imaging surveys will attempt to measure this effect over the coming decade, but reliable control of the spurious lensing signal introduced by atmospheric turbulence and telescope optics remains a challenging problem. We address this challenge with a demonstration that point spread function (PSF) effects on measured galaxy shapes in the Sloan Digital Sky Survey (SDSS) can be corrected with existing analysis techniques. In this work, we co-add existing SDSS imaging on the equatorial stripe in order to build a data set with the statistical power to measure cosmic shear, while using a rounding kernel method to null out the effects of the anisotropic PSF. We build a galaxy catalogue from the combined imaging, characterize its photometric properties and show that the spurious shear remaining in this catalogue after the PSF correction is negligible compared to the expected cosmic shear signal. We identify a new source of systematic error in the shear–shear autocorrelations arising from selection biases related to masking. Finally, we discuss the circumstances in which this method is expected to be useful for upcoming ground-based surveys that have lensing as one of the science goals, and identify the systematic errors that can reduce its efficacy.
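
    The rounding-kernel step mentioned above can be illustrated with a simplified Fourier-domain construction: build a kernel that maps the measured anisotropic PSF onto a round Gaussian target, then convolve the image with it. The sketch below is our own minimal, Wiener-regularised version of that idea, not the actual SDSS co-addition pipeline; the target width and regularisation strength are placeholders.

        import numpy as np

        def rounding_kernel_correct(image, psf, target_sigma=2.0, eps=1e-3):
            """Map the measured anisotropic PSF onto a round Gaussian target by
            dividing in the Fourier domain (with Wiener-style regularisation),
            then apply the resulting rounding kernel to the image."""
            ny, nx = image.shape
            yy, xx = np.indices((ny, nx))
            r2 = (yy - ny // 2) ** 2 + (xx - nx // 2) ** 2
            target = np.exp(-0.5 * r2 / target_sigma ** 2)
            target /= target.sum()
            # Embed the small PSF stamp in an image-sized array, centred at the origin.
            psf = psf / psf.sum()
            psf_big = np.zeros((ny, nx), dtype=np.float64)
            py, px = psf.shape
            psf_big[:py, :px] = psf
            psf_big = np.roll(psf_big, (-(py // 2), -(px // 2)), axis=(0, 1))
            T = np.fft.fft2(np.fft.ifftshift(target))
            P = np.fft.fft2(psf_big)
            # Rounding kernel in Fourier space: K ~ T / P, regularised where P is small.
            K = T * np.conj(P) / (np.abs(P) ** 2 + eps)
            return np.real(np.fft.ifft2(np.fft.fft2(image) * K))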

    Fast Objective Coupled Planar Illumination Microscopy

    Among optical imaging techniques, light-sheet fluorescence microscopy stands out as one of the most attractive for capturing high-speed biological dynamics unfolding in three dimensions. The technique is potentially millions of times faster than point-scanning techniques such as two-photon microscopy. This potential is especially relevant for neuroscience applications, because interactions between neurons transpire over mere milliseconds within tissue volumes spanning hundreds of cubic microns. However, current-generation light-sheet microscopes are limited by volume scanning rate and/or camera frame rate. We begin by reviewing the optical principles underlying light-sheet fluorescence microscopy and the origin of these rate bottlenecks. We present an analysis leading us to the conclusion that Objective Coupled Planar Illumination (OCPI) microscopy is a particularly promising technique for recording the activity of large populations of neurons at high sampling rates. We then present speed-optimized OCPI microscopy, the first fast light-sheet technique to avoid compromising image quality or photon efficiency. We pursue two strategies to develop the fast OCPI microscope. First, we devise a set of optimizations that increase the rate of the volume scanning system to 40 Hz for volumes up to 700 microns thick. Second, we introduce Multi-Camera Image Sharing (MCIS), a technique to scale imaging rate by incorporating additional cameras. MCIS can be applied not only to OCPI but to any widefield imaging technique, circumventing the limitations imposed by the camera. Detailed design drawings are included to aid dissemination to other research groups. We also demonstrate fast calcium imaging of the larval zebrafish brain and find a heartbeat-induced motion artifact. We recommend a new preprocessing step to remove the artifact through filtering; this step requires a minimum sampling rate of 15 Hz, and we expect it to become a standard procedure in zebrafish imaging pipelines. In the last chapter, we describe essential computational considerations for controlling a fast OCPI microscope and processing the data that it generates. We introduce a new image processing pipeline developed to maximize computational efficiency when analyzing these multi-terabyte datasets, including a novel calcium imaging deconvolution algorithm. Finally, we provide a demonstration of how combined innovations in microscope hardware and software enable inference of predictive relationships between neurons, a promising complement to more conventional correlation-based analyses.
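
    The heartbeat-artifact preprocessing is described above only at a high level, so the sketch below is a hypothetical version of such a filter: a band-stop applied to each fluorescence trace around an assumed larval zebrafish heart-rate band, which a 15 Hz sampling rate can resolve. The band edges and filter order are placeholders, not values taken from the dissertation.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def remove_heartbeat(traces, fs=15.0, band=(2.0, 4.0), order=4):
            """Band-stop filter each trace around an assumed heart-rate band.
            `traces` has shape (n_rois, n_timepoints), sampled at `fs` Hz."""
            nyq = fs / 2.0
            b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandstop")
            return filtfilt(b, a, traces, axis=-1)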