491 research outputs found

    A Non-Local Structure Tensor Based Approach for Multicomponent Image Recovery Problems

    Full text link
    Non-Local Total Variation (NLTV) has emerged as a useful tool in variational methods for image recovery problems. In this paper, we extend NLTV-based regularization to multicomponent images by taking advantage of the Structure Tensor (ST) resulting from the gradient of a multicomponent image. The proposed approach allows us to penalize the non-local variations, jointly for the different components, through various ℓ1,p matrix norms with p ≥ 1. To facilitate the choice of the hyperparameters, we adopt a constrained convex optimization approach in which we minimize the data fidelity term subject to a constraint involving the ST-NLTV regularization. The resulting convex optimization problem is solved with a novel epigraphical projection method. This formulation can be efficiently implemented thanks to the flexibility offered by recent primal-dual proximal algorithms. Experiments are carried out on multispectral and hyperspectral images. The results demonstrate the benefit of introducing a non-local structure tensor regularization and show that the proposed approach leads to significant improvements in convergence speed over current state-of-the-art methods.
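The joint ℓ1,p penalty can be illustrated with a small sketch (an illustrative toy under simplifying assumptions, not the paper's implementation): for each pixel, form the C × 2 Jacobian of a C-component image from finite differences, then sum over pixels the ℓp norm of its singular values, which couples the components the way a structure-tensor regularizer does.

```python
import numpy as np

def st_l1p_penalty(img, p=2.0):
    """l1,p penalty on per-pixel Jacobians of a multicomponent image.

    img: (H, W, C) array. For each pixel the Jacobian is the C x 2 matrix
    of per-channel forward differences; the penalty sums, over pixels, the
    l_p norm of its singular values (p=2 gives the Frobenius norm).
    """
    gx = np.diff(img, axis=1, append=img[:, -1:, :])  # horizontal differences
    gy = np.diff(img, axis=0, append=img[-1:, :, :])  # vertical differences
    J = np.stack([gx, gy], axis=-1)                   # (H, W, C, 2) Jacobians
    s = np.linalg.svd(J, compute_uv=False)            # batched singular values
    return float(np.sum(np.sum(s**p, axis=-1) ** (1.0 / p)))
```

A constant image incurs zero penalty, while any edge shared across components is penalized once, jointly, rather than per channel.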

    Remote sensing image fusion via compressive sensing

    Get PDF
    In this paper, we propose a compressive sensing-based method to pan-sharpen low-resolution multispectral (LRM) data with the help of high-resolution panchromatic (HRP) data. In order to successfully apply compressive sensing theory to pan-sharpening, two requirements should be satisfied: (i) forming a comprehensive dictionary in which the estimated coefficient vectors are sparse; and (ii) ensuring that there is no correlation between the constructed dictionary and the measurement matrix. To fulfill these requirements, we propose two novel strategies. The first is to construct a dictionary that is trained with patches across different image scales. Patches at different scales, or equivalently multiscale patches, provide texture atoms without requiring any external database or any prior atoms. The redundancy of the dictionary is removed through K-singular value decomposition (K-SVD). Second, we design an iterative l1-l2 minimization algorithm based on the alternating direction method of multipliers (ADMM) to seek the sparse coefficient vectors. The proposed algorithm stacks missing high-resolution multispectral (HRM) data with the captured LRM data, so that the latter is used as a constraint for the estimation of the former during the process of seeking the representation coefficients. Three datasets are used to test the performance of the proposed method. A comparative study between the proposed method and several state-of-the-art ones shows its effectiveness in dealing with complex structures of remote sensing imagery.
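As a rough illustration of the sparse-coding step (the paper uses an ADMM-based l1-l2 solver; the sketch below substitutes plain ISTA, whose soft-thresholding proximal step is the same core ingredient, and all names are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (element-wise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_ista(D, y, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        # gradient step on the quadratic term, then l1 shrinkage
        a = soft_threshold(a - (D.T @ (D @ a - y)) / L, lam / L)
    return a
```

With an orthonormal dictionary the iteration reduces to a single shrinkage of the correlations, which is why soft-thresholding is the recognizable fingerprint of l1 solvers.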

    Non-local tensor completion for multitemporal remotely sensed images inpainting

    Get PDF
    Remotely sensed images may contain missing areas because of poor weather conditions and sensor failure. Information in those areas may play an important role in the interpretation of multitemporal remotely sensed data. The paper aims at reconstructing the missing information by a non-local low-rank tensor completion method (NL-LRTC). First, non-local correlations in the spatial domain are taken into account by searching and grouping similar image patches in a large search window. Then low-rankness of the identified 4th-order tensor groups is promoted to exploit their correlations in the spatial, spectral, and temporal domains while reconstructing the underlying patterns. Experimental results on simulated and real data demonstrate that the proposed method is effective both qualitatively and quantitatively. In addition, the proposed method is computationally efficient compared to other patch-based methods such as the recently proposed PM-MTGSR method.
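A minimal matrix-case sketch of the completion idea (the paper promotes low-rankness of 4th-order patch tensors; the hypothetical `lowrank_complete` below instead alternates matrix rank truncation with re-imposing the observed entries):

```python
import numpy as np

def lowrank_complete(M, mask, rank=1, n_iter=200):
    """Fill missing entries of M (mask == False) by iterative rank truncation."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                 # keep only the top `rank` singular values
        X = (U * s) @ Vt               # best rank-`rank` approximation
        X = np.where(mask, M, X)       # clamp observed entries back to the data
    return X
```

When the underlying data truly are low rank and few entries are missing, the fixed point of this alternation recovers the missing values.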

    Optimal spectral reconstructions from deterministic and stochastic sampling geometries using compressive sensing and spectral statistical models

    Get PDF
    This dissertation focuses on the development of high-quality image reconstruction methods from a limited number of Fourier samples using optimized, stochastic, and deterministic sampling geometries. Two methodologies are developed: an optimal image reconstruction framework based on Compressive Sensing (CS) techniques and a new Spectral Statistical approach based on the use of isotropic models over a dyadic partitioning of the spectrum. The proposed methods are demonstrated in applications to reconstructing fMRI and remote sensing imagery. Typically, a reduction in MRI image acquisition time is achieved by sampling K-space at a rate below the Nyquist rate. Various methods using correlation between samples, sample averaging, and more recently, Compressive Sensing, are employed to mitigate the aliasing effects of under-sampled Fourier data. The proposed solution utilizes an additional layer of optimization to enhance the performance of a previously published CS reconstruction algorithm. Specifically, the new framework provides reconstructions of a desired image quality by jointly optimizing for the optimal K-space sampling geometry and CS model parameters. The effectiveness of each geometry is evaluated based on the required number of FFT samples that are available for image reconstructions of sufficient quality. A central result of this approach is that the fastest geometry, the spiral low-pass geometry, also provided the best (optimized) CS reconstructions. This geometry provided significantly better reconstructions than the stochastic sampling geometries recommended in the literature. An optimization framework for selecting appropriate CS model reconstruction parameters is also provided. Here, the term 'appropriate CS parameters' is meant to indicate that the estimated parameter ranges can provide some guarantee of a minimum level of image reconstruction performance.
    Utilizing the simplex search algorithm, the optimal TV-norm and Wavelet transform penalties are calculated for the CS reconstruction objective function. Collecting the functional evaluation values of the simplex search over a large data set allows a range of objective function weighting parameters to be defined for the sampling geometries that were found to be effective. The results indicate that the CS parameter optimization framework is significant in that it can provide large improvements over the standard use of non-optimized approaches. The dissertation also develops a new Spectral Statistical approach for spectral reconstruction of remote sensing imagery. The motivation for pursuing this research includes potential applications such as the development of better image compression schemes based on a limited number of spectral coefficients. Other applications include the use of spectral interpolation methods for remote sensing systems that directly sample the Fourier domain optically or electromagnetically, which may suffer from missing or degraded samples beyond and/or within the focal plane. For these applications, a new spectral statistical methodology is proposed that reconstructs spectral data from uniformly spaced samples over a dyadic partition of the spectrum. Unlike the CS approach, which solves for the 2D FFT coefficients directly, the statistical approach uses separate models for the magnitude and phase, allowing for separate control of the reconstruction quality of each one. A scalable solution that partitions the spectral domain into blocks of varying size allows for the determination of the appropriate covariance models of the magnitude and phase spectra bounded by the blocks. The individual spectral models are then applied to solving for the optimal linear estimate, which is referred to in the literature as Kriging.
    The use of spectral data transformations is also presented as a means of producing data that is better suited for statistical modeling and variogram estimation. A logarithmic transformation is applied to the magnitude spectra, as it has been shown to impart intrinsic stationarity over localized, bounded regions of the spectra. Phase spectra resulting from the 2D FFT are best described as uniformly distributed over the interval [-π, π]. In this original state, the spectral samples fail to produce appropriate spectral statistical models that exhibit inter-sample covariance. For phase spectra modeling, an unwrapping step is required to ensure that individual blocks can be effectively modeled using appropriate variogram models. The transformed magnitude and unwrapped phase spectra result in unique statistical models that are optimal over individual frequency blocks, which produce accurate spectral reconstructions that account for localized variability in the spectral domain. The Kriging spectral estimates are shown to produce higher quality magnitude and phase spectra reconstructions than the cubic spline, nearest neighbor, and bilinear interpolators that are widely used. Even when model assumptions, such as isotropy, are violated by the spectral data being modeled, excellent reconstructions are still obtained. Finally, the two spectral estimation methods developed in this dissertation are compared against one another, revealing how each of them is appropriate for different classes of images. For satellite images that contain a large amount of detail, the new spectral statistical approach, which reconstructs the spectrum much faster from a fraction of the original high-frequency content, provided significantly better reconstructions than the best reconstructions from the optimized CS geometries. This result is supported not only by comparing image quality metrics, but also by visual assessment.
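The two pre-modeling transformations described above can be sketched in a few lines (illustrative only; `np.unwrap` performs 1-D phase unwrapping, whereas the dissertation unwraps 2-D spectra blockwise):

```python
import numpy as np

# Transformations applied before variogram modeling: the log of the magnitude
# spectrum, and unwrapping of the phase spectrum (np.unwrap removes the 2*pi
# jumps that make raw phase look uniformly distributed on [-pi, pi]).
x = np.cos(2 * np.pi * 5 * np.arange(64) / 64)   # toy 1-D signal
F = np.fft.fft(x)
log_mag = np.log(np.abs(F) + 1e-12)              # stabilized log-magnitude
phase = np.unwrap(np.angle(F))                   # unwrapped, jump-free phase
```

After unwrapping, consecutive phase samples differ by at most π, so the series exhibits the inter-sample covariance that variogram models require.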

    A review of snapshot multidimensional optical imaging: Measuring photon tags in parallel

    Get PDF
    Multidimensional optical imaging has seen remarkable growth in the past decade. Rather than measuring only the two-dimensional spatial distribution of light, as in conventional photography, multidimensional optical imaging captures light in up to nine dimensions, providing unprecedented information about incident photons' spatial coordinates, emittance angles, wavelength, time, and polarization. Multidimensional optical imaging can be accomplished either by scanning or by parallel acquisition. Compared with scanning-based imagers, parallel acquisition, also dubbed snapshot imaging, has a prominent advantage in maximizing optical throughput, particularly when measuring a datacube of high dimensions. Here, we first categorize snapshot multidimensional imagers based on their acquisition and image reconstruction strategies, then highlight the snapshot advantage in the context of optical throughput, and finally discuss their state-of-the-art implementations and applications.

    Infrared Image Super-Resolution: Systematic Review, and Future Trends

    Full text link
    Image Super-Resolution (SR) is essential for a wide range of computer vision and image processing tasks. Investigating infrared (IR) image (or thermal image) super-resolution is a continuing concern within the development of deep learning. This survey aims to provide a comprehensive perspective on IR image super-resolution, including its applications, hardware imaging system dilemmas, and a taxonomy of image processing methodologies. In addition, the datasets and evaluation metrics used in IR image super-resolution tasks are discussed. Furthermore, the deficiencies in current technologies and promising directions for the community to explore are highlighted. To cope with the rapid development in this field, we intend to regularly update the relevant work at \url{https://github.com/yongsongH/Infrared_Image_SR_Survey}. (Comment: Submitted to IEEE TNNLS.)

    Framework to Create Cloud-Free Remote Sensing Data Using Passenger Aircraft as the Platform

    Get PDF
    Cloud removal in optical remote sensing imagery is essential for many Earth observation applications. Due to the inherent imaging geometry of satellite remote sensing, it is impossible to observe the ground under the clouds directly; therefore, cloud removal algorithms are never perfect owing to the loss of ground truth. Passenger aircraft have the advantages of frequent revisits and low cost. Additionally, because passenger aircraft fly at lower altitudes than satellites, they can observe the ground under the clouds at an oblique viewing angle. In this study, we examine the possibility of creating cloud-free remote sensing data by stacking multi-angle images captured from passenger aircraft. To accomplish this, a processing framework is proposed, which includes four main steps: 1) multi-angle image acquisition from passenger aircraft, 2) cloud detection based on deep learning semantic segmentation models, 3) cloud removal by image stacking, and 4) image quality enhancement via haze removal. This method is intended to remove cloud contamination without requiring reference images or pre-determination of cloud types. The proposed method was tested in multiple case studies, wherein the resultant cloud- and haze-free orthophotos were visualized and quantitatively analyzed in various land cover type scenes. The results of the case studies demonstrated that the proposed method can generate high-quality, cloud-free orthophotos. Therefore, we conclude that this framework has great potential for creating cloud-free remote sensing images when cloud removal from satellite imagery is difficult or inaccurate.

    mHealth hyperspectral learning for instantaneous spatiospectral imaging of hemodynamics

    Full text link
    Hyperspectral imaging acquires data in both the spatial and frequency domains to offer abundant physical or biological information. However, conventional hyperspectral imaging has intrinsic limitations: bulky instruments, slow data acquisition rates, and a spatiospectral tradeoff. Here we introduce hyperspectral learning for snapshot hyperspectral imaging, in which sampled hyperspectral data in a small subarea are incorporated into a learning algorithm to recover the hypercube. Hyperspectral learning exploits the idea that a photograph is more than merely a picture and contains detailed spectral information. A small sampling of hyperspectral data enables spectrally informed learning to recover a hypercube from an RGB image. Hyperspectral learning is capable of recovering full spectroscopic resolution in the hypercube, comparable to the high spectral resolutions of scientific spectrometers. Hyperspectral learning also enables ultrafast dynamic imaging, leveraging ultraslow video recording in an off-the-shelf smartphone, given that a video comprises a time series of multiple RGB images. To demonstrate its versatility, an experimental model of vascular development is used to extract hemodynamic parameters via statistical and deep-learning approaches. Subsequently, the hemodynamics of peripheral microcirculation is assessed at an ultrafast temporal resolution up to a millisecond, using a conventional smartphone camera. This spectrally informed learning method is analogous to compressed sensing; however, it further allows for reliable hypercube recovery and key feature extraction with a transparent learning algorithm. This learning-powered snapshot hyperspectral imaging method yields high spectral and temporal resolutions and eliminates the spatiospectral tradeoff, offering simple hardware requirements and potential applications of various machine-learning techniques. (Comment: This paper will appear in PNAS Nexus.)
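The spectrally informed recovery step can be caricatured as a regression from RGB to spectra fitted on the small sampled subarea. The sketch below uses plain ridge regression; the paper's learning algorithms are richer, and all names here are illustrative:

```python
import numpy as np

def fit_rgb_to_spectrum(rgb_train, spectra_train, reg=1e-3):
    """Learn a linear spectral-recovery map from a small sampled subarea.

    rgb_train: (n, 3) RGB values; spectra_train: (n, B) hyperspectral bands.
    Returns a (4, B) ridge-regression matrix (with bias) mapping RGB -> spectrum.
    """
    X = np.hstack([rgb_train, np.ones((rgb_train.shape[0], 1))])  # add bias
    return np.linalg.solve(X.T @ X + reg * np.eye(4), X.T @ spectra_train)

def predict_spectra(rgb, W):
    """Apply the learned map to every RGB pixel, yielding (n, B) spectra."""
    X = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return X @ W
```

Once fitted on the subarea, the map is applied to every pixel of the RGB frame; applying it frame by frame to a smartphone video yields the time-resolved hypercube the abstract describes.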