
    A convex formulation for hyperspectral image superresolution via subspace-based regularization

    Hyperspectral remote sensing images (HSIs) usually have high spectral resolution and low spatial resolution. Conversely, multispectral images (MSIs) usually have low spectral and high spatial resolutions. The problem of inferring images which combine the high spectral and high spatial resolutions of HSIs and MSIs, respectively, is a data fusion problem that has been the focus of recent active research due to the increasing availability of HSIs and MSIs retrieved from the same geographical area. We formulate this problem as the minimization of a convex objective function containing two quadratic data-fitting terms and an edge-preserving regularizer. The data-fitting terms account for blur, different resolutions, and additive noise. The regularizer, a form of vector Total Variation, promotes piecewise-smooth solutions with discontinuities aligned across the hyperspectral bands. The downsampling operator accounting for the different spatial resolutions, the non-quadratic and non-smooth nature of the regularizer, and the very large size of the HSI to be estimated lead to a hard optimization problem. We deal with these difficulties by exploiting the fact that HSIs generally "live" in a low-dimensional subspace and by tailoring the Split Augmented Lagrangian Shrinkage Algorithm (SALSA), which is an instance of the Alternating Direction Method of Multipliers (ADMM), to this optimization problem by means of a convenient variable splitting. The spatial blur and the spectral linear operators linked, respectively, with the HSI and MSI acquisition processes are also estimated, and we obtain an effective algorithm that outperforms the state of the art, as illustrated in a series of experiments with simulated and real-life data. Comment: IEEE Trans. Geosci. Remote Sens., to be published.
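
    The abstract describes the optimization only in words; a plausible written-out form, assuming the target image lies in a subspace spanned by a basis E (so the fused image is E Z for coefficients Z), is sketched below. The symbols Y_h, Y_m (observed HSI and MSI in matrix form), B (spatial blur), M (spatial downsampling), R (spectral response of the MSI sensor) and the weights \lambda_m, \lambda_{TV} are my own shorthand for the quantities named in the abstract, not necessarily the paper's notation.

        \min_{\mathbf{Z}}\; \tfrac{1}{2}\bigl\|\mathbf{Y}_h - \mathbf{E}\mathbf{Z}\mathbf{B}\mathbf{M}\bigr\|_F^2
        + \tfrac{\lambda_m}{2}\bigl\|\mathbf{Y}_m - \mathbf{R}\mathbf{E}\mathbf{Z}\bigr\|_F^2
        + \lambda_{\mathrm{TV}}\,\mathrm{TV}(\mathbf{Z})

    The first quadratic term ties the blurred, downsampled estimate to the observed HSI, the second ties its spectral projection to the MSI, and the vector-TV term promotes piecewise-smooth coefficient images with aligned discontinuities; ADMM/SALSA then handles the three terms in separate subproblems via auxiliary splitting variables.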

    Focusing on out-of-focus: assessing defocus estimation algorithms for the benefit of automated image masking

    Acquiring photographs as input for an image-based modelling pipeline is less trivial than often assumed. Photographs should be correctly exposed, cover the subject sufficiently from all possible angles, have the required spatial resolution, be devoid of any motion blur, exhibit accurate focus and feature an adequate depth of field. The last four characteristics all determine the "sharpness" of an image, and the photogrammetric, computer vision and hybrid photogrammetric computer vision communities all assume that the object to be modelled is depicted "acceptably" sharp throughout the whole image collection. Although none of these three fields has ever properly quantified "acceptably sharp", it is more or less standard practice to mask those image portions that appear to be unsharp due to the limited depth of field around the plane of focus (whether this means blurry object parts or completely out-of-focus backgrounds). This paper will assess how well- or ill-suited defocus-estimating algorithms are for automatically masking a series of photographs, since this could speed up modelling pipelines with many hundreds or thousands of photographs. To that end, the paper uses five different real-world datasets and compares the output of three state-of-the-art edge-based defocus estimators. Afterwards, critical comments and plans for the future finalise this paper.
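
    As a concrete (if simplistic) baseline for the kind of automated masking discussed above, the sketch below thresholds a local sharpness map built from the squared Laplacian response. It is not one of the three edge-based defocus estimators compared in the paper; the window size and threshold are arbitrary placeholders.

        import numpy as np
        from scipy import ndimage

        def sharpness_mask(gray, win=15, thresh=1e-4):
            # gray: 2-D float array in [0, 1]; win and thresh are ad-hoc tuning values.
            lap = ndimage.laplace(gray)                           # second-derivative (edge) response
            local_energy = ndimage.uniform_filter(lap ** 2, win)  # local mean of squared Laplacian
            return local_energy > thresh                          # True where the image appears in focus

    In a modelling pipeline the resulting mask would typically be cleaned up morphologically and stored alongside each photograph, so that blurry object parts and out-of-focus backgrounds are excluded from matching and dense reconstruction.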

    A Framework for Fast Image Deconvolution with Incomplete Observations

    In image deconvolution problems, the diagonalization of the underlying operators by means of the FFT usually yields very large speedups. When there are incomplete observations (e.g., in the case of unknown boundaries), standard deconvolution techniques normally involve non-diagonalizable operators, resulting in rather slow methods, or, otherwise, use inexact convolution models, resulting in the occurrence of artifacts in the enhanced images. In this paper, we propose a new deconvolution framework for images with incomplete observations that allows us to work with diagonalized convolution operators, and therefore is very fast. We iteratively alternate the estimation of the unknown pixels and of the deconvolved image, using, e.g., an FFT-based deconvolution method. This framework is an efficient, high-quality alternative to existing methods of dealing with the image boundaries, such as edge tapering. It can be used with any fast deconvolution method. We give an example in which a state-of-the-art method that assumes periodic boundary conditions is extended, through the use of this framework, to unknown boundary conditions. Furthermore, we propose a specific implementation of this framework, based on the alternating direction method of multipliers (ADMM). We provide a proof of convergence for the resulting algorithm, which can be seen as a "partial" ADMM, in which not all variables are dualized. We report experimental comparisons with other primal-dual methods, where the proposed one performed at the level of the state of the art. Four different kinds of applications were tested in the experiments: deconvolution, deconvolution with inpainting, superresolution, and demosaicing, all with unknown boundaries. Comment: IEEE Trans. Image Process., to be published. 15 pages, 11 figures. MATLAB code available at https://github.com/alfaiate/DeconvolutionIncompleteOb
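
    A minimal sketch of the alternating idea described above, assuming a Wiener filter as the FFT-based deconvolution step and a PSF already zero-padded to the image size and circularly centred at pixel (0, 0). The actual method, its "partial" ADMM formulation and its convergence guarantees are in the paper; all names and parameters here are placeholders.

        import numpy as np

        def wiener_fft(y, psf_f, nsr=1e-2):
            # FFT-based Wiener deconvolution, which implicitly assumes periodic boundaries.
            Y = np.fft.fft2(y)
            return np.real(np.fft.ifft2(np.conj(psf_f) * Y / (np.abs(psf_f) ** 2 + nsr)))

        def deconv_incomplete(y_obs, mask, psf, iters=20):
            # mask: True where a pixel was actually observed, False for unknown pixels
            # (e.g. a boundary frame). psf: zero-padded to y_obs.shape, centred at (0, 0).
            psf_f = np.fft.fft2(psf)
            y = y_obs.copy()
            x = y_obs.copy()
            for _ in range(iters):
                blurred = np.real(np.fft.ifft2(psf_f * np.fft.fft2(x)))
                y = np.where(mask, y_obs, blurred)   # re-estimate the unobserved pixels
                x = wiener_fft(y, psf_f)             # fast deconvolution on the completed image
            return x

    Because the unobserved pixels are filled in with a blurred version of the current estimate, the convolution operator stays circulant and every deconvolution step remains a cheap FFT solve, which is the speed advantage the abstract refers to.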

    Improvement of Spatial Resolution with Staggered Arrays as Used in the Airborne Optical Sensor ADS40

    Get PDF
    Using pushbroom sensors onboard aircraft or satellites requires, especially for photogrammetric applications, wide image swaths with a high geometric resolution. One approach to satisfy both demands is to use staggered line arrays, which are constructed from two identical CCD lines shifted against each other by half a pixel in line direction. Practical applications of such arrays in remote sensing include SPOT, and in the commercial environment the Airborne Digital Sensor, or ADS40, from Leica Geosystems. Theoretically, the usefulness of staggered arrays depends on the spatial resolution, which is defined by the total point spread function of the imaging system and Shannon's sampling theorem. Due to the two shifted sensor lines, staggering results in a doubled number of sampling points perpendicular to the flight direction. In order to simultaneously double the sample number in the flight direction, the line readout rate, or integration time, has to produce half a pixel spacing on the ground. Staggering in combination with a high-resolution optical system can be used to fulfil the sampling condition, which means that no spectral components above the critical spatial frequency 2/D are present. Theoretically, the resolution is as good as for a non-staggered line with half the pixel size, D/2, but the radiometric dynamics should be twice as high. In practice, the slightly different viewing angle of the two lines of a staggered array can result in a deterioration of image quality due to aircraft motion, attitude fluctuations or terrain undulation. Fulfilling the sampling condition further means that no aliasing occurs. This is essential for the image quality in quasi-periodically textured image areas and for photogrammetric sub-pixel accuracy. Furthermore, image restoration methods for enhancing the image quality can be applied more efficiently. The panchromatic resolution of the ADS40 optics is optimised for image collection by a staggered array. This means it transfers spatial frequencies of twice the Nyquist frequency of its 12k sensors. First experiments, carried out some years ago, already indicated a spatial resolution improvement obtained by image restitution from the ADS40 staggered 12k pairs. The results of the restitution algorithm, which is integrated in the ADS image processing flow, have now been analysed quantitatively. This paper presents the theory of high-resolution image restitution from staggered lines and practical results with ADS40 high-resolution panchromatic images and high-resolution colour images, created by sharpening 12k colour images with the high-resolution panchromatic ones.
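
    As a toy illustration of why staggering doubles the across-track sampling density, the sketch below simply interleaves the samples of the two half-pixel-shifted CCD lines into one line sampled at half the original pixel pitch. Real ADS40 processing must additionally compensate the small along-track offset between the two lines (the effect of aircraft motion and attitude fluctuations noted above) before or during restitution; the function name is a placeholder, not part of the ADS processing flow.

        import numpy as np

        def interleave_staggered(line_a, line_b):
            # line_a, line_b: co-registered samples from two identical CCD lines that are
            # shifted against each other by half a pixel. Interleaving yields one line
            # sampled at pitch D/2 instead of D, doubling the Nyquist frequency.
            out = np.empty(line_a.size + line_b.size, dtype=line_a.dtype)
            out[0::2] = line_a
            out[1::2] = line_b
            return out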