2,740 research outputs found

    A fast and accurate basis pursuit denoising algorithm with application to super-resolving tomographic SAR

    $L_1$ regularization is used for finding sparse solutions to underdetermined linear systems. Since sparse signals are widely expected in remote sensing, this type of regularization scheme and its extensions have been employed in many remote sensing problems, such as image fusion, target detection, and image super-resolution, and have led to promising results. However, solving such sparse reconstruction problems is computationally expensive, which limits their practical use. In this paper, we propose a novel, efficient algorithm for solving the complex-valued $L_1$ regularized least squares problem. Taking the high-dimensional tomographic synthetic aperture radar (TomoSAR) problem as a practical example, we carry out extensive experiments, with both simulated and real data, to demonstrate that the proposed approach retains the accuracy of second-order methods while speeding up the processing by one to two orders of magnitude. Although we have chosen TomoSAR as the example, the proposed method can be applied to any spectral estimation problem.
    Comment: 11 pages, IEEE Transactions on Geoscience and Remote Sensing
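
    The abstract does not spell out the solver; for orientation, the following is a minimal sketch of the complex-valued $L_1$ regularized least squares (basis pursuit denoising) problem solved by plain iterative soft thresholding (ISTA), where the shrinkage acts on complex magnitudes while preserving phase. All names, step sizes, and the toy TomoSAR-like setup are illustrative assumptions, not the algorithm proposed in the paper.

```python
# Minimal ISTA sketch for complex-valued L1-regularized least squares
# (basis pursuit denoising):  min_x 0.5*||y - A x||_2^2 + lam*||x||_1.
# Illustrative only; not the solver proposed in the paper above.
import numpy as np

def complex_soft_threshold(z, tau):
    """Shrink complex magnitudes by tau while keeping the phase."""
    mag = np.abs(z)
    return z * (np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12))

def ista_complex_l1(A, y, lam, n_iter=500):
    """Iterative soft thresholding for complex A and y (fixed step 1/L)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - y)    # gradient of the quadratic term
        x = complex_soft_threshold(x - grad / L, lam / L)
    return x

# Toy TomoSAR-like example: sparse complex reflectivity on an elevation grid.
rng = np.random.default_rng(0)
A = (rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))) / np.sqrt(2)
x_true = np.zeros(256, dtype=complex)
x_true[[40, 180]] = [2.0 + 1.0j, -1.5 + 0.5j]   # two scatterers in one range-azimuth cell
y = A @ x_true + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
x_hat = ista_complex_l1(A, y, lam=0.1)
```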

    A convex formulation for hyperspectral image superresolution via subspace-based regularization

    Hyperspectral remote sensing images (HSIs) usually have high spectral resolution and low spatial resolution. Conversely, multispectral images (MSIs) usually have low spectral and high spatial resolutions. The problem of inferring images that combine the high spectral resolution of HSIs with the high spatial resolution of MSIs is a data fusion problem that has been the focus of recent active research, due to the increasing availability of HSIs and MSIs retrieved from the same geographical area. We formulate this problem as the minimization of a convex objective function containing two quadratic data-fitting terms and an edge-preserving regularizer. The data-fitting terms account for blur, different resolutions, and additive noise. The regularizer, a form of vector Total Variation, promotes piecewise-smooth solutions with discontinuities aligned across the hyperspectral bands. The downsampling operator accounting for the different spatial resolutions, the non-quadratic and non-smooth nature of the regularizer, and the very large size of the HSI to be estimated lead to a hard optimization problem. We deal with these difficulties by exploiting the fact that HSIs generally "live" in a low-dimensional subspace and by tailoring the Split Augmented Lagrangian Shrinkage Algorithm (SALSA), which is an instance of the Alternating Direction Method of Multipliers (ADMM), to this optimization problem by means of a convenient variable splitting. The spatial blur and the spectral linear operators linked, respectively, with the HSI and MSI acquisition processes are also estimated, and we obtain an effective algorithm that outperforms the state of the art, as illustrated in a series of experiments with simulated and real-life data.
    Comment: IEEE Trans. Geosci. Remote Sens., to be published
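
    The abstract describes the objective only in words; as a concrete reference point, here is a sketch of an objective of that general form in assumed notation (the symbols Z, E, B, M, R, D_h, D_v and the weights are illustrative, not necessarily the authors' own).

```latex
% Sketch of an objective of the form described above (notation is assumed):
% Z holds subspace coefficients of the HSI (X = E Z for a basis E), B is a
% spatial blur, M is downsampling, R is the spectral response of the MSI
% sensor, and D_h, D_v are horizontal/vertical difference operators used by
% the vector total-variation regularizer.
\begin{equation*}
\min_{Z}\;
  \tfrac{1}{2}\,\bigl\lVert Y_h - E Z B M \bigr\rVert_F^2
  \;+\; \tfrac{\lambda_m}{2}\,\bigl\lVert Y_m - R E Z \bigr\rVert_F^2
  \;+\; \lambda_{\varphi}\,\varphi\bigl(Z D_h,\, Z D_v\bigr),
\qquad
\varphi(U, V) \;=\; \sum_{i}\sqrt{\sum_{k}\bigl(U_{ki}^{2} + V_{ki}^{2}\bigr)}.
\end{equation*}
```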

    Compressive Time Delay Estimation Using Interpolation

    Time delay estimation has long been an active area of research. In this work, we show that compressive sensing with interpolation may be used to achieve good estimation precision while lowering the sampling frequency. We propose an Interpolating Band-Excluded Orthogonal Matching Pursuit algorithm that uses one of two interpolation functions to estimate the time delay parameter. The numerical results show that interpolation improves estimation precision and that compressive sensing provides an elegant tradeoff that may lower the required sampling frequency while still attaining a desired estimation performance.
    Comment: 5 pages, 2 figures, technical report supporting 1-page submission for GlobalSIP 201
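
    As a rough illustration of the idea of combining compressive sensing with interpolation (not the paper's Interpolating Band-Excluded OMP itself), the sketch below correlates compressive measurements of a delayed pulse against a coarse grid of delayed atoms and refines the peak with parabolic interpolation; the waveform, matrix sizes, and the parabolic interpolator are assumptions.

```python
# Sketch: estimate a single time delay from compressive measurements by
# correlating with a grid of delayed pulses, then refining the grid peak with
# parabolic interpolation for an off-grid estimate. Illustrative only.
import numpy as np

def pulse(t, delay, width=0.05):
    """Gaussian test pulse delayed by `delay` (illustrative waveform)."""
    return np.exp(-((t - delay) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(1)
N, M = 512, 64                                   # Nyquist grid length, compressive measurements
t = np.linspace(0.0, 1.0, N, endpoint=False)
true_delay = 0.4137
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random compressive measurement matrix
y = Phi @ pulse(t, true_delay)                   # compressed received signal

# Dictionary of pulses delayed on a coarse grid, compressed by the same Phi.
grid = np.linspace(0.0, 1.0, 128, endpoint=False)
D = Phi @ np.stack([pulse(t, d) for d in grid], axis=1)
corr = np.abs(D.conj().T @ y)                    # matching-pursuit correlations

# Parabolic interpolation of the correlation peak around the best grid atom.
k = int(np.argmax(corr))
c_m, c_0, c_p = corr[k - 1], corr[k], corr[k + 1]
offset = 0.5 * (c_m - c_p) / (c_m - 2 * c_0 + c_p)
delay_hat = grid[k] + offset * (grid[1] - grid[0])
print(f"true delay {true_delay:.4f}, estimated {delay_hat:.4f}")
```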

    R-dimensional ESPRIT-type algorithms for strictly second-order non-circular sources and their performance analysis

    High-resolution parameter estimation algorithms designed to exploit prior knowledge about incident signals from strictly second-order (SO) non-circular (NC) sources allow for a lower estimation error and can resolve twice as many sources. In this paper, we derive the R-D NC Standard ESPRIT and the R-D NC Unitary ESPRIT algorithms, which provide a significantly better performance than their original versions for arbitrary source signals. They are applicable to shift-invariant R-D antenna arrays and do not require a centrosymmetric array structure. Moreover, we present a first-order asymptotic performance analysis of the proposed algorithms, which is based on the error in the signal subspace estimate arising from the noise perturbation. The derived expressions for the resulting parameter estimation error are explicit in the noise realizations and asymptotic in the effective signal-to-noise ratio (SNR), i.e., the results become exact for either high SNRs or a large sample size. We also provide mean squared error (MSE) expressions, where only the assumptions of a zero mean and finite SO moments of the noise are required, but no assumptions about its statistics are necessary. As a main result, we analytically prove that the asymptotic performance of both R-D NC ESPRIT-type algorithms is identical in the high effective SNR regime. Finally, a case study shows that no improvement from strictly non-circular sources can be achieved in the special case of a single source.
    Comment: accepted at IEEE Transactions on Signal Processing, 15 pages, 6 figures
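
    As a point of reference, the following is a minimal 1-D sketch of the non-circular preprocessing (stacking the data with its flipped conjugate) followed by standard shift-invariance ESPRIT on a uniform linear array; the array model, variable names, and the use of real-valued source symbols are illustrative assumptions and do not reproduce the paper's general R-D algorithms or their performance analysis.

```python
# 1-D sketch of NC Standard ESPRIT on a uniform linear array (ULA): augment
# the data with its flipped conjugate (a common preprocessing step for
# strictly non-circular sources, e.g. real-valued/BPSK-like symbols),
# estimate the signal subspace, and solve the shift-invariance equation
# block-wise. Illustrative only.
import numpy as np

def nc_standard_esprit_1d(X, d):
    """Estimate d spatial frequencies from an M x N ULA data matrix X."""
    M = X.shape[0]
    Pi = np.flipud(np.eye(M))                        # exchange (flip) matrix
    X_nc = np.vstack([X, Pi @ X.conj()])             # augmented 2M x N data matrix
    U, _, _ = np.linalg.svd(X_nc, full_matrices=False)
    Us = U[:, :d]                                    # signal-subspace estimate
    top, bot = Us[:M, :], Us[M:, :]
    J1Us = np.vstack([top[:-1, :], bot[:-1, :]])     # block-wise selection of the
    J2Us = np.vstack([top[1:, :], bot[1:, :]])       # two shifted subarrays
    Psi = np.linalg.lstsq(J1Us, J2Us, rcond=None)[0] # solve J1 Us Psi ~ J2 Us
    return np.angle(np.linalg.eigvals(Psi))          # spatial frequencies mu_i

# Toy example: two real-valued (strictly non-circular) sources, 8-element ULA.
rng = np.random.default_rng(2)
M, N, mu = 8, 200, np.array([0.5, 1.1])
A = np.exp(1j * np.outer(np.arange(M), mu))          # ULA steering matrix
S = rng.standard_normal((2, N))                      # real symbols (non-circular)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
print(np.sort(nc_standard_esprit_1d(A @ S + noise, d=2)))
```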