
    Model-Based Edge Detector for Spectral Imagery Using Sparse Spatiospectral Masks

    Two model-based algorithms for edge detection in spectral imagery are developed that specifically target capturing intrinsic features such as isoluminant edges, which are characterized by a jump in color but not in intensity. Given prior knowledge of the classes of reflectance or emittance spectra associated with candidate objects in a scene, a small set of spectral-band ratios, which most profoundly identify the edge between each pair of materials, is selected to define an edge signature. The bands that form the edge signature are fed into a spatial mask, producing a sparse joint spatiospectral nonlinear operator. The first algorithm achieves edge detection for every material pair by matching the response of the operator at every pixel with the edge signature for the pair of materials. The second algorithm is a classifier-enhanced extension of the first that adaptively accentuates distinctive features before applying the spatiospectral operator. Both algorithms are extensively verified using spectral imagery from an airborne hyperspectral imager and from a dots-in-a-well midinfrared imager. In both cases, the multicolor gradient (MCG) and the hyperspectral/spatial detection of edges (HySPADE) edge detectors are used as benchmarks for comparison. The results demonstrate that the proposed algorithms outperform the MCG and HySPADE edge detectors in accuracy, especially when isoluminant edges are present. By requiring only a few bands as input to the spatiospectral operator, the algorithms enable significant levels of data compression in band selection. In the presented examples, the required operations per pixel are reduced by a factor of 71 with respect to those required by the MCG edge detector.
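    The ratio-based edge signature can be sketched on a toy two-band cube; the function name, the 1x2 spatial mask, and the spectra below are illustrative assumptions, not the paper's actual operator or data:

```python
import numpy as np

def band_ratio_edge_response(cube, b1, b2):
    """Horizontal edge response from the ratio of two spectral bands.

    cube: (rows, cols, bands) spectral image.
    The response compares the b1/b2 band ratio of each pixel with that
    of its right-hand neighbour (a minimal 1x2 spatial mask).
    """
    eps = 1e-12                                   # avoid division by zero
    ratio = cube[:, :, b1] / (cube[:, :, b2] + eps)
    return np.abs(ratio[:, 1:] - ratio[:, :-1])

# Two materials with equal total intensity but different colour: an
# isoluminant edge that an intensity-based detector would miss.
left = np.tile([0.6, 0.4], (4, 4, 1))             # material A spectrum
right = np.tile([0.4, 0.6], (4, 4, 1))            # material B spectrum
cube = np.concatenate([left, right], axis=1)      # boundary between cols 3 and 4

resp = band_ratio_edge_response(cube, 0, 1)
print(resp[0])   # peak at the material boundary, zero elsewhere
```

The ratio 0.6/0.4 versus 0.4/0.6 differs sharply across the boundary even though both pixels sum to the same intensity, which is the property the edge signature exploits.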

    Satellite remote sensing reveals a positive impact of living oyster reefs on microalgal biofilm development

    Satellite remote sensing (RS) is routinely used for the large-scale monitoring of microphytobenthos (MPB) biomass in intertidal mudflats and has greatly improved our knowledge of MPB spatio-temporal variability and its potential drivers. Processes operating on smaller scales however, such as the impact of benthic macrofauna on MPB development, to date remain underinvestigated. In this study, we analysed the influence of wild Crassostrea gigas oyster reefs on MPB biofilm development using multispectral RS. A 30-year time series (1985-2015) combining high-resolution (30 m) Landsat and SPOT data was built in order to explore the relationship between C. gigas reefs and MPB spatial distribution and seasonal dynamics, using the normalized difference vegetation index (NDVI). Emphasis was placed on the analysis of a before-after control-impact (BACI) experiment designed to assess the effect of oyster killing on the surrounding MPB biofilms. Our RS data reveal that the presence of oyster reefs positively affects MPB biofilm development. Analysis of the historical time series first showed the presence of persistent, highly concentrated MPB patches around oyster reefs. This observation was supported by the BACI experiment which showed that killing the oysters (while leaving the physical reef structure, i.e. oyster shells, intact) negatively affected both MPB biofilm biomass and spatial stability around the reef. As such, our results are consistent with the hypothesis of nutrient input as an explanation for the MPB growth-promoting effect of oysters, whereby organic and inorganic matter released through oyster excretion and biodeposition stimulates MPB biomass accumulation. MPB also showed marked seasonal variations in biomass and patch shape, size and degree of aggregation around the oyster reefs. Seasonal variations in biomass, with higher NDVI during spring and autumn, were consistent with those observed on broader scales in other European mudflats. 
    Our study provides the first multi-sensor satellite RS evidence of the promoting and structuring effect of oyster reefs on MPB biofilms.
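    The NDVI at the core of the time series is a simple band combination; a minimal sketch, with illustrative reflectance values and a hypothetical function name:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-12)   # small eps guards against 0/0

# Dense MPB biofilms raise near-infrared reflectance relative to bare mud,
# so NDVI serves as a proxy for biofilm biomass (values are illustrative).
red = np.array([0.10, 0.10])                   # bare mud, biofilm patch
nir = np.array([0.12, 0.30])
vals = ndvi(nir, red)
print(vals)   # the biofilm patch has the higher NDVI
```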

    Hyperspectral video restoration using optical flow and sparse coding

    Hyperspectral video acquisition is a trade-off between spectral and temporal resolution. We present an algorithm for recovering dense hyperspectral video of dynamic scenes from a few measured multispectral bands per frame using optical flow and sparse coding. A different set of bands is measured in each video frame, and optical flow is used to register them. Optical flow errors are corrected by exploiting sparsity in the spectra and the spatial correlation between images of a scene at different wavelengths. A redundant dictionary of atoms is learned that can sparsely approximate training spectra. The restoration of correct spectra is formulated as an ℓ1 convex optimization problem that minimizes a Mahalanobis-like weighted distance between the restored and corrupt signals, as well as between the restored signal and the median of the eight connected neighbours of the corrupt signal, such that the restored signal is a sparse linear combination of the dictionary atoms. Spectral restoration is followed by spatial restoration using a guided dictionary approach, where one dictionary is learned for the measured bands and another for the band that is to be spatially restored. By constraining the sparse coding coefficients of both dictionaries to be the same, the restoration of the corrupt band is guided by the more reliable measured bands. Experiments on real data and comparison with an existing volumetric image denoising technique show the superiority of our algorithm.
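    The ℓ1 restoration step can be illustrated in stripped-down form. The sketch below drops the paper's Mahalanobis-like weights and neighbour-median term, keeping only a plain lasso data term solved by ISTA; all names, sizes, and parameters are assumptions:

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.05, n_iter=200):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by ISTA.

    A simplified stand-in for the paper's weighted objective: a single
    data term, iterated gradient step plus soft thresholding.
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x - step * (D.T @ (D @ x - y))        # gradient step on data term
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrink
    return x

# Toy dictionary of spectra; the corrupt signal lies close to one atom.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
y = D[:, 3] + 0.01 * rng.standard_normal(20)
x = ista_sparse_code(D, y)
restored = D @ x                                  # sparse combination of atoms
```

The recovered code is dominated by the single atom that generated the signal, which is the behaviour the spectral-restoration step relies on.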

    4DGVF-based filtering of vector-valued images

    In this paper, we propose a new method for vector-valued image restoration that reduces noise while simultaneously sharpening vector edges. Our approach is a coupled anisotropic diffusion and shock filtering scheme that exploits a new robust 4DGVF vector field tailored to vector-valued images. The proposed scheme sharpens edges in directions diffused from the entire spatio-spectral information available, with a more precise and more stable sharpening effect over the iterative process. We validate our method on color images as well as on realistic simulations of dynamic PET images.
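    As a point of reference for the diffusion half of the scheme, here is a classical scalar Perona-Malik step. This is a textbook building block, not the 4DGVF method itself (whose vector field is shared across all spectral channels); all names and parameters are illustrative:

```python
import numpy as np

def perona_malik_step(u, dt=0.2, k=0.1):
    """One explicit Perona-Malik anisotropic diffusion step on a 2-D image.

    Smooths where gradients are small and slows diffusion across strong
    edges via the edge-stopping function g.
    """
    g = lambda s: 1.0 / (1.0 + (s / k) ** 2)      # edge-stopping function
    dx = np.diff(u, axis=1, append=u[:, -1:])      # forward differences,
    dy = np.diff(u, axis=0, append=u[-1:, :])      # replicated borders
    fx, fy = g(np.abs(dx)) * dx, g(np.abs(dy)) * dy
    div = (np.diff(fx, axis=1, prepend=fx[:, :1])  # divergence of the flux
           + np.diff(fy, axis=0, prepend=fy[:1, :]))
    return u + dt * div

# Noisy flat patch: repeated steps reduce the noise level.
u = 0.5 + 0.05 * np.random.default_rng(0).standard_normal((32, 32))
v = u.copy()
for _ in range(20):
    v = perona_malik_step(v)
```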

    Large Scale 3D Image Reconstruction in Optical Interferometry

    Astronomical optical interferometers (OI) sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid atmospheric perturbations, the phases of the complex Fourier samples (visibilities) cannot be directly exploited, and instead linear relationships between the phases are used (phase closures and differential phases). Consequently, specific image reconstruction methods have been devised over the last few decades. Modern polychromatic OI instruments are now paving the way to multiwavelength imaging. This paper presents the derivation of a spatio-spectral ("3D") image reconstruction algorithm called PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm is able to solve large-scale problems. It relies on an iterative process that alternates between estimation of polychromatic images and of complex visibilities. The complex visibilities are estimated not only from squared moduli and closure phases, but also from differential phases, which help to better constrain the polychromatic reconstruction. Simulations on synthetic data illustrate the efficiency of the algorithm.
    Comment: EUSIPCO, Aug 2015, Nice, France
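    The phase-closure relation the reconstruction relies on can be checked numerically: per-telescope atmospheric piston errors cancel around a baseline triangle. A minimal sketch with made-up phases:

```python
import numpy as np

def closure_phase(phi12, phi23, phi31):
    """Closure phase over a telescope triangle, wrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * (phi12 + phi23 + phi31)))

# True source phases on the three baselines of the triangle.
rng = np.random.default_rng(1)
src = rng.uniform(-np.pi, np.pi, 3)
# Atmospheric piston errors e_i at each telescope corrupt baseline (i, j)
# as phi_ij = src_ij + e_i - e_j, so they cancel around the triangle.
e = rng.uniform(-np.pi, np.pi, 3)
obs = np.array([src[0] + e[0] - e[1],
                src[1] + e[1] - e[2],
                src[2] + e[2] - e[0]])
cp_obs = closure_phase(*obs)
cp_true = closure_phase(*src)
print(np.isclose(cp_obs, cp_true))   # True: the atmosphere drops out
```

This atmospheric immunity is why the moduli and closure phases are usable observables while the individual Fourier phases are not.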