
    Detection of Marine Plastic Debris in the North Pacific Ocean using Optical Satellite Imagery

    Plastic pollution is ubiquitous across marine environments, yet detection of anthropogenic debris in the global oceans is in its infancy. Here, we exploit high-resolution multispectral satellite imagery over the North Pacific Ocean and information from GPS-tracked floating plastic conglomerates to explore the potential for detecting marine plastic debris via spaceborne remote sensing platforms. Through an innovative method of estimating material abundance in mixed pixels, combined with an inverse spectral unmixing calculation, a spectral signature of aggregated plastic litter was derived from an 8-band WorldView-2 image. By leveraging the spectral characteristics of marine plastic debris in a real environment, plastic detectability was demonstrated and evaluated utilising a Spectral Angle Mapper (SAM) classification, Mixture Tuned Matched Filtering (MTMF), the Reed-Xiaoli Detector (RXD) algorithm, and spectral indices in a three-variable feature space. Results indicate that floating aggregations are detectable on sub-pixel scales, but as reliable ground truth information was restricted to a single confirmed target, detections could only be validated by means of their respective spectral responses. Effects of atmospheric correction algorithms were evaluated using ACOLITE, ACOMP, and FLAASH, with derived unbiased percentage differences ranging from 1% to 81% in a pairwise comparison. Building first steps towards an integrated marine monitoring system, the strengths and limitations of current remote sensing technology are identified and used to make suggestions for future improvements.
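
    As an illustration of the detection step named above, the following is a minimal sketch of a Spectral Angle Mapper (SAM) match against a reference spectrum. It is not the authors' processing chain; the array shapes, the plastic end-member spectrum, and the angle threshold are assumptions for illustration only.

```python
# Hedged sketch: per-pixel spectral angle to a reference spectrum (SAM).
# The reference spectrum and threshold are placeholders, not values from the paper.
import numpy as np

def spectral_angle_map(cube, target):
    """cube: (rows, cols, bands) reflectance image, e.g. 8 WorldView-2 bands.
    target: (bands,) reference spectrum of aggregated plastic litter.
    Returns the per-pixel spectral angle in radians."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    cos = pixels @ target / np.maximum(
        np.linalg.norm(pixels, axis=1) * np.linalg.norm(target), 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

# Pixels whose angle falls below a scene-dependent threshold would be flagged
# as candidate plastic aggregations, e.g.:
# candidates = spectral_angle_map(cube, plastic_spectrum) < 0.10
```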

    System Design Considerations for a Low-Intensity Hyperspectral Imager of Sensitive Cultural Heritage Manuscripts

    Cultural heritage imaging is becoming more common with the increased availability of more complex imaging systems, including multi- and hyperspectral imaging (MSI and HSI) systems. A particular concern with HSI systems is the broadband source required, which regularly includes infrared and ultraviolet spectra and may cause fading or damage to a target. Guidelines for illumination of such objects, even while on display at a museum, vary widely from one another. Standards must be followed to reassure the curator so that imaging is permitted and to ensure protection of the document. Building trust in the cultural heritage community is key to gaining access to objects of significant import, thus allowing scientists, historians, and the public to view digitally preserved representations of the object, and allowing further discovery of the object through spectral processing and analysis. Imaging was conducted with a light level of 270 lux at variable ground sample distances (GSDs). The light level was chosen to maintain a total dose similar to an hour’s display time at a museum, based on the United Kingdom standard for cultural heritage display, PAS 198:2012. The GSD was varied to increase the signal-to-noise ratio (SNR) or decrease the total illumination time on a target. This adjustment was performed both digitally and physically and typically results in a decrease in image quality, as the spatial resolution of the image decreases. However, a technique called “panchromatic sharpening” was used to recover some of the spatial resolution. This method fuses a panchromatic image with good spatial resolution and a spectral image (either MSI or HSI) with poorer spatial resolution to construct a derivative spectral image with improved spatial resolution. Detector systems and additional methods of data capture to assist in processing of cultural heritage documents are investigated, with specific focus on preserving the physical condition of the potentially sensitive documents.
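
    Since the abstract does not name the fusion algorithm, the sketch below shows one common form of panchromatic sharpening (a Brovey-style ratio fusion) purely to illustrate the idea of redistributing high-resolution panchromatic intensity over upsampled spectral bands. The function name, band ordering, and scale handling are assumptions, not the system described here.

```python
# Hedged sketch of Brovey-style panchromatic sharpening (illustrative only).
import numpy as np
from scipy.ndimage import zoom

def brovey_sharpen(ms_cube, pan, scale):
    """ms_cube: (rows, cols, bands) low-resolution spectral image (MSI or HSI).
    pan: (rows*scale, cols*scale) high-resolution panchromatic image.
    scale: integer resolution ratio between the panchromatic and spectral images."""
    # Upsample every spectral band to the panchromatic grid.
    ms_up = zoom(ms_cube.astype(float), (scale, scale, 1), order=1)
    # Per-pixel intensity of the upsampled spectral bands.
    intensity = ms_up.mean(axis=2)
    # Rescale each band by the ratio of panchromatic to spectral intensity.
    ratio = pan / np.maximum(intensity, 1e-6)
    return ms_up * ratio[..., None]
```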

    Radiometrically-Accurate Hyperspectral Data Sharpening

    Improving the spatial resolution of hyperspectral images (HSI) has traditionally been an important topic in the field of remote sensing. Many approaches have been proposed based on various theories, including component substitution, multiresolution analysis, spectral unmixing, Bayesian probability, and tensor representation. However, these methods share some common disadvantages: they are not robust to different up-scale ratios, and they pay little attention to the per-pixel radiometric accuracy of the sharpened image. Moreover, many learning-based methods have been proposed through decades of innovation, but most of them require a large set of training pairs, which is impractical for many real problems. To solve these problems, we first proposed an unsupervised Laplacian Pyramid Fusion Network (LPFNet) to generate a radiometrically-accurate high-resolution HSI. First, with the low-resolution hyperspectral image (LR-HSI) and the high-resolution multispectral image (HR-MSI), the preliminary high-resolution hyperspectral image (HR-HSI) is calculated via linear regression. Next, the high-frequency details of the preliminary HR-HSI are estimated by subtracting from it a blurry version generated by a CNN. By injecting the details into the output of the generative CNN, which takes the LR-HSI as input, the final HR-HSI is obtained. LPFNet is designed to fuse an LR-HSI and an HR-MSI covering the same Visible-Near-Infrared (VNIR) bands, while the short-wave infrared (SWIR) bands of the HSI are ignored. SWIR bands are as important as VNIR bands, but their spatial details are more challenging to enhance because the HR-MSI, which provides the spatial details in the fusion process, usually has no SWIR coverage or only lower-spatial-resolution SWIR bands. To this end, we designed an unsupervised cascade fusion network (UCFNet) to sharpen the Vis-NIR-SWIR LR-HSI. First, the preliminary high-resolution VNIR hyperspectral image (HR-VNIR-HSI) is obtained with a conventional hyperspectral algorithm. Then, the HR-MSI, the preliminary HR-VNIR-HSI, and the LR-SWIR-HSI are passed to a generative convolutional neural network to produce an HR-HSI. In the training process, a cascade sharpening method is employed to improve stability. Furthermore, a self-supervised loss is introduced based on the cascade strategy to further improve spectral accuracy. Experiments are conducted on both LPFNet and UCFNet with different datasets and up-scale ratios, and state-of-the-art baseline methods are implemented and compared with the proposed methods using different quantitative metrics. Results demonstrate that the proposed methods outperform the competitors in all cases in terms of spectral and spatial accuracy.
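
    The two generic steps named in the abstract, a linear-regression preliminary estimate and high-frequency detail injection, can be sketched as below. This is a simplified stand-in, not the LPFNet implementation: the regression setup, the Gaussian blur used in place of the CNN-generated blurry image, and all shapes are assumptions.

```python
# Hedged sketch of (1) a per-pixel linear regression from MSI to HSI bands and
# (2) detail injection of the high-frequency residual. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def preliminary_hr_hsi(hr_msi, lr_msi, lr_hsi):
    """Fit HSI bands as a linear function of MSI bands at low resolution,
    then apply the fitted mapping to the high-resolution MSI.
    lr_msi is assumed to be the HR-MSI degraded to the LR-HSI grid."""
    X = lr_msi.reshape(-1, lr_msi.shape[-1])      # (pixels, msi_bands)
    Y = lr_hsi.reshape(-1, lr_hsi.shape[-1])      # (pixels, hsi_bands)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)     # (msi_bands, hsi_bands)
    hr = hr_msi.reshape(-1, hr_msi.shape[-1]) @ W
    return hr.reshape(hr_msi.shape[:2] + (Y.shape[-1],))

def inject_details(preliminary, generator_output, sigma=2.0):
    """Add the high-frequency residual of the preliminary HR-HSI to a blurry
    generator output (a Gaussian blur stands in for the CNN-generated version)."""
    blurred = gaussian_filter(preliminary, sigma=(sigma, sigma, 0))
    return generator_output + (preliminary - blurred)
```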

    Wine tasting: a neurophysiological measure of taste and olfaction interaction in the experience

    In recent years, evidence has been provided of sensory–sensory connectivity and of the influence of one modality over the primary sensory cortex of another, a phenomenon called crossmodality. Typically, in wine tasting, sommeliers complement gustation (introducing the wine into the mouth) with stimulation of the olfactory system, both through direct olfactory stimulation (by the nose) and through the retro-nasal pathway (inhaling air while swirling the wine around in the mouth). The aim of the present study was to investigate the reaction to wine gustation with and without the direct olfactory contribution, through an electroencephalographic index of approach or withdrawal (AW) motivation and an autonomic index (Emotional Index, EI), derived from matching heart rate and galvanic skin response activity and considered an indicator of emotional involvement. Results showed a statistically significant increase in EI values during wine tasting with the olfactory component (p<0.01) compared with tasting without the direct olfactory contribution, and a trend towards a greater approach attitude was reported for the same condition. The data suggest an interaction of the two sensory modalities that influences the emotional and cognitive aspects of the wine tasting experience in a non-expert sample.

    An Unsupervised Algorithm for Change Detection in Hyperspectral Remote Sensing Data Using Synthetically Fused Images and Derivative Spectral Profiles

    Multitemporal hyperspectral remote sensing data have the potential to detect altered areas on the earth’s surface. However, dissimilar radiometric and geometric properties between the multitemporal data, due to the acquisition time or position of the sensors, must be resolved before hyperspectral imagery can be used to detect changes in natural and human-impacted areas. In addition, noise in the hyperspectral spectra decreases change-detection accuracy when general change-detection algorithms are applied to hyperspectral images. To address these problems, we present an unsupervised change-detection algorithm based on statistical analyses of spectral profiles; the profiles are generated from a synthetic image fusion method for multitemporal hyperspectral images. This method aims to minimize the noise between spectra at identical positions, thereby increasing the change-detection rate and decreasing the false-alarm rate without reducing the dimensionality of the original hyperspectral data. Through a quantitative comparison on an actual dataset acquired by airborne hyperspectral sensors, we demonstrate that the proposed method provides superior change-detection results relative to state-of-the-art unsupervised change-detection algorithms.
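
    For readers unfamiliar with unsupervised change detection on full-dimensional spectra, the following sketch thresholds the per-pixel change-vector magnitude between two co-registered acquisitions. It is a generic baseline, not the fusion-based algorithm proposed in the paper, and the statistical threshold is an assumption.

```python
# Hedged sketch: generic change-vector-magnitude detection between two dates.
import numpy as np

def change_map(cube_t1, cube_t2, k=2.0):
    """cube_t1, cube_t2: (rows, cols, bands) co-registered hyperspectral images.
    Returns a boolean change map where the spectral change magnitude exceeds
    mean + k * std (a simple, scene-dependent statistical threshold)."""
    diff = cube_t1.astype(float) - cube_t2.astype(float)
    magnitude = np.linalg.norm(diff, axis=-1)   # per-pixel spectral change magnitude
    threshold = magnitude.mean() + k * magnitude.std()
    return magnitude > threshold
```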

    W-NetPan: Double-U network for inter-sensor self-supervised pan-sharpening

    The increasing availability of remote sensing data makes it possible to address spatial-spectral limitations by means of pan-sharpening methods. However, fusing inter-sensor data poses important challenges, in terms of resolution differences, sensor-dependent deformations, and ground-truth data availability, which demand more accurate pan-sharpening solutions. In response, this paper proposes a novel deep learning-based pan-sharpening model termed the double-U network for self-supervised pan-sharpening (W-NetPan). In more detail, the proposed architecture adopts an innovative W-shape that integrates two U-Net segments that work sequentially to spatially match and fuse inter-sensor multi-modal data. In this way, a synergistic effect is produced in which the first segment resolves inter-sensor deviations while stimulating the second to achieve more accurate data fusion. Additionally, a joint loss formulation is proposed for effectively training the proposed model without external data supervision. The experimental comparison, conducted over four coupled Sentinel-2 and Sentinel-3 datasets, reveals the advantages of W-NetPan with respect to several of the most important state-of-the-art pan-sharpening methods in the literature. The code related to this paper will be available at https://github.com/rufernan/WNetPan
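
    As a rough illustration of the W-shaped idea (two U-like segments chained so that the first aligns the inter-sensor inputs and the second fuses them), a minimal PyTorch sketch follows. Channel counts, depths, and layer choices are placeholders; the authors' actual implementation is the code at the linked GitHub repository.

```python
# Hedged sketch of a double-U composition: an alignment segment followed by a
# fusion segment. Illustrative only; not the released W-NetPan code.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """A very small encoder-decoder standing in for a U-Net segment."""
    def __init__(self, in_ch, out_ch, width=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(width, width, 2, stride=2), nn.ReLU(),
            nn.Conv2d(width, out_ch, 3, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x))

class DoubleU(nn.Module):
    """First segment: spatially match the (upsampled) coarse bands to the
    fine-resolution input; second segment: fuse the matched bands with it."""
    def __init__(self, coarse_ch, fine_ch):
        super().__init__()
        self.align = TinyUNet(coarse_ch + fine_ch, coarse_ch)
        self.fuse = TinyUNet(coarse_ch + fine_ch, coarse_ch)

    def forward(self, coarse_up, fine):
        aligned = self.align(torch.cat([coarse_up, fine], dim=1))
        return self.fuse(torch.cat([aligned, fine], dim=1))

# A self-supervised loss could compare a spatially degraded version of the
# output against the original coarse input, avoiding the need for a
# high-resolution ground truth.
```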