
    Using multispectral sensor WASP-Lite to analyze harmful algal blooms

    Developing methods to monitor harmful algae is a current research “hot topic.” One type of algae, the blue-green algae or Cyanobacteria, causes blooms that can pose a health threat to humans and animals. This research tests a cost-effective and temporally efficient multispectral remote sensing system, WASP-Lite, as a monitor of algal blooms. This airborne system will be optimized for the specific application of detecting Cyanobacteria in optically complex waters. Attempts have been made in the past, using existing instruments such as SeaWiFS and Landsat, to provide these data, but our solution can provide more information by using optimally selected bands with very high spatial resolution. To analyze these algal blooms, standard multispectral techniques (such as band ratio, spectral curvature, and principal component analysis) were applied to the airborne data. These results were compared with ground truth collected concurrently with the airborne overflight. Because of the very high spatial resolution of the system (0.7 m), compared to many commonly used satellite systems (~30 m to 1 km), it could be seen that the patchiness of the algae was very high. Difficulties in applying the ground truth stemmed both from technical shortcomings and from the nature of the algal blooms. Technical issues include the time lag between the ground sample collection and the airborne collection (the water and algae move with time), the drift of the boat during ground sampling (there was no anchor), and the error in the GPS units in both the boat and the plane. The issues due to the nature of water and algae include sun glint in the imagery, white foam lines created by waves and wind, and, most importantly, the patchiness of the algae in the water. Because the ground truth of one sample point per location was not adequate, we could not correlate the ground truth to the imagery. Qualitatively, the images did show a large variation of algae concentration in the water through the principal component analysis. Further, flow-through data from another vessel, taken during the same week this research was performed, suggest that the variation seen in the imagery is real. Overall, this research shows the difficulties in effectively and accurately performing ground truth measurements to be used to test algorithms and methods applied to detecting harmful algae in remotely sensed data. The traditional ground sampling methods failed to capture the spatial variation observed in the image data. With improved techniques, we are confident these methods can be used to effectively monitor algal blooms using the high spatial and temporal resolution of this airborne system.
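
    As a hedged illustration of the standard multispectral techniques named above, the Python sketch below computes a simple band ratio and a principal component projection on a calibrated image cube. The band indices and the four-band cube are illustrative assumptions, not the actual WASP-Lite band layout.

```python
# Sketch of two standard multispectral bloom-analysis techniques: a band
# ratio and principal component analysis (PCA). Band indices and the
# 4-band cube shape are illustrative assumptions, not the WASP-Lite layout.
import numpy as np

def band_ratio(cube, num_band, den_band):
    """Per-pixel ratio of two bands, e.g. a NIR/red ratio that responds
    to chlorophyll and phycocyanin absorption features."""
    return cube[..., num_band] / (cube[..., den_band] + 1e-9)

def pca_components(cube, n_components=3):
    """Project pixels onto the leading principal components to expose
    spatial variation in algae concentration."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)
    # Eigen-decomposition of the band covariance matrix
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return (X @ eigvecs[:, order]).reshape(rows, cols, n_components)

# Example: synthetic 4-band cube standing in for a calibrated airborne frame
cube = np.random.rand(100, 100, 4)
ratio_map = band_ratio(cube, num_band=3, den_band=2)
pc_maps = pca_components(cube)
```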

    An Analysis of multimodal sensor fusion for target detection in an urban environment

    This work makes a compelling case for simulation as an attractive tool in designing cutting-edge remote sensing systems, capable of generating the sheer volume of data required for a reasonable trade study. The generalized approach presented here allows multimodal system designers to tailor target and sensor parameters to their particular scenarios of interest via synthetic image generation tools, ensuring that resources are best allocated while sensors are still in the design phase. Additionally, sensor operators can use the customizable process showcased here to optimize image collection parameters for existing sensors. In the remote sensing community, polarimetric capabilities are often seen as a tool without a widely accepted mission. This study proposes incorporating a polarimetric and a spectral sensor in a multimodal architecture to improve target detection performance in an urban environment. Two novel multimodal fusion algorithms are proposed: one for the pixel level and another for the decision level. A synthetic urban scene is rendered for 355 unique combinations of illumination condition and sensor viewing geometry with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, and then validated to ensure the presence of enough background clutter. The utility of polarimetric information is shown to vary with the sun-target-sensor geometry, and the decision fusion algorithm is shown to generally outperform the pixel fusion algorithm. The results suggest that polarimetric information may be leveraged to restore the capabilities of a spectral sensor forced to image under less than ideal circumstances.
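
    The abstract does not give the two fusion algorithms themselves; the sketch below only illustrates the general distinction between pixel-level fusion (joining per-pixel features before detection) and decision-level fusion (combining per-modality detection scores afterwards). All arrays, the weighting scheme, and the threshold are illustrative assumptions.

```python
# Minimal sketch contrasting the two fusion levels described above. The
# detectors and the weighting scheme are illustrative stand-ins, not the
# thesis's actual algorithms.
import numpy as np

def pixel_level_fusion(spectral, polarimetric):
    """Fuse at the pixel level: concatenate per-pixel features from both
    modalities, then run a single detector on the joint vector."""
    return np.concatenate([spectral, polarimetric], axis=-1)

def decision_level_fusion(score_spectral, score_polarimetric, w=0.5):
    """Fuse at the decision level: each modality is scored independently
    and the per-pixel confidence maps are combined afterwards."""
    return w * score_spectral + (1.0 - w) * score_polarimetric

# Pixel level: joint feature cube from both modalities
f_spec = np.random.rand(64, 64, 90)   # spectral features per pixel
f_pol = np.random.rand(64, 64, 4)     # polarimetric features per pixel
joint = pixel_level_fusion(f_spec, f_pol)   # (64, 64, 94)

# Decision level: combine scores from two single-modality detectors
s_spec = np.random.rand(64, 64)
s_pol = np.random.rand(64, 64)
fused = decision_level_fusion(s_spec, s_pol, w=0.6)
detections = fused > 0.8  # threshold chosen for illustration
```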

    The Need for Accurate Pre-processing and Data Integration for the Application of Hyperspectral Imaging in Mineral Exploration

    Hyperspectral imaging is a key technology in non-invasive mineral analysis, whether at laboratory scale or as a remote sensing method. Rapid developments in sensor design and computer technology regarding miniaturization, image resolution, and data quality are opening up new fields of application in the exploration of mineral resources, such as drone-based data acquisition or digital outcrop and drill-core mapping. However, generally applicable data processing routines are mostly lacking, which hampers the establishment of these promising approaches. Particular challenges concern the necessary radiometric and geometric data corrections, spatial georeferencing, and integration with other data sources. This thesis describes innovative workflows for solving these problems and demonstrates the importance of the individual steps. It shows the potential of appropriately processed spectral image data for complex tasks in mineral exploration and the geosciences.

    Hyperspectral imaging (HSI) is one of the key technologies in current non-invasive material analysis. Recent developments in sensor design and computer technology allow the acquisition and processing of high spectral and spatial resolution datasets. In contrast to active spectroscopic approaches such as X-ray fluorescence or laser-induced breakdown spectroscopy, passive hyperspectral reflectance measurements in the visible and infrared parts of the electromagnetic spectrum are considered rapid, non-destructive, and safe. Compared to true color or multispectral imagery, a much larger range of substances, and even small compositional changes, can be differentiated and analyzed. Applications of hyperspectral reflectance imaging can be found in a wide range of scientific and industrial fields, especially when physically inaccessible or sensitive samples and processes need to be analyzed. In geosciences, this method offers a possibility to obtain spatially continuous compositional information on samples, outcrops, or regions that might otherwise be inaccessible, or too large, dangerous, or environmentally valuable for a traditional exploration at reasonable expenditure. Depending on the spectral range and resolution of the deployed sensor, HSI can provide information about the distribution of rock-forming and alteration minerals, specific chemical compounds, and ions. Traditional operational applications comprise spaceborne, airborne, and lab-scale measurements with a usually (near-)nadir viewing angle. The diversity of available sensors, in particular the ongoing miniaturization, enables their usage from a wide range of distances and viewing angles on a large variety of platforms. Many recent approaches focus on the application of hyperspectral sensors at an intermediate to close sensor-target distance (one to several hundred meters) between airborne and lab-scale, usually implying exceptional acquisition parameters. These comprise unusual viewing angles, as for the imaging of vertical targets; specific geometric and radiometric distortions associated with the deployment of small moving platforms such as unmanned aerial systems (UAS); and the extreme size and complexity of data created by large imaging campaigns. Accurate geometric and radiometric data corrections using established methods are often not possible.
    Another important challenge results from the overall variety of spatial scales, sensors, and viewing angles, which often impedes a combined interpretation of datasets, such as in a 2D geographic information system (GIS). Recent studies have mostly worked with at least partly uncorrected data, which cannot place the results in a meaningful spatial context. These major unsolved challenges of hyperspectral imaging in mineral exploration motivated this work. The core aim is the development of tools that bridge data acquisition and interpretation by providing full image processing workflows, from the acquisition of raw data in the field or lab to fully corrected, validated, and spatially registered at-target reflectance datasets, which are valuable for subsequent spectral analysis, image classification, or fusion in different operational environments at multiple scales. I focus on promising emerging HSI approaches, i.e.: (1) the use of lightweight UAS platforms; (2) mapping of inaccessible vertical outcrops, sometimes at up to several kilometers distance; (3) multi-sensor integration for versatile sample analysis in the near-field or at lab-scale; and (4) the combination of reflectance HSI with other spectroscopic methods, such as photoluminescence (PL) spectroscopy, for the characterization of valuable elements in low-grade ores. For each topic, the state of the art is analyzed, tailored workflows are developed to meet the key challenges, and the potential of the resulting dataset is showcased on prominent mineral-exploration-related examples. Combined in a Python toolbox, the developed workflows aim to be versatile with regard to the utilized sensors and desired applications.
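
    As one concrete example of a radiometric step that such workflows must cover, the sketch below converts raw sensor counts to at-target reflectance using a dark frame and a white reference panel, a common procedure for close-range and UAS hyperspectral acquisitions. It is a minimal stand-in, not the thesis's actual Python toolbox; all names and values are illustrative.

```python
# Flat-field style radiometric correction: raw digital numbers to
# at-target reflectance via dark current and a white reference panel.
import numpy as np

def to_reflectance(raw, dark, white, panel_reflectance=0.95):
    """Subtract dark current, normalise by the reference panel signal,
    and scale by the panel's known reflectance."""
    signal = raw.astype(np.float64) - dark
    reference = white.astype(np.float64) - dark
    return panel_reflectance * signal / np.maximum(reference, 1e-9)

# raw, dark and white are (rows, cols, bands) cubes from the same acquisition
raw = np.random.randint(100, 4000, (50, 50, 200))
dark = np.random.randint(90, 110, (50, 50, 200))
white = np.random.randint(3500, 4000, (50, 50, 200))
reflectance = to_reflectance(raw, dark, white)
```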

    Illumination Invariant Outdoor Perception

    This thesis proposes the use of a multi-modal sensor approach to achieve illumination invariance in images taken in outdoor environments. The approach is automatic in that it requires no user input for initialisation and does not rely on atmospheric radiative transfer models. While it is common to use pixel colour and intensity as features in high-level vision algorithms, their performance is severely limited by the uncontrolled lighting and complex geometric structure of outdoor scenes. The appearance of a material depends on the incident illumination, which can vary due to spatial and temporal factors. This variability causes identical materials to appear different depending on their location. Illumination-invariant representations of the scene can potentially improve the performance of high-level vision algorithms, as they allow discrimination between pixels to occur based on the underlying material characteristics. The proposed approach to obtaining illumination invariance utilises fused image and geometric data. An approximation of the outdoor illumination is used to derive per-pixel scaling factors. This has the effect of relighting the entire scene using a single illuminant that is common in colour and intensity for all pixels. The approach is extended to radiometric normalisation and the multi-image scenario, so that the resultant dataset is both spatially and temporally illumination invariant. The proposed illumination invariance approach is evaluated on several datasets, and the results show that spatial and temporal invariance can be achieved without loss of spectral dimensionality. The system requires very few tuning parameters, so expert knowledge is not required for its operation. This has potential implications for robotics and remote sensing applications, where perception systems play an integral role in developing a rich understanding of the scene.
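
    A minimal sketch of the per-pixel relighting idea described above: each pixel is divided by an estimate of its incident illumination and rescaled by a single common reference illuminant. How the per-pixel illumination estimate is obtained (in the thesis, from fused image and geometric data) is assumed given here; all arrays are synthetic.

```python
# Per-pixel relighting: remove the spatially varying illuminant and
# re-apply a single common one, so identical materials share an
# appearance across the scene.
import numpy as np

def relight(image, illumination, reference):
    """Scale each pixel by reference / estimated illumination,
    per band, yielding an illumination-invariant representation."""
    scale = reference / np.maximum(illumination, 1e-9)
    return image * scale

# image and per-pixel illumination estimates, plus one global reference
image = np.random.rand(120, 160, 3)
illumination = 0.5 + 0.5 * np.random.rand(120, 160, 3)
reference = illumination.reshape(-1, 3).mean(axis=0)  # common illuminant
invariant = relight(image, illumination, reference)
```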

    Illumination Invariant Deep Learning for Hyperspectral Data

    Motivated by the variability in hyperspectral images due to illumination and the difficulty of acquiring labelled data, this thesis proposes different approaches for learning illumination-invariant feature representations and classification models for hyperspectral data captured outdoors, under natural sunlight. The approaches integrate domain knowledge into learning algorithms and hence do not rely on a priori knowledge of atmospheric parameters, additional sensors, or large amounts of labelled training data. Hyperspectral sensors record rich semantic information from a scene, making them useful for robotics and remote sensing applications where perception systems are used to gain an understanding of the scene. Images recorded by hyperspectral sensors can, however, be affected to varying degrees by intrinsic factors relating to the sensor itself (keystone, smile, noise, particularly at the limits of the sensed spectral range) but also by extrinsic factors such as the way the scene is illuminated. The appearance of the scene in the image is tied to the incident illumination, which depends on variables such as the position of the sun, the geometry of the surface, and the prevailing atmospheric conditions. Effects like shadows can cause the appearance and spectral characteristics of identical materials to differ significantly. This degrades the performance of high-level algorithms that use hyperspectral data, such as those that perform classification and clustering. If sufficient training data are available, learning algorithms such as neural networks can capture variability in the scene appearance and be trained to compensate for it. Learning algorithms are advantageous for this task because they do not require a priori knowledge of the prevailing atmospheric conditions or data from additional sensors. Labelling of hyperspectral data is, however, difficult and time-consuming, so acquiring enough labelled samples for the learning algorithm to adequately capture the scene appearance is challenging. Hence, there is a need for techniques that are invariant to the effects of illumination and do not require large amounts of labelled data. In this thesis, an approach to learning a representation of hyperspectral data that is invariant to the effects of illumination is proposed. This approach combines a physics-based model of the illumination process with an unsupervised deep learning algorithm, and thus requires no labelled data. Datasets that vary both temporally and spatially are used to compare the proposed approach to other similar state-of-the-art techniques. The results show that the learnt representation is more invariant to shadows in the image and to variations in brightness due to changes in the scene topography or the position of the sun in the sky. The results also show that a supervised classifier can predict class labels more accurately and more consistently across time when images are represented using the proposed method. Additionally, this thesis proposes methods to train supervised classification models to be more robust to variations in illumination when only limited amounts of labelled data are available. The transfer of knowledge from well-labelled datasets to poorly labelled datasets for classification is investigated. A method is also proposed for enabling small numbers of labelled samples to capture the variability in spectra across the scene. These samples are then used to train a classifier to be robust to the variability in the data caused by variations in illumination. The results show that these approaches make convolutional neural network classifiers more robust and achieve better performance when there is limited labelled training data. Finally, a case study is presented in which a pipeline incorporating the methods proposed in this thesis for learning robust feature representations and classification models is used to cluster a scene without any labelled data. The results show that the pipeline groups the data into clusters that are consistent with the spatial distribution of the classes in the scene, as determined from ground truth.
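
    To make the unsupervised learning step concrete, the sketch below trains a small autoencoder on unlabelled spectra and uses its hidden code as a learnt per-pixel representation. This is only a generic stand-in under assumed sizes; the thesis's actual architecture and the physics-based illumination model it combines with this step are not reproduced.

```python
# Minimal single-hidden-layer autoencoder trained on unlabelled spectra.
# Architecture, sizes and learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1024, 100))            # unlabelled pixels x spectral bands
n_in, n_hid = X.shape[1], 16

W1 = rng.normal(0, 0.05, (n_in, n_hid))   # encoder weights
W2 = rng.normal(0, 0.05, (n_hid, n_in))   # decoder weights
lr = 1e-2

for epoch in range(200):
    H = np.maximum(X @ W1, 0.0)        # encoder: ReLU hidden code
    X_hat = H @ W2                     # linear decoder
    err = X_hat - X                    # reconstruction error
    # Backpropagate the mean squared reconstruction loss
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (H > 0)
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

codes = np.maximum(X @ W1, 0.0)        # learnt per-pixel representation
```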

    Quantitative mapping of clay minerals using airborne imaging spectroscopy: new data on Mugello (Italy) from SIM-GA prototypal sensor

    The possibility of using high spectral and spatial resolution remote sensing technologies is becoming increasingly important in the monitoring of soil degradation processes. A high spatial resolution hyperspectral dataset was acquired with the airborne Hyper SIM-GA sensor from Selex Galileo, simultaneously with the collection of ground soil spectral signatures and samples. A complete mapping procedure was developed using the 2000–2450 nm spectral region, demonstrating that the 2200 nm absorption band allows reliable maps of the clay content to be obtained. The correlation achieved between the observed and the predicted values is encouraging for the extensive application of this technique in soil conservation planning and protection actions.
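
    A band-depth computation of the kind that underlies such clay maps can be sketched as follows: reflectance at the 2200 nm feature is compared against a straight-line continuum drawn between two shoulder wavelengths. The shoulder positions and the synthetic spectrum below are illustrative assumptions, not the paper's calibration.

```python
# Band depth at the ~2200 nm Al-OH clay absorption feature, measured
# below a linear continuum between two shoulder wavelengths.
import numpy as np

def band_depth(wavelengths, spectrum, left=2120.0, centre=2200.0, right=2250.0):
    """Depth of the absorption feature below a straight-line continuum
    drawn between the two shoulders; deeper implies more clay."""
    r_left = np.interp(left, wavelengths, spectrum)
    r_right = np.interp(right, wavelengths, spectrum)
    # Linear continuum value at the feature centre
    t = (centre - left) / (right - left)
    continuum = (1 - t) * r_left + t * r_right
    r_centre = np.interp(centre, wavelengths, spectrum)
    return 1.0 - r_centre / continuum

# Synthetic reflectance spectrum over the 2000-2450 nm region used above
wl = np.linspace(2000, 2450, 90)
spec = 0.5 - 0.1 * np.exp(-((wl - 2200) ** 2) / (2 * 15 ** 2))  # clay-like dip
print(band_depth(wl, spec))
```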

    High spatial resolution imaging of methane and other trace gases with the airborne Hyperspectral Thermal Emission Spectrometer (HyTES)

    Large uncertainties are currently associated with the attribution and quantification of fugitive emissions of criteria pollutants and greenhouse gases such as methane across large regions and key economic sectors. In this study, data from the airborne Hyperspectral Thermal Emission Spectrometer (HyTES) have been used to develop robust and reliable techniques for the detection and wide-area mapping of emission plumes of methane and other atmospheric trace gas species over challenging and diverse environmental conditions, with a spatial resolution high enough to permit direct attribution to sources. HyTES is a pushbroom imaging spectrometer with high spectral resolution (256 bands from 7.5 to 12 µm), wide swath (1–2 km), and high spatial resolution (∼2 m at 1 km altitude) that incorporates new thermal infrared (TIR) remote sensing technologies. In this study we introduce a hybrid clutter matched filter (CMF) and plume dilation algorithm, applied to HyTES observations to efficiently detect and characterize the spatial structures of individual plumes from CH₄, H₂S, NH₃, NO₂, and SO₂ emitters. The sensitivity and field of regard of HyTES allow rapid and frequent airborne surveys of large areas, including facilities not readily accessible from the surface. The HyTES CMF algorithm produces plume intensity images of methane and other gases from strong emission sources. The combination of high spatial resolution and multi-species imaging capability provides source attribution in complex environments. CMF-based detection of strong emission sources over large areas is a fast and powerful screening tool for focusing more computationally intensive retrieval algorithms that quantify emissions with error estimates, and is useful for expediting mitigation efforts and addressing critical science questions.
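
    A minimal sketch of a clutter matched filter of the kind referenced above: the background mean and covariance are estimated from the scene, and each pixel is scored against a target gas signature. The plume dilation step is omitted, and all inputs, including the gas signature, are synthetic stand-ins rather than the actual HyTES processing.

```python
# Clutter matched filter: score each pixel against a target gas signature
# using background statistics estimated from the scene itself.
import numpy as np

def clutter_matched_filter(cube, target):
    """Score = (x - mu)^T C^-1 t / sqrt(t^T C^-1 t), per pixel."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)  # regularised
    Cinv_t = np.linalg.solve(C, target)
    norm = np.sqrt(target @ Cinv_t)
    scores = (X - mu) @ Cinv_t / norm
    return scores.reshape(rows, cols)

# Synthetic 256-band TIR cube and a made-up gas absorption signature
cube = np.random.rand(40, 40, 256)
target = np.exp(-0.5 * ((np.arange(256) - 120) / 8.0) ** 2)
plume_map = clutter_matched_filter(cube, target)
```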

    Using Lidar to geometrically-constrain signature spaces for physics-based target detection

    A fundamental task when performing target detection on spectral imagery is ensuring that a target signature is in the same metric domain as the measured spectral data set. Remotely sensed data are typically collected in digital counts and calibrated to radiance. That is, calibrated data have units of spectral radiance, while target signatures in the visible regime are commonly characterized in units of reflectance. A necessary precursor to running a target detection algorithm is therefore converting the measured scene data and target signature to the same domain. Atmospheric inversion or compensation is a well-known method for transforming measured scene radiance values into the reflectance domain. While this method may be mathematically trivial, it is computationally attractive and is most effective when illumination conditions are constant across a scene. However, when illumination conditions are not constant for a given scene, significant error may be introduced by applying the same linear inversion globally. In contrast to the inversion methodology, physics-based forward modeling approaches aim to predict the possible ways that a target might appear in a scene using atmospheric and radiometric models. To fully encompass possible target variability due to changing illumination levels, a target vector space is created. In addition to accounting for varying illumination, physics-based modeling approaches have a distinct advantage in that they can also incorporate target variability due to a variety of other sources, including adjacency effects, target orientation, and mixed pixels. Increasing the variability of the target vector space may be beneficial in a global sense, in that it may allow for the detection of difficult targets, such as shadowed or partially concealed ones. However, expansion of the target space may also introduce unnecessary confusion for a given pixel. Furthermore, traditional physics-based approaches make certain assumptions which may be prudent only when passive, spectral data for a scene are available. Common examples include the assumption of a flat ground plane and pure target pixels. Many of these assumptions may be attributed to the lack of three-dimensional (3D) spatial information for the scene. If 3D spatial information were available, certain assumptions could be relaxed, allowing accurate geometric information to be fed to the physics-based model on a pixel-by-pixel basis. Doing so may effectively constrain the physics-based model, resulting in a pixel-specific target space with optimized variability and minimized confusion. This body of work explores using spatial information from a topographic Light Detection and Ranging (Lidar) system as a means to enhance the fidelity of physics-based models for spectral target detection. The incorporation of subpixel spatial information, relative to a hyperspectral image (HSI) pixel, provides valuable insight into plausible geometric configurations of a target, background, and illumination sources within a scene. Methods for estimating local geometry on a per-pixel basis are introduced; this spatial information is then fed into a physics-based model for the forward prediction of a target in radiance space. The target detection performance based on this spatially enhanced spectral target space is assessed relative to current state-of-the-art spectral algorithms.
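
    To illustrate the forward-modelling idea, the sketch below predicts at-sensor radiance signatures for a Lambertian target under a sweep of illumination geometries, building a small target vector space. The simple two-term model (direct solar plus skylight) and every coefficient are illustrative assumptions, not the radiometric model actually used in this work.

```python
# Forward prediction of target radiance signatures under varying geometry,
# building a target vector space for physics-based detection.
import numpy as np

def predict_radiance(reflectance, e_sun, e_sky, tau, cos_theta, sky_fraction):
    """At-sensor radiance for a Lambertian target: direct solar term scaled
    by local geometry plus a skylight term scaled by visible sky fraction."""
    return tau * reflectance / np.pi * (e_sun * cos_theta + e_sky * sky_fraction)

bands = 60
rho = 0.3 * np.ones(bands)                  # target reflectance signature
e_sun, e_sky = np.full(bands, 1.8), np.full(bands, 0.4)
tau = np.full(bands, 0.85)                  # path transmission

# Sweep plausible geometries (e.g. informed per pixel by lidar) to span the space
target_space = np.stack([
    predict_radiance(rho, e_sun, e_sky, tau, ct, sf)
    for ct in (0.2, 0.6, 1.0)               # cos of solar incidence angle
    for sf in (0.3, 0.7, 1.0)               # fraction of sky hemisphere visible
])
print(target_space.shape)  # (9, 60) candidate radiance vectors
```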

    High-Resolution and Hyperspectral Data Fusion for Classification
