373 research outputs found

    Quantitative Mapping of Soil Property Based on Laboratory and Airborne Hyperspectral Data Using Machine Learning

    Get PDF
    Soil visible and near-infrared spectroscopy provides a non-destructive, rapid and low-cost approach to quantifying various soil physical and chemical properties based on their reflectance in the spectral range of 400–2500 nm. With an increasing number of large-scale soil spectral libraries established across the world and new space-borne hyperspectral sensors, there is a need to explore methods to extract informative features from reflectance spectra and produce accurate soil spectroscopic models using machine learning. Features generated from regional or large-scale soil spectral data play a key role in quantitative spectroscopic models for soil properties. In this study, the Land Use/Land Cover Area Frame Survey (LUCAS) soil library was used to explore PLS-derived components and fractal features generated from soil spectra. The gradient-boosting method performed well when coupled with the extracted features for the estimation of several soil properties. Transfer learning based on convolutional neural networks (CNNs) was proposed to make the model developed from laboratory data transferable to airborne hyperspectral data. A soil clay map was successfully derived using HyMap imagery and the fine-tuned CNN model developed from LUCAS mineral soils, as deep learning has the potential to learn transferable features that generalise from the source domain to the target domain. External environmental factors, such as the presence of vegetation, restrain the application of imaging spectroscopy. The reflectance data can be transformed into a vegetation-suppressed domain with a forced invariance approach, the performance of which was evaluated in an agricultural area using CASI airborne hyperspectral data. 
However, the relationship between vegetation and acquired spectra is complicated, and more effort should be put into removing the effects of external factors to make the model transferable from one sensor to another.

    New understanding of gas hydrate phenomena and natural inhibitors in crude oil systems through mass spectrometry and machine learning

    Get PDF
    Gas hydrates represent one of the main flow assurance issues in the oil and gas industry, as they can cause complete blockage of pipelines and process equipment, forcing shutdowns. Previous studies have shown that some crude oils form hydrates that do not agglomerate or deposit, but remain as transportable dispersions. This is commonly believed to be due to naturally occurring components present in the crude oil; however, despite decades of research, their exact structures have not yet been determined. Some studies have suggested that these components are present in the acid fractions of the oils or are related to the asphaltene content of the oils. Crude oils are among the world's most complex organic mixtures and can contain up to 100 000 different constituents, making them difficult to characterise using traditional mass spectrometers. The high mass accuracy of Fourier Transform Ion Cyclotron Resonance Mass Spectrometry (FT-ICR MS) yields a resolution greater than that of traditional techniques, making FT-ICR MS able to characterise crude oils to a greater extent and possibly identify hydrate-active components. FT-ICR MS spectra usually contain tens of thousands of peaks, and data treatment methods able to find underlying relationships in big data sets are required. Machine learning and multivariate statistics include many methods suitable for big data. A literature review identified a number of promising methods, and the current status of the use of machine learning for the analysis of gas hydrates and FT-ICR MS data was analysed. The literature study revealed that although many studies have used machine learning to predict thermodynamic properties of gas hydrates, very little work has been done on analysing gas hydrate related samples measured by FT-ICR MS. In order to aid their identification, a successive accumulation procedure for increasing the concentrations of hydrate-active components was developed by SINTEF. 
Comparison of the mass spectra from spiked and unspiked samples revealed some peaks whose intensities increased with the spiking levels. Several classification methods were used in combination with variable selection, and peaks related to hydrate formation were identified. The corresponding molecular formulas were determined, and the peaks were assumed to be related to asphaltenes, naphthenes and polyethylene glycol. To aid the characterisation of the oils, infrared spectroscopy (both Fourier transform infrared and near-infrared) was combined with FT-ICR MS in a multiblock analysis to predict the density of crude oils. Two different strategies for data fusion were attempted, and sequential fusion of the blocks achieved the highest prediction accuracy both before and after reducing the dimensions of the data sets by variable selection. As crude oils have such complex matrices, samples are often very different, and many methods are not able to handle high degrees of variation or non-linearities between the samples. Hierarchical cluster-based partial least squares regression (HC-PLSR) clusters the data and builds local models within each cluster. HC-PLSR can thus handle non-linearities between clusters, but as PLSR is a linear model, the data is still required to be locally linear. HC-PLSR was therefore extended with deep learning (HC-CNN and HC-RNN) and support vector regression (HC-SVR). The deep learning-based models outperformed HC-PLSR for a data set predicting average molecular weights from hydrolysed raw materials. The analysis of the FT-ICR MS spectra revealed that the large amounts of information contained in the data (due to the high resolution) can disturb the predictive models, but the use of variable selection counteracts this effect. 
Several methods from machine learning and multivariate statistics were proven valuable for the prediction of various parameters from FT-ICR MS data using both classification and regression methods.

    Utilizing the SHAP framework to bypass intrusion detection systems

    Get PDF
    The number of people connected to the internet is swiftly growing, and technology is increasingly integrated into our daily lives. With this increase, there is a surge of attacks on the digital infrastructure. It is of great importance to understand how we can analyze and mitigate attacks to ensure the availability of the services we depend on. The purpose of this study is twofold. The first aim is to evaluate different machine learning models in intrusion detection systems. We measured their performance on distributed denial-of-service (DDoS) attacks and explained them using SHAP values. Second, using the SHAP values, we identified the most important features and generated multiple variations of the same attacks to see how the different models reacted. Ultimately, we found that SHAP values have great potential as a base for generating more sophisticated attacks. In turn, the modified attacks were able to bypass intrusion detection systems.
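A minimal sketch of the attack-generation idea: rank the features driving the detector's decision, then perturb the top-ranked features of attack samples toward benign values. Here impurity-based feature importances stand in for the SHAP values used in the study, and the traffic features are entirely synthetic.

```python
# Hedged sketch (not the thesis code): feature-attribution-guided evasion
# of an IDS model. Impurity importances substitute for SHAP values here;
# the "traffic" data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X_benign = rng.normal(0.0, 1.0, size=(500, 10))
X_attack = rng.normal(0.0, 1.0, size=(500, 10))
X_attack[:, 0] += 4.0                       # e.g. an abnormally high packet rate
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 500 + [1] * 500)         # 0 = benign, 1 = attack

ids = RandomForestClassifier(n_estimators=100, random_state=2).fit(X, y)

# Rank features by importance (a stand-in for a SHAP-value ranking).
top = np.argsort(ids.feature_importances_)[::-1][:1]

# Craft modified attacks: pull the top feature back into the benign range.
X_mod = X_attack.copy()
X_mod[:, top] = X_benign[:, top]
evasion_rate = float(np.mean(ids.predict(X_mod) == 0))
print(round(evasion_rate, 2))
```

With a real detector, a TreeExplainer-style SHAP ranking would replace the impurity importances, and the perturbations would have to respect protocol constraints to remain valid traffic.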

    A review of machine learning applications in wildfire science and management

    Full text link
    Artificial intelligence has been applied in wildfire science and management since the 1990s, with early applications including neural networks and expert systems. Since then, the field has progressed rapidly alongside the wide adoption of machine learning (ML) in the environmental sciences. Here, we present a scoping review of ML in wildfire science and management. Our objective is to improve awareness of ML among wildfire scientists and managers, as well as illustrate the challenging range of problems in wildfire science available to data scientists. We first present an overview of popular ML approaches used in wildfire science to date, and then review their use in wildfire science within six problem domains: 1) fuels characterization, fire detection, and mapping; 2) fire weather and climate change; 3) fire occurrence, susceptibility, and risk; 4) fire behavior prediction; 5) fire effects; and 6) fire management. We also discuss the advantages and limitations of various ML approaches and identify opportunities for future advances in wildfire science and management within a data science context. We identified 298 relevant publications, where the most frequently used ML methods included random forests, MaxEnt, artificial neural networks, decision trees, support vector machines, and genetic algorithms. There exist opportunities to apply more current ML methods (e.g., deep learning and agent-based learning) in wildfire science. However, despite the ability of ML models to learn on their own, expertise in wildfire science is necessary to ensure realistic modelling of fire processes across multiple scales, while the complexity of some ML methods requires sophisticated knowledge for their application. Finally, we stress that the wildfire research and management community must play an active role in providing relevant, high-quality data for use by practitioners of ML methods.Comment: 83 pages, 4 figures, 3 tables
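As a concrete illustration of the method the review finds most frequently used, a random-forest fire-occurrence model might be sketched as follows; the fire-weather predictors and ignition labels here are entirely synthetic, and the variable choices are assumptions for illustration only.

```python
# Hedged sketch: a random-forest fire-occurrence classifier of the kind
# the review reports as most frequently used. Predictors and labels are
# synthetic stand-ins for real fire-weather observations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 1000
temp = rng.uniform(0, 40, n)        # air temperature, deg C
rh = rng.uniform(10, 100, n)        # relative humidity, %
wind = rng.uniform(0, 60, n)        # wind speed, km/h

# Synthetic rule: hot, dry, windy days are more likely to see ignition.
p_fire = 1.0 / (1.0 + np.exp(-(0.15 * temp - 0.08 * rh + 0.05 * wind)))
fire = rng.random(n) < p_fire

X = np.column_stack([temp, rh, wind])
X_tr, X_te, y_tr, y_te = train_test_split(X, fire, random_state=5)
rf = RandomForestClassifier(n_estimators=200, random_state=5).fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)          # held-out accuracy
print(round(acc, 2))
```

Real applications would add fuels, topography and ignition-source covariates, and would validate spatially rather than with a random split, since fire observations are strongly autocorrelated in space and time.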

    Defect detection in infrared thermography by deep learning algorithms

    Get PDF
    Non-destructive evaluation (NDE) is a field concerned with identifying all types of structural damage in an object of interest without causing any permanent damage or modification, and it has been intensively investigated for many years. Infrared thermography (IRT) is an NDE technology that inspects, characterises and analyses defects based on infrared image sequences, recorded from the emission and reflection of infrared light, to evaluate non-self-heating objects for quality control and safety assurance. In recent years, the deep learning field of artificial intelligence has made remarkable progress in image processing applications and has shown its ability to overcome most of the disadvantages of previously existing approaches in a great number of applications. However, owing to insufficient training data, deep learning algorithms remain largely unexplored in this area, and only a few publications report their application to thermography non-destructive evaluation (TNDE). Intelligent, highly automated deep learning algorithms could be coupled with infrared thermography to identify defects (damage) in composites, steel, etc. with high confidence and accuracy. Among the topics in the TNDE research field, supervised and unsupervised machine learning techniques pose the most innovative and challenging tasks for defect detection analysis. In this project, we construct integrated frameworks for processing raw infrared thermography data using deep learning algorithms; the highlights of the proposed methodologies are as follows:
1. Automatic defect identification and segmentation by deep learning algorithms in infrared thermography. Pre-trained convolutional neural networks (CNNs) are introduced to capture defect features in infrared thermal images, and CNN-based models are implemented for the detection of structural defects in samples made of composite materials (fault diagnosis). Several alternative deep CNNs are compared on automatic defect detection and segmentation performance: (i) instance segmentation (Center-mask; Mask-RCNN); (ii) object detection (Yolo-v3; Faster-RCNN); (iii) semantic segmentation (Unet; Res-unet).
2. A data augmentation technique based on synthetic data generation, which reduces the high cost of collecting original infrared data on composites (aircraft components) and enriches the training data for feature learning in TNDE.
3. Generative adversarial networks (deep convolutional GAN and Wasserstein GAN) combined with partial least squares thermography (PLST) (the PLS-GANs network) for extracting visible defect features and enhancing defect visibility by removing noise in pulsed thermography.
4. Automatic defect depth estimation (a characterisation issue) from simulated infrared data using a simplified recurrent neural network, the Gated Recurrent Unit (GRU), through regression-based supervised learning.
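The depth-estimation step hinges on running a recurrent cell over each pixel's thermal time series. Below is a minimal NumPy sketch of a GRU forward pass with randomly initialised weights feeding a linear depth head; in practice the weights are learned by supervised regression, and all sizes here are illustrative assumptions.

```python
# Minimal NumPy forward pass of a GRU cell over a per-pixel thermal
# cooling sequence, ending in a linear regression head for defect depth.
# Weights are random placeholders; real ones come from supervised training.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(3)
n_in, n_hid, T = 1, 16, 50           # 1 thermal value per frame, 50 frames

# Randomly initialised GRU parameters (update gate z, reset gate r, candidate h).
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(n_hid, n_in)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(scale=0.1, size=(n_hid, n_hid)) for _ in range(3))
bz = br = bh = np.zeros(n_hid)
w_out = rng.normal(scale=0.1, size=n_hid)   # linear head: hidden state -> depth

def gru_depth(seq):
    """Run the GRU over one pixel's thermal sequence; return a depth estimate."""
    h = np.zeros(n_hid)
    for x in seq:
        x = np.atleast_1d(x)
        z = sigmoid(Wz @ x + Uz @ h + bz)            # update gate
        r = sigmoid(Wr @ x + Ur @ h + br)            # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
        h = (1 - z) * h + z * h_tilde
    return float(w_out @ h)

cooling_curve = np.exp(-np.linspace(0, 5, T))   # synthetic pulsed-thermography decay
depth = gru_depth(cooling_curve)
print(np.isfinite(depth))
```

The single recurrence makes the "simplified" nature of the GRU visible: two gates and one candidate state, versus the three gates and separate cell state of an LSTM.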

    Forecasting solar photosynthetic photon flux density under cloud cover effects: novel predictive model using convolutional neural network integrated with long short-term memory network

    Get PDF
    Forecast models of solar radiation incorporating cloud effects are useful tools to evaluate the impact of the stochastic behaviour of cloud movement, for real-time integration of photovoltaic energy in power grids, for skin cancer and eye disease risk minimisation through solar ultraviolet (UV) index prediction, and for bio-photosynthetic processes through the modelling of solar photosynthetic photon flux density (PPFD). This research developed a deep learning hybrid model (CNN-LSTM, hereafter CLSTM) that factors in the role of cloud effects, integrating the merits of convolutional neural networks with long short-term memory networks to forecast near real-time (i.e., 5-min) PPFD in a sub-tropical region of Queensland, Australia. The prescribed CLSTM model is trained with real-time sky images that depict stochastic cloud movements, captured through a total sky imager (TSI-440) using advanced sky image segmentation that reduces cloud chromatic features to statistical values, purposely factoring in cloud variation to optimise the CLSTM model. The model and its competing algorithms (CNN, LSTM, deep neural network, extreme learning machine and multivariate adaptive regression splines) are trained with 17 distinct cloud cover inputs considering the chromaticity of red, blue, thin and opaque cloud statistics, supplemented by the solar zenith angle (SZA), to predict short-term PPFD. The models developed with cloud inputs yield accurate results, outperforming the SZA-based models, while the best testing performance is recorded by the objective method (i.e., CLSTM) tested over a 7-day measurement period. 
Specifically, CLSTM yields a testing performance with correlation coefficient r = 0.92, root mean square error RMSE = 210.31 μmol photons m−2 s−1, mean absolute error MAE = 150.24 μmol photons m−2 s−1, relative errors RRMSE = 24.92% and MAPE = 38.01%, Nash-Sutcliffe coefficient ENS = 0.85, and Legates and McCabe's index LM = 0.68, using cloud cover in addition to the SZA as an input. The study shows the importance of cloud inclusion in forecasting solar radiation and evaluating risk, with practical implications for monitoring solar energy, greenhouses and high-value agricultural operations affected by the stochastic behaviour of clouds. Additional methodological refinements, such as retraining the CLSTM model for hourly and seasonal time scales, may aid in the promotion of agricultural crop farming and environmental risk evaluation applications such as predicting the solar UV index and direct normal solar irradiance for renewable energy monitoring systems.
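For reference, the skill scores quoted above follow the standard definitions, written out below; the observed and simulated series are synthetic stand-ins for the measured and CLSTM-forecast PPFD.

```python
# Standard forecast skill scores: RMSE, MAE, relative RMSE, the
# Nash-Sutcliffe efficiency, and the Legates-McCabe index.
# obs/sim below are synthetic stand-ins for measured vs forecast PPFD.
import numpy as np

def skill_scores(obs, sim):
    err = sim - obs
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    rrmse = 100.0 * rmse / float(np.mean(obs))                          # relative RMSE, %
    ens = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)      # Nash-Sutcliffe
    lm = 1.0 - np.sum(np.abs(err)) / np.sum(np.abs(obs - obs.mean()))   # Legates-McCabe
    return rmse, mae, rrmse, float(ens), float(lm)

rng = np.random.default_rng(4)
obs = 1000.0 + 200.0 * rng.standard_normal(500)   # synthetic PPFD, umol m-2 s-1
sim = obs + 50.0 * rng.standard_normal(500)       # a forecast with some error
print([round(v, 2) for v in skill_scores(obs, sim)])
```

Both ENS and LM equal 1 for a perfect forecast and drop toward (or below) 0 as the forecast degrades to no better than the observed mean; LM penalises large errors less heavily than ENS because it uses absolute rather than squared deviations.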

    Deep architectures for feature extraction and generative modeling

    Get PDF
    • …