    Image Processing and Machine Learning for Hyperspectral Unmixing: An Overview and the HySUPP Python Package

    Spectral pixels are often a mixture of the pure spectra of the materials, called endmembers, due to the low spatial resolution of hyperspectral sensors, double scattering, and intimate mixtures of materials in the scene. Unmixing estimates the fractional abundances of the endmembers within each pixel. Depending on the prior knowledge of endmembers, linear unmixing can be divided into three main groups: supervised, semi-supervised, and unsupervised (blind) linear unmixing. Advances in image processing and machine learning have substantially affected unmixing. This paper provides an overview of advanced and conventional unmixing approaches. Additionally, we draw a critical comparison between advanced and conventional techniques from the three categories. We compare the performance of the unmixing techniques on three simulated and two real datasets. The experimental results reveal the advantages of different unmixing categories for different unmixing scenarios. Moreover, we provide an open-source Python-based package, available at https://github.com/BehnoodRasti/HySUPP, to reproduce the results.
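
    To make the setting concrete, here is a minimal, hedged sketch of the supervised (known-endmember) linear mixing model the abstract refers to, written with generic NumPy/SciPy rather than the HySUPP API; the endmember matrix, noise level, and sizes are synthetic placeholders.

```python
# Minimal sketch of supervised linear unmixing (illustrative only, not the HySUPP API):
# each pixel is modelled as x ~ E @ a with non-negative fractional abundances a,
# estimated per pixel with non-negative least squares (sum-to-one omitted for brevity).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_endmembers, n_pixels = 50, 3, 100

E = rng.random((n_bands, n_endmembers))                            # known endmember spectra (columns)
A_true = rng.dirichlet(np.ones(n_endmembers), n_pixels).T          # true fractional abundances
X = E @ A_true + 0.01 * rng.standard_normal((n_bands, n_pixels))   # noisy mixed pixels

A_hat = np.column_stack([nnls(E, X[:, i])[0] for i in range(n_pixels)])
print("mean absolute abundance error:", np.abs(A_hat - A_true).mean())
```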

    Application of machine learning techniques to weather forecasting

    Weather forecasting is, still today, a human-based activity. Although computer simulations play a major role in modelling the state and evolution of the atmosphere, there is a lack of methodologies to automate the interpretation of the information generated by these models. This doctoral thesis explores the use of machine learning methodologies to solve specific problems in meteorology, with a particular focus on methodologies to improve the accuracy of numerical weather prediction models. The work presented in this manuscript contains two different approaches using machine learning. In the first part, classical methodologies, such as multivariate non-parametric regression and binary trees, are explored to perform regression on meteorological data. This first part focuses in particular on forecasting wind, where the circular nature of this variable opens interesting challenges for classic machine learning algorithms and techniques. The second part of this thesis explores the analysis of weather data as a generic structured prediction problem using deep neural networks. Neural networks, such as convolutional and recurrent networks, provide a method for capturing the spatial and temporal structure inherent in weather prediction models. This part explores the potential of deep convolutional neural networks in solving difficult problems in meteorology, such as modelling precipitation from basic numerical model fields. The research performed during the completion of this thesis demonstrates that collaboration between the machine learning and meteorology research communities is mutually beneficial and leads to advances in both disciplines. Weather forecasting models and observational data represent unique examples of large (petabytes), structured, high-quality data sets that the machine learning community demands for developing the next generation of scalable algorithms.
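
    As a small illustration of why the circular nature of wind direction is awkward for standard regressors, the hedged sketch below uses a common workaround (not necessarily the one adopted in the thesis): predict the sine and cosine of the direction and recover the angle afterwards. The features and targets are synthetic placeholders, not real meteorological data.

```python
# Regressing a circular target (wind direction) via its sine/cosine components.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5))                       # stand-in for NWP model fields at a station
theta = np.arctan2(X[:, 0], X[:, 1]) % (2 * np.pi)      # synthetic wind direction in radians
Y = np.column_stack([np.sin(theta), np.cos(theta)])     # circular target encoded as a 2D vector

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Y)
sin_hat, cos_hat = model.predict(X).T
theta_hat = np.arctan2(sin_hat, cos_hat) % (2 * np.pi)  # decoded angle in [0, 2*pi)
print("mean angular error (rad):", np.abs(np.angle(np.exp(1j * (theta_hat - theta)))).mean())
```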

    Improving aircraft performance using machine learning: a review

    This review covers the new developments in machine learning (ML) that are impacting the multi-disciplinary area of aerospace engineering, including fundamental fluid dynamics (experimental and numerical), aerodynamics, acoustics, combustion and structural health monitoring. We review the state of the art, gathering the advantages and challenges of ML methods across different aerospace disciplines, and provide our view on future opportunities. The basic concepts and the most relevant strategies for ML are presented together with the most relevant applications in aerospace engineering, revealing that ML is improving aircraft performance and that these techniques will have a large impact in the near future.

    Scalable computing for earth observation - Application on Sea Ice analysis

    In recent years, deep learning (DL) networks have shown considerable improvements and have become a preferred methodology in many different applications. These networks have outperformed other classical techniques, particularly in large data settings. In the field of earth observation from satellites, for example, DL algorithms have demonstrated the ability to accurately learn complicated nonlinear relationships in input data and have thus contributed to advances in the field. However, the training process of these networks has heavy computational overheads. The reason is two-fold: the sizable complexity of these networks and the high number of training samples needed to learn all the parameters comprising these architectures. Although the quantity of training data generally enhances the accuracy of the trained models, the computational cost may restrict the amount of analysis that can be done. This issue is particularly critical in satellite remote sensing, where a myriad of satellites generate an enormous amount of data daily and acquiring in-situ ground truth for building a large training dataset is a fundamental prerequisite. This dissertation considers various aspects of deep learning-based sea ice monitoring from SAR data. In this application, labeling data is very costly and time-consuming, and in some cases it is not even achievable due to challenges in establishing the required domain knowledge, specifically when it comes to monitoring Arctic sea ice with Synthetic Aperture Radar (SAR), which is the application domain of this thesis. Because the Arctic is remote, has long dark seasons, and has a very dynamic weather system, the collection of reliable in-situ data is very demanding. In addition to the challenges of interpreting SAR data of sea ice, this issue makes SAR-based sea ice analysis with DL networks a complicated process. We propose novel DL methods to cope with the problem of scarce training data and to address the computational cost of the training process. We analyze DL network capabilities based on self-designed architectures and learning strategies, such as transfer learning, for sea ice classification. We also address the scarcity of training data by proposing a novel deep semi-supervised learning method based on SAR data for incorporating unlabeled data information into the training process. Finally, a new distributed DL method that can be used in a semi-supervised manner is proposed to address the computational complexity of deep neural network training.
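
    For readers unfamiliar with the transfer-learning strategy mentioned above, the following is a minimal, hedged PyTorch sketch of the general idea (freeze a pretrained backbone and train only a new classification head), not the architecture developed in the thesis; the class count and dummy SAR patches are illustrative.

```python
# Transfer-learning sketch: frozen pretrained backbone + small trainable head.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4                                    # hypothetical ice/water classes
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():                    # freeze the pretrained weights
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                    # dummy batch; SAR channels would be stacked to 3
y = torch.randint(0, num_classes, (8,))
loss = criterion(backbone(x), y)                   # one illustrative training step
loss.backward()
optimizer.step()
```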

    Feature Driven Learning Techniques for 3D Shape Segmentation

    Segmentation is a fundamental problem in 3D shape analysis and machine learning. The ability to partition a 3D shape into meaningful or functional parts is a vital ingredient of many downstream applications like shape matching, classification and retrieval. Early segmentation methods were based on approaches like fitting primitive shapes to parts or extracting segmentations from feature points. However, such methods had limited success on shapes with more complex geometry. Observing this, research began using geometric features to aid the segmentation, as certain features (e.g. the Shape Diameter Function (SDF)) are less sensitive to complex geometry. This trend was also incorporated in the shift to set-wide segmentations, called co-segmentation, which provides a consistent segmentation throughout a shape dataset, meaning similar parts have the same segment identifier. The idea of co-segmentation is that a set of same-class shapes (i.e. chairs) contains more information about the class than a single shape would, which could lead to an overall improvement to the segmentation of the individual shapes. Over the past decade many different approaches to co-segmentation have been explored, covering supervised, unsupervised and even user-driven active learning. In each of these areas, there has been widely adopted use of geometric features to aid proposed segmentation algorithms, with each method typically using different combinations of features. The aim of this thesis is to explore these different areas of 3D shape segmentation, perform an analysis of the effectiveness of geometric features in these areas and tackle core issues that currently exist in the literature.
    Initially, we explore the area of unsupervised segmentation, specifically looking at co-segmentation, and perform an analysis of several different geometric features. Our analysis is intended to compare the different features in a single unsupervised pipeline to evaluate their usefulness and determine their strengths and weaknesses. Our analysis also includes several features that have not yet been explored in unsupervised segmentation but have been shown to be effective in other areas.
    Later, with the ever-increasing popularity of deep learning, we explore the area of supervised segmentation and investigate the current state of Neural Network (NN) driven techniques. We specifically observe limitations in the current state of the art and propose a novel Convolutional Neural Network (CNN) based method which operates on multi-scale geometric features to gain more information about the shapes being segmented. We also perform an evaluation of several different supervised segmentation methods using the same input features, but with varying complexity of model design. This is intended to see if the more complex models provide a significant performance increase.
    Lastly, we explore the user-driven area of active learning, to tackle the large amount of inconsistency in current ground-truth segmentations, which are vital for most segmentation methods. Active learning has been used to great effect for ground-truth generation in the past, so we present a novel active learning framework using deep learning and geometric features to assist the user in co-segmentation of a dataset. Our method emphasises segmentation accuracy while minimising user effort, providing an interactive visualisation for co-segmentation analysis and the application of automated optimisation tools.
    In this thesis we explore the effectiveness of different geometric features across varying segmentation tasks, providing an in-depth analysis and comparison of state-of-the-art methods.
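
    As a toy illustration of the feature-driven idea (and not of the CNN proposed in the thesis), the hedged sketch below treats per-face segmentation as classification over multi-scale geometric features with an off-the-shelf classifier; the feature matrix and labels are random placeholders standing in for quantities such as the Shape Diameter Function computed at several scales.

```python
# Feature-driven per-face segmentation as plain classification (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_faces, n_scales, n_feature_types, n_parts = 2000, 3, 4, 5

# Placeholder features: e.g. SDF, curvature, geodesic descriptors at multiple scales per face.
features = rng.standard_normal((n_faces, n_scales * n_feature_types))
labels = rng.integers(0, n_parts, n_faces)          # ground-truth part label per face

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("per-face labelling accuracy:", clf.score(X_te, y_te))
```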

    Application of machine learning techniques to weather forecasting

    Weather forecasting is, even today, an activity carried out mainly by humans. Although computer simulations play an important role in modelling the state and evolution of the atmosphere, methodologies to automate the interpretation of the information generated by these models are lacking. This doctoral thesis explores the use of machine learning methodologies to solve specific problems in meteorology, with particular emphasis on exploring methodologies to improve the accuracy of numerical weather prediction models. The work presented in this manuscript comprises two different approaches to applying machine learning algorithms to weather prediction problems. In the first part, classical methodologies, such as non-parametric multivariate regression and binary trees, are used to perform regression on meteorological data. This first part focuses particularly on wind forecasting, whose circular nature creates interesting challenges for classical machine learning algorithms. The second part of this thesis explores the analysis of meteorological data as a generic structured prediction problem using deep neural networks. Neural networks, such as convolutional and recurrent networks, provide a method for capturing the spatial and temporal structure inherent in weather prediction models. This part explores the potential of deep convolutional neural networks for solving difficult problems in meteorology, such as modelling precipitation from basic numerical model fields. The research underpinning this thesis serves as an example of how collaboration between the machine learning and meteorology communities can be mutually beneficial and lead to advances in both disciplines. Weather forecasting models and observational data represent unique examples of large (petabytes), structured, high-quality data sets that the machine learning community demands for developing the next generation of scalable algorithms.

    Modern applications of machine learning in quantum sciences

    In these Lecture Notes, we provide a comprehensive introduction to the most recent advances in the application of machine learning methods in quantum sciences. We cover the use of deep learning and kernel methods in supervised, unsupervised, and reinforcement learning algorithms for phase classification, representation of many-body quantum states, quantum feedback control, and quantum circuits optimization. Moreover, we introduce and discuss more specialized topics such as differentiable programming, generative models, the statistical approach to machine learning, and quantum machine learning. (Comment: 268 pages, 87 figures; figures and tex files are available at https://github.com/Shmoo137/Lecture-Note)
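
    One of the supervised topics covered in the notes, phase classification, can be sketched in a few lines; the example below is a hedged toy with synthetic "ordered" vs "disordered" Ising-like configurations and an RBF-kernel SVM, not an example taken from the Lecture Notes themselves.

```python
# Toy supervised phase classification with a kernel method (illustrative only).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_spins = 400, 64

# Fully magnetized ("ordered") vs random ("disordered") spin configurations.
ordered = np.sign(rng.standard_normal((n_samples // 2, 1))) * np.ones((n_samples // 2, n_spins))
disordered = rng.choice([-1.0, 1.0], size=(n_samples // 2, n_spins))
X = np.vstack([ordered, disordered])
y = np.array([0] * (n_samples // 2) + [1] * (n_samples // 2))   # 0 = ordered, 1 = disordered

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("phase classification accuracy:", clf.score(X_te, y_te))
```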

    A review of machine learning applications in wildfire science and management

    Artificial intelligence has been applied in wildfire science and management since the 1990s, with early applications including neural networks and expert systems. Since then, the field has rapidly progressed, congruent with the wide adoption of machine learning (ML) in the environmental sciences. Here, we present a scoping review of ML in wildfire science and management. Our objective is to improve awareness of ML among wildfire scientists and managers, as well as to illustrate the challenging range of problems in wildfire science available to data scientists. We first present an overview of popular ML approaches used in wildfire science to date, and then review their use in wildfire science within six problem domains: 1) fuels characterization, fire detection, and mapping; 2) fire weather and climate change; 3) fire occurrence, susceptibility, and risk; 4) fire behavior prediction; 5) fire effects; and 6) fire management. We also discuss the advantages and limitations of various ML approaches and identify opportunities for future advances in wildfire science and management within a data science context. We identified 298 relevant publications, in which the most frequently used ML methods were random forests, MaxEnt, artificial neural networks, decision trees, support vector machines, and genetic algorithms. There exist opportunities to apply more current ML methods (e.g., deep learning and agent-based learning) in wildfire science. However, despite the ability of ML models to learn on their own, expertise in wildfire science is necessary to ensure realistic modelling of fire processes across multiple scales, while the complexity of some ML methods requires sophisticated knowledge for their application. Finally, we stress that the wildfire research and management community plays an active role in providing relevant, high-quality data for use by practitioners of ML methods. (Comment: 83 pages, 4 figures, 3 tables)
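
    To give a flavour of one of the problem domains above (fire occurrence, susceptibility, and risk) with the review's most frequently used method, here is a hedged sketch of a random-forest occurrence model; the per-grid-cell predictors and labels are synthetic placeholders, not data from any of the reviewed studies.

```python
# Toy fire-occurrence model with a random forest (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_cells = 1000
X = rng.random((n_cells, 4))   # hypothetical predictors: temperature, humidity, wind, fuel load
# Synthetic label: hotter, drier, windier cells ignite more often.
p = 1 / (1 + np.exp(-(4 * X[:, 0] - 4 * X[:, 1] + 2 * X[:, 2] - 1)))
y = rng.random(n_cells) < p

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```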