
    Supervised machine learning algorithms for ground motion time series classification from InSAR data

    The increasing availability of Synthetic Aperture Radar (SAR) images facilitates the generation of rich Differential Interferometric SAR (DInSAR) data. Temporal analysis of DInSAR products, and in particular deformation Time Series (TS), enables advanced investigations for ground deformation identification. Machine Learning algorithms offer efficient tools for classifying large volumes of data. In this study, we train supervised Machine Learning models using 5,000 reference samples from three datasets to classify DInSAR TS into five deformation trends: Stable, Linear, Quadratic, Bilinear, and Phase Unwrapping Error. General statistics and advanced features are also computed from the TS to assess classification performance. The proposed methods reported accuracy values greater than 0.90, and the customized features significantly increased performance. In addition, the importance of the customized features was analysed to identify the most effective features for TS classification. The proposed models were also tested on 15,000 unlabelled samples and compared to a model-based method to validate their reliability. Random Forest and Extreme Gradient Boosting could accurately classify reference samples and assign correct labels to random samples. This study indicates the efficiency of Machine Learning models in the classification and management of DInSAR TS, along with shortcomings of the proposed models in the classification of non-moving targets (i.e., false alarm rate) and a decreasing accuracy for shorter TS.
    This work is part of the Spanish Grant SARAI, PID2020-116540RB-C21, funded by MCIN/AEI/10.13039/501100011033. Additionally, it has been supported by the European Regional Development Fund (ERDF) through the project “RISKCOAST” (SOE3/P4/E0868) of the Interreg SUDOE Programme. This work has also been co-funded by the European Union Civil Protection through the H2020 project RASTOOL (UCPM-2021-PP-101048474).
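    The general workflow described above, hand-crafted features fed to a supervised classifier, can be illustrated with a small sketch. This is not the paper's datasets or feature set: the synthetic series, noise levels, and four features (mean, standard deviation, fitted slope, fitted curvature) are all illustrative assumptions, covering only the Stable, Linear, and Quadratic trends.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, length = 200, 50
t = np.linspace(0.0, 1.0, length)

def make_ts(trend):
    """Synthetic displacement series: 0 = Stable, 1 = Linear, 2 = Quadratic."""
    noise = rng.normal(0.0, 0.05, length)
    if trend == 0:
        return noise
    if trend == 1:
        return 2.0 * t + noise
    return 2.0 * t ** 2 + noise

X = np.vstack([make_ts(c) for c in range(3) for _ in range(n_per_class)])
y = np.repeat(np.arange(3), n_per_class)

def features(ts):
    # Hand-crafted features in the spirit of the paper's "customized features"
    slope = np.polyfit(t, ts, 1)[0]        # fitted linear trend
    curvature = np.polyfit(t, ts, 2)[0]    # fitted quadratic coefficient
    return [ts.mean(), ts.std(), slope, curvature]

F = np.array([features(ts) for ts in X])
F_tr, F_te, y_tr, y_te = train_test_split(F, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(F_tr, y_tr)
acc = clf.score(F_te, y_te)
```

    On such cleanly separable synthetic data the held-out accuracy is high; the paper's harder Bilinear and Phase Unwrapping Error classes would need additional features.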

    Dynamical spectral unmixing of multitemporal hyperspectral images

    In this paper, we consider the problem of unmixing a time series of hyperspectral images. We propose a dynamical model based on linear mixing processes at each time instant. The spectral signatures and fractional abundances of the pure materials in the scene are treated as latent variables and assumed to follow a general dynamical structure. Based on a simplified version of this model, we derive an efficient spectral unmixing algorithm that estimates the latent variables by performing alternating minimizations. The performance of the proposed approach is demonstrated on synthetic and real multitemporal hyperspectral images.
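    The alternating-minimization idea for linear mixing can be sketched on a single synthetic image: observations Y are modeled as endmember spectra M times abundances A, and each factor is solved in turn while the other is held fixed. This is a simplified illustration under assumed dimensions and constraints, not the paper's dynamical multitemporal algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, pixels, materials = 30, 100, 3

# Ground-truth endmember spectra M and abundances A (columns sum to one)
M_true = rng.uniform(0.0, 1.0, (bands, materials))
A_true = rng.dirichlet(np.ones(materials), size=pixels).T
Y = M_true @ A_true + rng.normal(0.0, 0.001, (bands, pixels))

# Alternating least squares: fix one factor, solve for the other,
# then project onto the constraints (nonnegativity, sum-to-one abundances)
M = rng.uniform(0.0, 1.0, (bands, materials))
for _ in range(200):
    A = np.clip(np.linalg.lstsq(M, Y, rcond=None)[0], 0.0, None)
    A /= A.sum(axis=0, keepdims=True) + 1e-12
    M = np.clip(np.linalg.lstsq(A.T, Y.T, rcond=None)[0].T, 0.0, None)

err = np.linalg.norm(Y - M @ A) / np.linalg.norm(Y)
```

    The relative reconstruction error drops close to the noise floor; the paper's model additionally couples the latent variables across time instants.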

    Video Processing with Additional Information

    Cameras are frequently deployed along with many additional sensors in aerial and ground-based platforms. Many video datasets have metadata containing measurements from inertial sensors, GPS units, etc. Hence the development of better video processing algorithms using additional information attains special significance. We first describe an intensity-based algorithm for stabilizing low-resolution and low-quality aerial videos. The primary contribution is the idea of minimizing the discrepancy in the intensity of selected pixels between two images. This is an application of inverse compositional alignment for registering images of low resolution and low quality, for which minimizing the intensity difference over salient pixels with high gradients results in faster and better convergence than when using all the pixels.
    Secondly, we describe a feature-based method for stabilization of aerial videos and segmentation of small moving objects. We use the coherency of background motion to jointly track features through the sequence. This enables accurate tracking of large numbers of features in the presence of repetitive texture, a lack of well-conditioned feature windows, etc. We incorporate the segmentation problem within the joint feature tracking framework and propose the first combined joint-tracking and segmentation algorithm. The proposed approach enables highly accurate tracking, and segmentation of feature tracks, which is used in a MAP-MRF framework for obtaining dense pixelwise labeling of the scene. We demonstrate competitive moving object detection in challenging video sequences of the VIVID dataset containing moving vehicles and humans that are small enough to cause background subtraction approaches to fail. Structure from Motion (SfM) has matured to a stage where the emphasis is on developing fast, scalable, and robust algorithms for large reconstruction problems.
The availability of additional sensors such as inertial units and GPS along with video cameras motivates the development of SfM algorithms that leverage these additional measurements. In the third part, we study the benefits of a specific form of additional information, namely the vertical direction (gravity) and the height of the camera, both of which can be conveniently measured using inertial sensors, combined with a monocular video sequence, for 3D urban modeling. We show that in the presence of this information, the SfM equations can be rewritten in a bilinear form. This allows us to derive a fast, robust, and scalable SfM algorithm for large-scale applications. The proposed SfM algorithm is experimentally demonstrated to have favorable properties compared to the sparse bundle adjustment algorithm. We provide experimental evidence indicating that the proposed algorithm converges in many cases to solutions with lower error than state-of-the-art implementations of bundle adjustment. We also demonstrate that for large reconstruction problems, the proposed algorithm takes less time to reach its solution than bundle adjustment. We also present SfM results using our algorithm on the Google StreetView research dataset, and several other datasets.
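    The first contribution's core idea, registering frames by minimizing intensity discrepancy over only the high-gradient salient pixels, can be illustrated with a brute-force translation search on synthetic data. This toy stand-in replaces the inverse compositional alignment of the thesis with an exhaustive integer-shift search; the image, shift range, and saliency threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 64, 64
frame = rng.uniform(0.0, 1.0, (H, W))

# Simulated camera jitter: the next frame is the same scene shifted by (3, -2)
true_shift = (3, -2)
shifted = np.roll(frame, true_shift, axis=(0, 1))

# Keep only the highest-gradient pixels, as in the intensity-based stabilizer
gy, gx = np.gradient(frame)
mag = np.hypot(gx, gy)
salient = mag > np.quantile(mag, 0.9)   # top 10% of gradient magnitude

def cost(dy, dx):
    # Intensity discrepancy evaluated over salient pixels only
    candidate = np.roll(shifted, (-dy, -dx), axis=(0, 1))
    return np.mean((frame[salient] - candidate[salient]) ** 2)

shifts = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]
best = min(shifts, key=lambda s: cost(*s))
```

    Restricting the cost to salient pixels evaluates roughly a tenth of the image per candidate shift, which is the same speed/robustness argument the intensity-based stabilizer makes for gradient-descent alignment.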

    Worldwide Weather Forecasting by Deep Learning

    Weather forecasting has been and remains a challenging task that has been approached from many angles over the years. Since recent state-of-the-art models are often machine learning models, the importance of weather data availability, quantity, and quality is growing. Moreover, a review of prominent deep learning models for weather time series forecasting suggests that their main limitation is the formulation and structure of their input data, which restricts the scope and complexity of the problems they attempt to solve. To address this, this work provides a solution, the spherical k-nearest neighbors interpolation (SkNNI) algorithm, to transform and structure scattered geospatial data in a way that makes it useful for predictive model training. SkNNI stands out among common geospatial interpolation methods mainly because of its high robustness to noisy observation data and its acute awareness of the interpolation neighborhood. Furthermore, through the design, training, and evaluation of the DeltaNet deep neural network architecture, this work demonstrates the feasibility and potential of multidimensional worldwide weather forecasting by deep learning. This approach leverages SkNNI to preprocess weather data into multi-channel geospatial weather frames, which are then organized and used as time series elements. Working with such geospatial frames opens new avenues for defining and solving more complex geospatial (e.g., weather) forecasting problems.
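    The underlying idea of spherical k-nearest-neighbor interpolation, estimating a query point from its k nearest observations under great-circle distance, can be sketched with a simple inverse-distance-weighted average. This is an illustration of the concept only, not the actual SkNNI algorithm; the test field, point counts, and weighting scheme are assumptions.

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2):
    # Great-circle distance on the unit sphere (inputs in radians)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2.0 * np.arcsin(np.sqrt(a))

def sph_knn_interp(obs_lat, obs_lon, obs_val, q_lat, q_lon, k=4):
    # Inverse-distance weighting over the k nearest observations on the sphere
    d = haversine(obs_lat, obs_lon, q_lat, q_lon)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    return np.sum(w * obs_val[idx]) / np.sum(w)

# Scattered observations of a smooth field f(lat, lon) = sin(lat)
rng = np.random.default_rng(3)
lat = rng.uniform(-np.pi / 2, np.pi / 2, 500)
lon = rng.uniform(-np.pi, np.pi, 500)
val = np.sin(lat)
est = sph_knn_interp(lat, lon, val, 0.5, 1.0, k=8)
```

    Evaluating such an interpolant on a regular latitude/longitude grid is what turns scattered station observations into the dense multi-channel weather frames the thesis feeds to its forecasting model.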

    Understanding Graph Data Through Deep Learning Lens

    Deep neural network models have established themselves as an unparalleled force in vision, speech, and text processing applications in recent years. Meanwhile, graphs have become a significant component of data analytics, with applications in the Internet of Things, social networks, pharmaceuticals, and bioinformatics. An important characteristic of these deep learning techniques is their ability to learn the features necessary to excel at a given task, unlike traditional machine learning algorithms, which depend on handcrafted features. However, there have been comparatively fewer efforts in deep learning to work directly on graph inputs. Many real-world problems can be solved naturally by posing them as graph analysis problems. Considering the direct impact of the success of graph analysis on business outcomes, the importance of studying such complex graph data has grown substantially over the years. In this thesis, we make three contributions towards understanding graph data: (i) the first seeks to find anomalies in graphs using graphical models; (ii) the second uses deep learning with spatio-temporal random walks to learn representations of graph trajectories (paths) and shows great promise on standard graph datasets; and (iii) the third proposes a novel deep neural network that implicitly models attention to allow for interpretation of graph classification.
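    Contribution (ii) builds on random walks over graphs; a minimal sketch of generating a corpus of walks, the raw material such methods feed to a sequence model to learn trajectory representations, might look like this. The toy graph, walk length, and walk counts are illustrative assumptions, and the thesis's spatio-temporal aspects are omitted.

```python
import random

# Toy undirected graph as an adjacency list
graph = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4], 4: [3, 5], 5: [4],
}

def random_walk(start, length, rng):
    # Each step moves to a uniformly chosen neighbor of the current node
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(0)
# Five walks of length 6 from every node: a small corpus of trajectories
walks = [random_walk(n, 6, rng) for n in graph for _ in range(5)]
```

    Every consecutive pair in each walk is an edge of the graph, so the corpus is a sample of valid trajectories through it.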