
    Deep Learning for Remote Sensing Image Processing

    Remote sensing images have many applications, such as ground object detection, environmental change monitoring, urban growth monitoring, and natural disaster damage assessment. As of 2019, there were roughly 700 satellites listing “earth observation” as their primary application. Both the spatial and temporal resolutions of satellite images have improved consistently in recent years, providing opportunities to resolve fine details on the Earth's surface. In the past decade, deep learning techniques have revolutionized many applications in the field of computer vision but have not been fully explored in remote sensing image processing. In this dissertation, several state-of-the-art deep learning models are investigated and customized for satellite image processing in the applications of landcover classification and ground object detection. First, a simple and effective Convolutional Neural Network (CNN) model is developed to detect fresh soil from tunnel digging activities near the U.S.-Mexico border using pansharpened synthetic hyperspectral images. These tunnels' exits are usually hidden under warehouses, and the tunnels are used for illegal activities, for example by drug dealers; detecting fresh soil nearby is an indirect way to search for them. While multispectral images have been used widely and regularly in remote sensing since the 1970s, hyperspectral imagery is becoming popular with the fast advances in hyperspectral sensors. A combination of 80 synthetic hyperspectral channels and the original eight multispectral channels collected by the WorldView-2 satellite is used by the CNN to detect fresh soil. Experimental results show that detection performance is significantly improved by combining the synthetic hyperspectral channels with the original multispectral channels. Second, an end-to-end, pixel-level Fully Convolutional Network (FCN) model is implemented to estimate the number of refugee tents in the Rukban area near the Syria-Jordan border using high-resolution multispectral satellite images collected by WorldView-2. Rukban is a desert area straddling the border between Syria and Jordan, and thousands of Syrian refugees have fled into it since 2014 to escape the Syrian civil war. In the past few years, the number of shelters for the forcibly displaced refugees in this area has increased rapidly, and estimating the location and number of tents has become a key factor in maintaining the sustainability of the camps. Manually counting the shelters is labor-intensive and sometimes prohibitive given their large number. In addition, these shelters/tents are usually small in size, irregular in shape, and sparsely distributed over a very large area, so they can easily be missed by traditional image-analysis techniques, making image-based approaches challenging as well. The FCN model is also boosted by transfer learning, using knowledge from the pre-trained VGG-16 model. Experimental results show that the FCN model is very accurate, with an error of less than 2%. Last, we investigate Generative Adversarial Networks (GANs) for augmenting the training data to improve the training of the FCN model for refugee tent detection. Segmentation-based methods like FCN require a large amount of finely labeled images for training; in practice, producing these labels is labor-intensive, time-consuming, and tedious. This data-hungry problem is currently a big hurdle for the application.
Experimental results show that the GAN model is a better tool for data augmentation than traditional methods. Overall, our research made a significant contribution to remote sensing image processing.
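    The abstract does not spell out the CNN's architecture, but a minimal sketch of the kind of patch-based classifier it describes could look like the following (PyTorch). The 88 input channels follow from the stated 80 synthetic hyperspectral plus eight multispectral bands; the patch size, layer widths, and two-class output are illustrative assumptions, not the dissertation's actual design.

    import torch
    import torch.nn as nn

    class SoilPatchCNN(nn.Module):
        """Binary fresh-soil classifier for small image patches with 88
        spectral channels (80 synthetic hyperspectral + 8 WorldView-2
        multispectral bands). Layer sizes are illustrative assumptions."""
        def __init__(self, in_channels: int = 88, patch_size: int = 16):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),   # patch_size -> patch_size / 2
                nn.Conv2d(64, 128, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),   # patch_size / 2 -> patch_size / 4
            )
            flat = 128 * (patch_size // 4) ** 2
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(flat, 64),
                nn.ReLU(inplace=True),
                nn.Linear(64, 2),  # fresh soil vs. background
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Sanity check on a random batch of 16x16 patches.
    model = SoilPatchCNN()
    print(model(torch.randn(4, 88, 16, 16)).shape)  # torch.Size([4, 2])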

    Illumination Invariant Deep Learning for Hyperspectral Data

    Motivated by the variability in hyperspectral images due to illumination and the difficulty of acquiring labelled data, this thesis proposes approaches for learning illumination-invariant feature representations and classification models for hyperspectral data captured outdoors, under natural sunlight. The approaches integrate domain knowledge into learning algorithms and hence do not rely on a priori knowledge of atmospheric parameters, additional sensors, or large amounts of labelled training data. Hyperspectral sensors record rich semantic information from a scene, making them useful for robotics or remote sensing applications where perception systems are used to gain an understanding of the scene. Images recorded by hyperspectral sensors can, however, be affected to varying degrees by intrinsic factors relating to the sensor itself (keystone, smile, and noise, particularly at the limits of the sensed spectral range) as well as by extrinsic factors such as the way the scene is illuminated. The appearance of the scene in the image is tied to the incident illumination, which depends on variables such as the position of the sun, the geometry of the surface, and the prevailing atmospheric conditions. Effects like shadows can cause the appearance and spectral characteristics of identical materials to differ significantly. This degrades the performance of high-level algorithms that use hyperspectral data, such as those performing classification and clustering. If sufficient training data is available, learning algorithms such as neural networks can capture variability in the scene appearance and be trained to compensate for it. Learning algorithms are advantageous for this task because they do not require a priori knowledge of the prevailing atmospheric conditions or data from additional sensors. Labelling hyperspectral data is, however, difficult and time-consuming, so acquiring enough labelled samples for the learning algorithm to adequately capture the scene appearance is challenging. Hence, there is a need for techniques that are invariant to the effects of illumination and do not require large amounts of labelled data. In this thesis, an approach to learning a representation of hyperspectral data that is invariant to the effects of illumination is proposed. This approach combines a physics-based model of the illumination process with an unsupervised deep learning algorithm and thus requires no labelled data. Datasets that vary both temporally and spatially are used to compare the proposed approach to other similar state-of-the-art techniques. The results show that the learnt representation is more invariant to shadows in the image and to variations in brightness due to changes in scene topography or the position of the sun in the sky. The results also show that a supervised classifier can predict class labels more accurately and more consistently across time when images are represented using the proposed method. Additionally, this thesis proposes methods for training supervised classification models to be more robust to variations in illumination when only limited amounts of labelled data are available. The transfer of knowledge from well-labelled datasets to poorly labelled datasets for classification is investigated. A method is also proposed for enabling small amounts of labelled samples to capture the variability in spectra across the scene.
These samples are then used to train a classifier to be robust to the variability in the data caused by variations in illumination. The results show that these approaches make convolutional neural network classifiers more robust and achieve better performance when labelled training data is limited. A case study is presented in which a pipeline incorporating the methods proposed in this thesis for learning robust feature representations and classification models is applied to cluster a scene using no labelled data. The results show that the pipeline groups the data into clusters that are consistent with the spatial distribution of the classes in the scene, as determined from ground truth.
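    As a rough illustration of combining a physics-based illumination model with unsupervised deep learning, the sketch below treats an observed spectrum as the product of surface reflectance and incident illumination, works in log space where illumination becomes additive, and trains an autoencoder whose codes are encouraged to match across illumination changes. This is a simplified stand-in for the thesis's method: the band count, layer sizes, invariance penalty, and the scalar-brightness shadow model are all assumptions.

    import torch
    import torch.nn as nn

    # Simplified physics-based view: observed radiance L(b) = r(b) * E(b) for
    # band b, so log L = log r + log E and illumination becomes an additive
    # term that the learnt representation can be trained to discount.
    class SpectraAutoencoder(nn.Module):
        def __init__(self, n_bands: int = 128, code_dim: int = 16):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                                         nn.Linear(64, code_dim))
            self.decoder = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(),
                                         nn.Linear(64, n_bands))

        def forward(self, log_spectra):
            code = self.encoder(log_spectra)
            return self.decoder(code), code

    model = SpectraAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    mse = nn.MSELoss()

    # Toy unsupervised loop: the same material under two illuminations (here
    # crudely modelled as a per-pixel brightness offset in log space) should
    # map to similar codes, so an invariance penalty on code pairs is added
    # to the reconstruction loss. No labels are used anywhere.
    for _ in range(100):
        sunlit = torch.rand(32, 128).clamp_min(1e-3).log()
        shadowed = sunlit + 0.1 * torch.randn(32, 1)  # additive in log space
        recon, code_a = model(sunlit)
        _, code_b = model(shadowed)
        loss = mse(recon, sunlit) + 0.1 * mse(code_a, code_b)
        opt.zero_grad()
        loss.backward()
        opt.step()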

    Mass spectral imaging of clinical samples using deep learning

    A better interpretation of tumour heterogeneity and variability is vital for the improvement of novel diagnostic techniques and personalized cancer treatments. Tumour tissue heterogeneity is characterized by biochemical heterogeneity, which can be investigated by unsupervised metabolomics. Mass Spectrometry Imaging (MSI) combined with machine learning techniques has generated increasing interest as an analytical and diagnostic tool for the analysis of spatial molecular patterns in tissue samples. Considering the high complexity of the data produced by MSI, which can consist of many thousands of spectral peaks, statistical analysis and in particular machine learning and deep learning have been investigated as novel approaches to deduce the relationships between the measured molecular patterns and the local structural and biological properties of the tissues. Machine learning has historically been divided into two main categories: supervised and unsupervised learning. In MSI, supervised learning methods may be used to segment tissues into histologically relevant areas, e.g. the classification of tissue regions in H&E (Haematoxylin and Eosin) stained samples. Initial classification by an expert histopathologist through visual inspection enables the development of univariate or multivariate models based on tissue regions that have significantly up- or down-regulated ions. However, complex data may result in underdetermined models, and alternative methods that can cope with high-dimensional, noisy data are required. Here, we describe, apply, and test a novel diagnostic procedure built using a combination of MSI and deep learning, with the objective of delineating and identifying biochemical differences between cancerous and non-cancerous tissue in metastatic liver cancer and epithelial ovarian cancer. The workflow investigates the robustness of single (1D) to multidimensional (3D) tumour analyses and also highlights possible biomarkers that are not accessible from classical visual analysis of the H&E images. The identification of key molecular markers may provide a deeper understanding of tumour heterogeneity and potential targets for intervention.
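    As a rough sketch of the kind of supervised model the text describes, the following classifies individual MSI pixel spectra (vectors of peak intensities) as cancerous or non-cancerous. The peak count, layer widths, and dropout rate are illustrative assumptions, not the authors' actual network.

    import torch
    import torch.nn as nn

    class SpectrumClassifier(nn.Module):
        """Classifies one MSI pixel spectrum (a vector of peak intensities)
        as cancerous vs. non-cancerous. Sizes are illustrative assumptions."""
        def __init__(self, n_peaks: int = 5000, n_classes: int = 2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_peaks, 256), nn.ReLU(),
                nn.Dropout(0.5),  # regularization for high-dimensional, noisy spectra
                nn.Linear(256, 64), nn.ReLU(),
                nn.Linear(64, n_classes),
            )

        def forward(self, spectra):
            return self.net(spectra)

    # Sanity check on a batch of 8 normalised spectra with 5000 peaks each.
    model = SpectrumClassifier()
    print(model(torch.randn(8, 5000)).shape)  # torch.Size([8, 2])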

    Simulated Annealing

    The book contains 15 chapters presenting recent contributions of top researchers working with Simulated Annealing (SA). Although it represents only a small sample of the research activity on SA, the book will certainly serve as a valuable tool for researchers interested in getting involved in this multidisciplinary field. In fact, one of its salient features is that the book is highly multidisciplinary in terms of application areas, since it assembles experts from the fields of Biology, Telecommunications, Geology, Electronics, and Medicine.
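    For readers new to the method, a minimal sketch of the core SA loop follows, applied to a toy one-dimensional minimisation problem; the Gaussian proposal and geometric cooling schedule are generic textbook choices, not taken from any chapter of the book.

    import math
    import random

    def simulated_annealing(cost, x0, t0=1.0, cooling=0.995, steps=10_000):
        """Generic SA loop: propose a random neighbour, always accept
        improvements, accept worsening moves with probability exp(-dE/T),
        and geometrically cool the temperature T."""
        x, t = x0, t0
        best, best_cost = x, cost(x)
        for _ in range(steps):
            candidate = x + random.gauss(0.0, 0.5)  # random neighbour
            delta = cost(candidate) - cost(x)
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = candidate
                if cost(x) < best_cost:
                    best, best_cost = x, cost(x)
            t *= cooling  # geometric cooling schedule
        return best, best_cost

    # Toy multimodal objective with its global minimum near x = -0.3.
    f = lambda x: x * x + 3.0 * math.sin(5.0 * x)
    print(simulated_annealing(f, x0=4.0))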