
    Neural Architecture Search for Image Segmentation and Classification

    Deep learning (DL) is a class of machine learning algorithms that relies on deep neural networks (DNNs) for computation. Unlike traditional machine learning algorithms, DL can learn directly and effectively from raw data. Hence, DL has been successfully applied to many real-world problems. When applying DL to a given problem, the primary task is designing the optimum DNN. This task relies heavily on human expertise, is time-consuming, and requires many trial-and-error experiments. This thesis aims to automate the laborious task of designing the optimum DNN by exploring the neural architecture search (NAS) approach. Here, we propose two new NAS algorithms for two real-world problems: pedestrian lane detection for assistive navigation and hyperspectral image segmentation for biosecurity scanning. Additionally, we introduce a new dataset-agnostic predictor of neural network performance, which can be used to speed up NAS algorithms that require the evaluation of candidate DNNs.
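    As a rough illustration of how a performance predictor can speed up NAS, the sketch below runs a toy random search and ranks candidates with a cheap surrogate before any full training. The search space, predictor heuristic, and function names are illustrative placeholders, not the algorithms proposed in the thesis.

```python
# Illustrative NAS sketch: rank randomly sampled architectures with a cheap
# surrogate predictor; only the top few would then be trained in full.
import random

SEARCH_SPACE = {
    "depth": [4, 8, 16],
    "width": [32, 64, 128],
    "kernel": [3, 5, 7],
}

def sample_architecture(rng):
    """Draw one candidate architecture from the toy search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def predicted_score(arch):
    """Stand-in for a dataset-agnostic performance predictor.

    A real predictor would be fitted on (architecture, accuracy) pairs;
    here an arbitrary heuristic is used just so the loop runs.
    """
    return arch["depth"] * 0.01 + arch["width"] * 0.001 - arch["kernel"] * 0.005

def nas_random_search(n_candidates=50, top_k=5, seed=0):
    rng = random.Random(seed)
    candidates = [sample_architecture(rng) for _ in range(n_candidates)]
    # Rank cheaply with the predictor instead of training every candidate.
    ranked = sorted(candidates, key=predicted_score, reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    for arch in nas_random_search():
        print(arch, round(predicted_score(arch), 4))
```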

    Deep learning-based change detection in remote sensing images: a review

    Images gathered from different satellites are now widely available owing to the rapid development of remote sensing (RS) technology, and they greatly enrich the data sources for change detection (CD). CD is a technique for recognizing dissimilarities between images acquired at different times and is used in numerous applications, such as urban development monitoring, disaster management, and land cover object identification. In recent years, deep learning (DL) techniques have been used extensively for change detection and have achieved great success in practical applications; some researchers have even claimed that DL approaches outperform traditional approaches and improve change detection accuracy. This review therefore focuses on DL techniques (supervised, unsupervised, and semi-supervised) for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages. Finally, some significant challenges are discussed to put improvements in change detection datasets and deep learning models into context. Overall, this review will be beneficial for the future development of CD methods.
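    One common deep learning pattern covered by such reviews is the Siamese-style detector: two co-registered acquisitions are encoded with shared weights and a change map is predicted from their feature difference. The minimal sketch below is only an assumed illustration of that pattern (layer sizes and names are invented), not a method from the review.

```python
# Minimal Siamese-style change detector: shared encoder, per-pixel change score
# from the absolute feature difference of the two dates.
import torch
import torch.nn as nn

class TinyChangeDetector(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        # Shared encoder applied to both acquisition dates.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution turns the feature difference into a change score.
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, img_t1, img_t2):
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        return torch.sigmoid(self.head(torch.abs(f1 - f2)))

if __name__ == "__main__":
    t1 = torch.randn(1, 3, 64, 64)   # image at time 1
    t2 = torch.randn(1, 3, 64, 64)   # image at time 2
    change_map = TinyChangeDetector()(t1, t2)
    print(change_map.shape)  # torch.Size([1, 1, 64, 64])
```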

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered, including methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, such as augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery.
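    To make two of the pre-processing steps mentioned above concrete, the hedged sketch below shows per-band normalization and chipping of a large scene into fixed-size tiles. The chip size, stride, and normalization scheme are illustrative choices, not recommendations from the review.

```python
# Per-band standardization and chipping of a (H, W, B) Earth Observation scene.
import numpy as np

def normalize_per_band(scene):
    """Standardize each spectral band to zero mean and unit standard deviation."""
    mean = scene.mean(axis=(0, 1), keepdims=True)
    std = scene.std(axis=(0, 1), keepdims=True) + 1e-8
    return (scene - mean) / std

def chip_scene(scene, chip=256, stride=256):
    """Cut a (H, W, B) scene into (chip, chip, B) tiles; edges that do not fit are dropped."""
    h, w, bands = scene.shape
    chips = []
    for y in range(0, h - chip + 1, stride):
        for x in range(0, w - chip + 1, stride):
            chips.append(scene[y:y + chip, x:x + chip, :])
    return np.stack(chips) if chips else np.empty((0, chip, chip, bands))

if __name__ == "__main__":
    scene = np.random.rand(1024, 1024, 4).astype(np.float32)  # synthetic 4-band scene
    tiles = chip_scene(normalize_per_band(scene))
    print(tiles.shape)  # (16, 256, 256, 4)
```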

    Training Methods of Multi-label Prediction Classifiers for Hyperspectral Remote Sensing Images

    With their combined spectral depth and geometric resolution, hyperspectral remote sensing images embed a wealth of complex, non-linear information that challenges traditional computer vision techniques. Deep learning methods, known for their representation learning capabilities, are better suited to handling such complexity. Unlike applications that focus on single-label, pixel-level classification of hyperspectral remote sensing images, we propose a multi-label, patch-level classification method based on a two-component deep learning network. We use patches of reduced spatial dimension but full spectral depth extracted from the remote sensing images. Additionally, we investigate three training schemes for our network: Iterative, Joint, and Cascade. Experiments suggest that the Joint scheme performs best, but its application requires an expensive search for the best weight combination of the loss constituents. The Iterative scheme enables the sharing of features between the two parts of the network at the early stages of training and performs better on complex, multi-label data. Further experiments showed that methods designed with different architectures performed well when trained on patches extracted and labeled according to our sampling method.
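    To illustrate the idea behind a Joint scheme, the sketch below optimizes a weighted sum of two loss constituents in a single backward pass, which is why the weight (alpha here) must be searched for. The network, loss terms, and all sizes are invented placeholders under assumed shapes, not the paper's architecture.

```python
# Joint training sketch: one optimizer step on a weighted sum of two losses.
import torch
import torch.nn as nn

class TwoHeadPatchClassifier(nn.Module):
    def __init__(self, bands=100, n_labels=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(bands * 5 * 5, 64), nn.ReLU())
        self.head_a = nn.Linear(64, n_labels)  # e.g. multi-label logits
        self.head_b = nn.Linear(64, n_labels)  # e.g. an auxiliary task head

    def forward(self, x):
        z = self.backbone(x)
        return self.head_a(z), self.head_b(z)

model = TwoHeadPatchClassifier()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha = 0.7  # relative weight of the two loss constituents; would need tuning

patches = torch.randn(8, 100, 5, 5)           # 8 synthetic hyperspectral patches
labels = torch.randint(0, 2, (8, 5)).float()  # multi-label targets

out_a, out_b = model(patches)
loss = alpha * criterion(out_a, labels) + (1 - alpha) * criterion(out_b, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```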

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy trade-offs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.

    Virtual Cleaning of Works of Art Using Deep Learning Based Approaches

    Virtual cleaning of art is a key process that conservators use to preview the likely appearance of a work of art before it is physically cleaned. Many approaches to virtually cleaning artworks have been proposed, but their shortcomings, such as requiring the artwork to be physically cleaned at a few specific places of specific colors, requiring pure black and white paint on the painting, and low accuracy, prompted us to propose deep learning based approaches in this research. First we report our work on color estimation for virtual cleaning, and then we describe our methods for spectral reflectance estimation. In the color estimation part, a deep convolutional neural network (CNN) and a deep generative network (DGN) are proposed, which estimate the RGB image of the cleaned artwork from an RGB image of the uncleaned artwork. Applying the networks to images of well-known artworks (such as the Mona Lisa and The Virgin and Child with Saint Anne) and the Macbeth ColorChecker, and comparing the results to the only physics-based model (the first model to approach virtual cleaning from a physics point of view, hence our reference for comparison), shows that our methods outperform that model and have great potential for real situations where little information about the painting is available and all we have is an RGB image of the uncleaned artwork. Nonetheless, the methods proposed in the first part cannot provide the spectral reflectance information of the artwork; therefore, the second part of the dissertation focuses on spectral estimation for virtual cleaning. Two deep learning-based approaches are proposed here as well. The first is a deep generative network that receives a cube of the hyperspectral image of the uncleaned artwork and outputs the virtually cleaned hyperspectral image of the artwork. The second is a 1D Convolutional Autoencoder (1DCA), based on a 1D convolutional neural network, which finds the spectra of the virtually cleaned artwork using the spectra of physically cleaned artworks and their corresponding uncleaned spectra. The approaches are applied to hyperspectral images of the Macbeth ColorChecker (simulated in cleaned and uncleaned forms) and the 'Haymakers' (real hyperspectral images of both cleaned and uncleaned states). The results, in terms of Euclidean distance and spectral angle between the virtually cleaned artwork and the physically cleaned one, show that the proposed approaches outperform the physics-based model, with the DGN outperforming the 1DCA. The methods proposed here do not rely on first finding a specific type of paint or color on the painting, take advantage of the high accuracy offered by deep learning-based approaches, and are also applicable to other paintings.
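    For reference, the two evaluation metrics named above can be computed as in the small sketch below: per-pixel Euclidean distance and spectral angle between a virtually cleaned and a physically cleaned reflectance cube. The array shapes and band count are assumptions for illustration only.

```python
# Euclidean distance and spectral angle between two (..., bands) reflectance arrays.
import numpy as np

def euclidean_distance(r_virtual, r_cleaned):
    """Per-pixel Euclidean distance between two reflectance spectra."""
    return np.linalg.norm(r_virtual - r_cleaned, axis=-1)

def spectral_angle(r_virtual, r_cleaned, eps=1e-12):
    """Per-pixel spectral angle (radians) between two reflectance spectra."""
    dot = np.sum(r_virtual * r_cleaned, axis=-1)
    norms = np.linalg.norm(r_virtual, axis=-1) * np.linalg.norm(r_cleaned, axis=-1)
    return np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))

if __name__ == "__main__":
    virt = np.random.rand(32, 32, 31)  # synthetic 31-band "virtually cleaned" cube
    phys = np.random.rand(32, 32, 31)  # synthetic "physically cleaned" reference
    print(euclidean_distance(virt, phys).mean(), spectral_angle(virt, phys).mean())
```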

    A Linear Combination of Heuristics Approach to Spatial Sampling Hyperspectral Data for Target Tracking

    Persistent surveillance of the battlespace results in better battlespace awareness, which aids in obtaining air superiority, winning battles, and saving friendly lives. Although hyperspectral imagery (HSI) data has proven useful for discriminating targets, it presents many challenges as a tool for persistent surveillance. A new sensor under development has the potential to overcome these challenges and transform our persistent surveillance capability by providing HSI data for a limited number of pixels and grayscale video for the remainder. The challenge in exploiting this new sensor is determining where in the sensor's field of view the HSI data will be most useful. The approach taken is to use a utility function with components of equal dispersion, periodic polling, missed measurements, and predictive probability of association error (PPAE). The relative importance, or optimal weighting, of the different types of targets of interest (TOI) is determined by a genetic algorithm using a multi-objective problem formulation. Experiments show that using the utility function with equal weighting results in superior target tracking compared to any individual component by itself, and that the equal weighting is close to the optimal solution. The new sensor is successfully exploited, resulting in improved persistent surveillance.
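    A hedged sketch of the core scoring idea, a linear combination of heuristic component maps used to pick which pixels receive hyperspectral sampling, is given below. The component maps, weights, and the top-k selection are invented placeholders; in the work above the weights are tuned with a multi-objective genetic algorithm rather than fixed by hand.

```python
# Score candidate pixels with a weighted sum of heuristic maps, then pick the
# top-k locations to sample with the hyperspectral modality.
import numpy as np

def utility(components, weights):
    """Linear combination of per-pixel heuristic scores.

    components: (n_heuristics, H, W) array of normalized heuristic maps
    weights:    (n_heuristics,) array of non-negative weights
    """
    return np.tensordot(weights, components, axes=1)

if __name__ == "__main__":
    H, W = 64, 64
    # Placeholder maps standing in for dispersion, polling, missed measurements, PPAE.
    components = np.random.rand(4, H, W)
    weights = np.array([0.25, 0.25, 0.25, 0.25])  # equal-weighting baseline
    scores = utility(components, weights)
    k = 10
    top_idx = np.argsort(scores.ravel())[-k:]
    print(np.unravel_index(top_idx, scores.shape))
```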