    Dual-Window Superpixel Data Augmentation for Hyperspectral Image Classification

    Deep learning (DL) has been shown to obtain superior results for classification tasks in the field of remote sensing hyperspectral imaging. Superpixel-based techniques can be applied to DL, significantly decreasing training and prediction times, but the results are usually far from satisfactory due to overfitting. Data augmentation techniques alleviate the problem by synthetically generating new samples from an existing dataset in order to improve the generalization capabilities of the classification model. In this paper we propose a novel data augmentation framework in the context of superpixel-based DL called dual-window superpixel (DWS). With DWS, data augmentation is performed over patches centered on the superpixels obtained by applying simple linear iterative clustering (SLIC) superpixel segmentation. DWS divides the input patches extracted from the superpixels into two regions and independently applies transformations over them. As a result, four different data augmentation techniques are proposed that can be applied to a superpixel-based CNN classification scheme. An extensive comparison in terms of classification accuracy with other data augmentation techniques from the literature using two datasets is also shown. One of the datasets consists of small hyperspectral scenes commonly found in the literature. The other consists of large multispectral vegetation scenes of river basins. The experimental results show that the proposed approach increases the overall classification accuracy for the selected datasets. In particular, two of the data augmentation techniques introduced, namely dual-flip and dual-rotate, obtained the best results.
    The images of the Galicia dataset were obtained in partnership with the Babcock company, supported in part by the Civil Program UAVs Initiative, promoted by the Xunta de Galicia. This work was supported in part by Ministerio de Ciencia e Innovación, Government of Spain (grant numbers PID2019-104834GB-I00 and BES-2017-080920), and Consellería de Educación, Universidade e Formación Profesional (grant number ED431C 2018/19, and accreditation 2019–2022 ED431G-2019/04). All are co-funded by the European Regional Development Fund (ERDF).
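    The dual-window idea lends itself to a compact illustration. The following is a minimal NumPy sketch, not the paper's implementation: it assumes a square patch centered on a superpixel, splits it into an inner window and the surrounding outer region (the half-width `inner` is an illustrative parameter), and flips the two regions independently, in the spirit of the proposed dual-flip technique.

```python
import numpy as np

def dual_flip(patch: np.ndarray, inner: int) -> np.ndarray:
    """Dual-window augmentation sketch for a (H, W, bands) patch:
    flip the outer region horizontally and the inner window
    vertically, independently of each other."""
    h, w, _ = patch.shape
    cy, cx = h // 2, w // 2
    out = np.flip(patch, axis=1).copy()  # flip the whole patch (outer region)
    inner_win = patch[cy - inner:cy + inner + 1, cx - inner:cx + inner + 1]
    # overwrite the center with an independently flipped inner window
    out[cy - inner:cy + inner + 1, cx - inner:cx + inner + 1] = np.flip(inner_win, axis=0)
    return out

# usage: augment a 9x9 patch with 200 spectral bands
augmented = dual_flip(np.random.rand(9, 9, 200), inner=2)
```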

    Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    Hyperspectral imaging provides the capability of increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. This abundant spectral knowledge allows all available information in the data to be mined. These superior qualities of hyperspectral imaging enable wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, relying merely on the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference.
    Fusion strategies are widely adopted in data processing to achieve better performance, especially in the fields of classification and clustering. There are mainly three types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that is expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g., Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than each single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset.
    One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment.
    In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
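    As a concrete illustration of the graph-based framework described above, the following is a minimal sketch (not the thesis's method): it builds a k-nearest-neighbour graph over HSI pixels with Gaussian edge weights measured from spectral proximity, and forms the unnormalised graph Laplacian used by many clustering and feature extraction algorithms. The choices of k and sigma are illustrative.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def hsi_graph_laplacian(X: np.ndarray, k: int = 10, sigma: float = 1.0) -> np.ndarray:
    """Build a k-NN graph over HSI pixels (each row of X is a spectral
    vector) with Gaussian edge weights, and return the unnormalised
    graph Laplacian L = D - W."""
    W = kneighbors_graph(X, k, mode="distance")       # sparse distance graph
    W.data = np.exp(-W.data ** 2 / (2 * sigma ** 2))  # distances -> similarities
    W = 0.5 * (W + W.T)                               # symmetrise the graph
    degrees = np.asarray(W.sum(axis=1)).ravel()
    return np.diag(degrees) - W.toarray()

# usage: 500 pixels with 100 spectral bands each
L = hsi_graph_laplacian(np.random.rand(500, 100), k=10, sigma=0.5)
```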

    Classification of Compact Polarimetric Synthetic Aperture Radar Images

    The RADARSAT Constellation Mission (RCM) was launched in June 2019. RCM, in addition to dual-polarization (DP) and fully quad-polarimetric (QP) imaging modes, provides compact polarimetric (CP) mode data. A CP synthetic aperture radar (SAR) is a coherent DP system in which a single circular polarization is transmitted, followed by reception in two orthogonal linear polarizations. A CP SAR fully characterizes the backscattered field using the Stokes parameters or, equivalently, the complex coherence matrix. This is the main advantage of a CP SAR over the traditional (non-coherent) DP SAR. Therefore, designing scene segmentation and classification methods using CP complex coherence matrix data is advocated in this thesis.
    Scene classification of remotely captured images is an important task in monitoring the Earth's surface. The high-resolution RCM CP SAR data can be used for land cover classification as well as sea-ice mapping. Mapping sea ice formed in ocean bodies is important for ship navigation and climate change modeling. The Canadian Ice Service (CIS) has expert ice analysts who manually generate sea-ice maps of Arctic areas on a daily basis. An automated sea-ice mapping process that can provide detailed yet reliable maps of ice types and water is desirable for CIS. In addition to linear DP SAR data in ScanSAR mode (500 km swath), RCM wide-swath CP data (350 km) can also be used in operational sea-ice mapping of the vast expanses of the Arctic. The smaller swath coverage of QP SAR data (50 km) limits its use for sea-ice mapping.
    This thesis involves the design and development of CP classification methods that consist of two steps: an unsupervised segmentation of CP data to identify homogeneous regions (superpixels), and a labeling step where a ground truth label is assigned to each superpixel. An unsupervised segmentation algorithm, called CP-IRGS, is developed for CP data based on the existing Iterative Region Growing using Semantics (IRGS). The feature model and spatial context model energy terms in CP-IRGS are developed based on the statistical properties of CP complex coherence matrix data. The superpixels generated by CP-IRGS are then used in a graph-based labeling method that incorporates the global spatial correlation among superpixels in CP data. The classifications of sea-ice and land cover types using test scenes indicate that (a) CP scenes provide better sea-ice classification than linear DP scenes, (b) CP-IRGS performs more accurate segmentation than segmentation using only CP channel intensity images, and (c) using global spatial information (provided by a graph-based labeling approach) improves classification accuracy over methods that do not exploit global spatial correlation.
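    For concreteness, the quantities named above can be sketched as follows. This is an illustrative computation, not the CP-IRGS model: given the two received linear channels of a CP SAR as complex arrays (the names `e_rh` and `e_rv` are assumptions), it forms the multilooked 2x2 complex coherence matrix and the equivalent Stokes vector. The sign convention for the fourth Stokes parameter varies in the literature.

```python
import numpy as np

def cp_coherence_and_stokes(e_rh: np.ndarray, e_rv: np.ndarray):
    """From the two received linear channels (complex arrays), form the
    multilooked 2x2 complex coherence matrix and the equivalent Stokes
    vector. Averaging here is over the whole array; in practice a
    sliding (multilook) window would be used."""
    c11 = np.mean(np.abs(e_rh) ** 2)
    c22 = np.mean(np.abs(e_rv) ** 2)
    c12 = np.mean(e_rh * np.conj(e_rv))
    C = np.array([[c11, c12], [np.conj(c12), c22]])
    # Stokes parameters; the sign of g3 depends on the convention used
    g = np.array([c11 + c22, c11 - c22, 2 * c12.real, -2 * c12.imag])
    return C, g
```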

    Reducing the Burden of Aerial Image Labelling Through Human-in-the-Loop Machine Learning Methods

    This dissertation presents an introduction to human-in-the-loop deep learning methods for remote sensing applications. It is motivated by the need to decrease the time spent by volunteers on semantic segmentation of remote sensing imagery. We look at two human-in-the-loop approaches to speeding up the labelling of remote sensing data: interactive segmentation and active learning. We develop these methods specifically in response to the needs of disaster relief organisations, which require accurately labelled maps of disaster-stricken regions quickly in order to respond to the needs of the affected communities. To begin, we survey the current approaches used within the field. We analyse the shortcomings of these models, which include outputs ill-suited for uploading to mapping databases and an inability to label new regions well when the new regions differ from the regions trained on. The methods developed then address these shortcomings.
    We first develop an interactive segmentation algorithm. Interactive segmentation aims to segment objects with a supervisory signal from a user to assist the model. Work within interactive segmentation has focused largely on segmenting one or a few objects within an image. We make a few adaptations to allow an existing method to scale to remote sensing applications, where there are tens of objects within a single image that need to be segmented. We show quantitative improvements of up to 18% in mean intersection over union, as well as qualitative improvements. The algorithm works well when labelling new regions, and the qualitative improvements show outputs more suitable for uploading to mapping databases.
    We then investigate active learning in the context of remote sensing. Active learning looks at reducing the number of labelled samples required by a model to achieve an acceptable performance level. Within the context of deep learning, the utility of the various active learning strategies developed is uncertain, with conflicting results in the literature. We evaluate and compare a variety of sample acquisition strategies on semantic segmentation tasks in scenarios relevant to disaster relief mapping. Our results show that all active learning strategies evaluated provide minimal performance increases over a simple random sample acquisition strategy. However, we present analysis of the results illustrating how the various strategies work, and intuition about when certain active learning strategies might be preferred. This analysis could be used to inform future research. We conclude by providing examples of the synergies of these two approaches, and indicate how this work, on reducing the burden of aerial image labelling for the disaster relief mapping community, can be further extended.
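    To make the comparison concrete, the following is a minimal sketch of one uncertainty-based acquisition strategy (mean per-pixel predictive entropy) alongside the random baseline that the dissertation reports as competitive. It is one illustrative strategy, not the dissertation's exact set.

```python
import numpy as np

def entropy_acquire(probs: np.ndarray, n: int) -> np.ndarray:
    """Select the n pool images with the highest mean per-pixel
    predictive entropy. probs has shape (images, classes, H, W) and
    sums to 1 over the class axis."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)  # (images, H, W)
    return np.argsort(entropy.mean(axis=(1, 2)))[-n:]

def random_acquire(pool_size: int, n: int, seed: int = 0) -> np.ndarray:
    """Baseline: draw n indices uniformly from the unlabelled pool."""
    return np.random.default_rng(seed).choice(pool_size, size=n, replace=False)
```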

    Locality sensitive modelling approach for object detection, tracking and segmentation in biomedical images

    Biomedical imaging techniques play an important role in the visualisation of, e.g., biological structures, tissues, diseases and medical conditions at the cellular level. These techniques produce enormous image datasets for studying biological processes, clinical diagnosis and medical analysis. Thanks to recent advances in computer technology and hardware, automatic analysis of biomedical images has become more feasible and popular. Although computer scientists have made a great effort in developing advanced image processing algorithms, many problems regarding object analysis still remain unsolved due to the diversity of biomedical imaging. In this thesis, we focus on developing object analysis solutions for two entirely different biomedical image types: fluorescence microscopy sequences and endometrial histology images. In fluorescence microscopy, our task is to track massive numbers of fluorescent spots with similar appearances and complicated motion patterns in noisy environments over hundreds of frames. In endometrial histology, we are challenged by detecting different types of cells with similar appearance in terms of colour and morphology. The proposed solutions utilise several novel locality sensitive models which can extract spatial and/or temporal relational features of the objects, i.e., local neighbouring objects exhibiting certain structures or patterns, for overcoming the difficulties of object analysis in fluorescence microscopy and endometrial histology.
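    As an illustration of the locality-sensitive idea, a simple relational descriptor can be built from each detected object's local neighbourhood. The sketch below is an illustrative stand-in for the thesis's models, not their implementation: it describes each detected spot or cell centre by the offsets to its k nearest neighbours.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_relational_features(centres: np.ndarray, k: int = 5) -> np.ndarray:
    """For each detected object centre (rows of `centres`, shape (N, 2)),
    return the offsets to its k nearest neighbours as a flat feature
    vector capturing the local spatial structure around the object."""
    tree = cKDTree(centres)
    _, idx = tree.query(centres, k=k + 1)     # neighbour 0 is the point itself
    offsets = centres[idx[:, 1:]] - centres[:, None, :]
    return offsets.reshape(len(centres), -1)  # (N, 2k)

# usage: relational descriptors for 100 detected spots in a 512x512 frame
features = local_relational_features(np.random.rand(100, 2) * 512, k=5)
```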

    Video Classification System Based on Representation Techniques Using Kernel Methods and Bayesian Inference

    In this work, we proposed different feature representation strategies for video processing. Our main objective is to reveal discriminative patterns in video data to improve the computer vision task of human action recognition. To this end, we proposed using a kernel-based relevance analysis to identify the most relevant descriptors related to action recognition. In addition, the proposal allows computing a linear projection matrix to map video samples into a new space, where class separability is preserved and the dimensionality of the representation is reduced.
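    A minimal sketch of the relevance stage described above might look as follows. It assumes centred kernel-target alignment as the relevance measure, which is one common kernel-based choice rather than the authors' exact formulation.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_relevance(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Score each descriptor (column of X) by the centred alignment
    between an RBF kernel on that feature alone and the ideal label
    kernel L (1 where labels match, 0 otherwise)."""
    L = (y[:, None] == y[None, :]).astype(float)
    Lc = L - L.mean(0) - L.mean(1)[:, None] + L.mean()  # double-centred
    scores = []
    for j in range(X.shape[1]):
        K = rbf_kernel(X[:, j:j + 1])
        Kc = K - K.mean(0) - K.mean(1)[:, None] + K.mean()
        scores.append((Kc * Lc).sum() /
                      (np.linalg.norm(Kc) * np.linalg.norm(Lc)))
    return np.array(scores)
```

    The resulting scores could then weight the descriptors before computing a linear projection (for example, via linear discriminant analysis) that preserves class separability while reducing dimensionality.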

    Deep Vision in Optical Imagery: From Perception to Reasoning

    Deep learning has achieved extraordinary success in a wide range of tasks in the computer vision field over the past years. Remote sensing data present different properties compared to natural images/videos, due to their unique imaging techniques, shooting angles, etc. For instance, hyperspectral images usually have hundreds of spectral bands, offering additional information, and the size of objects (e.g., vehicles) in remote sensing images is quite limited, which brings challenges for detection or segmentation tasks. This thesis focuses on two kinds of remote sensing data, namely hyper/multi-spectral and high-resolution images, and explores several methods to try to find answers to the following questions:
    - In comparison with natural images or videos in computer vision, the unique asset of hyper/multi-spectral data is their rich spectral information. But what does this “additional” information bring to learning a network? And how do we take full advantage of these spectral bands?
    - Remote sensing images at high resolution have quite different characteristics, bringing challenges for several tasks, for example, small object segmentation. Can we devise networks tailored to such tasks?
    - Deep networks have produced stunning results in a variety of perception tasks, e.g., image classification, object detection, and semantic segmentation, while the capacity to reason about relations over space is vital for intelligent species. Can a network/module with the capacity for reasoning benefit the parsing of remote sensing data?
    To this end, a couple of networks are devised to figure out what a network learns from hyperspectral images and how to efficiently use spectral bands. In addition, a multi-task learning network is investigated for the instance segmentation of vehicles from aerial images and videos. Finally, relational reasoning modules are designed to improve semantic segmentation of aerial images.
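    As an example of what such a relational reasoning module can look like, the following PyTorch sketch implements a generic non-local (self-attention) block over spatial positions, so each pixel's output aggregates evidence from the whole image. It is an illustrative design, not the module proposed in the thesis.

```python
import torch
import torch.nn as nn

class SpatialRelationModule(nn.Module):
    """Illustrative relational reasoning block: every spatial position
    attends to every other one, so long-range relations over space can
    inform the segmentation of each pixel."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.k = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (B, HW, C/2)
        k = self.k(x).flatten(2)                   # (B, C/2, HW)
        v = self.v(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        attn = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)  # (B, HW, HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                             # residual connection

# usage: refine a 64-channel feature map from a segmentation backbone
refined = SpatialRelationModule(64)(torch.randn(1, 64, 32, 32))
```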