
    Deep feature fusion via two-stream convolutional neural network for hyperspectral image classification

    The representation power of convolutional neural network (CNN) models for hyperspectral image (HSI) analysis is in practice limited by the amount of available labeled samples, which is often insufficient to sustain deep networks with many parameters. We propose a novel approach to boost the network representation power with a two-stream 2-D CNN architecture. The proposed method simultaneously extracts spectral features and local and global spatial features with two 2-D CNN networks, and makes use of channel correlations to identify the most informative features. Moreover, we propose a layer-specific regularization and a smooth normalization fusion scheme to adaptively learn the fusion weights for the spectral-spatial features from the two parallel streams. An important asset of our model is the simultaneous training of the feature extraction, fusion, and classification processes with the same cost function. Experimental results on several hyperspectral data sets demonstrate the efficacy of the proposed method compared with state-of-the-art methods in the field.
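
    The following sketch illustrates the two-stream idea described above: two small 2-D CNN branches extract spectral and spatial features, and softmax-normalized weights are learned to fuse them under a single classification loss. It is a minimal PyTorch illustration with placeholder layer sizes, not the authors' architecture.

```python
# Minimal two-stream CNN sketch with softmax-normalized fusion weights
# (hypothetical layer sizes; not the paper's exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamHSI(nn.Module):
    def __init__(self, n_bands, n_classes):
        super().__init__()
        # Spectral stream: treats the band axis as a 2-D "image" of shape (1, n_bands, 1)
        self.spectral = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(7, 1), padding=(3, 0)), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Spatial stream: small patch around the target pixel, all bands as channels
        self.spatial = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Learnable fusion weights, normalized to sum to one
        self.fusion_logits = nn.Parameter(torch.zeros(2))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, patch_cube):
        # patch_cube: (B, n_bands, patch, patch)
        B, C, H, W = patch_cube.shape
        center = patch_cube[:, :, H // 2, W // 2]          # (B, n_bands) centre spectrum
        spec = self.spectral(center.view(B, 1, C, 1))      # (B, 32)
        spat = self.spatial(patch_cube)                    # (B, 32)
        w = F.softmax(self.fusion_logits, dim=0)           # adaptive fusion weights
        fused = w[0] * spec + w[1] * spat
        return self.classifier(fused)

model = TwoStreamHSI(n_bands=103, n_classes=9)
logits = model(torch.randn(4, 103, 9, 9))                  # fake batch of 9x9 patches
loss = F.cross_entropy(logits, torch.randint(0, 9, (4,)))  # one loss trains both streams
loss.backward()
```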

    An Approach for the Customized High-Dimensional Segmentation of Remote Sensing Hyperspectral Images

    This paper addresses three problems in the field of hyperspectral image segmentation: the fact that how an image must be segmented depends on what the user requires and on the application; the scarcity and cost of appropriately labeled reference images; and, finally, the information loss that arises in many algorithms when high-dimensional images are projected onto lower-dimensional spaces before the segmentation process starts. To address these issues, the Multi-Gradient based Cellular Automaton (MGCA) structure is proposed to segment multidimensional images without projecting them to lower-dimensional spaces. The MGCA structure is coupled with an evolutionary algorithm (ECAS-II) in order to produce the transition rule sets required by MGCA segmenters. These sets are customized to specific segmentation needs as a function of a set of low-dimensional training images in which the user expresses their segmentation requirements. Constructing high-dimensional image segmenters from low-dimensional training sets alleviates the problem of the lack of labeled training images: these can be generated online based on a parametrization of the desired segmentation extracted from a set of examples. The strategy has been tested in experiments carried out using synthetic and real hyperspectral images, and it has been compared to state-of-the-art segmentation approaches over benchmark images in the area of remote sensing hyperspectral imaging. Funding: Ministerio de Economía y Competitividad TIN2015-63646-C5-1-R; Ministerio de Economía y Competitividad RTI2018-101114-B-I00; Xunta de Galicia ED431C 2017/1.
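
    As an illustration of the cellular-automaton segmentation idea, the sketch below grows labels between neighbouring pixels whenever the spectral gradient, computed in the full band space rather than a projected space, falls below a threshold. The threshold stands in for the transition rules that ECAS-II would evolve; this is a simplified illustration, not the MGCA algorithm itself.

```python
# Minimal cellular-automaton segmentation sketch: labels spread between neighbouring
# pixels when the full-dimensional spectral gradient is small. The threshold plays the
# role of the evolved transition rule and is a placeholder here.
import numpy as np

def ca_segment(cube, seeds, threshold=0.1, n_iter=50):
    """cube: (H, W, B) hyperspectral image; seeds: (H, W) int labels, 0 = unlabeled."""
    H, W, _ = cube.shape
    labels = seeds.copy()
    for _ in range(n_iter):
        updated = labels.copy()
        for y in range(H):
            for x in range(W):
                if labels[y, x] != 0:
                    continue
                best, best_d = 0, np.inf
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != 0:
                        d = np.linalg.norm(cube[y, x] - cube[ny, nx])  # gradient in band space
                        if d < best_d:
                            best, best_d = labels[ny, nx], d
                if best and best_d < threshold:       # transition rule: adopt the label
                    updated[y, x] = best
        if np.array_equal(updated, labels):
            break
        labels = updated
    return labels

# Toy example: two spectrally distinct regions, one seed each
cube = np.zeros((20, 20, 50))
cube[:, 10:] += 1.0
seeds = np.zeros((20, 20), dtype=int)
seeds[0, 0], seeds[0, 19] = 1, 2
print(np.unique(ca_segment(cube, seeds, threshold=0.5)))
```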

    (An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods

    Imaging is omnipresent in modern society, with imaging devices based on a zoo of physical principles probing a specimen across different wavelengths, energies, and times. Recent years have seen a change in the imaging landscape, with more and more imaging devices combining modalities that were previously used separately. Motivated by these hardware developments, an ever-increasing set of mathematical ideas is appearing on how data from different imaging modalities or channels can be synergistically combined in the image reconstruction process, exploiting structural and/or functional correlations between the multiple images. Here we review these developments, give pointers to important challenges, and provide an outlook as to how the field may develop in the forthcoming years. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
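
    A toy example of synergistic coupling is joint regularization: the sketch below denoises two channels of the same scene with a joint total-variation penalty, so edge structure found in one channel supports the same edges in the other. It is a generic illustration of the principle, not any specific method from the reviewed literature; the step size and weights are arbitrary.

```python
# Illustrative sketch: two noisy channels of the same scene are denoised jointly with a
# joint total-variation penalty, coupling their gradients at every pixel.
import numpy as np

def joint_tv_grad(u):
    """u: (C, H, W). Gradient of sum over pixels of sqrt(sum over channels |grad u|^2 + eps)."""
    eps = 1e-6
    dx = np.diff(u, axis=2, append=u[:, :, -1:])
    dy = np.diff(u, axis=1, append=u[:, -1:, :])
    mag = np.sqrt((dx ** 2 + dy ** 2).sum(axis=0, keepdims=True) + eps)
    px, py = dx / mag, dy / mag
    # negative divergence of the normalized gradient field
    div = np.diff(px, axis=2, prepend=0) + np.diff(py, axis=1, prepend=0)
    return -div

def joint_denoise(y, lam=0.15, step=0.2, n_iter=200):
    u = y.copy()
    for _ in range(n_iter):
        u -= step * ((u - y) + lam * joint_tv_grad(u))   # data fidelity + joint TV
    return u

# Toy example: the same edge observed in two channels with independent noise
clean = np.zeros((2, 64, 64))
clean[:, :, 32:] = 1.0
noisy = clean + 0.3 * np.random.randn(*clean.shape)
print(np.abs(noisy - clean).mean(), np.abs(joint_denoise(noisy) - clean).mean())
```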

    Reconstruction algorithms for multispectral diffraction imaging

    Thesis (Ph.D.), Boston University. In conventional Computed Tomography (CT) systems, a single X-ray source spectrum is used to radiate an object and the total transmitted intensity is measured to construct the spatial linear attenuation coefficient (LAC) distribution. Such scalar information is adequate for visualization of interior physical structures, but additional dimensions would be useful to characterize the nature of the structures. By imaging with broadband radiation and collecting energy-sensitive measurements, one can generate images of additional energy-dependent properties that can be used to characterize the nature of specific areas in the object of interest. In this thesis, we explore novel imaging modalities that use broadband sources and energy-sensitive detection to generate images of energy-dependent properties of a region, with the objective of providing high-quality information for material component identification. We explore two classes of imaging problems: 1) excitation using broad-spectrum sub-millimeter radiation in the Terahertz regime and measurement of the diffracted Terahertz (THz) field to construct the spatial distribution of the complex refractive index at multiple frequencies; 2) excitation using broad-spectrum X-ray sources and measurement of coherent scatter radiation to image the spatial distribution of coherent-scatter form factors. For these modalities, we extend approaches developed for multimodal imaging and propose new reconstruction algorithms that impose regularization structure such as common object boundaries across reconstructed regions at different frequencies. We also explore reconstruction techniques that incorporate prior knowledge in the form of spectral parametrization and sparse representations over redundant dictionaries, and explore the advantages and disadvantages of these techniques in terms of image quality and potential for accurate material characterization. We use the proposed reconstruction techniques to explore alternative architectures with reduced scanning time and increased signal-to-noise ratio, including THz diffraction tomography, limited-angle X-ray diffraction tomography, and the use of coded aperture masks. Numerical experiments and Monte Carlo simulations were conducted to compare the performance of the developed methods and to validate the studied architectures as viable options for imaging of energy-dependent properties.
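
    One simple way to encode shared structure across frequencies, in the spirit of the common-boundary and sparse-representation priors mentioned above, is joint (row) sparsity of the per-frequency coefficients. The sketch below is an illustrative group iterative shrinkage scheme on synthetic data, not a reconstruction algorithm from the thesis; the sensing matrix and sizes are placeholders.

```python
# Sketch: recover per-frequency coefficient vectors that share a common support,
# by iterative shrinkage with a group (L2,1) threshold across frequencies.
import numpy as np

def group_ista(A, Y, lam=0.1, n_iter=300):
    """A: (m, n) sensing matrix; Y: (m, F) one measurement column per frequency."""
    X = np.zeros((A.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        X = X - step * A.T @ (A @ X - Y)                  # gradient step, all frequencies
        norms = np.linalg.norm(X, axis=1, keepdims=True)  # joint (row) energy
        X = np.maximum(1 - step * lam / np.maximum(norms, 1e-12), 0) * X  # group shrink
    return X

rng = np.random.default_rng(0)
n, m, F, k = 100, 40, 3, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, F))
X_true[rng.choice(n, k, replace=False)] = rng.standard_normal((k, F))  # shared support
Y = A @ X_true
X_hat = group_ista(A, Y, lam=0.01)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```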

    Automated Remote Sensing Image Interpretation with Limited Labeled Training Data

    Automated remote sensing image interpretation has been investigated for more than a decade. In early years, most work was based on the assumption that there are sufficient labeled samples to be used for training. However, ground-truth collection is a very tedious and time-consuming task, and sometimes very expensive, especially in the field of remote sensing, which usually relies on field surveys to collect ground truth. In recent years, with the development of advanced machine learning techniques, remote sensing image interpretation with limited ground truth has caught the attention of researchers in the fields of both remote sensing and computer science. Three approaches that focus on different aspects of the interpretation process, i.e., feature extraction, classification, and segmentation, are proposed to deal with the limited ground truth problem. First, feature extraction techniques, which usually serve as a pre-processing step for remote sensing image classification, are explored. Instead of focusing only on feature extraction, a joint feature extraction and classification framework is proposed based on ensemble local manifold learning. Second, classifiers in the case of limited labeled training data are investigated, and an enhanced ensemble learning method that outperforms state-of-the-art classification methods is proposed. Third, image segmentation techniques are investigated with the aid of unlabeled samples and spatial information. A semi-supervised self-training method is proposed, which is capable of expanding the number of training samples on its own and hence improving classification performance iteratively. Experiments show that the proposed approaches outperform state-of-the-art techniques in terms of classification accuracy on benchmark remote sensing datasets.
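
    The self-training loop can be summarized in a few lines: train on the labeled set, pseudo-label the most confident unlabeled samples, and retrain. The sketch below is a generic version using a random forest and a fixed confidence threshold, both placeholder choices rather than the thesis' ensemble method.

```python
# Generic self-training sketch: a classifier is retrained after adding its own most
# confident predictions on unlabeled samples to the training set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_lab, y_lab, X_unlab, n_rounds=5, conf=0.9):
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(n_rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= conf          # pseudo-label only confident samples
        if not confident.any():
            break
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]
    return clf

# Toy spectra: two classes, few labels, many unlabeled samples
rng = np.random.default_rng(1)
X_lab = np.vstack([rng.normal(0, 1, (5, 20)), rng.normal(3, 1, (5, 20))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(0, 1, (200, 20)), rng.normal(3, 1, (200, 20))])
model = self_train(X_lab, y_lab, X_unlab)
print(model.predict(rng.normal(3, 1, (3, 20))))
```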

    Unsupervised Spectral-Spatial Feature Learning via Deep Residual Conv-Deconv Network for Hyperspectral Image Classification

    Supervised approaches classify input data using a set of representative samples for each class, known as training samples. The collection of such samples is expensive and time-demanding. Hence, unsupervised feature learning, which has quick access to arbitrary amounts of unlabeled data, is conceptually of high interest. In this paper, we propose a novel network architecture, a fully Conv-Deconv network, for unsupervised spectral-spatial feature learning of hyperspectral images, which can be trained in an end-to-end manner. Specifically, our network is based on the so-called encoder-decoder paradigm, i.e., the input 3-D hyperspectral patch is first transformed into a typically lower-dimensional space via a convolutional subnetwork (encoder) and then expanded to reproduce the initial data by a deconvolutional subnetwork (decoder). However, during the experiments, we found that such a network is not easy to optimize. To address this problem, we refine the proposed network architecture by incorporating: 1) residual learning and 2) a new unpooling operation that can use memorized max-pooling indexes. Moreover, to understand the "black box," we make an in-depth study of the learned feature maps in the experimental analysis. A very interesting discovery is that some specific "neurons" in the first residual block of the proposed network have good descriptive power for semantic visual patterns at the object level, which provides an opportunity to achieve "free" object detection. This paper, for the first time in the remote sensing community, proposes an end-to-end fully Conv-Deconv network for unsupervised spectral-spatial feature learning, and it also introduces an in-depth investigation of the learned features. Experimental results on two widely used hyperspectral data sets, Indian Pines and Pavia University, demonstrate competitive performance of the proposed methodology compared with other studied approaches.
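
    The encoder-decoder with memorized max-pooling indexes and residual learning can be sketched compactly in PyTorch, as below. Layer sizes, patch size, and the single pooling stage are placeholders for illustration, not the network reported in the paper.

```python
# Minimal sketch of the encoder-decoder idea: a convolutional encoder whose max-pooling
# indices are remembered, a deconvolutional decoder that unpools with the same indices,
# and a simple residual block. Trained with an unsupervised reconstruction loss.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.body(x))   # residual learning eases optimization

class ConvDeconvAE(nn.Module):
    def __init__(self, bands=103):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(), ResBlock(64))
        self.pool = nn.MaxPool2d(2, return_indices=True)   # remember max locations
        self.unpool = nn.MaxUnpool2d(2)                    # reuse them when expanding
        self.dec = nn.Sequential(ResBlock(64), nn.ConvTranspose2d(64, bands, 3, padding=1))
    def forward(self, x):
        f = self.enc(x)
        p, idx = self.pool(f)
        u = self.unpool(p, idx, output_size=f.shape)
        return self.dec(u)

x = torch.randn(2, 103, 8, 8)                  # unlabeled 8x8 hyperspectral patches
model = ConvDeconvAE()
recon = model(x)
loss = torch.mean((recon - x) ** 2)            # unsupervised reconstruction objective
loss.backward()
print(recon.shape)
```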

    Rich probabilistic models for semantic labeling

    The goal of this monograph is to explore the methods and applications of semantic labeling. Our contributions to this rapidly developing topic concern specific aspects of modeling and inference in probabilistic models and their applications in the interdisciplinary areas of computer vision, medical image processing, and remote sensing.

    Hyperspectral-Augmented Target Tracking

    With the global war on terrorism, the nature of military warfare has changed significantly. The United States Air Force is at the forefront of research and development in the field of intelligence, surveillance, and reconnaissance that provides American forces on the ground and in the air with the capability to seek, monitor, and destroy mobile terrorist targets in hostile territory. One such capability recognizes and persistently tracks multiple moving vehicles in complex, highly ambiguous urban environments. This thesis investigates the feasibility of augmenting a multiple-target tracking (MTT) system with hyperspectral imagery. The research effort evaluates hyperspectral data classification using the fuzzy c-means and self-organizing map clustering algorithms for remote identification of moving vehicles. Results demonstrate a 29.33% gain in performance from the baseline kinematic-only tracking to the hyperspectral-augmented tracking. Through a novel methodology, the hyperspectral observations are integrated into the MTT paradigm. Furthermore, several novel ideas are developed and implemented: spectral gating of hyperspectral observations, a cost function for hyperspectral observation-to-track association, and a self-organizing map filtering method. Relatively little work in the target tracking and hyperspectral image classification literature appears to address these areas. Finally, two hyperspectral sensor modes are evaluated: Pushbroom and Region-of-Interest. Both modes are based on realistic technologies, and investigating their performance is the goal of performance-driven sensing. Performance comparison of the two modes can drive the future design of hyperspectral sensors.
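
    The sketch below illustrates how spectral information can gate and re-weight observation-to-track association: observations whose spectral angle to a track's reference spectrum exceeds a gate are excluded, and the remaining pairs receive a combined kinematic-plus-spectral cost. The gate and weighting values are arbitrary placeholders, not those used in the thesis.

```python
# Illustrative sketch of spectral gating and a combined association cost for a
# multiple-target tracker. Values of gate_angle and alpha are hypothetical.
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; a small angle means similar material."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def association_cost(track_pos, track_spec, obs_pos, obs_spec,
                     gate_angle=0.15, alpha=0.5):
    """Return a cost matrix; spectrally gated-out pairs get an infinite cost."""
    n_t, n_o = len(track_pos), len(obs_pos)
    C = np.full((n_t, n_o), np.inf)
    for i in range(n_t):
        for j in range(n_o):
            ang = spectral_angle(track_spec[i], obs_spec[j])
            if ang > gate_angle:                       # spectral gating
                continue
            kin = np.linalg.norm(track_pos[i] - obs_pos[j])
            C[i, j] = alpha * kin + (1 - alpha) * ang  # combined cost
    return C

rng = np.random.default_rng(2)
track_pos = np.array([[0.0, 0.0], [10.0, 10.0]])
track_spec = rng.random((2, 50))
obs_pos = track_pos + 0.5 * rng.standard_normal((2, 2))
obs_spec = track_spec + 0.01 * rng.standard_normal((2, 50))   # near-identical spectra
print(association_cost(track_pos, track_spec, obs_pos, obs_spec))
```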

    Compressive sensing for signal ensembles

    Compressive sensing (CS) is a new approach to simultaneous sensing and compression that enables a potentially large reduction in the sampling and computation costs for acquisition of signals having a sparse or compressible representation in some basis. The CS literature has focused almost exclusively on problems involving single signals in one or two dimensions. However, many important applications involve distributed networks or arrays of sensors. In other applications, the signal is inherently multidimensional and sensed progressively along a subset of its dimensions; examples include hyperspectral imaging and video acquisition. Initial work proposed joint sparsity models for signal ensembles that exploit both intra- and inter-signal correlation structures. Joint sparsity models enable a reduction in the total number of compressive measurements required by CS through the use of specially tailored recovery algorithms. This thesis reviews several different models for sparsity and compressibility of signal ensembles and multidimensional signals and proposes practical CS measurement schemes for these settings. For joint sparsity models, we evaluate the minimum number of measurements required under a recovery algorithm with combinatorial complexity. We also propose a framework for CS that uses a union-of-subspaces signal model. This framework leverages the structure present in certain sparse signals and can exploit both intra- and inter-signal correlations in signal ensembles. We formulate signal recovery algorithms that employ these new models to enable a reduction in the number of measurements required. Additionally, we propose the use of Kronecker product matrices as sparsity or compressibility bases for signal ensembles and multidimensional signals to jointly model all types of correlation present in the signal when each type of correlation can be expressed using sparsity. We compare the performance of standard global measurement ensembles, which act on all of the signal samples; partitioned measurements, which act on a partition of the signal with a given measurement depending only on a piece of the signal; and Kronecker product measurements, which can be implemented in distributed measurement settings. The Kronecker product formulation in the sparsity and measurement settings enables the derivation of analytical bounds for transform coding compression of signal ensembles and multidimensional signals. We also provide new theoretical results for performance of CS recovery when Kronecker product matrices are used, which in turn motivates new design criteria for distributed CS measurement schemes.
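
    The computational appeal of Kronecker product measurement matrices can be seen numerically: measuring the vectorized multidimensional signal with kron(Phi1, Phi2) is identical to applying the two factor operators along the separate dimensions, so the global matrix never has to be formed. The sketch below checks this identity on synthetic data; the operators and sizes are arbitrary.

```python
# Small numerical check of the Kronecker measurement identity
# (Phi1 kron Phi2) vec(X) = vec(Phi2 X Phi1^T), with column-major vectorization.
import numpy as np

rng = np.random.default_rng(3)
n1, n2, m1, m2 = 12, 10, 5, 4
Phi1 = rng.standard_normal((m1, n1))   # e.g., spectral measurement operator
Phi2 = rng.standard_normal((m2, n2))   # e.g., spatial measurement operator
X = rng.standard_normal((n2, n1))      # 2-D signal, one axis per dimension

# Global measurement with the explicit Kronecker matrix
y_global = np.kron(Phi1, Phi2) @ X.flatten(order="F")

# Equivalent separable measurement: act on each dimension independently
y_separable = (Phi2 @ X @ Phi1.T).flatten(order="F")

print(np.allclose(y_global, y_separable))   # same measurements, no large matrix needed
```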