
    Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution

    In many computer vision applications, obtaining images of high resolution in both the spatial and spectral domains is equally important. However, due to hardware limitations, one can only expect to acquire images of high resolution in either the spatial or the spectral domain. This paper focuses on hyperspectral image super-resolution (HSI-SR), where a hyperspectral image (HSI) with low spatial resolution (LR) but high spectral resolution is fused with a multispectral image (MSI) with high spatial resolution (HR) but low spectral resolution to obtain an HR HSI. Existing deep learning-based solutions are all supervised, requiring a large training set and the availability of HR HSIs, which is unrealistic. Here, we make the first attempt at solving the HSI-SR problem with an unsupervised encoder-decoder architecture that has the following unique features. First, it is composed of two encoder-decoder networks coupled through a shared decoder, in order to preserve the rich spectral information from the HSI network. Second, the network encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI. Third, the angular difference between the representations is minimized in order to reduce spectral distortion. We refer to the proposed architecture as the unsupervised Sparse Dirichlet-Net, or uSDN. Extensive experimental results demonstrate the superior performance of uSDN compared to the state of the art. Comment: Accepted by the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018, Spotlight).
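The two constraints the abstract mentions can be made concrete with a small sketch. This is not the authors' network, only a toy NumPy illustration under assumptions: a softmax stands in for the sparse Dirichlet-like simplex constraint (non-negative representations that sum to one), a random matrix stands in for the shared decoder, and all names and sizes (`simplex_softmax`, 5 endmembers, 31 bands) are hypothetical.

```python
import numpy as np

def simplex_softmax(z):
    """Map raw encoder outputs onto the probability simplex, mimicking
    the non-negativity and sum-to-one constraints that a Dirichlet-
    distributed representation enforces on abundance vectors."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def spectral_angle(a, b):
    """Angular difference (radians) between two representations; uSDN
    minimizes a loss of this form to limit spectral distortion."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Toy shared decoder: both branches reconstruct from their abundances
# via the same spectral basis, preserving the HSI spectral information.
rng = np.random.default_rng(0)
basis = rng.random((5, 31))              # 5 endmembers x 31 bands (toy sizes)
s_hsi = simplex_softmax(rng.random(5))   # HSI-branch representation
s_msi = simplex_softmax(rng.random(5))   # MSI-branch representation

recon = s_hsi @ basis                    # shared-decoder output (31 bands)
angle = spectral_angle(s_hsi, s_msi)     # angular alignment term
```

Training would push `angle` toward zero while both branches reconstruct their own inputs through the shared `basis`.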

    Advances in Hyperspectral Image Classification Methods for Vegetation and Agricultural Cropland Studies

    Hyperspectral data are becoming more widely available via sensors on airborne and unmanned aerial vehicle (UAV) platforms, as well as proximal platforms. While space-based hyperspectral data continue to be limited in availability, multiple spaceborne Earth-observing missions on traditional platforms are scheduled for launch, and companies are experimenting with small satellites for constellations to observe the Earth, as well as for planetary missions. Land cover mapping via classification is one of the most important applications of hyperspectral remote sensing and will increase in significance as time series of imagery become more readily available. However, while the narrow bands of hyperspectral data provide new opportunities for chemistry-based modeling and mapping, challenges remain. Hyperspectral data are high dimensional, and many bands are highly correlated or irrelevant for a given classification problem. For supervised classification methods, the quantity of training data is typically limited relative to the dimension of the input space. The resulting Hughes phenomenon, often referred to as the curse of dimensionality, increases the potential for unstable parameter estimates, overfitting, and poor generalization of classifiers. This is particularly problematic for parametric approaches such as Gaussian maximum likelihood-based classifiers that have been the backbone of pixel-based multispectral classification methods. This issue has motivated investigation of alternatives, including regularization of the class covariance matrices, ensembles of weak classifiers, development of feature selection and extraction methods, adoption of nonparametric classifiers, and exploration of methods to exploit unlabeled samples via semi-supervised and active learning. Data sets are also quite large, motivating computationally efficient algorithms and implementations.
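The covariance-regularization alternative mentioned above can be illustrated in a few lines. This is only a minimal sketch, assuming a simple diagonal-shrinkage scheme (one of several regularizers in the literature); the function name and the toy sizes (20 labeled pixels, 50 bands) are hypothetical.

```python
import numpy as np

def shrinkage_covariance(X, alpha=0.5):
    """Regularized class covariance: blend the sample covariance with
    its own diagonal. With fewer training pixels than bands, the raw
    estimate is rank-deficient and cannot be inverted for a Gaussian
    maximum-likelihood classifier; shrinkage restores full rank."""
    S = np.cov(X, rowvar=False)
    return (1.0 - alpha) * S + alpha * np.diag(np.diag(S))

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))   # 20 labeled pixels, 50 bands: n < d
S_raw = np.cov(X, rowvar=False)
S_reg = shrinkage_covariance(X, alpha=0.5)

# Raw sample covariance has rank <= n-1 < d; the shrunk estimate is
# positive definite (PSD term + positive diagonal) and full rank.
rank_raw = np.linalg.matrix_rank(S_raw)
rank_reg = np.linalg.matrix_rank(S_reg)
```

This is the Hughes phenomenon in miniature: the classifier's parameter estimate degenerates exactly when dimensionality outpaces the training sample.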
This chapter provides an overview of the recent advances in classification methods for mapping vegetation using hyperspectral data. Three data sets that are used in the hyperspectral classification literature (e.g., Botswana Hyperion satellite data and AVIRIS airborne data over both Kennedy Space Center and Indian Pines) are described in Section 3.2 and used to illustrate methods described in the chapter. An additional high-resolution hyperspectral data set acquired by a SpecTIR sensor on an airborne platform over the Indian Pines area is included to exemplify the use of new deep learning approaches, and a multiplatform example of airborne hyperspectral data is provided to demonstrate transfer learning in hyperspectral image classification. Classical approaches for supervised and unsupervised feature selection and extraction are reviewed in Section 3.3. In particular, nonlinearities exhibited in hyperspectral imagery have motivated development of nonlinear feature extraction methods in manifold learning, which are outlined in Section 3.3.1.4. Spatial context is also important in classification of both natural vegetation with complex textural patterns and large agricultural fields with significant local variability within fields. Approaches to exploit spatial features at both the pixel level (e.g., co-occurrence-based texture and extended morphological attribute profiles [EMAPs]) and integration of segmentation approaches (e.g., HSeg) are discussed in this context in Section 3.3.2. Recently, classification methods that leverage nonparametric methods originating in the machine learning community have grown in popularity. An overview of both widely used and newly emerging approaches, including support vector machines (SVMs), Gaussian mixture models, and deep learning based on convolutional neural networks, is provided in Section 3.4.
Strategies to exploit unlabeled samples, including active learning and metric learning, which combine feature extraction and augmentation of the pool of training samples in an active learning framework, are outlined in Section 3.5. Integration of image segmentation with classification to accommodate spatial coherence typically observed in vegetation is also explored, including as an integrated active learning system. Exploitation of multisensor strategies for augmenting the pool of training samples is investigated via a transfer learning framework in Section 3.5.1.2. Finally, we look to the future, considering opportunities soon to be provided by new paradigms, as hyperspectral sensing is becoming common at multiple scales, from ground-based and airborne autonomous vehicles to manned aircraft and space-based platforms.
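The active-learning query step that Section 3.5 builds on can be sketched minimally. This is only an illustrative least-confidence criterion, not the chapter's specific method; the function name and the toy probability table are hypothetical.

```python
import numpy as np

def least_confident(proba):
    """Select the unlabeled pixel whose top class probability is lowest,
    i.e., the one the current classifier is least certain about. Its
    label would then be requested and added to the training pool."""
    return int(np.argmin(proba.max(axis=1)))

# Toy posterior probabilities for 4 unlabeled pixels over 3 classes.
proba = np.array([
    [0.90, 0.05, 0.05],   # confident
    [0.40, 0.35, 0.25],   # uncertain -> should be queried
    [0.70, 0.20, 0.10],
    [0.85, 0.10, 0.05],
])
query = least_confident(proba)   # index of the pixel to label next
```

Iterating this query-label-retrain loop is what lets active learning grow an effective training set from a small initial pool.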

    Bayesian Fusion of Multi-Band Images

    This paper presents a Bayesian fusion technique for remotely sensed multi-band images. The observed images are related to the high spectral and high spatial resolution image to be recovered through physical degradations, e.g., spatial and spectral blurring and/or subsampling defined by the sensor characteristics. The fusion problem is formulated within a Bayesian estimation framework. An appropriate prior distribution exploiting geometrical considerations is introduced. To compute the Bayesian estimator of the scene of interest from its posterior distribution, a Markov chain Monte Carlo algorithm is designed to generate samples asymptotically distributed according to the target distribution. To efficiently sample from this high-dimensional distribution, a Hamiltonian Monte Carlo step is introduced within a Gibbs sampling strategy. The efficiency of the proposed fusion method is evaluated with respect to several state-of-the-art fusion techniques.
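The degradation model underlying this kind of fusion can be sketched as a forward simulation. This is only a toy stand-in under assumptions: average pooling approximates the spatial blur plus subsampling, a flat spectral response matrix approximates the sensor's band-integration, and all shapes and names (`spatial_degrade`, 31 bands, 8x8 pixels) are hypothetical.

```python
import numpy as np

def spatial_degrade(X, factor):
    """Average-pool each band by `factor`: a simple stand-in for the
    spatial blurring and subsampling that yields the low-resolution
    hyperspectral observation."""
    b, h, w = X.shape
    return X.reshape(b, h // factor, factor, w // factor, factor).mean(axis=(2, 4))

def spectral_degrade(X, R):
    """Apply a spectral response matrix R (msi_bands x hsi_bands) to
    collapse many narrow bands into a few broad multispectral bands."""
    return np.tensordot(R, X, axes=([1], [0]))

rng = np.random.default_rng(2)
truth = rng.random((31, 8, 8))     # unknown HR scene: 31 bands, 8x8 pixels
R = np.full((4, 31), 1.0 / 31)     # toy flat 4-band spectral response

y_hsi = spatial_degrade(truth, 4)  # LR hyperspectral observation
y_msi = spectral_degrade(truth, R) # HR multispectral observation
```

The Bayesian formulation inverts exactly this pair of maps: the posterior over `truth` is conditioned on `y_hsi` and `y_msi`, and the MCMC sampler explores it.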

    Hyperspectral Remote Sensing Data Analysis and Future Challenges


    Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification

    This paper proposes a novel deep learning framework named the bidirectional-convolutional long short-term memory (Bi-CLSTM) network to automatically learn spectral-spatial features from hyperspectral images (HSIs). In the network, spectral feature extraction is treated as a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it. Meanwhile, inspired by the widely used convolutional neural network (CNN), a convolution operator across the spatial domain is incorporated into the network to extract spatial features. In addition, to capture the spectral information sufficiently, a bidirectional recurrent connection is proposed. In the classification phase, the learned features are concatenated into a vector and fed to a softmax classifier via a fully-connected operator. To validate the effectiveness of the proposed Bi-CLSTM framework, we compare it with several state-of-the-art methods, including the CNN framework, on three widely used HSIs. The obtained results show that Bi-CLSTM can improve classification performance compared to other methods.
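The bidirectional recurrence over the band sequence can be illustrated with a bare-bones sketch. This is not the Bi-CLSTM itself: a plain tanh RNN cell stands in for the (convolutional) LSTM cell, the weights are random, and all sizes and names are hypothetical. Only the pattern is the point: run the spectrum forward and backward, then concatenate.

```python
import numpy as np

def rnn_pass(seq, Wx, Wh, h0):
    """Plain tanh RNN over a band sequence (a simplified stand-in for
    the LSTM cell); returns the final hidden state."""
    h = h0
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

rng = np.random.default_rng(3)
bands = [rng.random(2) for _ in range(10)]   # 10 spectral bands, 2 features each
Wx = rng.standard_normal((4, 2))
Wh = rng.standard_normal((4, 4))
h0 = np.zeros(4)

h_fwd = rnn_pass(bands, Wx, Wh, h0)          # forward over the spectrum
h_bwd = rnn_pass(bands[::-1], Wx, Wh, h0)    # backward over the spectrum
feature = np.concatenate([h_fwd, h_bwd])     # bidirectional spectral feature
```

In the full model this `feature` (combined with convolutional spatial features) is what gets fed through the fully-connected layer to the softmax classifier.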

    Combining hyperspectral UAV and multispectral FORMOSAT-2 imagery for precision agriculture applications

    Precision agriculture requires detailed information regarding the crop status variability within a field. Remote sensing provides an efficient way to obtain such information by observing biophysical parameters, such as canopy nitrogen content, leaf coverage, and plant biomass. However, individual remote sensing sensors often fail to provide information which meets the spatial and temporal resolution required by precision agriculture. The purpose of this study is to investigate methods which can be used to combine imagery from various sensors in order to create a new dataset which comes closer to meeting these requirements. More specifically, this study combined multispectral satellite imagery (Formosat-2) and hyperspectral Unmanned Aerial Vehicle (UAV) imagery of a potato field in the Netherlands. The imagery from both platforms was combined in two ways. Firstly, data fusion methods brought the spatial resolution of the Formosat-2 imagery (8 m) down to the spatial resolution of the UAV imagery (1 m). Two data fusion methods were applied: an unmixing-based algorithm and the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). The unmixing-based method produced vegetation indices which were highly correlated to the measured LAI (rs = 0.866) and canopy chlorophyll values (rs = 0.884), whereas the STARFM obtained lower correlations (rs = 0.477 and rs = 0.431, respectively). Secondly, a Spectral-Temporal Reflectance Surface (STRS) was constructed to interpolate daily 101-band reflectance spectra using both sources of imagery. A novel STRS method was presented, which utilizes Bayesian theory to obtain realistic spectra and accounts for sensor uncertainties. The resulting surface obtained a high correlation to LAI (rs = 0.858) and canopy chlorophyll (rs = 0.788) measurements at field level. The multi-sensor datasets were able to characterize significant differences in crop status due to differing nitrogen fertilization regimes from June to August.
Meanwhile, the yield prediction models based purely on the vegetation indices extracted from the unmixing-based fusion dataset explained 52.7% of the yield variation, whereas the STRS dataset was able to explain 72.9% of the yield variability. The results of the current study indicate that the limitations of each individual sensor can be largely surpassed by combining multiple sources of imagery, which is beneficial for agricultural management. Further research could focus on the integration of data fusion and STRS techniques, and on the inclusion of imagery from additional sensors.
In the context of threatened global food security, precision agriculture is one strategy to maximize yield to meet the increased demand for food while minimizing both the economic and environmental costs of food production. This is done by applying variable management strategies, meaning that fertilizer or irrigation rates within a field are adjusted according to the crop needs in that specific part of the field. This implies that accurate crop status information must be available regularly for many different points in the field.
Remote sensing can provide this information, but it is difficult to meet the information requirements when using only one sensor. For example, satellites collect imagery regularly and over large areas, but may be blocked by clouds. Unmanned Aerial Vehicles (UAVs), commonly known as drones, are more flexible but have higher operational costs. The purpose of this study was to use fusion methods to combine satellite (Formosat-2) imagery with UAV imagery of a potato field in the Netherlands. Firstly, data fusion was applied. The eight Formosat-2 images with 8 m x 8 m pixels were combined with four UAV images with 1 m x 1 m pixels to obtain a new dataset of eight images with 1 m x 1 m pixels. Unmixing-based data fusion produced images which had a high correlation to field measurements obtained from the potato field during the growing season. The results of a second data fusion method, STARFM, were less reliable in this study. The UAV images were hyperspectral, meaning they contained very detailed information spanning a large part of the electromagnetic spectrum. Much of this information was lost in the data fusion methods because the Formosat-2 images were multispectral, representing a more limited portion of the spectrum. Therefore, a second analysis investigated the use of Spectral-Temporal Reflectance Surfaces (STRS), which allow information from different portions of the electromagnetic spectrum to be combined. These STRS provided daily hyperspectral observations, which were also verified as accurate by comparing them to reference data. Finally, this study demonstrated the ability of both data fusion and STRS to identify which parts of the potato field had lower photosynthetic production during the growing season. Data fusion was capable of explaining 52.7% of the yield variation through regression models, whereas the STRS explained 72.9%.
    To conclude, this study indicates how to combine crop status information from different sensors to support precision agriculture management decisions.
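The rs values quoted throughout this study are Spearman rank correlations between index values and field measurements. A minimal NumPy version (assuming distinct values, i.e., ignoring ties) is sketched below; the data points are invented for illustration, not taken from the study.

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation: the Pearson correlation of the
    ranks, as used for the rs values reported between vegetation
    indices and LAI / canopy chlorophyll measurements (ties ignored)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Toy example: a vegetation index that rises monotonically with LAI
# yields a perfect rank correlation regardless of the scale of either.
vi  = np.array([0.21, 0.35, 0.42, 0.55, 0.61])
lai = np.array([1.0, 1.8, 2.2, 3.1, 3.5])
rs = spearman_r(vi, lai)
```

Because it depends only on ranks, this statistic rewards monotone agreement between index and measurement, which suits saturating index-to-biophysical relationships.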