146 research outputs found
Bayesian gravitation based classification for hyperspectral images.
Integration of spectral and spatial information is extremely important for the classification of high-resolution hyperspectral images (HSIs). Gravitation describes the interaction among celestial bodies and can be applied to measure similarity between data for image classification. However, gravitation is hard to combine with spatial information and has rarely been applied to HSI classification. This paper proposes a Bayesian Gravitation based Classification (BGC) method to integrate the spectral and spatial information of local neighbors and training samples. In BGC, each testing pixel is first treated as a massive object with unit volume and a particular density, where the density is taken as the data mass. Specifically, the data mass is formulated, via the Bayesian theorem, as an exponential function of the spectral distribution of the pixel's neighbors and the spatial prior distribution of its surrounding training samples. A joint data gravitation model is then developed as the classification measure, in which the data mass weighs the contribution of different neighbors in a local region. Four benchmark HSI datasets, i.e. Indian Pines, Pavia University, Salinas, and Grss_dfc_2014, are used to verify the BGC method. The experimental results are compared with those of several well-known HSI classification methods, including support vector machines, sparse representation, and eight other state-of-the-art HSI classification methods. BGC shows clear superiority in the classification of high-resolution HSIs as well as flexibility for HSIs with limited samples.
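The core idea of a gravitation-based classifier can be sketched with a toy example. This is not the paper's BGC model (the Bayesian data-mass term is omitted and masses are passed in directly); it only illustrates assigning a pixel to the class exerting the largest total gravitational pull:

```python
import numpy as np

def gravitation_classify(test_pixel, neighbors, neighbor_labels, masses):
    """Classify a pixel by the summed data gravitation of labelled neighbors.

    Gravitation between the test pixel (unit mass) and a neighbor is
    modelled as F = m / d^2, where m is the neighbor's data mass and d
    its spectral distance to the test pixel.
    """
    d2 = np.sum((neighbors - test_pixel) ** 2, axis=1) + 1e-12  # squared spectral distance
    forces = masses / d2                                        # gravitational pull per neighbor
    classes = np.unique(neighbor_labels)
    # assign the class exerting the largest total gravitation
    totals = [forces[neighbor_labels == c].sum() for c in classes]
    return classes[int(np.argmax(totals))]
```

In the actual method, `masses` would come from the Bayesian data-mass model rather than being supplied by the caller.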
Gravitation-Based Edge Detection in Hyperspectral Images
Edge detection is one of the key issues in computer vision and remote sensing image analysis. Although many edge-detection methods have been proposed for gray-scale, color, and multispectral images, they still face difficulties when extracting edge features from hyperspectral images (HSIs), which contain a large number of bands with very narrow gaps in the spectral domain. Inspired by the clustering characteristic of gravitational theory, a novel edge-detection algorithm for HSIs is presented in this paper. In the proposed method, we first construct a joint feature space by combining the spatial and spectral features. Each pixel of the HSI is treated as a celestial object in the joint feature space, exerting gravitational force on each of its neighboring pixels. Accordingly, each object travels in the joint feature space until it reaches a stable equilibrium. At equilibrium, the image is smoothed and the edges are enhanced, so that edge pixels can be easily distinguished by calculating the gravitational potential energy. The proposed edge-detection method is tested on several benchmark HSIs, and the results are compared with those of four state-of-the-art approaches. The experimental results confirm the efficacy of the proposed method.
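The potential-energy criterion can be illustrated with a deliberately simplified static sketch (the paper's iterative "travel to equilibrium" step is skipped, masses are set to one, and the joint feature space uses unit spatial offsets):

```python
import numpy as np

def edge_strength_map(hsi):
    """Edge indicator sketch inspired by a gravitational model: each pixel's
    'binding energy' to its 4 neighbours in a joint spatial-spectral feature
    space (unit masses, unit spatial offsets). Pixels in homogeneous regions
    are tightly bound; weak binding marks edges.
    hsi: (H, W, B) array; note np.roll wraps at the image borders."""
    binding = np.zeros(hsi.shape[:2])
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        neighbour = np.roll(np.roll(hsi, dy, axis=0), dx, axis=1)
        # feature-space distance: spectral difference plus one unit spatial step
        dist = np.sqrt(np.sum((hsi - neighbour) ** 2, axis=2) + 1.0)
        binding += 1.0 / dist  # gravitational potential magnitude m1*m2/d
    return -binding  # higher (less negative) values indicate edges
```

On a two-region test image, pixels adjacent to the region boundary receive a higher score than interior pixels.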
Superpixel based feature specific sparse representation for spectral-spatial classification of hyperspectral images.
To improve the performance of sparse representation classification (SRC), we propose a superpixel-based feature-specific sparse representation framework (SPFS-SRC) for spectral-spatial classification of hyperspectral images (HSIs) at the superpixel level. First, the HSI is divided into shape- and size-adapted spatial regions, each considered a superpixel containing a number of pixels with similar spectral characteristics. Since the use of multiple features has proved to be an effective strategy in HSI classification, we generate both spatial and spectral features for each superpixel. By assuming that all pixels in a superpixel belong to one class, a kernel SRC is introduced for the classification of the HSI. Within the SRC framework, we employ a metric learning strategy to exploit the commonalities of the different features. Experimental results on two popular HSI datasets demonstrate the efficacy of the proposed methodology.
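The residual-based decision rule at the heart of SRC can be sketched as follows. This is not the paper's kernel SPFS-SRC: plain least squares stands in for a true sparse solver, and the superpixel grouping and metric learning are omitted:

```python
import numpy as np

def src_classify(x, dictionary, labels):
    """Simplified sparse-representation-style classifier: represent the test
    sample with each class's training atoms separately (least squares here,
    standing in for a sparse solver) and pick the class with the smallest
    reconstruction residual.
    dictionary: (n_features, n_atoms); labels: class of each atom (column)."""
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        A = dictionary[:, labels == c]                # atoms of class c as columns
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)  # class-wise representation
        res = np.linalg.norm(x - A @ coef)            # reconstruction residual
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```

A superpixel-level variant would apply the same rule to a whole group of pixels at once, which is what motivates the "all pixels in a superpixel belong to one class" assumption.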
Adaptive distance-based band hierarchy (ADBH) for effective hyperspectral band selection.
Band selection has become a significant issue for efficient hyperspectral image (HSI) processing. Although many unsupervised band selection (UBS) approaches have been developed in recent decades, a flexible and robust method is still lacking. The lack of a proper understanding of the HSI data structure has led to inconsistency in UBS outcomes. Besides, most UBS methods either rely on complicated measurements or are rather noise-sensitive, which hinders the quality of the determined band subset. In this article, an adaptive distance-based band hierarchy (ADBH) clustering framework is proposed for UBS in HSI, which helps avoid noisy bands while reflecting the hierarchical data structure of the HSI. With a tree-hierarchy-based framework, we can acquire band subsets of any size. By introducing a novel adaptive distance into the hierarchy, the similarity between bands and band groups can be computed straightforwardly while reducing the effect of noisy bands. Experiments on four datasets acquired from two HSI systems fully validate the superiority of the proposed framework.
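The general shape of hierarchy-based band selection can be sketched with off-the-shelf agglomerative clustering. Note the plain Euclidean distance here is a placeholder; the paper's contribution is precisely an adaptive, noise-robust distance, which this sketch does not reproduce:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_bands(hsi, k):
    """Hierarchy-based unsupervised band selection sketch: cluster the
    spectral bands by pairwise similarity, cut the tree into k groups,
    and keep the band nearest each group centroid. hsi: (H, W, B)."""
    H, W, B = hsi.shape
    bands = hsi.reshape(-1, B).T                     # one row per band
    Z = linkage(bands, method='average')             # agglomerative hierarchy
    groups = fcluster(Z, t=k, criterion='maxclust')  # cut tree into k clusters
    selected = []
    for g in np.unique(groups):
        members = np.where(groups == g)[0]
        centroid = bands[members].mean(axis=0)
        # representative band = member closest to its cluster centroid
        dists = np.linalg.norm(bands[members] - centroid, axis=1)
        selected.append(int(members[np.argmin(dists)]))
    return sorted(selected)
```

Because the subset size is just the cut level of the tree, any number of bands can be requested without re-clustering, which matches the "band subsets of any size" property described above.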
SR-POD: sample rotation based on principal-axis orientation distribution for data augmentation in deep object detection
Convolutional neural networks (CNNs) have outperformed most state-of-the-art methods in object detection. However, CNNs struggle to detect rotated objects, because the datasets used to train them often do not contain sufficient samples at various angles of orientation. In this paper, we propose a novel data-augmentation approach for handling samples with rotation, which exploits the distribution of object orientations without the time-consuming process of rotating the sample images. Firstly, we present an orientation descriptor, named "principal-axis orientation", to describe the orientation of an object's principal axis in an image, and estimate the distribution of principal-axis orientations (PODs) over the whole dataset. Secondly, we define a similarity metric to calculate the POD similarity between the training set and an additional dataset built by randomly selecting images from the benchmark ImageNet ILSVRC2012 dataset. Finally, we optimize a cost function to obtain the rotation angle that yields the highest POD similarity between the two datasets. Experiments conducted on the benchmark PASCAL VOC2007 dataset show that, with the training set augmented by our method, the average precision (AP) of Faster RCNN on the TV-monitor class improves by 7.5%. Our experimental results also demonstrate that new samples generated by random rotation are more likely to degrade object detection performance.
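One plausible reading of the "principal-axis orientation" descriptor is the standard moment-based object axis. The sketch below computes it from second-order central image moments of a binary object mask; whether the paper uses exactly this formulation is an assumption:

```python
import numpy as np

def principal_axis_orientation(mask):
    """Orientation (radians, in (-pi/2, pi/2]) of an object's principal axis
    from second-order central image moments. mask: binary (H, W) array."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()            # object centroid
    mu20 = np.mean((xs - x0) ** 2)           # spread along x
    mu02 = np.mean((ys - y0) ** 2)           # spread along y
    mu11 = np.mean((xs - x0) * (ys - y0))    # x-y covariance
    # standard moment-based axis angle
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

Collecting this angle over every labelled object gives the orientation histogram (POD) whose similarity between datasets is then optimized over the rotation angle.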
Using Remote Sensing to Characterize Disturbance during a Severe Drought in the Sierra Nevada
Between 2012 and 2016, California experienced an extreme period of drought and high temperatures. During this period there were two particularly notable disturbances in the southern Sierra Nevada. The first major disturbance occurred in 2013 in the form of the Rim Fire. At 1,041 km², the Rim Fire is the largest fire ever recorded in the Sierra Nevada. Throughout the drought, but particularly in its latter years, there were also epidemic levels of tree mortality, especially mortality tied to native bark beetles (Dendroctonus spp.) killing pines (Pinus spp.). Using the southern Sierra Nevada as a case study, I investigated the novel insights a next-generation imaging spectroscopy satellite could give into understanding fire and tree mortality globally. Specifically, I showed the potential of imaging-spectroscopy-based spectral mixture analysis (SMA) as an assessment of fire severity. SMA cover fractions allow a remotely sensed fire-severity metric that would be more readily comparable at the global level than those currently in use, such as the differenced normalized burn ratio (dNBR). I also demonstrated that, using the random forest machine learning algorithm, a simulated spaceborne imaging spectrometer would be able to identify the location of red-stage tree mortality more accurately than existing multispectral satellites such as Landsat. In addition, remote sensing was used to understand the impact and drivers of tree mortality. For a 2,240 ha watershed, a model of tree crown locations, with species and height identified, was created from a combination of high-spatial-resolution airborne imaging spectroscopy and lidar. High-resolution multispectral imagery was then interpreted to determine each tree's 2016 status. In the area investigated, the net effect of the drought was to reduce the number of live conifer stems taller than 15 m by 75% at the crown level, primarily due to the death of ponderosa pine.
Finally, the factors distinguishing conifers that were alive in 2016 from those that died between 2015 and 2016 were examined, both within the 2,240 ha watershed and across the southern Sierra Nevada. Trees that survived were typically located in stands with heterogeneity in tree species and height class. Trees in stands in wetter, cooler parts of the Sierra Nevada were also more likely to survive the drought.
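The SMA step described above can be sketched as a per-pixel non-negative unmixing. The endmember spectra (e.g. char, green vegetation, soil) are assumed inputs here, not taken from the dissertation:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers):
    """Linear spectral mixture analysis sketch: estimate non-negative cover
    fractions of known endmember spectra (columns of `endmembers`) for one
    pixel, then normalise so the fractions sum to 1. A fire-severity style
    use would compare, e.g., the char fraction against green vegetation."""
    fractions, _ = nnls(endmembers, pixel)  # non-negative least squares fit
    total = fractions.sum()
    return fractions / total if total > 0 else fractions
```

Because the output is a physical cover fraction rather than an index value, maps produced this way are directly comparable across sensors and regions, which is the argument made above against dNBR-style metrics.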
Data-driven model development in environmental geography - Methodological advancements and scientific applications
Capturing spatially continuous data and spatio-temporal dynamics is a core research focus of environmental geography. This goal requires modelling methods that allow spatio-temporal statements to be derived from limited field data. The complexity of environmental systems demands modelling strategies that can accommodate arbitrary relationships among a large number of potential predictors. This requirement calls for a paradigm shift from parametric towards non-parametric, data-driven model development, a shift further reinforced by the growing availability of geodata.
In this context, machine learning methods have proven to be an important tool for capturing patterns in non-linear and complex systems. The growing popularity of machine learning in scientific journals and the development of convenient software packages increasingly create the false impression that these methods are simple to apply. In reality, they carry a complexity that can only be controlled through comprehensive methodological expertise.
This problem applies in particular to geodata, which exhibit special characteristics, above all spatial dependence, that set them apart from "ordinary" data; so far, however, this has been largely ignored in machine learning applications.
This thesis examines the potential and the sensitivity of machine learning in environmental geography. In this context, a series of machine learning applications spanning a broad spectrum of environmental geography was published. The individual contributions share the overarching hypothesis that data-driven modelling strategies only yield an information gain and robust spatio-temporal results when the characteristics of geographic data are taken into account. Beyond this overarching methodological focus, each application aims to deliver new domain insights in its respective research field through adequately applied methods.
Within this work, a variety of relevant environmental monitoring products were developed. The results make clear that both strong domain knowledge and methodological expertise are indispensable for advancing data-driven environmental geography. The thesis demonstrates for the first time the relevance of spatial overfitting in geographic learning applications and lays out its effects on model results. To counter this problem, a new model development method adapted to geodata is introduced, which yields markedly improved results.
Finally, this thesis should be read as an appeal to think beyond standard applications of machine learning, as it shows that applying standard procedures to geodata leads to severe overfitting and misinterpretation of results. Only when the properties of geographic data are taken into account does machine learning offer a powerful tool for delivering scientifically reliable results for environmental geography.
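The spatial-overfitting argument can be made concrete with a toy sketch. All data and variable names here are synthetic assumptions, not the thesis's case studies: spatially autocorrelated samples leak into randomly assigned folds and inflate scores, whereas grouping folds by spatial block forces the model to predict unseen regions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))             # synthetic sample locations
X = np.column_stack([coords, rng.normal(size=n)])    # predictors incl. coordinates
y = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=n)  # spatially structured target
blocks = (coords[:, 0] // 2.5).astype(int)           # 4 spatial blocks along x

model = RandomForestRegressor(n_estimators=50, random_state=0)
# random CV: neighbouring (autocorrelated) samples end up in train and test
random_scores = cross_val_score(model, X, y,
                                cv=KFold(n_splits=4, shuffle=True, random_state=0))
# spatial CV: whole blocks are held out, so predictions are true extrapolations
spatial_scores = cross_val_score(model, X, y,
                                 cv=GroupKFold(n_splits=4), groups=blocks)
print(random_scores.mean(), spatial_scores.mean())
```

Typically the random-CV score is markedly higher, illustrating why a model development strategy adapted to geodata, as argued above, needs spatially aware validation.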
Ecogeomorphological Transformations of Aeolian Form - The Case of a Parabolic Dune, Poland
Natural environmental degradation caused by anthropogenic activity can extend to geomorphological forms such as dunes, which result from the depositional activity of the wind. Such transformation affects not only their relief but also the presence and health of the diverse plants and animals that inhabit them. The subject of this survey was a parabolic dune of asymmetric shape whose sand had been exploited over many years. Terrain data acquired by means of GNSS (Global Navigation Satellite Systems) were used to map the present relief of the surveyed dune and to reconstruct its primary relief, chiefly in places where the impacts of human activity were recorded. For this purpose, ordinary kriging (OK) estimation was performed. Simultaneously, satellite data and UAV (Unmanned Aerial Vehicle) imagery were acquired and subjected to image fusion to obtain near-infrared (NIR), red, green, and blue bands at high spatial resolution. These in turn were applied to assess the condition of the vegetation covering the dune and the surrounding terrain. The correctness of the modelling was verified by cross-validation (CV), which yielded low error values: for the present and primary relief, respectively, a mean error (ME) of 0.009 and 0.014, a root mean square error (RMSE) of 0.564 and 0.304, and a root mean square standardised error (RMSSE) of 0.999 and 1.077. Image fusion with pansharpening allowed a colour-infrared composition (CIR) and a Modified Chlorophyll Absorption in Reflectance Index 1 (MCARI1) to be obtained. Their analysis revealed that the vegetation on the dune is in poorer health than that of the surrounding area. The proposed approach made it possible to analyse the environmental condition of the surveyed dune and thereby to determine the consequences of further uncontrolled sand extraction without relying on the historical cartographic materials customarily considered the main source of information.
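The vegetation-condition step uses the published MCARI1 formula, 1.2 [2.5 (NIR − Red) − 1.3 (NIR − Green)], which can be computed per pixel from the pansharpened bands:

```python
import numpy as np

def mcari1(nir, red, green):
    """Modified Chlorophyll Absorption Ratio Index 1:
    MCARI1 = 1.2 * (2.5 * (NIR - Red) - 1.3 * (NIR - Green)).
    Inputs are reflectance values or arrays; healthy vegetation, with high
    NIR and low red reflectance, yields higher values."""
    nir, red, green = np.asarray(nir), np.asarray(red), np.asarray(green)
    return 1.2 * (2.5 * (nir - red) - 1.3 * (nir - green))
```

Comparing the mean index on the dune against the surrounding terrain is then enough to support the health contrast reported above.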