Morphological granulometry for classification of evolving and ordered texture images.
In this work we investigate the use of morphological granulometric moments as texture descriptors to predict the time or class of texture images which evolve over time or follow an intrinsic ordering. A cubic polynomial regression was used to model each of several granulometric moments as a function of time or class. These models are then combined and used to predict time or class. The methodology was developed on synthetic images of evolving textures and then successfully applied to classify a sequence of corrosion images to a point on an evolution time scale. Classification performance of the new regression approach is compared to that of linear discriminant analysis, neural networks and support vector machines. We also apply our method to images of black tea leaves, which are ordered according to granule size, and attain very high classification accuracy compared to existing published results for these images. We also find that granulometric moments provide much improved classification compared to grey level co-occurrence features for shape-based texture images.
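The prediction step described above (fit a cubic polynomial to a granulometric moment as a function of time, then invert the fit to map a new image to a time point) can be sketched as follows. The synthetic data, the single moment, and the grid-search inversion are illustrative assumptions, not the authors' actual corrosion or tea-leaf data:

```python
import numpy as np

# Hypothetical data: one granulometric moment measured on textures at known times.
times = np.arange(10, dtype=float)
moments = 0.5 * times**3 - 2.0 * times**2 + 3.0 * times + 1.0  # synthetic trend

# Model the moment as a cubic polynomial in time.
coeffs = np.polyfit(times, moments, deg=3)

def predict_time(observed_moment, t_grid=np.linspace(0, 9, 901)):
    """Predict the evolution time whose modelled moment best matches the observation."""
    fitted = np.polyval(coeffs, t_grid)
    return t_grid[np.argmin(np.abs(fitted - observed_moment))]

# A texture whose moment equals the model value at t = 4 should map back to ~4.
t_hat = predict_time(np.polyval(coeffs, 4.0))
```

In the paper several moments are modelled and combined; here a single moment and a grid search stand in for that combination step.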
Classification of Large-Scale High-Resolution SAR Images with Deep Transfer Learning
The classification of large-scale high-resolution SAR land cover images
acquired by satellites is a challenging task, facing several difficulties such
as the need for expert semantic annotation, changing data characteristics due
to varying imaging parameters or regional target area differences, and complex
scattering mechanisms that differ from those of optical imaging. Given a large-scale
SAR land cover dataset collected from TerraSAR-X images with a hierarchical
three-level annotation of 150 categories and comprising more than 100,000
patches, three main challenges in automatically interpreting SAR images are
addressed: highly imbalanced classes, geographic diversity, and label noise.
In this letter, a deep transfer learning method is proposed based on a
similarly annotated optical land cover dataset (NWPU-RESISC45). In addition, a
top-2 smooth loss function with cost-sensitive parameters is introduced to
tackle the label noise and class imbalance problems. The proposed method
shows high efficiency in transferring information from a similarly annotated
remote sensing dataset, robust performance on highly imbalanced classes, and
alleviates the over-fitting problem caused by label noise. Moreover, the
learned deep model generalizes well to other SAR-specific tasks, such as
MSTAR target recognition, with a state-of-the-art classification accuracy of
99.46%.
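The abstract does not spell out the top-2 smooth loss, but one plausible form (a log-loss on the softmax mass of the true class plus its strongest rival, scaled by a per-class cost weight) can be sketched as below. The logits, weights, and exact functional form are illustrative assumptions, not the letter's definition:

```python
import numpy as np

def top2_smooth_loss(logits, label, class_weights):
    """Cost-weighted smooth top-2 loss (simplified sketch).

    The loss stays small whenever the true class is among the two most
    probable ones, which tolerates some label noise; class_weights
    up-weight rare classes to counter imbalance."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Softmax mass of the true class plus its best competing class.
    rival = np.max(np.delete(probs, label))
    return -class_weights[label] * np.log(probs[label] + rival)

weights = np.array([1.0, 1.0, 5.0])  # the rare class 2 costs more when missed
confident = top2_smooth_loss(np.array([4.0, 0.0, 0.0]), 0, weights)
noisy = top2_smooth_loss(np.array([0.0, 4.0, 0.1]), 0, weights)   # true class ranked 2nd
missed = top2_smooth_loss(np.array([0.0, 4.0, 3.0]), 0, weights)  # true class ranked 3rd
```

Note that `noisy` incurs almost no penalty because the true class is still in the top two, whereas `missed` is penalised; a standard cross-entropy would punish both heavily.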
A novel application of deep learning with image cropping: a smart city use case for flood monitoring
© 2020, The Author(s). Event monitoring is an essential application of Smart City platforms. Real-time monitoring of gully and drainage blockage is an important part of flood monitoring applications. Building viable IoT sensors for detecting blockage is a complex task due to the limitations of deploying such sensors in situ. Image classification with deep learning is a potential alternative solution. However, no image datasets of gullies and drainages exist. We faced these challenges while developing a flood monitoring application in a European Union-funded project. To address them, we propose a novel image classification approach based on deep learning with an IoT-enabled camera to monitor gullies and drainages. The approach uses deep learning to develop an effective image classification model that classifies blockage images into class labels based on severity. To handle the complexity of video-based images, and the resulting poor classification accuracy of the model, we experimented with removing image edges by applying image cropping. The cropping in our experiments aims to concentrate only on the regions of interest within images, leaving out a proportion of the image edges. An image dataset curated from crowd-sourced, publicly accessible images was used to train and test the proposed model. For validation, model accuracies were compared with and without image cropping. The cropping-based image classification showed improved classification accuracy. This paper outlines lessons from our experimentation that have a wider impact on many similar use cases involving IoT-based cameras as part of smart city event monitoring platforms.
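The edge-cropping step can be illustrated in a few lines; the 20% margin and the frame size are assumptions for the sketch, not the paper's actual parameters:

```python
import numpy as np

def crop_edges(image, fraction=0.2):
    """Remove a fraction of the image from each edge, keeping the central
    region of interest (a sketch of the cropping step described above)."""
    h, w = image.shape[:2]
    dy, dx = int(h * fraction), int(w * fraction)
    return image[dy:h - dy, dx:w - dx]

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
roi = crop_edges(frame, fraction=0.2)            # 288 x 384 central region
```

The cropped region would then be fed to the classifier in place of the full frame.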
Unsupervised Classification of SAR Images using Hierarchical Agglomeration and EM
We implement an unsupervised classification algorithm for high resolution Synthetic Aperture Radar (SAR) images. The algorithm is based on Classification Expectation-Maximization (CEM). To address two drawbacks of EM-type algorithms, namely initialization and model order selection, we combine the CEM algorithm with a hierarchical agglomeration strategy and a model order selection criterion called Integrated Completed Likelihood (ICL). We exploit amplitude statistics in a Finite Mixture Model (FMM), and a Multinomial Logistic (MnL) latent class label model for the mixture density to obtain spatially smooth class segments. We test our algorithm on TerraSAR-X data.
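A toy version of the model-order-selection idea, using a plain 1-D Gaussian mixture fitted by EM and scored with an ICL-style criterion (BIC plus twice the assignment entropy), might look like the following. The synthetic amplitudes, initialisation, and candidate orders are illustrative; the paper's hierarchical agglomeration, amplitude-specific densities, and multinomial logistic spatial prior are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D "amplitude" data drawn from two well-separated classes.
data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(8.0, 1.0, 300)])

def em_mixture(x, k, n_iter=100):
    """Plain EM for a 1-D Gaussian mixture (a simplified stand-in for the FMM)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample.
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        m = logp.max(axis=1, keepdims=True)
        r = np.exp(logp - m)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances.
        nk = r.sum(axis=0) + 1e-12
        pi, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    # Log-likelihood from the last E-step (pre-final-M-step parameters).
    loglik = np.sum(m.ravel() + np.log(np.exp(logp - m).sum(axis=1)))
    return r, loglik

def icl(x, k):
    """ICL-style score: BIC plus twice the assignment entropy."""
    r, loglik = em_mixture(x, k)
    n_params = 3 * k - 1  # k means, k variances, k-1 free weights
    bic = -2 * loglik + n_params * np.log(len(x))
    entropy = -np.sum(r * np.log(r + 1e-12))
    return bic + 2 * entropy

best_k = min([1, 2, 3, 4], key=lambda k: icl(data, k))
```

The entropy term penalises overlapping components, so the criterion prefers the smallest order that yields well-separated, confidently assigned classes.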
Connection Discovery using Shared Images by Gaussian Relational Topic Model
Social graphs, representing online friendships among users, are one of the
fundamental types of data for many applications, such as recommendation,
virality prediction and marketing in social media. However, this data may be
unavailable due to the privacy concerns of users, or kept private by social
network operators, which makes such applications difficult. Inferring user
interests and discovering user connections through their shared multimedia
content has attracted more and more attention in recent years. This paper
proposes a Gaussian relational topic model for connection discovery using user
shared images in social media. The proposed model not only models user
interests as latent variables through their shared images, but also considers
the connections between users as a result of their shared images. It explicitly
relates user shared images to user connections in a hierarchical, systematic
and supervisory way and provides an end-to-end solution for the problem. This
paper also derives efficient variational inference and learning algorithms for
the posterior of the latent variables and model parameters. It is demonstrated
through experiments with over 200k images from Flickr that the proposed method
significantly outperforms the methods in previous works.
Comment: IEEE International Conference on Big Data 201
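The core modelling idea (user connections explained by latent interests inferred from shared images) can be caricatured in a few lines. Here each user's interest vector is simply the mean of their image representations, and the link weights `eta` and bias `nu` are made-up parameters rather than the variationally learned ones:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def link_probability(user_a_images, user_b_images, eta, nu):
    """Probability that two users are connected, modelled (as in relational
    topic models) as a function of latent interest vectors, approximated
    here by the mean of each user's image representations."""
    za = user_a_images.mean(axis=0)
    zb = user_b_images.mean(axis=0)
    return sigmoid(eta @ (za * zb) + nu)

rng = np.random.default_rng(1)
topic = np.array([1.0, 0.0, 0.0, 0.0])
a = topic + 0.05 * rng.normal(size=(20, 4))  # user A's images: interest 0
b = topic + 0.05 * rng.normal(size=(20, 4))  # user B shares that interest
c = np.array([0.0, 0.0, 0.0, 1.0]) + 0.05 * rng.normal(size=(20, 4))  # user C does not

eta, nu = np.full(4, 4.0), -2.0
p_ab = link_probability(a, b, eta, nu)  # high: overlapping interests
p_ac = link_probability(a, c, eta, nu)  # low: disjoint interests
```

In the actual model the interest vectors and link parameters are latent variables inferred jointly by variational inference, not fixed as above.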
Lightweight learning from label proportions on satellite imagery
This work addresses the challenge of producing chip level predictions on
satellite imagery when only label proportions at a coarser spatial geometry are
available, typically from statistical or aggregated data from administrative
divisions (such as municipalities or communes). This kind of tabular data is
usually widely available in many regions of the world and application areas
and, thus, exploiting it may help mitigate the endemic scarcity of
fine-grained labelled data in Earth Observation (EO). This can be framed as a
Learning from Label Proportions (LLP) problem setup. LLP applied to EO data is
still an emerging field and performing comparative studies in applied scenarios
remains a challenge due to the lack of standardized datasets. In this work,
first, we show that simple deep learning and probabilistic methods generally
perform better than more complex standard ones, providing a surprising level
of fine-grained spatial detail when trained with much coarser label proportions.
Second, we provide a set of benchmarking datasets enabling comparative LLP
applied to EO, providing both fine grained labels and aggregated data according
to existing administrative divisions. Finally, we argue how this approach might
be valuable when considering on-orbit inference and training. Source code is
available at https://github.com/rramosp/llpeo
Comment: 16 pages, 13 figures
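The LLP setup can be sketched with the simplest possible training signal: penalise the gap between a bag's known label proportions and the mean of the model's per-chip predictions. The two-class bag and the hard-coded predictions below are illustrative, not the paper's method:

```python
import numpy as np

def proportion_loss(chip_probs, bag_proportions):
    """Squared error between the mean predicted class probabilities over a
    'bag' (e.g. all chips in one commune) and the known label proportions."""
    predicted = chip_probs.mean(axis=0)
    return np.sum((predicted - bag_proportions) ** 2)

# A commune reported as 70% cropland, 30% forest in aggregated statistics.
bag = np.array([0.7, 0.3])
good = np.tile([[0.7, 0.3]], (100, 1))  # per-chip predictions matching the bag
bad = np.tile([[0.2, 0.8]], (100, 1))   # per-chip predictions far from it
loss_good = proportion_loss(good, bag)
loss_bad = proportion_loss(bad, bag)
```

Training against such a bag-level loss lets a model produce chip-level maps even though no chip was ever individually labelled.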
Continental-scale land cover mapping at 10 m resolution over Europe (ELC10)
Widely used European land cover maps such as CORINE are produced at medium
spatial resolutions (100 m) and rely on diverse data with complex workflows
requiring significant institutional capacity. We present a high resolution (10
m) land cover map (ELC10) of Europe based on a satellite-driven machine
learning workflow that is annually updatable. A Random Forest classification
model was trained on 70K ground-truth points from the LUCAS (Land Use/Cover
Area frame Survey) dataset. Within the Google Earth Engine cloud computing
environment, the ELC10 map can be generated from approx. 700 TB of Sentinel
imagery within approx. 4 days from a single research user account. The map
achieved an overall accuracy of 90% across 8 land cover classes and could
account for statistical unit land cover proportions within 3.9% (R2 = 0.83) of
the actual value. These accuracies are higher than that of CORINE (100 m) and
other 10-m land cover maps including S2GLC and FROM-GLC10. We found that
atmospheric correction of Sentinel-2 and speckle filtering of Sentinel-1
imagery had minimal effect on enhancing classification accuracy (< 1%).
However, combining optical and radar imagery increased accuracy by 3% compared
to Sentinel-2 alone and by 10% compared to Sentinel-1 alone. The conversion of
LUCAS points into homogeneous polygons under the Copernicus module increased
accuracy by <1%, revealing that Random Forests are robust against contaminated
training data. Furthermore, the model requires very little training data to
achieve moderate accuracies - the difference between 5K and 50K LUCAS points is
only 3% (86 vs 89%). At 10-m resolution, the ELC10 map can distinguish detailed
landscape features like hedgerows and gardens, and therefore holds potential
for areal statistics at the city borough level and monitoring property-level
environmental interventions (e.g. tree planting).
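The finding that fusing optical and radar features boosts accuracy can be illustrated with synthetic per-pixel features; a nearest-centroid rule stands in for the paper's Random Forest, and all numbers below are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# Hypothetical per-pixel features: the two classes overlap heavily in the
# optical index alone, but differ clearly in radar backscatter.
optical = np.concatenate([rng.normal(0.5, 0.3, n), rng.normal(0.6, 0.3, n)])
radar = np.concatenate([rng.normal(-15.0, 1.0, n), rng.normal(-8.0, 1.0, n)])
labels = np.repeat([0, 1], n)

def nearest_centroid_accuracy(features, labels):
    """Tiny stand-in for the Random Forest: classify each sample by the
    nearest class centroid after standardising the features."""
    f = (features - features.mean(axis=0)) / features.std(axis=0)
    centroids = np.stack([f[labels == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((f[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
    return (pred == labels).mean()

acc_optical = nearest_centroid_accuracy(optical[:, None], labels)
acc_fused = nearest_centroid_accuracy(np.column_stack([optical, radar]), labels)
```

Stacking the Sentinel-1-like band alongside the Sentinel-2-like band is exactly this kind of column-wise feature fusion, just with many more bands and a stronger classifier.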