PATCH-BASED SAR IMAGE CLASSIFICATION: THE POTENTIAL OF MODELING THE STATISTICAL DISTRIBUTION OF PATCHES WITH GAUSSIAN MIXTURES
Due to their coherent nature, SAR (Synthetic Aperture Radar) images are very different from optical satellite images and more difficult to interpret, especially because of speckle noise. Given the increasing amount of available SAR data, efficient image processing techniques are needed to ease the analysis. Classifying this type of image, i.e., selecting an adequate label for each pixel, is a challenging task. This paper describes a supervised classification method based on local features derived from a Gaussian mixture model (GMM) of the distribution of patches. First classification results are encouraging and suggest an interesting potential of the GMM for SAR imaging.
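The core idea (fit a Gaussian mixture to the set of image patches, then label each patch by its most probable component) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the diagonal-covariance EM, the 3x3 patch size, and the gamma-distributed synthetic textures standing in for speckled SAR data are all assumptions.

```python
import numpy as np

def extract_patches(img, p):
    """Collect every p x p patch of img as a flattened row vector."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def fit_gmm(X, K, iters=50, seed=0):
    """Diagonal-covariance Gaussian mixture fitted with plain EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)]         # initialize means on data points
    var = np.tile(X.var(axis=0) + 1e-6, (K, 1))
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: log responsibility of each component for each patch
        logp = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                        + np.log(2 * np.pi * var)).sum(-1) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means, variances, and mixing weights
        nk = r.sum(axis=0) + 1e-12
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
        pi = nk / n
    return mu, var, pi, r.argmax(axis=1)

# Two synthetic speckle-like textures stacked into one 32 x 32 image
rng = np.random.default_rng(1)
img = np.vstack([rng.gamma(2.0, 5.0, (16, 32)),    # darker, rougher region
                 rng.gamma(8.0, 5.0, (16, 32))])   # brighter region
X = extract_patches(img, 3)
mu, var, pi, labels = fit_gmm(X, K=2)
```

In a full classifier the component responsibilities would serve as local features for a supervised stage rather than as labels directly.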
Kohonen-Based Credal Fusion of Optical and Radar Images for Land Cover Classification
This paper presents a credal algorithm to perform land cover classification from a pair of optical and radar remote sensing images. SAR (Synthetic Aperture Radar)/optical multispectral information fusion is investigated in this study for making the joint classification. The approach consists of two main steps: 1) relevant feature extraction applied to each sensor in order to model the sources of information, and 2) a Kohonen map-based estimation of Basic Belief Assignments (BBA) dedicated to heterogeneous data. This framework deals with co-registered images and is able to handle complete optical data as well as optical data affected by missing values due to the presence of clouds and shadows during observation. A pair of real SPOT-5 and RADARSAT-2 images is used in the evaluation, and the proposed experiment in a farming area shows very promising results in terms of classification accuracy and missing optical data reconstruction when some data are hidden by clouds.
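Once each sensor's Basic Belief Assignment is estimated, the two sources are fused before a label is decided. A minimal sketch of that fusion step, using Dempster's rule of combination in plain Python; the two-class frame and the mass values are illustrative assumptions, and the paper's Kohonen-map BBA estimation is not reproduced here:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic belief assignments (dicts
    mapping frozenset hypotheses to mass), normalizing out conflict."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b                     # mass on disjoint hypotheses
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical masses from the optical and radar sources over {crop, water}
frame = frozenset({"crop", "water"})
m_opt = {frozenset({"crop"}): 0.6, frame: 0.4}        # clouds leave some ignorance
m_sar = {frozenset({"crop"}): 0.5, frozenset({"water"}): 0.2, frame: 0.3}
m = dempster_combine(m_opt, m_sar)
```

Mass left on the whole frame is how the credal framework represents the ignorance caused by clouded optical pixels.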
Multiple drone classification using millimeter-wave CW radar micro-Doppler data
Funding: Army Research Laboratory under Cooperative Agreement Number W911NF-19-2-0075. This paper investigates the prospect of classifying different types of rotary-wing drones using radar. The proposed method is based on the hypothesis that rotor blades of different sizes and shapes will exhibit distinct Doppler features. When sampled unambiguously, these features can be properly extracted and then used for classification. We investigate various continuous wave (CW) spectrogram features of different drones obtained with a low phase noise, coherent radar operating at 94 GHz. Two quadcopters of different sizes (DJI Phantom 3 Standard and Joyance JT5L-404) and a hexacopter (DJI S900) were used during the experimental trial for data collection. For classification, we first show the limitations of the feature-extraction-based method. We then propose a convolutional neural network (CNN)-based approach in which classification training is performed on micro-Doppler spectrogram images. We created an extensive dataset of spectrogram images for training, which was fed to the existing GoogLeNet model. The trained model was then tested on unseen, unlabelled data for performance verification. Validation accuracy above 99% is achieved along with very accurate testing results, demonstrating the potential of using neural networks for multiple drone classification.
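A micro-Doppler spectrogram of the kind fed to the CNN is produced with a short-time Fourier transform. The sketch below builds one from a toy complex radar return in NumPy; the sampling rate, the 50 Hz body line, and the amplitude-modulated component standing in for blade flash are invented for illustration and do not model the 94 GHz system:

```python
import numpy as np

def spectrogram(x, nfft=64, hop=16):
    """Magnitude STFT: rows are Doppler bins, columns are time frames."""
    win = np.hanning(nfft)
    frames = [x[i:i + nfft] * win
              for i in range(0, len(x) - nfft + 1, hop)]
    return np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)).T

fs = 4000
t = np.arange(0, 1.0, 1 / fs)
body = np.exp(2j * np.pi * 50 * t)                                    # bulk-motion Doppler line
blades = 0.5 * np.cos(2 * np.pi * 30 * t) * np.exp(2j * np.pi * 300 * t)  # blade-rate AM on a 300 Hz line
S = spectrogram(body + blades)
```

Images like `S` (after log scaling and colour mapping) are what the paper feeds to GoogLeNet for training.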
Unsupervised Classification of SAR Images using Hierarchical Agglomeration and EM
We implement an unsupervised classification algorithm for high-resolution Synthetic Aperture Radar (SAR) images. The algorithm is founded on Classification Expectation-Maximization (CEM). To overcome two drawbacks of EM-type algorithms, namely initialization and model order selection, we combine the CEM algorithm with a hierarchical agglomeration strategy and a model order selection criterion called the Integrated Completed Likelihood (ICL). We exploit amplitude statistics in a Finite Mixture Model (FMM) and a Multinomial Logistic (MnL) latent class label model for the mixture density to obtain spatially smooth class segments. We test our algorithm on TerraSAR-X data.
SAR image segmentation with GMMs
This paper proposes a new approach for Synthetic Aperture Radar (SAR) image segmentation. Segmenting SAR images can be challenging because of blurry edges and strong speckle. The proposed segmentation is based on a machine learning technique: Gaussian Mixture Models (GMMs), already used to segment images in the visual domain, are here adapted to work with single-channel SAR images. The segmentation is designed as a first step towards feature- and model-based classification, so the recall rate matters most: the goal is to retain most of the targets' features. A high recall rate of 88%, higher than that of other segmentation methods on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, was obtained. The subsequent classification stage is thus not starved of information while its computational load drops. With this method, the inclusion of disruptive features in target models is limited, yielding computationally lighter models and faster classification, as the narrower segmented areas foster model convergence and provide refined features to compare. This segmentation method is hence an asset to template-, feature- and model-based classification methods. A comparison between variants of the GMM segmentation and a classical segmentation is also provided.
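Because the segmentation feeds later classification, the abstract scores it by recall over target pixels. A minimal sketch of that metric on hypothetical binary masks (the masks are invented; the 88% figure above comes from the paper, not from this toy):

```python
import numpy as np

def recall(pred, truth):
    """Fraction of true target pixels retained by the segmentation mask."""
    tp = np.logical_and(pred, truth).sum()
    return tp / truth.sum()

truth = np.zeros((10, 10), dtype=bool)
truth[3:7, 3:7] = True          # 16 ground-truth target pixels
pred = np.zeros((10, 10), dtype=bool)
pred[3:7, 4:8] = True           # segmentation misses one target column
print(recall(pred, truth))      # → 0.75
```

A high recall with a modest precision is acceptable here, since false positives only enlarge the region the later classifier must examine.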
Sea surface wind and wave parameter estimation from X-band marine radar images with rain detection and mitigation
In this research, the application of X-band marine radar backscatter images for sea surface
wind and wave parameter estimation with rain detection and mitigation is investigated.
In the presence of rain, the rain echoes in the radar image blur the wave signatures
and negatively affect estimation accuracy. Hence, in order to improve estimation accuracy,
it is meaningful to detect the presence of those rain echoes and mitigate their influence on
estimation results. Since rain alters radar backscatter intensity distribution, features are extracted
from the normalized histogram of each radar image. Then, a support vector machine
(SVM)-based rain detection model is proposed to classify radar images obtained between
rainless and rainy conditions. The classification accuracy shows significant improvement
compared to the existing threshold-based method. By further observing images obtained
under rainy conditions, it is found that many of them are only partially contaminated by rain
echoes. Therefore, in order to segment between rain-contaminated regions and those that
are less or unaffected by rain, two types of methods are developed based on unsupervised
learning techniques and convolutional neural network (CNN), respectively. Specifically, for
the unsupervised learning-based method, texture features are first extracted from each pixel
and then trained using a self organizing map (SOM)-based clustering model, which is able
to conduct pixel-based identification of rain-contaminated regions. As for the CNN-based
method, a SegNet-based semantic segmentation CNN is �rst designed and then trained using
images with manually annotated labels. Both shipborne and shore-based marine radar
data are used to train and validate the proposed methods and high classification accuracies
of around 90% are obtained.
Due to the similarities between how haze affects terrestrial images and how rain affects
marine radar images, a type of CNN for image dehazing purposes, i.e., DehazeNet, is
applied to rain-contaminated regions in radar images for correcting the in
uence of rain,
which reduces the estimation error of wind direction significantly. Besides, after extracting
histogram and texture features from rain-corrected radar images, a support vector regression
(SVR)-based model, which achieves high estimation accuracy, is trained for wind speed
estimation. Finally, a convolutional gated recurrent unit (CGRU) network is designed and
trained for significant wave height (SWH) estimation. As an end-to-end system, the proposed
network is able to generate estimation results directly from radar image sequences
by extracting multi-scale spatial and temporal features in radar image sequences automatically.
Compared to the classic signal-to-noise (SNR)-based method, the CGRU-based model
shows significant improvement in both estimation accuracy (under both rainless and rainy
conditions) and computational efficiency
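The first stage, extracting features from each image's normalized intensity histogram for the SVM to classify, can be sketched as follows. The particular features chosen here (histogram entropy, mean, standard deviation) and the beta-distributed synthetic images are assumptions; the exact feature set used in the thesis may differ:

```python
import numpy as np

def histogram_features(img, bins=32):
    """Features from the normalized intensity histogram of one radar image."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = h / h.sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()   # spread of the intensity distribution
    return np.array([entropy, img.mean(), img.std()])

rng = np.random.default_rng(3)
clear = rng.beta(2, 8, (64, 64))   # mostly low backscatter
rainy = rng.beta(2, 2, (64, 64))   # rain broadens the intensity distribution
f_clear, f_rainy = histogram_features(clear), histogram_features(rainy)
```

An SVM would then be trained on one such feature vector per image, with rain/no-rain labels, to reproduce the detection stage.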
Semi-supervised classification of polarimetric SAR images using Markov random field and two-level Wishart mixture model
In this work, we propose a semi-supervised method for classification of polarimetric synthetic aperture radar (PolSAR) images. In the proposed method, a two-level mixture model is constructed by associating each component density with its own Wishart mixture model (instead of a single Wishart distribution, as in the conventional Wishart mixture model). This modeling scheme facilitates accurate description of data for categories that each comprise multiple subcategories. The learning algorithm for the proposed model is developed based on variational inference, and all update equations are obtained in closed form. In the learning algorithm, spatial interdependencies are incorporated by imposing a Markov random field prior on the indicator variable to alleviate the effect of speckle on the classification results. The experimental results demonstrate the improved performance of the proposed method compared with the unsupervised and supervised versions of the proposed model, as well as an existing method for semi-supervised classification.
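The building block of any Wishart mixture is the complex Wishart likelihood of a pixel's sample coherency matrix. Dropping terms that are constant across classes gives the familiar Wishart discriminant ln|Sigma| + tr(Sigma^-1 C), sketched below with invented 3x3 class centres (this single-distribution classifier is a simplification of the paper's two-level mixture):

```python
import numpy as np

def wishart_distance(C, Sigma):
    """Wishart discriminant ln|Sigma| + tr(Sigma^-1 C); smaller means more likely."""
    _, logdet = np.linalg.slogdet(Sigma)
    return logdet + np.trace(np.linalg.solve(Sigma, C)).real

# Two hypothetical class centres (Hermitian coherency matrices, diagonal for simplicity)
Sig1 = np.diag([1.0, 0.5, 0.2]).astype(complex)
Sig2 = np.diag([0.2, 0.5, 1.0]).astype(complex)
C = np.diag([0.9, 0.5, 0.25]).astype(complex)  # one pixel's sample coherency matrix
label = min((1, 2), key=lambda k: wishart_distance(C, {1: Sig1, 2: Sig2}[k]))
```

In the paper each class centre is itself a mixture of Wishart components, and the hard minimum is replaced by variational posteriors smoothed by the Markov random field prior.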
Improved Difference Images for Change Detection Classifiers in SAR Imagery Using Deep Learning
Satellite-based Synthetic Aperture Radar (SAR) images can be used as a source
of remote sensed imagery regardless of cloud cover and day-night cycle.
However, the speckle noise and varying image acquisition conditions pose a
challenge for change detection classifiers. This paper proposes a new method of
improving SAR image processing to produce higher quality difference images for
the classification algorithms. The method is built on a neural network-based
mapping transformation function that produces artificial SAR images from a
location in the requested acquisition conditions. The inputs for the model are:
previous SAR images from the location, imaging angle information from the SAR
images, digital elevation model, and weather conditions. The method was tested
with data from a location in North-East Finland by using Sentinel-1 SAR images
from European Space Agency, weather data from Finnish Meteorological Institute,
and a digital elevation model from National Land Survey of Finland. In order to
verify the method, changes to the SAR images were simulated, and the
performance of the proposed method was measured using experimentation where it
gave substantial improvements to performance when compared to a more
conventional method of creating difference images
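Given the artificial image predicted for the requested acquisition conditions, a difference image can be formed against the actual acquisition. The log-ratio operator below is a standard choice for SAR change detection and a plausible stand-in for the paper's difference operator; the gamma-distributed image and the simulated change patch are assumptions:

```python
import numpy as np

def log_ratio(actual, predicted, eps=1e-6):
    """Log-ratio difference image: near zero wherever nothing changed."""
    return np.log((actual + eps) / (predicted + eps))

rng = np.random.default_rng(4)
predicted = rng.gamma(4.0, 0.25, (32, 32))   # stand-in for the network's artificial SAR image
actual = predicted.copy()
actual[10:15, 10:15] *= 4.0                  # simulated change in backscatter
d = log_ratio(actual, predicted)
```

A change detection classifier then only has to threshold or classify `d`, which is why a better-predicted `predicted` image directly yields a cleaner difference image.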