8 research outputs found

    SAR automatic target recognition based on convolutional neural networks

    We propose a multi-modal, multi-discipline strategy for Automatic Target Recognition (ATR) on Synthetic Aperture Radar (SAR) imagery. Our architecture applies a Convolutional Neural Network pre-trained in the RGB domain to SAR imagery, and combines it with multiclass Support Vector Machine classification. The multi-modal aspect of the architecture reinforces its generalisation capability, while the multi-discipline aspect bridges the modality gap. Even though the technique is trained at a single depression angle of 17°, its average performance on the MSTAR database over a 10-class target classification problem at 15°, 30° and 45° depression is 97.8%. This multi-target, multi-depression ATR capability has not previously been reported in the MSTAR literature.
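The pipeline described in this abstract (features from a pre-trained CNN fed to a multiclass SVM) can be sketched as follows. The abstract does not name the network, so the CNN feature extractor is stood in for by synthetic, well-separated feature vectors; only the SVM stage is exercised.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for deep features: in the described pipeline these would come
# from a CNN pre-trained on RGB imagery, applied to SAR target chips.
# Here we draw well-separated synthetic vectors for 10 target classes.
n_classes, n_per_class, n_features = 10, 20, 64
means = rng.normal(scale=5.0, size=(n_classes, n_features))
X = np.vstack([means[c] + rng.normal(size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Multiclass SVM on the features (scikit-learn's SVC handles the
# multiclass case internally via one-vs-one decomposition).
clf = SVC(kernel="rbf", C=10.0).fit(X, y)
train_acc = clf.score(X, y)
print(f"training accuracy: {train_acc:.3f}")
```

On real data the feature vectors would of course be split into disjoint training and test sets before reporting accuracy.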

    SAR image dataset of military ground targets with multiple poses for ATR

    Automatic Target Recognition (ATR) is the task of automatically detecting and classifying targets. Recognition using Synthetic Aperture Radar (SAR) images is attractive because SAR images can be acquired at night and under any weather conditions, whereas optical sensors operating in the visible band do not have this capability. Existing SAR ATR algorithms have mostly been evaluated using the MSTAR dataset [1]. The problem with MSTAR is that some of the proposed ATR methods have shown good classification performance even when the targets were hidden [2], suggesting the presence of a bias in the dataset. Evaluations of SAR ATR techniques are currently challenging due to the lack of publicly available data in the SAR domain. In this paper, we present a high-resolution SAR dataset consisting of images of a set of ground military target models taken at various aspect angles. The dataset can be used for a fair evaluation and comparison of SAR ATR algorithms. We applied the Inverse Synthetic Aperture Radar (ISAR) technique to echoes from targets rotating on a turntable and illuminated with a stepped-frequency waveform. The targets in the database consist of four variants of two 1.7 m-long models of T-64 and T-72 tanks. The gun, the turret position and the depression angle are varied to form 26 different sequences of images. The emitted signal spanned the frequency range from 13 GHz to 18 GHz to achieve a bandwidth of 5 GHz sampled with 4001 frequency points. The resolution obtained, relative to the size of the model targets, is comparable to typical values obtained with airborne SAR systems. Single-polarised (Horizontal-Horizontal) images are generated using the backprojection algorithm [3]. A total of 1480 images are produced using a 20° integration angle. The images in the dataset are organised into suggested training and testing sets to facilitate a standard evaluation of SAR ATR algorithms.
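The stepped-frequency parameters quoted above determine the imaging resolution through the standard relations for such waveforms (slant-range resolution c/2B; unambiguous range window c/2Δf). A quick check of the numbers, using c ≈ 3×10⁸ m/s:

```python
c = 3e8  # speed of light, m/s (approximation)

f_start, f_stop = 13e9, 18e9    # Hz, as stated in the abstract
n_points = 4001                 # frequency samples

bandwidth = f_stop - f_start            # 5 GHz
range_res = c / (2 * bandwidth)         # slant-range resolution: c / 2B
df = bandwidth / (n_points - 1)         # frequency step between samples
unambiguous_range = c / (2 * df)        # unambiguous range window: c / 2*df

print(f"bandwidth         : {bandwidth / 1e9:.1f} GHz")
print(f"range resolution  : {range_res * 100:.1f} cm")
print(f"frequency step    : {df / 1e6:.2f} MHz")
print(f"unambiguous range : {unambiguous_range:.0f} m")
```

The 3 cm resolution against 1.7 m-long models is consistent with the abstract's claim that, relative to target size, it matches typical airborne SAR values.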

    Explainability of deep SAR ATR through feature analysis

    Understanding the decision-making process of deep learning networks is a key challenge that has rarely been investigated for Synthetic Aperture Radar (SAR) images. In this paper, a set of new analytical tools is proposed and applied to a Convolutional Neural Network (CNN) handling Automatic Target Recognition (ATR) on two SAR datasets containing military targets.
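The abstract does not detail the tools themselves, but one common feature-analysis technique in this spirit is occlusion sensitivity: slide a mask across the input and record how much the classifier's score drops, so the regions the network relies on stand out. A minimal sketch, with a toy scoring function standing in for the trained CNN:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4):
    """Score drop when a patch x patch region is zeroed at each position."""
    base = score_fn(image)
    h, w = image.shape
    drops = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(drops.shape[0]):
        for j in range(drops.shape[1]):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            drops[i, j] = base - score_fn(occluded)
    return drops

# Toy stand-in for a trained network's class score: it responds to a
# bright "target" region. A real analysis would call the CNN here.
img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0                       # bright scatterers
score = lambda x: float(x[6:10, 6:10].sum())

drops = occlusion_map(img, score)
i, j = np.unravel_index(drops.argmax(), drops.shape)
print(f"most influential region near ({i}, {j})")
```

The location of the maximum drop coincides with the region the scorer actually uses, which is the kind of "focus point" such tools aim to expose.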

    Deep learning and descriptor-based classification techniques for radar imagery

    Autonomous moving platforms carrying radar systems can synthesise long antenna apertures and generate Synthetic Aperture Radar (SAR) images. SAR images provide strategic information for military and civilian applications, and they can be acquired day and night under a wide range of weather conditions. Because the interpretation of SAR images is a common challenge, Automatic Target Recognition (ATR) algorithms can assist with decision-making when an operator is in the loop, or automate it when the platforms are fully autonomous. One of the main limitations in developing SAR ATR algorithms is the lack of suitable, publicly available data. Optical image classification, by contrast, has recently attracted significantly more research interest because of the number of potential applications and the profusion of data. As a result, robust feature-based and deep learning classification methods have been developed for optical imaging that could be applied to the SAR domain. In this thesis, a new Inverse SAR (ISAR) dataset consisting of test and training images acquired under a range of geometrical conditions is presented. In addition, a method is proposed to generate extra synthetic images by simulating realistic SAR noise on the original images, to increase the training efficiency of classification algorithms that require a wealth of data, such as deep neural networks. A Gaussian Mixture Model (GMM) segmentation approach is adapted to segment single-polarised SAR images of targets. Features proposed to characterise optical images are transferred to the SAR domain to carry out target classification after segmentation, and their respective performance is compared. A new pose-informed deep learning network architecture, which takes into account the effects of target orientation on target appearance in a SAR image, is proposed. The results presented in this thesis show that this architecture provides a significant performance improvement over a baseline network for almost all datasets used in this work. Understanding the decision-making process of deep networks is another key challenge of deep learning. To address this issue, a new set of analytical tools is proposed that enables the identification, among other things, of the locations of the algorithm's focus points that lead to high classification performance.
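One way to read the synthetic-image-generation step described above: a standard model for SAR intensity speckle is multiplicative, unit-mean noise (exponential for single-look intensity, gamma for multilook). The thesis may use a different noise model, so the following is only a hedged sketch of that idea:

```python
import numpy as np

def add_speckle(intensity_image, looks=1, rng=None):
    """Multiply an intensity image by unit-mean gamma-distributed speckle.

    looks=1 gives the classic exponential (single-look) intensity model;
    larger `looks` models multilook averaging with reduced variance.
    """
    rng = np.random.default_rng(rng)
    noise = rng.gamma(shape=looks, scale=1.0 / looks,
                      size=intensity_image.shape)
    return intensity_image * noise

# Augment a clean image: multiplicative speckle preserves the mean but
# adds signal-dependent variance, unlike additive Gaussian noise.
clean = np.full((64, 64), 2.0)
speckled = add_speckle(clean, looks=1, rng=0)
print(f"mean before: {clean.mean():.2f}, after: {speckled.mean():.2f}")
```

Because the noise is unit-mean, the augmented images keep the radiometry of the originals while presenting a classifier with realistic pixel-level variability.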

    Independent synchronized control and visualization of interactions between living cells and organisms

    To investigate the early stages of cell-cell interactions occurring between living biological samples, imaging methods with appropriate spatiotemporal resolution are required. Among the techniques currently available, those based on optical trapping are promising. Methods to image trapped objects, however, generally suffer from a lack of three-dimensional resolution due to technical constraints. Here, we have developed an original setup comprising two independent modules: holographic optical tweezers, which offer a versatile and precise way to move multiple objects simultaneously but independently, and a confocal microscope that provides fast three-dimensional image acquisition. The optical decoupling of these two modules through the same objective gives users the possibility to easily investigate very early steps in biological interactions. We illustrate the potential of this setup with an analysis of infection by the fungus Drechmeria coniospora of different developmental stages of Caenorhabditis elegans. This has allowed us to identify specific areas on the nematode's surface where fungal spores adhere preferentially. We also quantified this adhesion process for different mutant nematode strains, and thereby derive insights into the host factors that mediate fungal spore adhesion.

    Wrongful Convictions: A Comparative Perspective
