
    Deep Learning Techniques in Radar Emitter Identification

    In the field of electronic warfare (EW), one of the crucial roles of electronic intelligence is the identification of radar signals. In an operational environment, it is essential to identify radar emitters as friend or foe so that appropriate radar countermeasures can be taken against them. With the electromagnetic environment becoming increasingly complex and signal features increasingly diverse, radar emitter identification with high recognition accuracy has become a significantly challenging task. Traditional radar identification methods have shown limitations in this complex electromagnetic scenario. With the emergence of artificial neural networks, and notably deep learning, several neural-network-based radar classification and identification methods have appeared. Machine learning and deep learning algorithms are now frequently utilized to extract various types of information from radar signals more accurately and robustly. This paper illustrates the use of Deep Neural Networks (DNNs) in radar applications for emitter classification and identification. Since deep learning approaches are capable of accurately classifying complicated patterns in radar signals, they have demonstrated significant promise for identifying radar emitters. By offering a thorough literature analysis of deep learning-based methodologies, the study intends to assist researchers and practitioners in better understanding the application of deep learning techniques to the classification and identification of radar emitters. The study demonstrates that DNNs can be used successfully in radar classification and identification applications.
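
    As a minimal, hedged sketch of the kind of DNN such surveys cover (not a method from the paper itself), the following PyTorch model classifies emitters from pulse descriptor features; the feature count, layer widths and number of emitter classes are illustrative assumptions.

```python
# Sketch only: a small fully connected DNN for radar emitter classification.
# Feature layout (e.g. carrier frequency, pulse width, PRI) and class count
# are assumptions, not taken from the surveyed papers.
import torch
import torch.nn as nn

class EmitterDNN(nn.Module):
    def __init__(self, n_features=5, n_emitters=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_emitters),   # raw logits, one per emitter class
        )

    def forward(self, x):
        return self.net(x)

model = EmitterDNN()
pulses = torch.randn(32, 5)              # batch of 32 pulse descriptor vectors
predicted_emitter = model(pulses).argmax(dim=1)
```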

    Deep learning-enabled technologies for bioimage analysis.

    Deep learning (DL) is a subfield of machine learning (ML) that has recently demonstrated its potential to significantly improve quantification and classification workflows in biomedical and clinical applications. Among the end applications profoundly benefitting from DL, cellular morphology quantification is one of the pioneers. Here, we first briefly explain fundamental concepts in DL and then review some of the emerging DL-enabled applications in cell morphology quantification in the fields of embryology, point-of-care ovulation testing, prediction of fetal heart pregnancy, cancer diagnostics via classification of cancer histology images, autosomal polycystic kidney disease, and chronic kidney diseases.
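
    As an illustration of the classification workflows the review discusses, here is a minimal CNN for single-cell image patches; the input size, channel counts and number of morphology classes are assumptions and do not correspond to any specific study cited.

```python
# Sketch only: a small CNN classifying grayscale cell-image patches.
import torch
import torch.nn as nn

class CellCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                # x: (batch, 1, 64, 64) patches
        return self.classifier(self.features(x).flatten(1))

model = CellCNN()
patches = torch.randn(8, 1, 64, 64)      # dummy batch of cell patches
class_scores = model(patches)
```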

    Brain Tumor Diagnosis Support System: A Decision Fusion Framework

    An important factor in providing effective and efficient therapy for brain tumors is early and accurate detection, which can increase survival rates. Current image-based tumor detection and diagnosis techniques are heavily dependent on interpretation by neuro-specialists and/or radiologists, making the evaluation process time-consuming and prone to human error and subjectivity. In addition, widespread use of MR spectroscopy requires specialized processing and assessment of the data, as well as clear and fast presentation of the results as images or maps for routine clinical interpretation of an exam. Automatic brain tumor detection and classification have the potential to offer greater efficiency and more accurate predictions. However, the accuracy of automatic detection and classification techniques tends to depend on the specific image modality and is well known to vary from technique to technique. For this reason, it is prudent to examine how these methods perform across modalities in order to obtain consistently high accuracy. The goal of the proposed framework is to design, implement, and evaluate classification software for discerning various brain tumor types on magnetic resonance imaging (MRI) using textural features. This thesis introduces a brain tumor detection support system that involves the use of a variety of tumor classifiers. The system is designed as a decision fusion framework that enables these multiple classifiers to analyze medical images, such as those obtained from magnetic resonance imaging (MRI). The fusion procedure is grounded in the Dempster-Shafer theory of evidence. Numerous experimental scenarios have been implemented to validate the efficiency of the proposed framework. Compared with alternative approaches, the results show that the methodology developed in this thesis achieves higher accuracy and higher computational efficiency.
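
    Since the fusion step rests on Dempster-Shafer theory, a minimal sketch of Dempster's rule of combination is given below; the mass functions, tumor classes and numbers are illustrative, not the thesis's actual classifier outputs.

```python
# Sketch of Dempster's rule of combination for two mass functions defined over
# the same frame of discernment. Classes and masses below are made up.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset_of_hypotheses: mass}."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two classifiers expressing belief over {glioma, meningioma}:
m_texture = {frozenset({"glioma"}): 0.6, frozenset({"glioma", "meningioma"}): 0.4}
m_shape   = {frozenset({"meningioma"}): 0.3, frozenset({"glioma", "meningioma"}): 0.7}
fused = dempster_combine(m_texture, m_shape)
```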

    Ensembles of Deep Learning Architectures for the Early Diagnosis of the Alzheimer’s Disease.

    Computer Aided Diagnosis (CAD) constitutes an important tool for the early diagnosis of Alzheimer’s Disease (AD), which, in turn, allows the application of treatments that can be simpler and more likely to be effective. This paper explores the construction of classification methods based on deep learning architectures applied to brain regions defined by the Automated Anatomical Labeling (AAL) atlas. Gray Matter (GM) images from each brain area have been split into 3D patches according to the regions defined by the AAL atlas, and these patches are used to train different deep belief networks. An ensemble of deep belief networks is then composed, where the final prediction is determined by a voting scheme. Two deep learning based structures and four different voting schemes are implemented and compared, resulting in a powerful classification architecture in which discriminative features are computed in an unsupervised fashion. The resulting method has been evaluated using a large dataset from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Classification results assessed by cross-validation show that the proposed method is not only valid for differentiating between controls (NC) and AD images, but also provides good performance when tested on the more challenging case of classifying Mild Cognitive Impairment (MCI) subjects. In particular, the classification architecture provides accuracy values up to 0.90 with an AUC of 0.95 for NC/AD classification, 0.84 with an AUC of 0.91 for stable MCI/AD classification, and 0.83 with an AUC of 0.95 for NC/MCI-converters classification.
    This work was partly supported by the MICINN under the projects TEC2012-34306 and PSI2015-65848-R, and the Consejería de Innovación, Ciencia y Empresa (Junta de Andalucía, Spain) under the Excellence Projects P09-TIC-4530, P11-TIC-7103 and the Universidad de Málaga, Programa de fortalecimiento de las capacidades de I+D+I en las Universidades 2014-2015, de la Consejería de Economía, Innovación, Ciencia y Empleo, co-financed by the European Regional Development Fund (FEDER) under the project FC14-SAF30. Data collection and sharing for this project was funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie; Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
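
    The voting stage of the ensemble can be sketched as follows; the per-region predictions are invented for illustration, and the deep belief networks of the paper are replaced by a plain dictionary of their hypothetical outputs.

```python
# Sketch of a majority-vote ensemble over per-region classifiers.
from collections import Counter

def majority_vote(region_predictions):
    """region_predictions: {AAL region name: predicted label}."""
    label, _ = Counter(region_predictions.values()).most_common(1)[0]
    return label

votes = {"Hippocampus_L": "AD", "Hippocampus_R": "AD", "Precuneus_L": "NC",
         "Temporal_Mid_L": "AD", "Cingulum_Post_L": "NC"}
final_label = majority_vote(votes)       # -> "AD"
```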

    Machine learning methods for discriminating natural targets in seabed imagery

    The research in this thesis concerns feature-based machine learning processes and methods for discriminating qualitative natural targets in seabed imagery. The applications considered typically involve time-consuming manual processing stages in an industrial setting. An aim of the research is to assist human analysts by expediting the tedious interpretative tasks using machine methods. Some novel approaches are devised and investigated for solving the application problems. These investigations are compartmentalised into four coherent case studies linked by common underlying technical themes and methods. The first study addresses pockmark discrimination in a digital bathymetry model. Manual identification and mapping of even a relatively small number of these landform objects is an expensive process. A novel, supervised machine learning approach to automating the task is presented. The process maps the boundaries of ≈ 2000 pockmarks in seconds, a task that would take days for a human analyst to complete. The second case study investigates different feature creation methods for automatically discriminating sidescan sonar image textures characteristic of Sabellaria spinulosa colonisation. Results from a comparison of several textural feature creation methods on sonar waterfall imagery show that Gabor filter banks yield some of the best results. A further empirical investigation into the filter bank features created on sonar mosaic imagery leads to the identification of a useful configuration and filter parameter ranges for discriminating the target textures in the imagery. Feature saliency estimation is a vital stage in the machine process. Case study three concerns distance measures for the evaluation and ranking of features on sonar imagery. Two novel consensus methods for creating a more robust ranking are proposed. Experimental results show that the consensus methods can improve robustness over a range of feature parameterisations and various seabed texture classification tasks. The final case study is more qualitative in nature and brings together a number of ideas, applied to the classification of target regions in real-world sonar mosaic imagery. A number of technical challenges arose, and these were surmounted by devising a novel, hybrid unsupervised method. This fully automated machine approach was compared with a supervised approach in an application to the problem of image-based sediment type discrimination. The hybrid unsupervised method produces a plausible class map in a few minutes of processing time. It is concluded that the versatile, novel process should be generalisable to the discrimination of other subjective natural targets in real-world seabed imagery, such as Sabellaria textures and pockmarks (with appropriate features and feature tuning). Further, the full automation of pockmark and Sabellaria discrimination is feasible within this framework.
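
    The Gabor filter-bank features highlighted in the second case study can be sketched as follows; the frequencies, orientations and mean/variance summary are assumptions rather than the thesis's tuned configuration.

```python
# Sketch: Gabor filter-bank texture features for a 2D sonar image patch.
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Return mean and variance of each filter's response magnitude."""
    feats = []
    for freq in frequencies:
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, imag = gabor(image, frequency=freq, theta=theta)
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.var()])
    return np.array(feats)

patch = np.random.rand(64, 64)           # stand-in for a sonar texture patch
features = gabor_features(patch)         # 3 freqs x 4 orientations x 2 stats
```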

    Sonar image interpretation for sub-sea operations

    Mine Counter-Measure (MCM) missions are conducted to neutralise underwater explosives. Automatic Target Recognition (ATR) assists operators by increasing the speed and accuracy of data review. ATR embedded on vehicles enables adaptive missions, which increase the speed of data acquisition. This thesis addresses three challenges: the speed of data processing, the robustness of ATR to environmental conditions, and the large quantities of data required to train an algorithm. The main contribution of this thesis is a novel ATR algorithm. The algorithm uses features derived from the projection of 3D boxes to produce a set of 2D templates. The template responses are independent of grazing angle, range and target orientation. Integer skewed integral images are derived to accelerate the calculation of the template responses. The algorithm is compared to the Haar cascade algorithm. For a single model of sonar and cylindrical targets, the algorithm reduces the Probability of False Alarm (PFA) by 80% at a Probability of Detection (PD) of 85%. When the algorithm is trained on target data from another model of sonar, the PD is only 6% lower even though no representative target data was used for training. The second major contribution is an adaptive ATR algorithm that uses local sea-floor characteristics to address the problem of ATR robustness with respect to the local environment. A dual-tree wavelet decomposition of the sea-floor and a Markov Random Field (MRF) based graph-cut algorithm are used to segment the terrain. A Neural Network (NN) is then trained to filter ATR results based on the local sea-floor context. It is shown, for the Haar cascade algorithm, that the PFA can be reduced by 70% at a PD of 85%. The speed of data processing is addressed using novel pre-processing techniques. The standard three-class MRF for sonar image segmentation is formulated using graph cuts; consequently, a 1.2 million pixel image is segmented in 1.2 seconds. Additionally, local estimation of class models is introduced to remove range-dependent segmentation quality. Finally, an A* graph search is developed to remove the surface return, a line of saturated pixels often detected as false alarms by ATR. The A* search identifies the surface return in 199 of 220 images tested, with a runtime of 2.1 seconds. The algorithm is robust to the presence of ripples and rocks.
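
    The integral-image idea behind the fast template responses can be sketched with the standard (non-skewed) summed-area table below; the integer skewed variant described in the thesis is not reproduced here.

```python
# Sketch: standard integral image and a constant-time rectangular box sum.
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border row and column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

sonar = np.random.randint(0, 256, (480, 640))   # dummy sonar intensity image
ii = integral_image(sonar)
response = box_sum(ii, 100, 200, 140, 260)      # one rectangular template term
```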

    Object Recognition in 3D data using Capsules

    The proliferation of 3D sensors has spurred 3D computer vision research for many application areas, including virtual reality, autonomous navigation and surveillance. Recently, different methods have been proposed for 3D object classification. Many of the existing 2D and 3D classification methods rely on convolutional neural networks (CNNs), which are very successful at extracting features from the data. However, CNNs cannot address the spatial relationship between features due to the max-pooling layers, and they require vast amounts of data for training. In this work, we propose a model architecture for 3D object classification, which is an extension of Capsule Networks (CapsNets) to 3D data. Our proposed architecture, called 3D CapsNet, takes advantage of the fact that a CapsNet preserves the orientation and spatial relationship of the extracted features, and thus requires less data to train the network. We use the ModelNet database, a comprehensive clean collection of 3D CAD models of objects, to train and test the 3D CapsNet model. We then compare our approach with ShapeNet, a deep belief network for object classification based on CNNs, and show that our method provides a performance improvement, especially when the training data size gets smaller.
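
    A minimal sketch of the capsule "squash" non-linearity that CapsNet-style models, including a 3D extension, apply to capsule output vectors is shown below; the shapes are illustrative and the routing-by-agreement step is omitted.

```python
# Sketch: the squash non-linearity applied to capsule output vectors.
import torch

def squash(capsules, dim=-1, eps=1e-8):
    """Scale capsule vectors to length in (0, 1) while preserving direction."""
    sq_norm = (capsules ** 2).sum(dim=dim, keepdim=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * capsules / torch.sqrt(sq_norm + eps)

u = torch.randn(4, 32, 8)   # 32 primary capsules of dimension 8, batch of 4
v = squash(u)               # vector lengths now encode detection probability
```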