4,656 research outputs found

    Comparison of Classifiers for Radar Emitter Type Identification

    ARTMAP neural network classifiers are considered for the identification of radar emitter types from their waveform parameters. These classifiers can represent radar emitter type classes with one or more prototypes, perform on-line incremental learning to account for novelty encountered in the field, and process radar pulse streams at high speed, making them attractive for real-time applications such as electronic support measures (ESM). The performance of four ARTMAP variants (ART-EMAP (Stage 1), ARTMAP-IC, fuzzy ARTMAP, and Gaussian ARTMAP) is assessed with radar data gathered in the field. The k nearest neighbor (kNN) and radial basis function (RBF) classifiers are used for reference. Simulation results indicate that fuzzy ARTMAP and Gaussian ARTMAP achieve an average classification rate consistently higher than that of the other ARTMAP classifiers and comparable to that of kNN and RBF. ART-EMAP, ARTMAP-IC and fuzzy ARTMAP require fewer training epochs than Gaussian ARTMAP and RBF, and substantially fewer prototype vectors (thus smaller physical memory requirements and faster fielded performance) than Gaussian ARTMAP, RBF and kNN. Overall, fuzzy ARTMAP performs at least as well as the other classifiers in both accuracy and computational complexity, and better than each of them in at least one of these aspects of performance. Incorporation into fuzzy ARTMAP of the MT- feature of ARTMAP-IC is found to be essential for convergence during on-line training with this data set.
    Funding: Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-409) (S.G. and M.A.R.); National Science Foundation (IRI-97-20333) (S.G.); Natural Science and Engineering Research Council of Canada (E.G.); Office of Naval Research (N00014-95-1-0657).
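    The abstract gives no implementation details, but the fuzzy ART category search that underlies fuzzy ARTMAP is compact enough to sketch. The following is a minimal, illustrative Python sketch of that unsupervised core only (the supervised map field, match tracking, and the MT- modification are omitted); all names and hyperparameter values are assumptions, not taken from the paper.

        import numpy as np

        def fuzzy_art_step(x, weights, rho=0.75, alpha=0.001, beta=1.0):
            # One presentation of a complement-coded input x (values in [0, 1]).
            # Returns the index of the category that resonates, committing a
            # new prototype when no existing category passes the vigilance test.
            if weights:
                W = np.array(weights)
                m = np.minimum(x, W)                         # fuzzy AND with each prototype
                T = m.sum(axis=1) / (alpha + W.sum(axis=1))  # category choice function
                for j in np.argsort(-T):                     # search by decreasing activation
                    if m[j].sum() / x.sum() >= rho:          # vigilance (match) test
                        weights[j] = beta * m[j] + (1 - beta) * W[j]  # prototype update
                        return j
            weights.append(x.copy())                         # commit a new category
            return len(weights) - 1

        # Complement coding doubles each feature vector: a -> [a, 1 - a].
        a = np.array([0.2, 0.7])
        prototypes = []
        fuzzy_art_step(np.concatenate([a, 1.0 - a]), prototypes)

    Complement coding keeps the total input norm constant, which is what makes the choice and match computations above well behaved.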

    Parallel Multistage Wide Neural Network

    Deep learning networks have achieved great success in many areas such as large-scale image processing. However, they usually need large computing resources and long training time, and they process easy and hard samples in the same way, which is inefficient. Another undesirable problem is that the network generally needs to be retrained to learn new incoming data. Efforts have been made to reduce computing resources and realize incremental learning by adjusting architectures, such as scalable-effort classifiers, multi-grained cascade forest (gcForest), conditional deep learning (CDL), Tree-CNN, decision tree structure with knowledge transfer (ERDK), and forest of decision trees with RBF networks and knowledge transfer (FDRK). In this paper, a parallel multistage wide neural network (PMWNN) is presented. It is composed of multiple stages that classify different parts of the data. First, a wide radial basis function (WRBF) network is designed to learn features efficiently in the wide direction. It works on both vector and image instances, and can be trained quickly in a single epoch using subsampling and least squares (LS). Second, successive stages of WRBF networks are combined to make up the PMWNN. Each stage focuses on the misclassified samples of the previous stage. The cascade can stop growing at an early stage, and a new stage can be added incrementally when new training data are acquired. Finally, the stages of the PMWNN can be tested in parallel, speeding up the testing process. To sum up, the proposed PMWNN has the advantages of (1) fast training, (2) optimized computing resources, (3) incremental learning, and (4) parallel testing with stages. Experimental results on MNIST, several large hyperspectral remote sensing datasets, CVL single digits, SVHN, and audio signal datasets show that the WRBF and PMWNN achieve competitive accuracy compared to learning models such as stacked autoencoders, deep belief nets, SVM, MLP, LeNet-5, RBF networks, the recently proposed CDL, broad learning, gcForest, etc. In fact, the PMWNN often has the best classification performance.
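    To make the training recipe concrete, here is a minimal sketch of a single least-squares-trained RBF stage and the misclassification-driven cascade described above. It is an illustration of the idea under stated assumptions (random subsampling of centers, Gaussian kernels, one-hot targets), not the authors' implementation; all names and hyperparameters are hypothetical.

        import numpy as np

        class WRBFStage:
            # One wide RBF stage: subsampled centers, least-squares output weights.
            def __init__(self, n_centers=200, gamma=1.0):
                self.n_centers, self.gamma = n_centers, gamma

            def _hidden(self, X):
                # Gaussian activations of each sample against each center.
                d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
                return np.exp(-self.gamma * d2)

            def fit(self, X, Y):  # Y is one-hot, shape (n_samples, n_classes)
                idx = np.random.choice(len(X), min(self.n_centers, len(X)), replace=False)
                self.centers = X[idx]  # subsampling stands in for center selection
                self.W, *_ = np.linalg.lstsq(self._hidden(X), Y, rcond=None)  # one-shot LS fit
                return self

            def predict(self, X):
                return self._hidden(X) @ self.W

        def fit_pmwnn(X, y, n_classes, max_stages=5):
            # Cascade of WRBF stages, each fitted to the previous stage's errors.
            Y = np.eye(n_classes)[y]
            stages, Xs, Ys, ys = [], X, Y, y
            for _ in range(max_stages):
                stage = WRBFStage().fit(Xs, Ys)
                stages.append(stage)
                miss = stage.predict(Xs).argmax(1) != ys
                if not miss.any():
                    break  # early stop: nothing left to refine
                Xs, Ys, ys = Xs[miss], Ys[miss], ys[miss]
            return stages  # at test time, stage outputs can be evaluated in parallel

    Because each stage is solved in closed form by least squares, adding a stage for newly acquired data does not require retraining the earlier stages, which is the incremental-learning property the abstract emphasizes.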

    RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning Machines for Robustness Improvement

    Extreme learning machine (ELM), an emerging branch of shallow networks, has shown excellent generalization and fast learning speed. However, for blended data the robustness of ELM is weak, because the weights and biases of its hidden nodes are set randomly; moreover, noisy data exert a further negative effect. To solve this problem, a new framework called RMSE-ELM is proposed in this paper. It is a two-layer recursive model. In the first layer, the framework trains many ELMs concurrently in different groups, then employs selective ensemble to pick out an optimal set of ELMs in each group; these sets are merged into a large group of ELMs called the candidate pool. In the second layer, selective ensemble is applied recursively on the candidate pool to acquire the final ensemble. In the experiments, we use blended UCI datasets to confirm the robustness of the new approach in two key aspects (mean square error and standard deviation). The space complexity of the method is increased to some degree, but the results show that RMSE-ELM significantly improves robustness with only a slight increase in computational time compared with representative methods (ELM, OP-ELM, GASEN-ELM, GASEN-BP and E-GASEN). It is a promising framework for addressing the robustness issue of ELM for high-dimensional blended data in the future.
    Comment: Accepted for publication in Mathematical Problems in Engineering, 09/22/201
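    For concreteness, the two building blocks (a basic ELM and a selective-ensemble step) can be sketched as follows. This is a minimal illustration assuming numeric feature matrices and one-hot or regression-style targets; greedy forward selection by validation MSE stands in for the GASEN-style selective ensemble used in the paper, and all names are hypothetical.

        import numpy as np

        def train_elm(X, Y, n_hidden=100, seed=0):
            # Basic ELM: random hidden weights and biases, analytic LS output layer.
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            H = np.tanh(X @ W + b)
            beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
            return W, b, beta

        def elm_predict(model, X):
            W, b, beta = model
            return np.tanh(X @ W + b) @ beta

        def greedy_selective_ensemble(models, Xv, Yv, k=5):
            # Forward selection by validation MSE; a simple stand-in for GASEN.
            preds = [elm_predict(m, Xv) for m in models]
            chosen = []
            for _ in range(k):
                def mse(i):
                    avg = np.mean([preds[j] for j in chosen + [i]], axis=0)
                    return np.mean((avg - Yv) ** 2)
                chosen.append(min((i for i in range(len(models)) if i not in chosen), key=mse))
            return chosen

    In the RMSE-ELM framework, this selection step would be applied once within each group of ELMs and then recursively on the merged candidate pool to produce the final ensemble.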