
    Comparative Performance Measures of Fuzzy ARTMAP, Learned Vector Quantization, and Back Propagation for Handwritten Character Recognition

    This article compares the performance of Fuzzy ARTMAP with that of Learned Vector Quantization and Back Propagation on a handwritten character recognition task. Training with Fuzzy ARTMAP to a fixed criterion required many fewer epochs, and voting with Fuzzy ARTMAP yielded the highest recognition rates. British Petroleum (89-A-1204); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI 90-00530); Office of Naval Research (N00014-91-J-4100); Air Force Office of Scientific Research (90-0175); Sharp Corporation, Information System Research & Development Center, Nara, Japan

    Comparison on Logistic Regression, Random Forest, and CNN for Handwritten Digit Recognition

    Handwritten digit and character recognition is the process of identifying handwritten numbers or characters using computers or other devices: a machine trains itself to recognize handwritten digits or characters from sources such as bank cheques, mail, and images. This paper applies the CNN, Logistic Regression, and Random Forest algorithms to a handwritten digit recognition system. The system uses the MNIST dataset for training and testing the models; to obtain the best accuracy, this work implements CNN, Logistic Regression, and Random Forest with Python libraries. Finally, the algorithms are compared on accuracy and recognition duration, with CNN achieving better accuracy than the other algorithms.
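The comparison the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's code: it uses scikit-learn's small built-in digits set rather than MNIST, and the CNN (which would need a deep-learning library) is omitted, so only the Logistic Regression and Random Forest baselines are shown.

```python
# Sketch: compare two of the paper's three classifiers on a digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# 8x8 digit images flattened to 64-dimensional vectors
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = [("logistic regression", LogisticRegression(max_iter=2000)),
          ("random forest", RandomForestClassifier(n_estimators=100, random_state=0))]

for name, clf in models:
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)  # accuracy on held-out digits
    print(f"{name}: {acc:.3f}")
```

The same train/score loop extends naturally to a CNN once the images are reshaped back to 2-D and fed to a convolutional model.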

    Unsupervised Learning with Self-Organizing Spiking Neural Networks

    We present a system that hybridizes self-organizing map (SOM) properties with spiking neural networks (SNNs) while retaining many of the features of SOMs. Networks are trained in an unsupervised manner to learn a self-organized lattice of filters via excitatory-inhibitory interactions among populations of neurons. We develop and test several inhibition strategies, such as inhibition that grows with inter-neuron distance and two distinct levels of inhibition. The quality of the unsupervised learning algorithm is evaluated using examples with known labels. Several biologically inspired classification tools are proposed and compared, including population-level confidence rating and n-grams using a spike-motif algorithm. With the optimal choice of parameters, our approach produces improvements over state-of-the-art spiking neural networks.
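One of the inhibition strategies the abstract names, inhibition growing with inter-neuron distance, can be sketched as a lateral weight matrix over the SOM lattice. This is a hypothetical illustration under assumed names and a linear distance law; the paper's actual spiking dynamics are not reproduced.

```python
import numpy as np

def distance_inhibition(grid_h, grid_w, c=0.5):
    """Lateral inhibitory weights on a grid_h x grid_w lattice:
    the farther apart two neurons sit, the stronger the inhibition."""
    coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)
    # pairwise Euclidean distances between lattice positions
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    W = -c * d                 # inhibition grows linearly with distance
    np.fill_diagonal(W, 0.0)   # no self-inhibition
    return W

W = distance_inhibition(3, 3)  # 9 neurons on a 3x3 lattice
```

Adding `W @ activity` to each neuron's input then suppresses distant winners more than near neighbors, encouraging a locally smooth, self-organized map.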

    Cyclic Self-Organizing Map for Object Recognition

    Object recognition is an important machine learning (ML) application. A robust ML application requires three major steps: (1) preprocessing (i.e., preparing the data for the ML algorithms); (2) applying appropriate segmentation and feature extraction algorithms to abstract the core feature data; and (3) applying feature classification or recognition algorithms. The quality of an ML algorithm depends on a good representation of the data, and data representation requires extracting features with an appropriate learning rate. The learning rate influences how the algorithm learns from the data and how the data are processed and treated. Generally, this parameter is found on a trial-and-error basis, and scholars sometimes set it to a constant. This paper presents a new optimization technique for object recognition problems, called Cyclic-SOM, which accelerates the learning process of the self-organizing map (SOM) using a non-constant learning rate. The standard SOM uses the Euclidean distance to measure the similarity between the inputs and the feature maps; our algorithm instead considers image correlation using the mean absolute difference, and it uses cyclical learning rates to achieve high performance and a better recognition rate. Cyclic-SOM possesses the following merits: (1) it accelerates the learning process and eliminates the need to experimentally find the best values and schedule for the learning rates; (2) it improves both results and training; (3) it requires no manual tuning of the learning rate and appears robust to noisy gradient information, different model architectures, various data modalities, and the selection of hyper-parameters; and (4) it shows promising results compared to other methods on different datasets. Three wide benchmark databases illustrate the efficiency of the proposed technique: AHD Base for Arabic digits, MNIST for English digits, and CMU-PIE for faces.
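The two ingredients the abstract names, a cyclical learning rate and the mean absolute difference (MAD) in place of Euclidean distance, can be sketched as below. This is an illustrative reading, not the authors' code: the triangular schedule shape and all parameter names are assumptions.

```python
import numpy as np

def cyclical_lr(step, base=0.01, peak=0.5, cycle=100):
    """Triangular cyclical learning rate: rises from `base` to `peak`
    at mid-cycle, then falls back to `base`."""
    pos = abs((step % cycle) / (cycle / 2) - 1.0)  # 1 -> 0 -> 1 over one cycle
    return base + (peak - base) * (1.0 - pos)

def best_matching_unit(weights, x):
    """Find the SOM unit whose weight vector has the smallest
    mean absolute difference (MAD) to the input, instead of the
    traditional Euclidean distance."""
    return int(np.argmin(np.mean(np.abs(weights - x), axis=1)))
```

In a SOM training loop, `cyclical_lr(step)` would scale the weight update toward each input, and `best_matching_unit` would select which unit (and neighborhood) gets updated.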

    Digit and command interpretation for electronic book using neural network and genetic algorithm

    Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering, 2004-2005. Academic research (refereed): publication in refereed journal. Version of Record.

    A Neural Network Architecture for Autonomous Learning, Recognition, and Prediction in a Nonstationary World

    In a constantly changing world, humans are adapted to alternate routinely between attending to familiar objects and testing hypotheses about novel ones. We can rapidly learn to recognize and name novel objects without unselectively disrupting our memories of familiar ones. We can notice fine details that differentiate nearly identical objects and generalize across broad classes of dissimilar objects. This chapter describes a class of self-organizing neural network architectures, called ARTMAP, that are capable of fast yet stable on-line recognition learning, hypothesis testing, and naming in response to an arbitrary stream of input patterns (Carpenter, Grossberg, Markuzon, Reynolds, and Rosen, 1992; Carpenter, Grossberg, and Reynolds, 1991). The intrinsic stability of ARTMAP allows the system to learn incrementally for an unlimited period of time. The system's stability properties can be traced to the structure of its learned memories, which encode clusters of attended features into its recognition categories, rather than slow averages of category inputs. The level of detail in the learned attentional focus is determined moment-by-moment, depending on predictive success: an error due to over-generalization automatically focuses attention on additional input details, enough of which are learned in a new recognition category so that the predictive error will not be repeated. An ARTMAP system creates an evolving map between a variable number of learned categories that compress one feature space (e.g., visual features) to learned categories of another feature space (e.g., auditory features). Input vectors can be either binary or analog. Computational properties of the networks enable them to perform significantly better in benchmark studies than alternative machine learning, genetic algorithm, or neural network models. Some of the critical problems that challenge and constrain any such autonomous learning system are illustrated first. Design principles that work together to solve these problems are then outlined. These principles are realized in the ARTMAP architecture, which is specified as an algorithm. Finally, ARTMAP dynamics are illustrated by means of a series of benchmark simulations. Advanced Research Projects Agency (N00014-92-J-4015); British Petroleum (89A-1204); National Science Foundation (IRI-90-J-4015); Office of Naval Research (N00014-91-J-4100); Air Force Office of Scientific Research (F49620-92-J-0225)
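The category search described above, attention narrowing when a prediction fails, rests on Fuzzy ART's choice and vigilance computations, which can be sketched compactly. This is a hedged sketch of the standard Carpenter-Grossberg formulation only: complement coding, the map field, and learning updates are omitted, and `alpha`/`rho` values are illustrative.

```python
import numpy as np

def choose_category(weights, x, alpha=0.001, rho=0.75):
    """Fuzzy ART category search: rank categories by choice value,
    accept the first that passes the vigilance (match) test."""
    fuzzy_and = np.minimum(weights, x)                       # fuzzy AND of input and each category
    choice = fuzzy_and.sum(axis=1) / (alpha + weights.sum(axis=1))
    for j in np.argsort(-choice):                            # search in order of choice value
        if fuzzy_and[j].sum() / x.sum() >= rho:              # vigilance test: close enough match?
            return int(j)                                    # resonance: this category wins
    return None                                              # mismatch everywhere: recruit a new category
```

Raising `rho` after a predictive error is what forces the network to attend to finer input details, creating a narrower category rather than over-generalizing, as the chapter describes.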