
    From Data Topology to a Modular Classifier

    This article describes an approach to designing a distributed and modular neural classifier. The approach introduces a new hierarchical clustering method that determines reliable regions in the representation space by exploiting supervised information. A multilayer perceptron is then associated with each detected cluster and charged with recognizing elements of that cluster while rejecting all others. The resulting global classifier is composed of a set of cooperating neural networks, completed by a K-nearest-neighbor classifier that handles elements rejected by all the neural networks. Experimental results for the handwritten digit recognition problem and comparisons with nonmodular neural and statistical classifiers are given.
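    The routing logic of such a modular classifier can be sketched as follows. This is a minimal illustration of the decision rule described in the abstract (per-cluster experts with a reject option, and a K-nearest-neighbor fallback for inputs all experts reject); the function names, the confidence-threshold rejection criterion, and the toy experts are assumptions, not the paper's implementation.

    ```python
    def modular_predict(x, experts, knn_fallback, threshold=0.5):
        """Route input x through a set of per-cluster expert classifiers.

        Each expert returns (label, confidence). An expert "rejects" x when
        its confidence falls below `threshold`. If every expert rejects,
        the K-nearest-neighbor fallback classifier decides, mirroring the
        global architecture described in the abstract.
        """
        best_label, best_conf = None, -1.0
        for expert in experts:
            label, conf = expert(x)
            if conf >= threshold and conf > best_conf:
                best_label, best_conf = label, conf
        if best_label is None:  # all experts rejected this input
            return knn_fallback(x)
        return best_label


    # Toy experts: each is confident only on "its" region of the input space.
    experts = [
        lambda x: (0, 0.9) if x < 5 else (0, 0.1),    # cluster 0 expert
        lambda x: (1, 0.9) if x >= 10 else (1, 0.1),  # cluster 1 expert
    ]
    knn = lambda x: 2  # stand-in for a trained KNN classifier

    print(modular_predict(3, experts, knn))   # accepted by expert 0
    print(modular_predict(12, experts, knn))  # accepted by expert 1
    print(modular_predict(7, experts, knn))   # rejected by both, KNN decides
    ```

    In a real system each expert would be a trained multilayer perceptron and the fallback a K-nearest-neighbor classifier fit on the training data; the sketch only shows how the cooperation and rejection fit together.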

    Digit and command interpretation for electronic book using neural network and genetic algorithm

    Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering, 2004-2005. Academic research: refereed; publication in refereed journal. Version of Record. Published.

    An Adaptive modular neural network with application to unconstrained character recognition

    "August 1993." Includes bibliographical references (p. 24-27). Supported by the Productivity From Information Technology (PROFIT) Research Initiative at MIT. Lik Mui et al.

    A Biologically Plausible SOM Representation of the Orthographic Form of 50,000 French Words

    Recently, an important aspect of human visual word recognition has been characterized: letter position is encoded in our brain using an explicit representation of order based on letter pairs, the open-bigram coding [15]. We hypothesize that spelling has evolved in order to minimize reading errors; therefore, word recognition using bigrams, instead of letters, should be more efficient. First, we study the influence of the size of the neighborhood, which defines the number of bigrams per word, on the performance of the matching between bigrams and words. Our tests are conducted against one of the best recognition solutions used by industry today, which matches letters to words. Second, we build a cortical map representation of the words in the bigram space, which requires numerous experiments in order to achieve a satisfactory projection. Third, we develop an ultra-fast version of the self-organizing map in order to achieve learning in minutes instead of months.
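    The open-bigram coding mentioned in the abstract can be sketched in a few lines: a word is represented by the set of ordered letter pairs whose positions are within a neighborhood of each other. The function name and the `max_gap` neighborhood parameter here are illustrative assumptions, not the paper's implementation.

    ```python
    def open_bigrams(word, max_gap=2):
        """Return the set of open bigrams of `word`: ordered letter pairs
        whose positions are at most `max_gap` apart. A larger neighborhood
        yields more bigrams per word."""
        return {word[i] + word[j]
                for i in range(len(word))
                for j in range(i + 1, min(i + max_gap + 1, len(word)))}

    print(sorted(open_bigrams("cart", max_gap=2)))
    # pairs within distance 2: ca, cr, ar, at, rt
    ```

    The size of the neighborhood (`max_gap`) directly controls how many bigrams encode each word, which is the parameter whose influence on matching performance the abstract says is studied first.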