    Applying MDL to Learning Best Model Granularity

    The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters. Too high a precision generally involves modeling accidental noise, while too low a precision may lead to confusion of models that should be distinguished. This precision is often determined ad hoc. In MDL the best model is the one that most compresses a two-part code of the data set; this embodies "Occam's Razor." In two quite different experimental settings the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic matching, using multiple prototypes per character, the optimal prediction rate is predicted for the value of the learned parameter (the length of the sampling interval) considered most likely by MDL, which is shown to coincide with the best value found experimentally. In the second experiment the task is to model a robot arm with two degrees of freedom using a three-layer feed-forward neural network, where we need to determine the number of nodes in the hidden layer giving the best modeling performance. The optimal model (the one that extrapolates best on unseen examples) is predicted for the number of hidden-layer nodes considered most likely by MDL, which again is found to coincide with the best value found experimentally.
    Comment: LaTeX, 32 pages, 5 figures. Artificial Intelligence journal, to appear.
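
    The two-part-code selection described above can be illustrated with a minimal Python sketch. It chooses the degree of a polynomial fit (a stand-in for model granularity such as parameter precision or hidden-layer size) by minimizing an approximate code length L(model) + L(data | model); the synthetic data and the crude bit-counting formulas are assumptions for illustration, not the paper's actual experiments.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    x = np.linspace(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(scale=0.1, size=n)    # synthetic data

    def two_part_code_length(degree):
        """Approximate L(model) + L(data | model), in bits."""
        coeffs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coeffs, x)
        k = degree + 1                                    # number of parameters
        model_bits = 0.5 * k * np.log2(n)                 # parameters at ~1/sqrt(n) precision
        sigma2 = max(np.mean(residuals ** 2), 1e-12)
        data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)  # Gaussian noise model
        return model_bits + data_bits

    degrees = range(1, 15)
    best_degree = min(degrees, key=two_part_code_length)
    print("MDL-preferred degree:", best_degree)

    Degrees that are too high mostly encode noise (the model part of the code grows faster than the data part shrinks), while degrees that are too low leave structure unexplained, which is exactly the granularity trade-off the abstract describes.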

    A study on the use of Gabor features for Chinese OCR

    We revisit the topic of Gabor feature extraction for Chinese OCR. We adopt a very simple discriminant function to construct a maximum-discriminant-function-based character recognizer, and experiment with a simple way of forming a feature vector for each character image by extracting Gabor features at a single wavelength at locations uniformly sampled at one spatial resolution. Extensive experiments on large-vocabulary Chinese OCR for both machine-printed and handwritten characters, using a large amount of training and testing data, demonstrate the effectiveness of Gabor features for Chinese OCR. Using Gabor features as raw features, we have constructed several state-of-the-art Chinese OCR engines.
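
    As a rough illustration of the kind of feature vector described, the sketch below applies a single-wavelength Gabor filter at several orientations to a toy character image and samples the filter responses on a uniform grid. The filter parameters, grid size, and toy image are assumptions for illustration, not the configuration used in those engines.

    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(wavelength, theta, sigma, size=15):
        """Real part of a Gabor filter at one wavelength and orientation."""
        half = size // 2
        yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
        x_t = xx * np.cos(theta) + yy * np.sin(theta)
        y_t = -xx * np.sin(theta) + yy * np.cos(theta)
        envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2))
        return envelope * np.cos(2 * np.pi * x_t / wavelength)

    def gabor_features(image, wavelength=4.0, n_orientations=8, grid=8):
        """Filter at several orientations, then sample responses on a uniform grid."""
        feats = []
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kern = gabor_kernel(wavelength, theta, sigma=2.0)
            response = np.abs(convolve2d(image, kern, mode="same"))
            h, w = response.shape
            ys = np.linspace(0, h - 1, grid).astype(int)
            xs = np.linspace(0, w - 1, grid).astype(int)
            feats.append(response[np.ix_(ys, xs)].ravel())
        return np.concatenate(feats)

    img = np.zeros((64, 64))
    np.fill_diagonal(img, 1.0)            # toy "character": a diagonal stroke
    print(gabor_features(img).shape)      # 8 orientations * 8*8 grid points -> (512,)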

    Feature Extraction Methods for Character Recognition

    Online Handwritten Chinese/Japanese Character Recognition

    Probabilistic Neural Network based Approach for Handwritten Character Recognition

    In this paper, a recognition system for totally unconstrained handwritten characters of Kannada, a south Indian language, is proposed. The proposed feature extraction technique is based on the Fourier Transform and the well-known Principal Component Analysis (PCA). The system is trained on the appropriate frequency-band images, followed by the PCA feature extraction scheme. For the subsequent classification, a Probabilistic Neural Network (PNN) is used. The proposed system is tested on a large database of Kannada characters as well as on the standard COIL-20 object database, and the results are found to be better than those of standard techniques.
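
    A minimal sketch of the described pipeline is given below: log-magnitude FFT features, PCA reduction, and a Probabilistic Neural Network realized as a Parzen-window classifier with one Gaussian kernel per training sample. The kernel width, the PCA dimensionality, and the synthetic stand-in data are assumptions for illustration, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import PCA

    def fft_features(images):
        """Log-magnitude spectrum of each image, flattened to a vector."""
        return np.log1p(np.abs(np.fft.fft2(images))).reshape(len(images), -1)

    class PNN:
        """Probabilistic Neural Network: Gaussian kernel density estimate per class."""
        def __init__(self, sigma=1.0):
            self.sigma = sigma

        def fit(self, X, y):
            self.X, self.y = X, y
            self.classes = np.unique(y)
            return self

        def predict(self, X):
            preds = []
            for x in X:
                d2 = np.sum((self.X - x) ** 2, axis=1)
                k = np.exp(-d2 / (2 * self.sigma ** 2))
                scores = [k[self.y == c].mean() for c in self.classes]
                preds.append(self.classes[int(np.argmax(scores))])
            return np.array(preds)

    # Synthetic stand-in for 16x16 character images, two classes.
    rng = np.random.default_rng(0)
    images = rng.normal(size=(100, 16, 16)) + np.repeat([0.0, 1.5], 50)[:, None, None]
    labels = np.repeat([0, 1], 50)

    feats = PCA(n_components=10).fit_transform(fft_features(images))
    model = PNN(sigma=2.0).fit(feats[::2], labels[::2])          # train on half
    print((model.predict(feats[1::2]) == labels[1::2]).mean())   # held-out accuracy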