
    AUTOMATIC ASSESSMENT MARK ENTRY SYSTEM USING LOCAL BINARY PATTERN (LBP)

    Offline handwriting recognition continues to be a fundamental research problem in document analysis and retrieval. The common method of extracting handwritten marks from assessment forms is to assign a person to type the marks into a spreadsheet manually. This method is very time-consuming, not cost-effective, and prone to human error. In this project, a number recognition system is developed using the local binary pattern (LBP) technique to extract students’ identity numbers and handwritten marks from assessment forms and convert them into a spreadsheet. The score-sheet template is designed as in Appendix 1 to collect samples of handwritten numbers. The training data contain three sets of LBP histograms for each digit. The recognition rate of handwritten digits using LBP alone is about 50%, because LBP cannot fully describe the structure of the digits. Instead, LBP is useful for ranking the digits ‘0’ to ‘9’ from the highest similarity score to the lowest, relative to a sample, using the chi-square distance. The recognition rate is greatly improved to about 95% by verifying the output of the chi-square distance against the salient structural features of the digits.
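    The pipeline the abstract describes (an LBP histogram per digit, compared with the chi-square distance) can be sketched roughly as follows. This is a minimal illustration with random placeholder "templates" standing in for the project's actual training histograms; the template images, sizes, and threshold-free ranking are assumptions for the sketch, not details from the paper.

    ```python
    import numpy as np

    def lbp_image(img):
        """Basic 3x3 local binary pattern: each of the 8 neighbours is compared
        with the centre pixel, giving an 8-bit code (0-255) per interior pixel."""
        c = img[1:-1, 1:-1]
        # 8 neighbours visited in a fixed clockwise order, one bit each
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros(c.shape, dtype=int)
        for bit, (dy, dx) in enumerate(offsets):
            nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            code |= (nb >= c).astype(int) << bit
        return code

    def lbp_histogram(img):
        """Normalised 256-bin histogram of LBP codes."""
        h = np.bincount(lbp_image(img).ravel(), minlength=256).astype(float)
        return h / h.sum()

    def chi_square(h1, h2, eps=1e-10):
        """Chi-square distance between two histograms (smaller = more similar)."""
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    # Toy usage: rank ten stored digit "templates" by similarity to a query image.
    rng = np.random.default_rng(0)
    templates = {d: rng.integers(0, 256, (28, 28)) for d in range(10)}
    query = templates[3].copy()  # exact copy of the '3' template, as a sanity check
    qh = lbp_histogram(query)
    ranking = sorted(range(10), key=lambda d: chi_square(qh, lbp_histogram(templates[d])))
    # ranking[0] is the best match ('3' here, since the query equals its template)
    ```

    This ranking step is exactly where the abstract says LBP is useful: the chi-square ordering narrows the candidates, and a separate structural check on the top candidates lifts the accuracy.
    
    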

    Character Recognition

    Character recognition is one of the most widely used pattern recognition technologies in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction, and classification to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.

    An IoT System for Converting Handwritten Text to Editable Format via Gesture Recognition

    Evolution of the traditional classroom has led to the electronic classroom, i.e. e-learning. The growth of the traditional classroom does not stop at e-learning or distance learning; the next step beyond the electronic classroom is the smart classroom. Among the most popular features of the electronic classroom are capturing video/photos of lecture content and extracting handwriting for note-taking. Numerous techniques have been implemented to extract handwriting from video/photos of a lecture, but deficiencies in these techniques remain to be resolved, and resolving them can turn the electronic classroom into a smart classroom. In this thesis, we present a real-time IoT system that converts handwritten text into an editable format by implementing hand gesture recognition (HGR) with a Raspberry Pi and a camera. The HGR module is built using an edge detection algorithm, and HGR is used in this system to reduce the computational complexity of previous systems, i.e. removing redundant images and the lecturer's body from the image, and recollecting text from previous images to fill the area from which the lecturer's body has been removed. The Raspberry Pi is used to capture frames, perform HGR, and build an IoT-based smart classroom. Handwritten images are converted into an editable format using OpenCV and machine learning algorithms. The text conversion covers recognition of uppercase and lowercase letters, numbers, special characters, mathematical symbols, equations, graphs, and figures, as well as recognition of words, lines, blocks, and paragraphs. With the help of the Raspberry Pi and IoT, the editable lecture notes are delivered to students via a desktop application, which lets them edit the notes and images as needed.
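    The thesis names edge detection as the basis of the HGR module but does not say which detector; a minimal sketch of the kind of operation involved, using a plain Sobel gradient filter on a synthetic frame (the detector choice, frame, and threshold are all assumptions for illustration), might look like this:

    ```python
    import numpy as np

    def filter2d(img, kernel):
        """Naive 'valid' 2-D cross-correlation (kernel not flipped, which is
        fine for computing Sobel gradient magnitude)."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
        return out

    def sobel_edges(img, thresh=1.0):
        """Binary edge map from the Sobel gradient magnitude."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T  # vertical-gradient kernel
        gx = filter2d(img, kx)
        gy = filter2d(img, ky)
        mag = np.hypot(gx, gy)
        return mag > thresh

    # Usage: a synthetic frame with one bright square standing in for a hand region
    frame = np.zeros((32, 32))
    frame[8:24, 8:24] = 1.0
    edges = sobel_edges(frame)  # True along the square's boundary, False inside
    ```

    A real system would run this (or OpenCV's optimised equivalent, such as `cv2.Sobel` or `cv2.Canny`) on camera frames from the Raspberry Pi, then reason about the detected contours to recognise gestures.
    
    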

    Feature Extraction Methods for Character Recognition

    Abstract not available.

    An Improved GA Based Modified Dynamic Neural Network for Cantonese-Digit Speech Recognition

    Author name used in this publication: F. H. F. Leung. 2007-2008 > Academic research: refereed > Chapter in an edited book (author). published_fina

    American Sign Language Recognition Using Machine Learning and Computer Vision

    Speech impairment is a disability that affects an individual’s ability to communicate using speech and hearing. People affected by it use other means of communication, such as sign language. Although sign language is ubiquitous in recent times, there remains a challenge for non-signers to communicate with signers. With recent advances in deep learning and computer vision, there has been promising progress in motion and gesture recognition using deep learning and computer vision-based techniques. The focus of this work is to create a vision-based application that offers sign-language-to-text translation, thus aiding communication between signers and non-signers. The proposed model takes video sequences and extracts temporal and spatial features from them. We use Inception, a convolutional neural network (CNN), to recognize spatial features, and a recurrent neural network (RNN) to train on temporal features. The dataset used is the American Sign Language Dataset.
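    The CNN-then-RNN architecture described above can be sketched in miniature: per-frame spatial features feed a recurrent state that is read out into class scores. Everything here is a placeholder, not the paper's model: a crude average-pooling descriptor stands in for Inception features, the RNN is a tiny untrained Elman cell, and the frame size, hidden size, and class count are invented for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def frame_features(frame):
        """Stand-in for CNN spatial features: 4x4 average pooling of a 16x16
        frame into a 16-dim vector. The real system would instead run each
        frame through a pretrained CNN such as Inception."""
        return frame.reshape(4, 4, 4, 4).mean(axis=(1, 3)).ravel()

    def rnn_classify(frames, Wxh, Whh, Why):
        """Minimal Elman RNN over per-frame features; returns class scores."""
        h = np.zeros(Whh.shape[0])
        for f in frames:
            h = np.tanh(Wxh @ frame_features(f) + Whh @ h)  # temporal update
        return Why @ h  # unnormalised scores, one per sign class

    # Toy usage: 10 frames of 16x16 "video", 8 hidden units, 5 sign classes
    frames = rng.random((10, 16, 16))
    Wxh = rng.standard_normal((8, 16)) * 0.1
    Whh = rng.standard_normal((8, 8)) * 0.1
    Why = rng.standard_normal((5, 8)) * 0.1
    scores = rnn_classify(frames, Wxh, Whh, Why)
    predicted = int(np.argmax(scores))
    ```

    In the actual work the CNN and RNN weights would be trained on the American Sign Language Dataset rather than drawn at random; the sketch only shows how spatial and temporal feature extraction compose.
    
    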