
    A delay in processing for repeated letters: evidence from megastudies

    Repetitions of letters in words are frequent in many languages. Here we explore whether these repetitions affect word recognition. Previous studies of word processing have not provided conclusive evidence of differential processing between repeated and unique letter identities. In the present study, to achieve greater power, we used regression analyses on existing megastudies of visual word recognition latencies. In both lexical decision (in English, Dutch, and French) and word naming (in English), there was strong evidence that repeated letters delay visual word recognition after major covariates are partialed out. This delay was most robust when the repeated letters occurred in close proximity but not in immediate adjacency to each other. Simulations indicated that the observed inhibitory pattern of repeated letters was not predicted by three leading visual word recognition models. Future theorizing in visual word recognition will need to take account of this inhibitory pattern. It remains to be seen whether the appropriate adjustment should occur in the representation of letter position and identity, or in a more precise description of earlier visual processes.
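As a rough illustration of the kind of predictor such regression analyses require (a hypothetical sketch, not the authors' code), the gaps between repeated letters in a word can be computed directly; a gap of 1 means immediate adjacency, while small gaps greater than 1 correspond to the "close proximity" condition where the delay was most robust:

```python
def repeated_letter_gaps(word):
    """Return the gaps (in letter positions) between every pair of
    occurrences of the same letter in `word`.  A gap of 1 means the
    repeats are immediately adjacent; a gap of 2 means one letter
    intervenes, and so on."""
    word = word.lower()
    positions = {}   # letter -> list of positions seen so far
    gaps = []
    for i, ch in enumerate(word):
        for j in positions.get(ch, []):
            gaps.append(i - j)
        positions.setdefault(ch, []).append(i)
    return gaps

# "letter" has t..t adjacent (gap 1) and e..e three positions apart (gap 3)
print(repeated_letter_gaps("letter"))  # [1, 3]
print(repeated_letter_gaps("cat"))     # []
```

Such gap counts (or indicator variables derived from them) could then enter a regression on recognition latencies alongside the usual covariates such as length and frequency.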

    During visual word recognition, phonology is accessed within 100 ms and may be mediated by a speech production code: evidence from magnetoencephalography

    Debate surrounds the precise cortical location and timing of access to phonological information during visual word recognition. Therefore, using whole-head magnetoencephalography (MEG), we investigated the spatiotemporal pattern of brain responses induced by a masked pseudohomophone priming task. Twenty healthy adults read target words that were preceded by one of three kinds of nonword prime: pseudohomophones (e.g., brein–BRAIN), where four of five letters are shared between prime and target, and the pronunciation is the same; matched orthographic controls (e.g., broin–BRAIN), where the same four of five letters are shared between prime and target but pronunciation differs; and unrelated controls (e.g., lopus–BRAIN), where neither letters nor pronunciation are shared between prime and target. All three priming conditions induced activation in the pars opercularis of the left inferior frontal gyrus (IFGpo) and the left precentral gyrus (PCG) within 100 ms of target word onset. However, for the critical comparison that reveals a processing difference specific to phonology, we found that the induced pseudohomophone priming response was significantly stronger than the orthographic priming response in left IFG/PCG at ∼100 ms. This spatiotemporal concurrence demonstrates early phonological influences during visual word recognition and is consistent with phonological access being mediated by a speech production code.

    Fast human activity recognition based on structure and motion

    This is the post-print version of the final paper published in Pattern Recognition Letters; the published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document, and changes may have been made to this work since it was submitted for publication. Copyright @ 2011 Elsevier B.V. We present a method for the recognition of human activities. The proposed approach is based on the construction of a set of templates for each activity, as well as on the measurement of the motion in each activity. Templates are designed so that they capture the structural and motion information that is most discriminative among activities. The direct motion measurements capture the amount of translational motion in each activity. The two features are fused at the recognition stage. Recognition is achieved in two steps by calculating the similarity between the templates and motion features of the test and reference activities. The proposed methodology is experimentally assessed and is shown to yield excellent performance. Funding: European Commission.
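The fusion of template and motion features at the recognition stage might be sketched as follows. This is a minimal, hypothetical nearest-reference classifier: the distance measures, the fusion weight `alpha`, and the reference data are illustrative assumptions, not the paper's actual method.

```python
def template_distance(a, b):
    """Mean absolute difference between two flattened templates."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def classify(test_template, test_motion, references, alpha=0.5):
    """Assign the test activity to the reference with the smallest fused
    distance.  `references` maps an activity name to a pair
    (reference template, reference motion amount); `alpha` weights the
    template term against the motion term."""
    best, best_d = None, float("inf")
    for name, (ref_t, ref_m) in references.items():
        d = (alpha * template_distance(test_template, ref_t)
             + (1 - alpha) * abs(test_motion - ref_m))
        if d < best_d:
            best, best_d = name, d
    return best

# Made-up reference activities: (template, amount of translational motion)
refs = {"walk": ([0.1, 0.2, 0.3], 1.0),
        "run":  ([0.8, 0.9, 0.7], 3.0)}
print(classify([0.15, 0.25, 0.3], 1.1, refs))  # "walk"
```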

    RECOGNITION OF REAL-TIME HANDWRITTEN CHARACTERS USING CONVOLUTIONAL NEURAL NETWORK ARCHITECTURE

    Pattern recognition, including handwriting recognition, has become increasingly common in everyday life, as has the recognition of important files, agreements, or contracts that use handwriting. In handwriting recognition, two types of methods are commonly used: online and offline recognition. In online recognition, handwriting patterns are associated with pattern recognition to generate and select distinctive patterns. For handwritten letter patterns, machine learning (deep learning) is used to classify patterns in a data set. One of the popular and accurate deep learning models for image classification is the convolutional neural network (CNN). In this study, a CNN is implemented together with the OpenCV library to detect and recognize handwritten letters in real time. Data on handwritten alphabet letters were obtained from the handwriting of 20 students, totaling 1,040 images: 520 uppercase (A-Z) images and 520 lowercase (a-z) images. The data are divided into 90% for training and 10% for testing. Through experimentation, it was found that the best CNN architecture has 5 layers with features (32, 32, 64, 64, 128), uses the Adam optimizer, and is trained with a batch size of 20 for 100 epochs. The evaluation results show a training accuracy between 85.90% and 89.83% and a testing accuracy between 84.00% and 87.00%, with training and testing losses ranging from 0.322 to 0.499. This research thus identifies the best-performing CNN architecture among those tested. The developed CNN model can serve as a reference or basis for the development of more complex handwriting pattern recognition models, or for pattern recognition in other domains such as object recognition in computer vision, facial recognition, and other object detection.
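The core operation of each CNN layer is a 2-D convolution (implemented as cross-correlation in most deep-learning frameworks). A minimal pure-Python sketch of a single-channel "valid" convolution, independent of the study's actual implementation, is:

```python
def conv2d_valid(image, kernel):
    """Single-channel 2-D 'valid' convolution (cross-correlation, as in
    most deep-learning frameworks): slide `kernel` over `image` and sum
    the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge kernel applied to a 3x3 image with a bright right column
img = [[0, 0, 1],
       [0, 0, 1],
       [0, 0, 1]]
k = [[-1, 1],
     [-1, 1]]
print(conv2d_valid(img, k))  # [[0, 2], [0, 2]]
```

A full CNN layer would apply many such kernels in parallel (32, 32, 64, 64, and 128 of them in the architecture described above), each followed by a nonlinearity.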

    Application of Partial Least Squares Linear Discriminant Function to Writer Identification in Pattern Recognition Analysis

    A partial least squares linear discriminant function (PLSD), as well as an ordinary linear discriminant function (LDF), is used in a pattern recognition analysis of writer identification based on patterns extracted from writings in Hangul letters by 20 Koreans. A simulation study is also performed using the Monte Carlo method to compare the performances of PLSD and LDF. PLSD showed remarkably better performance than LDF in the Monte Carlo study and slightly better performance in the analysis of the real pattern recognition data.
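The ordinary LDF used as the baseline can be sketched for the two-class, two-feature case in Fisher's form, w = S_w⁻¹(m₁ − m₀). This is a generic textbook implementation, not the paper's code, and the sample data are made up:

```python
def ldf_weights(class0, class1):
    """Fisher linear discriminant for two classes of 2-D samples:
    w = S_w^{-1} (m1 - m0), where S_w is the pooled within-class
    scatter matrix (inverted here with the 2x2 closed form)."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

# Two well-separated artificial "writers" in a 2-D feature space
w = ldf_weights([(0, 0), (1, 1), (0, 1), (1, 0)],
                [(4, 4), (5, 5), (4, 5), (5, 4)])
```

Projecting a new sample onto w and thresholding at the midpoint of the projected class means gives the classification rule; PLSD replaces the direct inversion of S_w with partial least squares components, which helps when features are many or collinear.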

    Spatial-frequency spectra of printed characters and human visual perception

    It is well known that certain spatial frequency (SF) bands are more important than others for character recognition. Solomon and Pelli [Nature 369 (1994) 395–397] concluded that the human pattern recognition mechanism is able to use only a narrow band from the available SF spectrum of letters. However, the SF spectra of letters themselves have not been studied carefully. Here I report the results of an analysis of the SF spectra of printed characters and discuss their relationship to the observed band-pass nature of letter recognition.
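An SF spectrum of a character can be probed with a discrete Fourier transform. As a toy illustration (not the paper's analysis, which concerns full 2-D spectra of printed fonts), the magnitude spectrum of a one-dimensional ink profile across a letter shows how stroke spacing concentrates energy in a few frequency bands:

```python
import cmath

def dft_magnitudes(signal):
    """Magnitude spectrum of a 1-D signal via the discrete Fourier
    transform; a row profile of a letter bitmap stands in here for the
    full 2-D character image."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n)))
            for f in range(n)]

# Ink profile across a letter with two vertical strokes (e.g. an "H"):
# the stroke period of 4 samples puts energy at frequencies 0, 2, 4, 6
# and none at the odd frequencies.
profile = [1, 0, 0, 0, 1, 0, 0, 0]
mags = dft_magnitudes(profile)
```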

    SYSTEM IN LETTER IMAGE RECOGNITION USING ZONING FEATURE EXTRACTION AND INTEGRAL PROJECTION FEATURE EXTRACTION

    Pattern recognition systems have been widely applied, especially to letter images. In letter pattern recognition, a feature extraction process is required to obtain the characteristics and specific features of each image so that it can be recognized. Various kinds of feature extraction can be used in pattern recognition; this research uses two of them: zoning and integral projection. The study covers the design and implementation of a system for recognizing images of the capital letters A, I, U, E, O, B, C, D, F, G in the Arial font style, created with Microsoft Word and printed on paper. The first stage is capturing, the process of taking pictures of the letter image. The second stage converts the RGB image of the letter to an intensity image, which is then segmented using a bi-level luminance thresholding method. The next processes are labeling and filtering, followed by feature extraction using the zoning and integral projection methods; the results of this extraction are recognized by template matching. Tests were conducted on 450 letter images of the capital letters A, I, U, E, O, B, C, D, F, G with font sizes 30, 40, 45, 50, 60, 65, 70, 80, 85, 90, 100, 105, 110, 115, 120 and camera-to-image distances of 15 cm, 20 cm, and 25 cm. The study also analyzes the performance of the template matching system for each feature extraction, both for test data previously entered into the system database and for test data outside the database. Test results show that template matching works optimally when the test data have previously been used in the system database, and less than optimally for test data outside the database. The recognition rate for test data already in the database is 100% for each feature extraction, and the average recognition rate for test data outside the database is 67.3% for the zoning and template matching pair and 72.2% for the integral projection and template matching pair.
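The two feature extraction methods named above might be sketched as follows. This is a simplified, hypothetical implementation on a tiny binary bitmap; the 2x2 zone grid and the example bitmap are illustrative assumptions, not the system's actual parameters.

```python
def zoning_features(bitmap, zones=(2, 2)):
    """Zoning: split a binary bitmap into a zones[0] x zones[1] grid and
    return the count of foreground (1) pixels in each zone, row-major."""
    zh = len(bitmap) // zones[0]
    zw = len(bitmap[0]) // zones[1]
    feats = []
    for zi in range(zones[0]):
        for zj in range(zones[1]):
            feats.append(sum(bitmap[zi * zh + i][zj * zw + j]
                             for i in range(zh) for j in range(zw)))
    return feats

def integral_projections(bitmap):
    """Integral projection: row sums followed by column sums of the
    foreground pixels."""
    rows = [sum(r) for r in bitmap]
    cols = [sum(r[j] for r in bitmap) for j in range(len(bitmap[0]))]
    return rows + cols

# 4x4 binary bitmap of an "L"-like shape
bmp = [[1, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 1, 1, 1]]
print(zoning_features(bmp))       # [2, 0, 3, 2]
print(integral_projections(bmp))  # [1, 1, 1, 4, 4, 1, 1, 1]
```

Template matching would then compare such feature vectors of a test image against the stored vectors of the database letters, assigning the label of the closest match.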