Word shape analysis for a hybrid recognition system
This paper describes two wholistic recognizers developed for use in a hybrid recognition system. The recognizers use information about the word shape, which is strongly related to word zoning. One of the recognizers is explicitly limited by the accuracy of the zoning information extraction; the other is designed to avoid this limitation. The recognizers use very simple sets of features and fuzzy-set-based pattern matching techniques. This simplicity not only aims to increase their robustness but also causes problems with disambiguation of the results. A verification mechanism, using letter alternatives as compound features, is introduced. Letter alternatives are obtained from a segmentation-based recognizer coexisting in the hybrid system. Despite some remaining disambiguation problems, the wholistic recognizers are found capable of outperforming the segmentation-based recognizer. When the recognizers work together in the hybrid system, the results are significantly higher than those of the individual recognizers. Recognition results are reported and compared
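A minimal sketch of the kind of fuzzy-set-based word-shape matching the abstract describes (all names, shape classes and the scoring scheme are illustrative assumptions, not the paper's method):

```python
# Illustrative sketch: match a word-shape profile of ascenders (a),
# descenders (d) and x-height letters (x) against lexicon entries
# with a simple fuzzy similarity score in [0, 1].

def shape_profile(word):
    """Map each letter to a coarse shape class (hypothetical classes)."""
    ascenders, descenders = set("bdfhklt"), set("gjpqy")
    return "".join("a" if c in ascenders else "d" if c in descenders else "x"
                   for c in word.lower())

def fuzzy_match(profile, candidate, mismatch_penalty=0.5):
    """Full credit for exact class matches, partial credit otherwise,
    normalised by the longer profile length."""
    n = max(len(profile), len(candidate))
    score = sum(1.0 if a == b else 1.0 - mismatch_penalty
                for a, b in zip(profile, candidate))
    return score / n

lexicon = ["help", "hello", "anna"]
observed = shape_profile("help")
ranked = sorted(lexicon, key=lambda w: -fuzzy_match(observed, shape_profile(w)))
print(ranked[0])  # best-matching lexicon word
```

Such coarse profiles are deliberately ambiguous (many words share a shape), which is why the abstract pairs them with a separate verification mechanism.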
Cursive script recognition using wildcards and multiple experts
Variability in handwriting styles suggests that many letter recognition engines cannot correctly identify some hand-written letters of poor quality at reasonable computational cost. Methods that are capable of searching the resulting sparse graph of letter candidates are therefore required. The method presented here employs "wildcards" to represent missing letter candidates. Multiple experts are used to represent different aspects of handwriting. Each expert evaluates closeness of match and indicates its confidence. Explanation experts determine the degree to which the word alternative under consideration explains extraneous letter candidates. Schemata for normalisation and combination of scores are investigated and their performance compared. Hill climbing yields near-optimal combination weights that outperform comparable methods on identical dynamic handwriting data
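A sketch of hill climbing over expert-combination weights, as the abstract's final sentence describes (the expert scores, sample data and acceptance rule are invented for illustration, not taken from the paper):

```python
# Illustrative hill climbing: search weights that combine several
# experts' candidate scores so as to maximise top-1 word accuracy.
import random

random.seed(0)

# Hypothetical data: samples[i] = (scores_per_candidate, true_index),
# where each candidate gets a (expert1_score, expert2_score) pair.
samples = [
    ([(0.9, 0.2), (0.4, 0.8)], 0),
    ([(0.3, 0.7), (0.8, 0.1)], 1),
    ([(0.6, 0.6), (0.5, 0.9)], 1),
]

def accuracy(weights):
    correct = 0
    for candidates, truth in samples:
        combined = [sum(w * s for w, s in zip(weights, sc)) for sc in candidates]
        correct += combined.index(max(combined)) == truth
    return correct / len(samples)

def hill_climb(dim=2, steps=200, step_size=0.1):
    w = [1.0] * dim
    best = accuracy(w)
    for _ in range(steps):
        cand = [max(0.0, wi + random.uniform(-step_size, step_size)) for wi in w]
        a = accuracy(cand)
        if a >= best:  # accept non-worsening moves to cross plateaus
            w, best = cand, a
    return w, best

weights, acc = hill_climb()
print(acc)
```

Because the accuracy surface over weights is piecewise constant, accepting non-worsening moves (rather than strictly improving ones) helps the search drift across flat regions.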
Exploiting zoning based on approximating splines in cursive script recognition
Because of its complexity, handwriting recognition has to exploit many sources of information to be successful, e.g. the handwriting zones. Variability of zone-lines, however, requires a more flexible representation than traditional horizontal or linear methods. The proposed method therefore employs approximating cubic splines. Using entire lines of text rather than individual words is shown to improve the zoning accuracy, especially for short words. The new method represents an improvement over existing methods in terms of range of applicability, zone-line precision and zoning-classification accuracy. Application to several problems of handwriting recognition is demonstrated and evaluated
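A minimal sketch of zone-line fitting with an approximating cubic spline, the core idea of the abstract (the coordinates are fabricated sample points, and SciPy's generic smoothing spline stands in for the paper's own fitting procedure):

```python
# Illustrative sketch: fit an approximating cubic spline through noisy
# baseline minima of a text line, then evaluate the zone-line anywhere.
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical x-positions and y-heights of letter-bottom minima.
x = np.array([0, 40, 85, 130, 190, 250, 310, 360, 420, 480], dtype=float)
y = np.array([102, 100, 104, 99, 103, 101, 105, 100, 102, 103], dtype=float)

# k=3 gives a cubic spline; s > 0 smooths rather than interpolating,
# so the fitted line approximates the trend instead of every noisy point.
baseline = UnivariateSpline(x, y, k=3, s=len(x) * 4.0)

print(float(baseline(200.0)))  # fitted baseline height mid-line
```

Fitting over an entire line of text, as the abstract notes, gives the spline many more support points than a single short word would.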
Defining and delineating the central areas of towns for statistical monitoring using continuous surface representations
The increasing availability of very high spatial resolution data using the unit postcode as its geo-reference is making possible new kinds of urban analysis and modelling. However, at this resolution the granularity of the data used to represent urban functions makes it difficult to apply traditional analytical and modelling methods. An alternative suggested here is to use kernel density estimation to transform these data from point or area 'objects' into continuous surfaces of spatial densities. The use of this transformation is illustrated by a study in which we attempt to develop a robust, generally applicable methodology for identifying the central areas of UK towns for the purpose of statistical reporting and comparison. Continuous density transformations from unit postcode data relating to a series of indicators of town centredness created using ArcView are normalised and then summed to give a composite 'Index of Town Centredness'. Selection of key contours on these index surfaces enables town centres to be delineated. The work results from a study on behalf of DETR
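A sketch of the point-to-surface transformation via kernel density estimation (the coordinates are invented toy points, and SciPy's Gaussian KDE stands in for the ArcView workflow described in the abstract):

```python
# Illustrative sketch: turn point 'objects' into a continuous density
# surface with kernel density estimation, then evaluate it on a grid.
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical postcode-level point events as a 2 x N array of (x, y).
points = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],
                   [5.0, 5.2], [5.1, 4.9]]).T

kde = gaussian_kde(points, bw_method=0.5)  # bandwidth controls smoothing

# Evaluate the density surface on a 30 x 30 grid over the study area.
xs, ys = np.mgrid[0:6:30j, 0:6:30j]
surface = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
print(surface.max() > surface.mean())  # peaks sit over the point clusters
```

Selecting a contour on such a surface (a density threshold) is what delineates an area, mirroring the abstract's use of key index contours to delineate town centres.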
Automatic creation of handwritten signature classes for online authentication
In this paper we address the optimisation of a handwritten-signature authentication system. The system is based on a Coarse To Fine approach and uses the Dynamic Time Warping algorithm together with a global decision threshold to accept or reject a signer. The proposed optimisation consists in applying an unsupervised classification algorithm to determine signature classes automatically. For each class, a specific decision threshold is established. In this work we focus in particular on the impact of the classification on performance. Experimental results on the SVC database show that performance can be improved, reducing the equal error rate by 14.4%. However, the classification is highly sensitive, and the notion of a single class per signer appears too restrictive
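A minimal dynamic time warping sketch of the kind of trace comparison the abstract's system relies on (the traces and threshold logic are invented illustrations, not the paper's Coarse To Fine implementation):

```python
# Illustrative DTW: cost of the best monotonic alignment of two 1-D
# signature traces; lower cost means more similar signatures.

def dtw(a, b):
    """Classic O(n*m) dynamic-programming formulation."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

genuine = [0.0, 1.0, 2.0, 1.0, 0.0]
attempt = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]  # same shape, slightly stretched
forgery = [2.0, 2.0, 0.0, 0.0, 2.0]

# Accept an attempt when its DTW cost falls below a decision threshold;
# the abstract's contribution is learning that threshold per signature class.
print(dtw(genuine, attempt) < dtw(genuine, forgery))
```

The warping step is what makes a stretched but genuine attempt score close to the reference, which a rigid point-by-point distance would penalise.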
A review of finger vein recognition system
Recently, security systems using the finger vein as a biometric trait have been getting more attention from researchers all over the world, and these researchers have achieved positive progress. Much work has been done with different methods to improve the performance and accuracy of personal identification and verification. This paper discusses previous methods for finger vein recognition systems, which comprise three main stages: preprocessing, feature extraction and classification. The advantages and limitations of these previous methods are reviewed, and the main open problems of finger vein recognition systems are presented as future directions in this field
Similarity of Source Code in the Presence of Pervasive Modifications
Source code analysis to detect code cloning, code plagiarism, and code reuse suffers from the problem of pervasive code modifications, i.e. transformations that may have a global effect. We compare 30 similarity detection techniques and tools against pervasive code modifications. We evaluate the tools using two experimental scenarios for Java source code. These are (1) pervasive modifications created with tools for source code and bytecode obfuscation and (2) source code normalisation through compilation and decompilation using different decompilers. Our experimental results show that highly specialised source code similarity detection techniques and tools can perform better than more general, textual similarity measures. Our study strongly validates the use of compilation/decompilation as a normalisation technique. Its use reduced false classifications to zero for six of the tools. This broad, thorough study is the largest in existence and potentially an invaluable guide for future users of similarity detection in source code
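A sketch of a simple textual similarity measure of the general kind the study compares specialised tools against: tokenise two Java-like snippets, normalise identifiers (neutralising pervasive renaming), and take the Jaccard similarity of token 3-grams. This is a hypothetical illustration, not one of the 30 evaluated tools:

```python
# Illustrative token-based similarity robust to identifier renaming.
import re

KEYWORDS = {"int", "return", "for", "if", "class", "public"}

def tokens(src):
    ts = re.findall(r"[A-Za-z_]\w*|\S", src)
    # Map every non-keyword identifier to the same placeholder, so
    # pervasively renamed code normalises to the same token stream.
    return ["ID" if re.match(r"[A-Za-z_]", t) and t not in KEYWORDS else t
            for t in ts]

def ngrams(ts, n=3):
    return {tuple(ts[i:i + n]) for i in range(len(ts) - n + 1)}

def similarity(a, b):
    ga, gb = ngrams(tokens(a)), ngrams(tokens(b))
    return len(ga & gb) / len(ga | gb)

original = "int sum(int a, int b) { return a + b; }"
renamed  = "int add(int x, int y) { return x + y; }"
print(similarity(original, renamed))  # renaming alone does not lower the score
```

Normalisation before comparison is the same intuition behind the study's compilation/decompilation pipeline, which canonicalises far deeper transformations than identifier renaming.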