Proceedings of the 2010 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory
At the annual Joint Workshop of the Fraunhofer IOSB and the Karlsruhe Institute of Technology (KIT) Vision and Fusion Laboratory, students of both institutions present their latest research findings on image processing, visual inspection, pattern recognition, tracking, SLAM, information fusion, non-myopic planning, world modeling, security in surveillance, interoperability, and human-computer interaction. This book collects 16 reviewed technical reports from the 2010 Joint Workshop.
A novel approach to handwritten character recognition
A number of new techniques and approaches for off-line handwritten character recognition are presented, each of which makes a significant advance in the field.
First, an outline-based vectorization algorithm is described which gives improved accuracy in producing vector representations of the pen strokes used to draw characters. Later, vectorization and other types of preprocessing are criticized, and an approach to recognition is suggested which avoids separate preprocessing stages by incorporating them into later stages. Apart from the increased speed of this approach, it allows more effective alteration of the character images, since more is known about them at the later stages. It also allows the possibility of alterations being corrected if they are initially detrimental to recognition.
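The abstract does not detail the vectorization algorithm itself. As an illustration only, the sketch below shows one generic way to produce a vector representation from a character outline, using OpenCV contour tracing and polygonal approximation; it is a stand-in, not the thesis's method, and the tolerance parameter is an assumption.

```python
# Illustrative outline vectorization (generic stand-in, not the thesis's
# algorithm): trace the outer outline of a binary character image and
# approximate it with straight-line segments.
import cv2
import numpy as np

def vectorize_outline(binary_img: np.ndarray, epsilon_frac: float = 0.01):
    contours, _ = cv2.findContours(binary_img.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    polylines = []
    for contour in contours:
        # Tolerance proportional to outline length keeps the approximation
        # comparable across character sizes (epsilon_frac is an assumption).
        eps = epsilon_frac * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, eps, True)
        polylines.append(approx.reshape(-1, 2))
    return polylines  # one (N, 2) vertex array per traced outline
```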
A new feature measurement, the Radial Distance/Sector Area feature, is presented which is highly robust, tolerant to noise, distortion and style variation, and gives high-accuracy results when used for training and testing in a statistical or neural classifier. A very powerful classifier is therefore obtained for recognizing correctly segmented characters. The segmentation task is explored in a simple system integrating over-segmentation, character classification and approximate dictionary checking, which can be extended to a full system for handprinted word recognition.
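The abstract does not define the Radial Distance/Sector Area feature precisely. One plausible reading, sketched below purely for illustration, partitions the character about its centroid into angular sectors and records a normalised radial-distance statistic and a relative-area term per sector; the sector count, distance statistic and normalisation are all assumptions.

```python
# Hedged sketch of a radial-distance / sector-area feature vector; the
# thesis's exact definition may differ in every detail marked below.
import numpy as np

def radial_sector_features(binary_img: np.ndarray, n_sectors: int = 16):
    ys, xs = np.nonzero(binary_img)          # foreground pixel coordinates
    cy, cx = ys.mean(), xs.mean()            # character centroid
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dx, dy)                     # radial distance per pixel
    theta = np.arctan2(dy, dx)               # angle per pixel, in [-pi, pi]
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    feats = np.zeros(2 * n_sectors)
    r_max = r.max() if r.max() > 0 else 1.0  # scale normalisation (assumed)
    for s in range(n_sectors):
        in_s = sector == s
        if in_s.any():
            feats[s] = r[in_s].mean() / r_max        # radial-distance term
        feats[n_sectors + s] = in_s.sum() / len(r)   # sector-area term
    return feats  # 2 * n_sectors values per character
```

Normalising by the centroid and the maximum radius is one way to obtain the translation and scale tolerance the abstract suggests; tolerance to distortion and style variation would need further measures.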
In addition to the advancements made by these methods, a powerful new approach to handwritten character recognition is proposed as a direction for future research. This proposal combines the ideas and techniques developed in this thesis in a hierarchical network of classifier modules to achieve context-sensitive, off-line recognition of handwritten text. A new type of "intelligent" feedback is used to direct the search to contextually sensible classifications. A powerful adaptive segmentation system is proposed which, when used as the bottom layer in the hierarchical network, allows initially incorrect segmentations to be adjusted according to the hypotheses of the higher-level context modules.
Process modelling for information system description
My previous experience and some preliminary study of the relevant technical literature identified several reasons why the current state of database theory seemed unsatisfactory and required further research. These included: insufficient formalism of data semantics, misinterpretation of NULL values, inconsistencies in the concept of the universal relation, certain ambiguities in domain definition, and inadequate representation of facts and constraints.
The commonly accepted 'sequentiality' principle in most current system design methodologies imposes strong restrictions on the processes of which a target system is composed: they must be algorithmic, must not be interrupted during execution, and may not have parallel subprocesses as their own components. This principle can no longer be considered acceptable. In many existing systems, multiple processors perform concurrent actions that interact with one another.
The overconcentration on data models is another disadvantage of the majority of system design methods. Many techniques pay little (or no) attention to process definition. They assume that the model of the Real World consists only of data elements and relationships among them. However, the way the processes are related to each other (in terms of precedence relation) may have considerable impact on the data model.
It has been assumed that the Real World is discretisable, i.e. it may be modelled by a structure of objects. The word object is to be interpreted in a wide sense so it can mean anything within the boundaries of this part of the Real World that is to be represented in the target system. An object may then denote a fact or a physical or abstract entity, or relationships between any of these, or relationships between relationships, or even a still more complex structure.
The fundamental hypothesis formulated here states that three aspects of modelling must be considered, and considered integrally: syntax, semantics and behaviour.
A syntactic representation of an object within a target system is called a construct. A construct which cannot be decomposed further (either syntactically or semantically) is defined to be an atom. Any construct results from the following production rules: construct ::= atom | function construct; function ::= atom | construct. This syntax forms a sentential notation.
The sentential notation allows for extensive use of denotational semantics. The meaning of a construct may be defined as a function mapping from a set of syntactic constructs to the appropriate semantic domains; these in turn appear to be sets of functions since a construct may have a meaning in more than one class of objects. Because of its functional form the meaning of a construct may be derived from the meaning of its components.
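The production rules and the compositional meaning function can be made concrete in a few lines; the types, the environment and the example below are illustrative assumptions, not the thesis's notation.

```python
# Sketch of the sentential notation: construct ::= atom | function construct,
# with meaning defined compositionally over the structure of a construct.
from dataclasses import dataclass
from typing import Dict, Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Apply:
    function: "Construct"    # a function is itself an atom or a construct
    argument: "Construct"

Construct = Union[Atom, Apply]

def meaning(c: Construct, env: Dict[str, object]) -> object:
    """Map a construct to its denotation; the meaning of an application is
    derived from the meanings of its components, as the text describes."""
    if isinstance(c, Atom):
        return env[c.name]
    f = meaning(c.function, env)             # must denote a callable here
    return f(meaning(c.argument, env))

# Example over natural numbers: 'succ' applied twice to 'zero' denotes 2.
env = {"succ": lambda n: n + 1, "zero": 0}
print(meaning(Apply(Atom("succ"), Apply(Atom("succ"), Atom("zero"))), env))
```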
The issue of system behaviour needed further investigation and a revision of the conventional model of computing. The sequentiality principle has been rejected, concurrency being regarded as a natural property of processes. A postulate has been formulated that any potential parallelism should be constructively used for data/process design and that the process structure would affect the data model. An important distinction has been made between a process declaration - considered as a form of data or an abstraction of knowledge - and a process application that corresponds to a physical action performed by a processor, according to a specific process declaration. In principle, a process may be applied to any construct - including its own representation - and it is a matter of semantics to state whether or not it is sensible to do so. The process application mechanism has been explained in terms of formal systems theory by introducing an abstract machine with two input and two output types of channels.
The system behaviour has been described by defining a process calculus. It is based on logical and functional properties of a discrete time model and provides a means to handle expressions composed of process variables connected by logical functors. The basic terms of the calculus are constructs and operations (equivalence, approximation, precedence, incidence, free-parallelism, strict-parallelism). Certain properties of these operations (e.g. associativity or transitivity) allow for handling large expressions. Rules for decomposing and integrating process applications, analogous in some sense to those forming the basis of structured programming, have been derived.
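The calculus itself is only named here, so the following encoding is purely illustrative: it fixes one possible data representation for process expressions over the listed operations, with invented symbols and no evaluation rules.

```python
# Illustrative encoding of process-calculus expressions; the operator names
# follow the text, the symbols and the example are assumptions.
from dataclasses import dataclass
from enum import Enum

class Op(Enum):
    EQUIVALENCE = "=="
    APPROXIMATION = "~"
    PRECEDENCE = "->"        # left application completes before the right
    INCIDENCE = "@"
    FREE_PARALLEL = "||"     # components may run concurrently
    STRICT_PARALLEL = "|!"   # components must run concurrently

@dataclass(frozen=True)
class Proc:
    name: str                # a process declaration, referenced by name

@dataclass(frozen=True)
class Expr:
    op: Op
    left: "Proc | Expr"
    right: "Proc | Expr"

# Associativity of, e.g., precedence allows large expressions to be
# regrouped, mirroring the decomposition/integration rules mentioned above.
pipeline = Expr(Op.PRECEDENCE, Proc("read"),
                Expr(Op.FREE_PARALLEL, Proc("validate"), Proc("log")))
```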
The Significance of Evidence-based Reasoning for Mathematics, Mathematics Education, Philosophy and the Natural Sciences
In this multi-disciplinary investigation we show how an evidence-based perspective of quantification (in terms of algorithmic verifiability and algorithmic computability) admits evidence-based definitions of well-definedness and effective computability, which yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA, over the structure N of the natural numbers, that are complementary, not contradictory. The first yields the weak, standard interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation.
We situate our investigation within a broad analysis of quantification vis-à-vis:
* Hilbert's epsilon-calculus
* Goedel's omega-consistency
* The Law of the Excluded Middle
* Hilbert's omega-Rule
* An Algorithmic omega-Rule
* Gentzen's Rule of Infinite Induction
* Rosser's Rule C
* Markov's Principle
* The Church-Turing Thesis
* Aristotle's particularisation
* Wittgenstein's perspective of constructive mathematics
* An evidence-based perspective of quantification
By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus, and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences, for mathematics, mathematics education, philosophy, and the natural sciences, of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines.
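For orientation, the two notions of quantification on which the abstract turns are commonly stated along the following lines; this is a paraphrase, and the paper's precise formulations may differ.

```latex
% Paraphrased definitions (assumptions, not quotations from the paper).
\begin{itemize}
  \item A formula $[F(x)]$ is \emph{algorithmically verifiable} if, for any
        natural number $n$, there is an algorithm $\mathrm{AL}_{(F,n)}$ that
        decides the truth of each formula in the finite sequence
        $\{F(1), F(2), \ldots, F(n)\}$.
  \item A formula $[F(x)]$ is \emph{algorithmically computable} if there is a
        single algorithm $\mathrm{AL}_{F}$ that decides the truth of each
        formula in the infinite sequence $\{F(1), F(2), \ldots\}$.
\end{itemize}
```

Every algorithmically computable formula is trivially algorithmically verifiable, but the converse need not hold, which is what lets the two interpretations of PA diverge while remaining complementary.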
Exploring the adaptive structure of the mental lexicon
The mental lexicon is a complex structure organised in terms of phonology, semantics and syntax, among other levels. In this thesis I propose that this structure can be explained in terms of the pressures acting on it: every aspect of the organisation of the lexicon is an adaptation ultimately related to the function of language as a tool for human communication, or to the fact that language has to be learned by subsequent generations of people. A collection of methods, most of which are applied to a Spanish speech corpus, reveals structure at different levels of the lexicon.
• The patterns of intra-word distribution of phonological information may be a consequence of pressures for optimal representation of the lexicon in the brain, and of the pressure to facilitate speech segmentation.
• An analysis of perceived phonological similarity between words shows that the sharing of different aspects of phonological similarity is related to different functions. Phonological similarity perception sometimes relates to morphology (the stressed final vowel determines verb tense and person) and at other times shows processing biases (similarity in the word-initial and word-final segments is more readily perceived than in word-internal segments).
• Another similarity analysis focuses on cooccurrence in speech to create a representation of the lexicon where the position of a word is determined by the words that tend to occur in its close vicinity. Variations of this context-based lexical space naturally categorise words syntactically and semantically.
• A higher level of lexicon structure is revealed by examining the relationships between the phonological and the cooccurrence similarity spaces. A study in Spanish supports the universality of the small but significant correlation between these two spaces found in English by Shillcock, Kirby, McDonald and Brew (2001). This systematicity across levels of representation adds an extra layer of structure that may help lexical acquisition and recognition. I apply it in a new paradigm to determine the function of parameters of phonological similarity based on their relationships with the syntactic-semantic level, and find that while some aspects of a language's phonology maintain systematicity, others work against it, perhaps responding to the opposed pressure for word identification (a minimal sketch of this cross-space correlation follows the abstract).
This thesis is an exploratory approach to the study of the structure of the mental lexicon that uses existing and new methodology to deepen our understanding of the relationships between language use and language structure.
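As a minimal sketch of the cross-space correlation described above: edit distance stands in for the phonological metric, and co-occurrence vectors are assumed pre-computed; the thesis and Shillcock, Kirby, McDonald and Brew (2001) use richer measures on corpus data.

```python
# Mantel-style correlation between a phonological similarity space and a
# co-occurrence space; both metrics here are simplified stand-ins.
import numpy as np
from itertools import combinations

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance, as a crude proxy for phonological distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cross_space_correlation(words, cooc_vectors):
    """Correlate pairwise phonological distances with pairwise distances in
    co-occurrence space over all word pairs."""
    phon, ctx = [], []
    for w1, w2 in combinations(words, 2):
        phon.append(edit_distance(w1, w2))
        ctx.append(float(np.linalg.norm(cooc_vectors[w1] - cooc_vectors[w2])))
    return np.corrcoef(phon, ctx)[0, 1]
```

Because word pairs share words and are therefore not independent, significance would normally be assessed with a permutation (Mantel) test rather than a parametric one.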