The effects of corpus size and homogeneity on language model quality
Generic speech recognition systems typically use language models that are trained to cope with a broad variety of input. However, many recognition applications are more constrained, often to a specific topic or domain. In cases such as these, a knowledge of the particular topic can be used to advantage. This report describes the development of a number of techniques for augmenting domain-specific language models with data from a more general source.
Two investigations are discussed. The first concerns the problem of acquiring a suitable sample of the domain-specific language data from which to train the models. The issue here is essentially one of
quality, since it is shown that not all domain-specific corpora are equal. Moreover, they can display significantly different characteristics that affect the quality of any language models built therefrom. These characteristics are defined using a number of statistical measures, and their significance for language modelling is discussed. The second investigation concerns the empirical development and evaluation of a set of language models for the task of email speech-to-text dictation. The issue here is essentially one of quantity, since it is shown that effective language models can be built from very modestly sized corpora, provided the training data matches the target application. Evaluations show that a language model trained on only 2 million words can perform better than one trained on a corpus of over 100 times that size.
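The quantity finding above can be illustrated with a minimal add-one-smoothed bigram model. This is a toy sketch, not the report's actual system; the corpus, function names, and smoothing choice are all invented for illustration:

```python
from collections import Counter
import math

def train_bigram(corpus):
    """Collect unigram and bigram counts from a list of tokenised sentences."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def perplexity(uni, bi, sent, vocab_size):
    """Add-one-smoothed bigram perplexity of one tokenised sentence."""
    toks = ["<s>"] + sent + ["</s>"]
    log_p = 0.0
    for w1, w2 in zip(toks, toks[1:]):
        p = (bi[(w1, w2)] + 1) / (uni[w1] + vocab_size)
        log_p += math.log(p)
    return math.exp(-log_p / (len(toks) - 1))

# Tiny in-domain "email" corpus: even minimal matched data gives the
# test sentence a far lower perplexity than unmatched text would.
domain = [["send", "the", "email"], ["reply", "to", "the", "email"]]
uni, bi = train_bigram(domain)
vocab = len(uni)
ppl = perplexity(uni, bi, ["send", "the", "email"], vocab)
```

On real data the same comparison would be run between a small in-domain corpus and a much larger general one, scoring held-out dictation text with each model.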
An investigation into the use of linguistic context in cursive script recognition by computer
The automatic recognition of hand-written text has been a goal
for over thirty-five years. The highly ambiguous nature of cursive
writing (with high variability between not only different writers, but
even between different samples from the same writer), means that
systems based only on visual information are prone to errors.
It is suggested that the application of linguistic knowledge to
the recognition task may improve recognition accuracy. If a low-level
(pattern recognition based) recogniser produces a candidate lattice
(i.e. a directed graph giving a number of alternatives at each word
position in a sentence), then linguistic knowledge can be used to find
the 'best' path through the lattice.
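The search for the 'best' path through such a lattice can be sketched as a Viterbi-style dynamic programme over word positions. This is a minimal illustration under assumed bigram scores, not the thesis's actual recogniser; the score table and its values are invented:

```python
def best_path(lattice, bigram_logp):
    """Find the highest-scoring word sequence through a lattice.

    `lattice` is a list of word-alternative lists, one per word position;
    `bigram_logp(prev, word)` returns a log score for the transition.
    """
    # best[word] = (score, sequence ending in word) at the current position
    best = {"<s>": (0.0, [])}
    for alternatives in lattice:
        nxt = {}
        for word in alternatives:
            prev = max(best, key=lambda p: best[p][0] + bigram_logp(p, word))
            score = best[prev][0] + bigram_logp(prev, word)
            nxt[word] = (score, best[prev][1] + [word])
        best = nxt
    return max(best.values(), key=lambda v: v[0])[1]

# Hypothetical bigram log-scores (invented numbers favouring "the cat sat").
LOGP = {("<s>", "the"): -1.0, ("the", "cat"): -1.0, ("the", "cut"): -4.0,
        ("cat", "sat"): -1.0, ("cut", "sat"): -4.0}
score = lambda a, b: LOGP.get((a, b), -10.0)

# Each position offers visually confusable alternatives from the recogniser.
lattice = [["the"], ["cat", "cut"], ["sat", "sot"]]
path = best_path(lattice, score)
```

Keeping only the best-scoring predecessor per word at each position is what makes the search tractable compared with enumerating every path.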
There are many forms of linguistic knowledge that may be used
to this end. This thesis looks specifically at the use of collocation as a
source of linguistic knowledge. Collocation describes the statistical
tendency of certain words to co-occur in a language, within a defined
range. It is suggested that this tendency may be exploited to aid
automatic text recognition.
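Collocational tendency within a defined range can be quantified in several ways; one common statistic is pointwise mutual information (PMI) over a co-occurrence window. This is an illustrative sketch only and not necessarily the measure used in the thesis:

```python
from collections import Counter
import math

def pmi_table(sentences, window=2):
    """Pointwise mutual information for word pairs co-occurring within
    `window` positions of each other, one simple collocation measure."""
    word_counts, pair_counts, total = Counter(), Counter(), 0
    for sent in sentences:
        word_counts.update(sent)
        total += len(sent)
        for i, w in enumerate(sent):
            for j in range(i + 1, min(i + 1 + window, len(sent))):
                pair_counts[(w, sent[j])] += 1
    pmi = {}
    for (a, b), n_ab in pair_counts.items():
        p_ab = n_ab / total
        p_a = word_counts[a] / total
        p_b = word_counts[b] / total
        # Positive PMI: the pair co-occurs more often than chance predicts.
        pmi[(a, b)] = math.log2(p_ab / (p_a * p_b))
    return pmi

# Invented toy corpus: "strong tea" recurs, so its PMI comes out positive.
pmi = pmi_table([["strong", "tea"], ["strong", "tea"],
                 ["strong", "wind"], ["hot", "tea"]])
```

A post-processor could then prefer lattice paths whose word pairs carry high PMI, on the assumption that genuine text exhibits the collocations of the language.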
The construction and use of a post-processing system
incorporating collocational knowledge is described, as are a number
of experiments designed to test the effectiveness of collocation as an
aid to text recognition. The results of these experiments suggest that
collocational statistics may be a useful form of knowledge for this
application and that further research may produce a system of real
practical use.