A hypothesize-and-verify framework for Text Recognition using Deep Recurrent Neural Networks
Deep LSTMs are ideal candidates for text recognition. However, text recognition involves initial image-processing steps, such as segmentation of lines and words, which can introduce errors into the recognition system. Without segmentation, learning very long-range context is difficult and becomes computationally intractable. Therefore, alternative soft decisions are needed at the pre-processing level. This paper proposes a hybrid text recognizer that combines a deep recurrent neural network, with multiple layers of abstraction and long-range context, with a language model that verifies the network's output. We construct a multi-hypothesis tree whose branches hold candidate line segments produced by different segmentation algorithms. The deep neural network is trained on perfectly segmented data and decodes each candidate segment into a Unicode sequence. In the verification step, these Unicode sequences are validated by sub-string matching against the language model, and best-first search finds the best combination of alternative hypotheses in the tree. The verification framework using language models thus eliminates wrong segmentation outputs and filters recognition errors.
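The verification step described above can be sketched as a best-first selection over candidate segmentations scored by a language model. This is a simplified illustration: the lexicon, the scoring function, and the per-line hypothesis structure below are assumptions standing in for the paper's full language model and multi-hypothesis tree.

```python
import heapq

# Hypothetical lexicon standing in for the language model; the paper
# validates recognizer output via sub-string matches against a full LM.
LEXICON = {"text", "recognition", "deep", "network"}

def lm_score(seq):
    """Toy validation score: fraction of tokens found in the lexicon."""
    tokens = seq.split()
    if not tokens:
        return 0.0
    return sum(t in LEXICON for t in tokens) / len(tokens)

def best_first(hypotheses):
    """Pick the best-validated candidate for each line.

    `hypotheses` maps a line id to the Unicode sequences the recognizer
    produced for that line's candidate segmentations.
    """
    result = []
    for line_id in sorted(hypotheses):
        # Max-heap via negated scores: expand the best hypothesis first.
        heap = [(-lm_score(seq), seq) for seq in hypotheses[line_id]]
        heapq.heapify(heap)
        result.append(heap[0][1])
    return result

print(best_first({0: ["te xt recog nition", "text recognition"]}))
# → ['text recognition']
```

A full implementation would expand partial paths through the tree rather than scoring each line independently, but the pruning principle is the same: hypotheses the language model cannot validate are discarded.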
Proposal to Encode the Kaithi Script in ISO/IEC 10646
This is a proposal to encode the Kaithi script in the international character encoding standard Unicode. The script was published in Unicode Standard version 5.2 in October 2009. Kaithi was used for administrative communication from at least the 19th century until the early 20th century to write Bhojpuri, Magahi, Awadhi, Maithili, Urdu, and other languages related to Hindi. It was also used in religious and literary materials, to record commercial transactions, and in correspondence and personal communication. The Kaithi script was eventually supplanted by Devanagari.
uTHCD: A New Benchmarking for Tamil Handwritten OCR
Handwritten character recognition has been a challenging research problem in document image analysis for many decades, owing to the large variation in writing styles, inherent noise in the data, the expansive range of applications it offers, the non-availability of benchmark databases, and other factors. Considerable work has been reported in the literature on the creation of databases for several Indic scripts, but the Tamil script is still in its infancy, having been covered by only one database [5]. In this paper, we present the creation of an exhaustive and large unconstrained Tamil Handwritten Character Database (uTHCD). The database consists of around 91,000 samples, with nearly 600 samples in each of 156 classes, and is a unified collection of both online and offline samples. Offline samples were collected by asking volunteers to write inside a specified grid on a form; for online samples, volunteers wrote in a similar grid using a digital writing pad. The collected samples encompass a wide variety of writing styles and the inherent distortions arising from the offline scanning process, such as stroke discontinuity and variable stroke thickness. Algorithms resilient to such data can be practically deployed in real-time applications. The samples were generated by around 650 native Tamil volunteers, including school-going children, homemakers, and university students and faculty. The isolated character database will be made publicly available as raw images and as a compressed Hierarchical Data Format (HDF) file. With this database, we expect to set a new benchmark in Tamil handwritten character recognition and provide a launchpad for many avenues in the document image analysis domain. The paper also presents an experimental set-up using the database with convolutional neural networks (CNNs), achieving a baseline accuracy of 88% on the test data. Comment: 30 pages, 18 figures, in IEEE Access
IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages
India has a rich linguistic landscape, with languages from 4 major language families spoken by over a billion people. 22 of these languages, listed in the Constitution of India and referred to as scheduled languages, are the focus of this work. Given this linguistic diversity, high-quality and accessible Machine Translation (MT) systems are essential in a country like India. Prior to this work, there was (i) no parallel training data spanning all 22 languages, (ii) no robust benchmarks covering all these languages and containing content relevant to India, and (iii) no existing translation model supporting all 22 scheduled languages of India. In this work, we aim to address this gap by focusing on the missing pieces required for enabling wide, easy, and open access to good machine translation systems for all 22 scheduled Indian languages. We identify four key areas of improvement: curating and creating larger training datasets, creating diverse and high-quality benchmarks, training multilingual models, and releasing models with open access. Our first contribution is the release of the Bharat Parallel Corpus Collection (BPCC), the largest publicly available parallel corpus for Indic languages. BPCC contains a total of 230M bitext pairs, of which 126M were newly added, including 644K manually translated sentence pairs created as part of this work. Our second contribution is the release of the first n-way parallel benchmark covering all 22 Indian languages, featuring diverse domains, Indian-origin content, and source-original test sets. Next, we present IndicTrans2, the first model to support all 22 languages, surpassing existing models on multiple existing and new benchmarks created as part of this work. Lastly, to promote accessibility and collaboration, we release our models and associated data with permissive licenses at https://github.com/ai4bharat/IndicTrans2
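Multilingual MT models of this kind typically take source sentences prefixed with language tags so one model can serve every direction. The tag format below (FLORES-style codes such as "eng_Latn" and "hin_Deva") is an assumption for illustration; the authoritative preprocessing lives in the repository linked above.

```python
# Hypothetical helper: prefix a sentence with source/target language tags,
# as many-to-many multilingual MT models expect. The exact tag scheme used
# by IndicTrans2 should be taken from the authors' repository.
def tag_for_translation(sentence, src_lang="eng_Latn", tgt_lang="hin_Deva"):
    """Prepend source and target language tags to a source sentence."""
    return f"{src_lang} {tgt_lang} {sentence}"

print(tag_for_translation("India has a rich linguistic landscape."))
# eng_Latn hin_Deva India has a rich linguistic landscape.
```

The tagged string would then be tokenized and fed to the translation model; changing `tgt_lang` is all that is needed to target a different scheduled language.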
A Baybayin word recognition system
Baybayin is a pre-Hispanic Philippine writing system used on the island of Luzon. With the effort to reintroduce the script, in 2018 the Committee on Basic Education and Culture of the Philippine Congress approved House Bill 1022, the "National Writing System Act," which declares the Baybayin script the Philippines' national writing system. Since then, Baybayin OCR has become a field of research interest. Numerous works have proposed different techniques for recognizing Baybayin script, but all of those studies were anchored on classification and recognition at the character level. In this work, we propose an algorithm that produces the Latin transliteration of a Baybayin word in an image. The proposed system relies on a Baybayin character classifier built with a Support Vector Machine (SVM). The method involves isolating each Baybayin character, classifying each character according to its equivalent syllable in Latin script, and finally concatenating the results to form the transliterated word. The system was tested on a novel dataset of Baybayin word images and achieved a competitive 97.9% recognition accuracy. Based on our review of the literature, this is the first work that recognizes Baybayin script at the word level. The proposed system can be used in automated transliteration of Baybayin texts found in old books, tattoos, signage, graphic designs, and documents, among others.
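The isolate-classify-concatenate pipeline can be sketched with scikit-learn's SVM. The two synthetic "glyph" clusters, the 16-dimensional feature vectors, and the syllable labels below are illustrative assumptions, not the paper's features or dataset.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: two well-separated clusters standing in for feature
# vectors of two Baybayin characters and their Latin-syllable labels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 16)),   # glyphs for syllable "ba"
               rng.normal(1.0, 0.1, (20, 16))])  # glyphs for syllable "ya"
y = ["ba"] * 20 + ["ya"] * 20
clf = SVC(kernel="rbf").fit(X, y)

def transliterate(char_vectors):
    """Classify each isolated character and join the predicted syllables."""
    return "".join(clf.predict(char_vectors))

# Two isolated "characters" from a word image, in reading order.
word = transliterate(np.vstack([np.zeros((1, 16)), np.ones((1, 16))]))
print(word)  # baya
```

In the real system the per-character feature vectors would come from segmenting the word image, and the SVM would be trained over the full Baybayin syllabary rather than two classes.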