Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition
Online handwritten Chinese text recognition (OHCTR) is a challenging problem
as it involves a large-scale character set, ambiguous segmentation, and
variable-length input sequences. In this paper, we exploit the outstanding
capability of path signature to translate online pen-tip trajectories into
informative signature feature maps using a sliding window-based method,
successfully capturing the analytic and geometric properties of pen strokes
with strong local invariance and robustness. A multi-spatial-context fully
convolutional recurrent network (MCFCRN) is proposed to exploit the multiple
spatial contexts from the signature feature maps and generate a prediction
sequence while completely avoiding the difficult segmentation problem.
Furthermore, an implicit language model is developed to make predictions based
on semantic context within a predicting feature sequence, providing a new
perspective for incorporating lexicon constraints and prior knowledge about a
certain language in the recognition procedure. Experiments on two standard
benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with
correct rates of 97.10% and 97.15%, respectively, which are significantly
better than the best result reported thus far in the literature.
Comment: 14 pages, 9 figures
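The sliding-window path-signature extraction described in this abstract can be sketched as follows. This is a minimal illustration using signatures truncated at level 2 (total increments plus iterated integrals, whose antisymmetric part is the stroke's signed area); the function names, window size, and stride are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

def path_signature_level2(points):
    """Truncated (level-2) path signature of a 2-D pen-tip trajectory.

    Returns [S1_x, S1_y, S11, S12, S21, S22]: the level-1 terms are the
    total increments, and the level-2 terms are iterated integrals
    accumulated over the polyline via Chen's identity.
    """
    deltas = np.diff(points, axis=0)   # stepwise increments
    s1 = deltas.sum(axis=0)            # level-1: total displacement
    s2 = np.zeros((2, 2))              # level-2 iterated integrals
    partial = np.zeros(2)              # level-1 signature accumulated so far
    for d in deltas:
        # Chen's identity: concatenating a linear segment with increment d
        # adds partial (x) d plus the segment's own signature 0.5 * d (x) d.
        s2 += np.outer(partial, d) + 0.5 * np.outer(d, d)
        partial += d
    return np.concatenate([s1, s2.ravel()])

def sliding_window_signatures(points, window=8, stride=4):
    """Signature feature vectors over overlapping windows of the trajectory."""
    return np.array([
        path_signature_level2(points[i:i + window])
        for i in range(0, len(points) - window + 1, stride)
    ])
```

A useful sanity check is that the signature is invariant to how a straight stroke is sampled: any monotone sampling of the segment from (0, 0) to (1, 0) yields the same feature vector [1, 0, 0.5, 0, 0, 0].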
An End-to-End Approach for Recognition of Modern and Historical Handwritten Numeral Strings
An end-to-end solution for handwritten numeral string recognition is
proposed, in which the numeral string is treated as a set of objects that are
automatically detected and recognized by a YOLO-based model. The main
contribution of this paper is to avoid heuristic-based methods for string
preprocessing and segmentation, the need for task-oriented classifiers, and
also the use of specific constraints related to the string length. A robust
experimental protocol based on several numeral string datasets, including one
composed of historical documents, has shown that the proposed method is a
feasible end-to-end solution for numeral string recognition. Moreover, it
considerably reduces the complexity of the string recognition task, since it
drops the classical steps, in particular preprocessing, segmentation, and the
set of classifiers devoted to strings of a specific length.
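The read-out step implied by this abstract, turning per-digit detections into a string, might be sketched as below. The detection tuple format, the 1-D IoU suppression, and the function name are hypothetical illustrations, not the paper's actual interface: duplicate boxes are suppressed and the survivors are read left to right.

```python
def detections_to_string(detections, iou_suppress=0.5):
    """Turn per-digit detections into a numeral string.

    `detections` is a list of (x_min, x_max, score, digit) tuples -- an
    assumed output format for a YOLO-style digit detector. Boxes whose
    1-D IoU with an already-kept box exceeds the threshold are dropped
    (keeping the highest-scoring candidate); survivors are concatenated
    in left-to-right order.
    """
    kept = []
    for det in sorted(detections, key=lambda d: -d[2]):   # best score first
        x0, x1, _, _ = det
        suppressed = False
        for k0, k1, _, _ in kept:
            inter = max(0.0, min(x1, k1) - max(x0, k0))
            union = (x1 - x0) + (k1 - k0) - inter
            if union > 0 and inter / union > iou_suppress:
                suppressed = True
                break
        if not suppressed:
            kept.append(det)
    kept.sort(key=lambda d: d[0])                         # left-to-right order
    return "".join(str(d[3]) for d in kept)
```

Because the detector itself imposes no constraint on how many boxes it emits, this decoding works for strings of any length, which is the point the abstract makes about dropping length-specific classifiers.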
Exploiting the Two-Dimensional Nature of Agnostic Music Notation for Neural Optical Music Recognition
State-of-the-art Optical Music Recognition (OMR) techniques follow an end-to-end or holistic approach, i.e., a single stage that completely processes a single-staff section image and retrieves the symbols that appear therein. Such recognition systems are characterized by not requiring an exact alignment between each staff and its corresponding labels, hence facilitating the creation and retrieval of labeled corpora. Most commonly, these approaches consider an agnostic music representation, which characterizes music symbols by their shape and height (vertical position in the staff). However, this double nature is usually ignored, since the learning process treats the two features as a single symbol. This work aims to exploit this trait that differentiates music notation from other similar domains, such as text, by introducing a novel end-to-end approach that solves the OMR task at the staff-line level. We consider two Convolutional Recurrent Neural Network (CRNN) schemes trained to extract the shape and height information simultaneously, and we propose different policies for eventually merging them at the neural level. The results obtained for two corpora of monophonic early music manuscripts show that our proposal significantly decreases the recognition error, by figures ranging between 14.4% and 25.6% in the best-case scenarios, when compared to the considered baseline. This research work was partially funded by the University of Alicante through project GRE19-04, by the “Programa I+D+i de la Generalitat Valenciana” through grant APOSTD/2020/256, and by the Spanish Ministerio de Universidades through grant FPU19/04957.
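One simple merging policy consistent with this abstract's idea, combining separate shape and height predictions into agnostic symbols, is a late fusion that treats the two heads as independent and forms the joint distribution as a per-frame outer product. This is an illustrative assumption, not necessarily one of the paper's actual merge policies, and the names below are made up for the sketch:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def merge_shape_height(shape_logits, height_logits):
    """Late-fusion merge: joint distribution over (shape, height) pairs.

    shape_logits: (T, n_shapes), height_logits: (T, n_heights).
    Returns a (T, n_shapes, n_heights) array where each frame's slice
    is the outer product of the two heads' softmax distributions.
    """
    p_shape = softmax(shape_logits)                     # (T, n_shapes)
    p_height = softmax(height_logits)                   # (T, n_heights)
    return p_shape[:, :, None] * p_height[:, None, :]   # (T, n_shapes, n_heights)

def decode(joint, shapes, heights):
    """Greedy frame-wise decode into (shape, height) agnostic symbols."""
    T, S, H = joint.shape
    flat = joint.reshape(T, S * H).argmax(axis=1)
    return [(shapes[i // H], heights[i % H]) for i in flat]
```

Each frame's joint slice sums to one by construction, so the merged output is itself a proper distribution over agnostic symbols and could feed a standard CTC-style decoding stage.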